Medical malpractice claims in Hepatology: Rates, Reasons, and Results
Chronic liver disease (CLD) is increasingly common, complex, and costly. Accurate diagnosis of the underlying etiology and timely diagnosis of cirrhosis complications, such as hepatic encephalopathy (HE), ascites, and hepatocellular carcinoma (HCC), are necessary to avoid substantial patient harm. Liver disease care can be complicated and risky: CLD increases the risk of even routine medical care, and patients with liver disease often require invasive procedures. The only true cure for cirrhosis is transplantation, which depends on a limited pool of donor organs and requires challenging decisions regarding transplant candidacy. Each factor raises the stakes of hepatology practice and imbues care with widespread concerns of legal liability. Data are limited, however, regarding liability and its sources. Gastroenterology practice (which includes hepatology) is associated with the sixth highest per-physician rate of annual malpractice claims but a below-average rate of damages awarded per closed claim. In 2014, gastroenterology was associated with 12.1 claims per 1000 physician-years. However, little is known about hepatology-specific claims. In an audit of all 85 claims against gastroenterologists in England and Wales from 1987 to 1996, only 1 was related to liver disease. From 1985 to 2005, 500 claims were made regarding procedural complications in gastroenterology, 50 of which were related to liver disease-related procedures (eg, biopsy). As of 2019, 1562 judicial opinions had been published regarding liver disease-related lawsuits; these focused on the denial of insurance based on alcohol abstinence, denial of waitlisting or hepatitis C therapy for prisoners, and rare claims of age or religious discrimination. Unfortunately, most prior reports lack the granular detail required to inform clinicians about sources of liability and the practice improvements needed to avoid it. Without this granularity, lessons from these experiences are often lost rather than leveraged to improve quality and safety. Herein, we review 2 sources to define the hepatology liability landscape and, thus, the opportunities for care improvement: our institution's experience and all cases from a national legal liability insurer. Our study consists of 2 main data sources: institutional data and national data. The institutional data allowed a detailed assessment of patient and provider factors and provided an estimate of the incidence of claims among our institutional cohort with CLD, cirrhosis, and transplant-related care. The national data were extensively coded for contributing factors and were assessed for conceptual drivers of liability risk. We evaluated all 'closed' cases from both sources, that is, all cases with a conclusion, including settlement, dismissal, and withdrawal. This study was approved as exempt from review by the University of Michigan Medical School Institutional Review Board.

Local claims
We conducted a review of our institution's registry of liability claims using a keyword search in the Executive Summary field. The terms are summarized in Supplementary Table 1, http://links.lww.com/HC9/A247. The search included loss dates between 2012 and 2021 with a status of 'closed.' Each case was rendered into a clinical summary from which the primary diagnosis and allegations were derived. All records retrieved were reviewed by 3 clinicians to determine whether CLD and/or liver transplantation were involved in the case.
Cases were classified as related to cirrhosis if the patient had cirrhosis, to liver transplant if the patient was undergoing or had undergone liver transplantation, and to CLD if they involved liver diseases or diagnostics in patients without cirrhosis or liver transplantation. In addition, the responsible service(s), patient outcome, case status (closed or settled), and indemnity payment amounts were extracted. The incidence of claims was calculated for people with cirrhosis and liver transplantation by searching the electronic medical record for patients with validated diagnosis codes for cirrhosis and for all patients who had received a liver transplant. We could not determine a denominator for CLD because many CLDs (eg, NAFLD, alcohol-associated liver disease) have insensitive diagnostic codes.

National claims
We rendered all CLDs and cirrhosis complications into validated diagnostic codes to pull closed cases attributed to those codes from Candello's Comparative Benchmarking System (Supplementary Table 1, http://links.lww.com/HC9/A247). Candello is a division of the Risk Management Foundation and the Controlled Risk Insurance Company. The Candello database represents approximately one-third of all US malpractice claims from medical centers throughout the country, covering roughly 550 hospitals and health systems. Claims are analyzed by Candello-employed clinical experts, who assign the major contributing factors to each malpractice claim. In this study, all claims with a loss year of 2012 through 2021 involving patients with final diagnosis codes relating to liver disease were reviewed. Basic demographic data were collected for each case, along with the responsible service, procedure, injury severity, indemnity payment amounts, and contributing factors. Contributing factors are coded using a multitiered taxonomy designed to capture the clinical and legal variables of each malpractice claim from the medical records and legal claim files; the factors identify the opportunities for improvement or medical errors that led to the specific patient's injury or death. Case severity is defined using the National Association of Insurance Commissioners (NAIC) severity scale: low (0–2), medium (3–5), and high (6–9). Low scores suggest a legal issue alone; high scores are consistent with patient death.
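As a concrete illustration (not part of the study's methods), the severity banding and the per-patient incidence calculation described above reduce to a few lines of code. The function names are ours; the counts shown are the local figures reported in the Results below.

```python
def naic_band(score: int) -> str:
    """Map a NAIC severity score (0-9) to the bands used in this study."""
    if not 0 <= score <= 9:
        raise ValueError("NAIC severity scores range from 0 to 9")
    if score <= 2:
        return "low"      # suggests a legal issue alone
    if score <= 5:
        return "medium"
    return "high"         # consistent with patient death

def claim_incidence(claims: int, patients: int) -> float:
    """Claims per patient over the study window, as a percentage."""
    return 100 * claims / patients

# Local cohort figures reported in the Results:
print(f"cirrhosis:  {claim_incidence(13, 22230):.2f}%")  # ~0.06%
print(f"transplant: {claim_incidence(11, 1751):.2f}%")   # ~0.6%
```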
Local claims
Our search returned 39 claims: 15 for patients with non-cirrhotic CLD, 13 with cirrhosis, and 11 with transplant-related conditions (Table ). Among patients with non-cirrhotic liver disease, the vast majority had cancer involving the liver, with claims related to surgical or periprocedural complications (eg, bleeding from a biopsy). There were 2 cases of nosocomial transmission of viral hepatitis during procedures and 1 patient who received a false diagnosis of liver disease due to laboratory error. Severe injury due to perioperative/periprocedural complications and DILI resulted in the settled claims, with mean damages of $304,000 (range $9,000–$1,100,000). Both DILI cases stemmed from a claim of inadequate monitoring after drug initiation: 1 for isoniazid and 1 for disulfiram. Liver transplant-related care resulted in 11 claims, including 4 settled claims, with mean damages of $485,000 (range $12,000–$1,130,000). These claims occurred among 1751 transplant patients cared for over the study period (0.6% incidence). All but 2 involved perioperative/periprocedural complications of bleeding, infection, air embolus, and bile leak; the remaining 2 claims involved in-hospital falls in the postoperative period. Patients with cirrhosis accounted for 13 claims among 22,230 unique patients seen during the study period (0.06% incidence). Five cases were settled, with mean damages of $385,000 (range $12,000–$850,000). Most claims involved periprocedural complications such as infection and bleeding.
The bleeding complications arose from vascular injury (eg, jugular vein laceration during dialysis catheter placement, epigastric artery laceration during paracentesis, intercostal artery laceration during thoracentesis, and umbilical vein laceration during radiofrequency ablation). One patient experienced a reaction to a prophylactic plasma transfusion. There were 2 claims related to opioid-induced or benzodiazepine-induced HE resulting in falls or other complications that precluded transplantation. Failure to list a deteriorating patient for liver transplant in a timely fashion was claimed in 1 case, but the plaintiff's attorney elected not to pursue it and closed the case. Two patients experienced complications attributed to iodinated contrast agents: acute kidney injury and burns due to skin exposure. Finally, 1 patient developed respiratory arrest during nasogastric tube placement.

National claims
Overall, 94 closed claims relating to liver disease were retrieved from the Candello database out of 102,575 claims (0.09%) (Table ). The severity of injury was classified as "high" in 77% of claims, and 32% were settled, with a median indemnity of $470,606. All liver diseases were searched, but not all were involved in claims. Fifty-two (55%) cases pertained to HCC, 12 (13%) related to hepatitis C (both acute and chronic), and other liver conditions accounted for a small number of cases. Each case was assessed for its contributing factors. The most common was diagnosis (56%), driven by a failure or delay in ordering a diagnostic test; failure to appreciate and reconcile a symptom, sign, or result; and misinterpretation or delayed interpretation of a diagnostic study. Miscommunication between providers, and between providers and patients, was involved in 22% of cases. Patient behavior-related factors (nonadherence with scheduled appointments, treatments, or diagnostic testing) were involved in 20% of cases. Selection or management of therapy played a role in 7% of cases. Issues relating to technical skill (4%), house staff supervision (3%), or weekend/holiday care (1%) were rare. Among the 52 (55%) HCC cases, the most common reason for liability—present in 37 (71%)—was the failure to screen candidates and to identify patients at risk for HCC (ie, failure to diagnose cirrhosis in a patient with known CLD). In addition, 40% of HCC claims involved failures of communication, for instance, failure to read the medical record, failure to communicate radiology findings to the responsible clinician, and ineffective communication during transitions of care. In the 6 (6%) cases relating to acute hepatitis C, a common thread was the failure to assess for chronicity resulting in CLD, while the 5 (5%) involving chronic hepatitis C resulted from a failure to diagnose (and treat) before decompensation. Procedure and treatment-related complications played a role in 13 (25%) of claims. Other claims involved the quality of diagnosis, such as the failure to diagnose varices endoscopically, to identify patients with spontaneous bacterial peritonitis based on cell counts, and to initiate appropriate management in a patient with seronegative autoimmune hepatitis whose delayed diagnosis resulted in liver failure. Periprocedural complications were common, such as a case of liver failure after transjugular intrahepatic portosystemic shunt and hemorrhage after paracentesis in a patient whose direct-acting anticoagulant was not temporarily held.
The only case attributed to NAFLD involved undiagnosed HE that resulted in premature hospital discharge, a fall, and a fracture. HE featured in another claim in which cerebral herniation occurred in a patient with acute liver failure. Finally, DILI from antibiotics later judged inappropriate resulted in a claim involving a patient with alcohol-associated liver disease.

Though the desire to avoid litigation is a strong motivator for clinical decision-making, there are limited empirical data on which to ground this concern. Furthermore, this desire may detract from the broader aim of quality improvement and divert time from the patient care and research activities for which most clinicians entered the profession. Data and guidance are needed. Our comprehensive review of medicolegal claims in hepatology practice advances the aims of quality care with 3 major findings. First, we detail the hitherto poorly described sources of legal claims in hepatology. Second, we define both the per-patient risk of legal claims in hepatology and the proportion of legal claims relating to liver care. Third, we provide several concrete areas for practice improvement, informed by the risk of liability, that could improve patient safety.

What leads to legal claims in hepatology and why?
We summarize our findings in Figure .
The national data suggest that care discrepant from guidelines and the failure to recognize and respond in a timely fashion to liver-related conditions are the key drivers of liability. Most claims involve liver cancer and a failure either to follow screening guidelines for at-risk patients or to correctly identify who should be screened. Other cases involve a failure to diagnose and treat conditions when patients present, allowing an undifferentiated condition to worsen (eg, hepatitis C, autoimmune hepatitis, spontaneous bacterial peritonitis, and HE). Taken together, these gaps in care can be addressed with quality improvement that aims to identify patients at risk for cirrhosis complications and link them to effective interventions. Tools such as best practice advisories can alert providers to the role of hepatitis C screening or liver cancer screening and may support clinicians (and quality improvement efforts) accordingly. Measures such as noninvasive testing, like using FIB-4 to trigger outpatient referral (sketched below), can indicate to providers when hepatology consultation may be beneficial. For hospitalized patients, pathways that standardize care and indicate the role of inpatient hepatology consultation for conditions such as HE or symptomatic ascites could improve outcomes. Similarly, alerts that flag therapies that may be more toxic in patients with cirrhosis (ie, benzodiazepines or opioids) may provide just-in-time reminders or nudges to prescribers as a risk mitigation strategy.
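As a sketch of how such a referral trigger might work: the FIB-4 index is computed from age, AST, ALT, and platelet count, and commonly cited cutoffs (roughly 1.3 and 2.67) stratify the risk of advanced fibrosis. The thresholds and the referral wording below are illustrative assumptions, not a recommendation from this study.

```python
import math

def fib4(age_years: float, ast_u_per_l: float, alt_u_per_l: float,
         platelets_10e9_per_l: float) -> float:
    """FIB-4 = (age x AST) / (platelets x sqrt(ALT))."""
    return (age_years * ast_u_per_l) / (platelets_10e9_per_l * math.sqrt(alt_u_per_l))

def referral_flag(score: float) -> str:
    # Commonly cited cutoffs (~1.3 and ~2.67) for advanced-fibrosis risk;
    # an actual clinical pathway may use different thresholds.
    if score < 1.3:
        return "low risk: routine care"
    if score <= 2.67:
        return "indeterminate: consider further noninvasive testing"
    return "high risk: consider hepatology referral"

# Hypothetical patient, for illustration only:
score = fib4(age_years=58, ast_u_per_l=70, alt_u_per_l=45, platelets_10e9_per_l=110)
print(f"FIB-4 = {score:.2f}: {referral_flag(score)}")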
While the contributing factors and conceptual drivers of liability risk, including communication failures and delayed recognition, may not differ from those seen in claims involving other medical conditions, the systematic responses required to address these vulnerabilities may be specific to the care needed by patients with CLD. In contrast, our local data remind us that the specific sources of legal claims facing any given institution are variable, highlighting both strengths and gaps in care processes. For example, falls related to HE figure in both the local and national data, possibly representing widespread pitfalls in the management of HE. Conversely and notably, HCC (particularly the timely diagnosis of HCC) comprises most claims nationally, yet Michigan has no claims relating to HCC screening. There are multiple potential explanations. First, care quality can be highly variable, and some centers may have more robust systems for care delivery than others. For example, the University of Michigan has used prospective disease-management software, in place since before the index date for this study, that reminds clinicians when at-risk patients are due for HCC screening. This approach has since been modernized into a system-wide dashboard with provider alerts. Second, the similarities between the local and national databases may indicate which types of events are more likely to result in legal action regardless of institutional culture. Both datasets suggest that the severity of the patient outcome influences the risk of a lawsuit. In the Michigan experience, most claims involved a patient's death; in the national dataset, 77% of claims involved a high-severity injury (inclusive of death). For this reason, outcome-driven claims such as procedural complications and rare adverse events such as DILI appear in both datasets. Indeed, prior studies have found that as many as 35% of claims do not relate to medical errors. In many cases, a proactive process of disclosure might mitigate the risk of claims, but high-severity injuries may result in claims regardless.

Instructive cases
Our study highlights potentially overlooked sources of risk. First, we found multiple cases of falls due to HE or after transplant. In the short term, fall risk can be reduced through psychoactive medication stewardship, recognition of and timely therapy for HE, and deliberate but cautious patient mobilization. In addition, a proactive approach to addressing malnutrition, frailty, and sarcopenia may reduce the risk of falls before and after liver transplantation. Second, the case of a harmful transfusion reaction is instructive. While multiple professional organizations recommend avoiding prophylactic blood product transfusions before low-risk procedures, plasma and platelets are frequently given to patients with cirrhosis undergoing paracentesis or upper endoscopy. This offers no benefit but exposes patients to risk. Understanding and reframing the liability risk related to blood product transfusion may help align clinical practices with guideline recommendations. Third, the risk of liability due to DILI highlights the importance of antibiotic stewardship and close monitoring of patients receiving potentially hepatotoxic therapies.

The rate of liver-related claims
In a national medicolegal liability claims database, CLD accounted for 94 cases, or 0.09% (9 per 10,000) of the total volume of liability claims. At Michigan, 0.06% (6 per 10,000) of patients with cirrhosis were involved in a legal claim; the rate was 0.6% (60 per 10,000) among all patients who underwent liver transplantation. Benchmarks for rates of medical professional liability claims are lacking, and our data help address this knowledge gap. First, rates of claims may not track the prevalence of disease. Cirrhosis may affect 0.3–1% of the population and CLD at least 10%; even without adjusting for disease severity and health care utilization, this suggests that the rate of CLD-related claims is lower than its prevalence would predict. Second, when comparing with rates from other publications, it is important to consider the timeframe used for rate determination. While our incidence rates pertain to any claim per patient seen over a 10-year period, prior studies have evaluated claims per encounter over much narrower cross-sectional periods. Indeed, 13 liability claims resulted from every 10,000 hospitalizations in a 1984 study from New York state, and 12 per 10,000 hospitalized patients in Utah and Colorado in 1992. Risk-management events (not necessarily legal claims) result from 37 per 10,000 primary care clinic visits. Patients with cirrhosis require frequent procedures and incur elevated risks of complications, which would intuitively increase the risk of claims. However, there may be reasons why care for a patient with CLD results in lower-than-expected rates of liability claims, given the frequent social isolation and socioeconomic factors that reduce access to legal support. This may explain why we found that claims are 10-fold more likely among patients undergoing transplantation: these patients are preselected for social support and the ability to navigate the medical system while also incurring elevated risks due to the complexity of liver transplantation.

Preventative measures
The optimal methods to reduce the risk of liability claims are unclear.
In addition to the condition-focused suggestions above, there are multiple systematic approaches that could help. First, many claims result from complications of care or missed diagnoses. All interventions have risks, but each complication, when viewed closely, often involved missed opportunities to course-correct. A failure-to-rescue is a failure to recognize, or a delay in responding to, a patient experiencing a complication. Developing system-wide approaches to detect deterioration (eg, sepsis alerts for hypotension or other signs) and respond to clinical changes (rapid response teams) may deliver timely care before an adverse event. Second, centers may approach medical errors with different strategies, explaining some of the variance in which cases result in claims. For example, in 2001 the University of Michigan Health System developed a model wherein it performs active surveillance for medical errors, fully discloses found errors to patients, and offers compensation when it is at fault. This program, also referred to as the 'Michigan Model,' decreased legal claims, lawsuits per month, time to claim resolution, and costs of liability management. A culture of early reporting, comprehensive care review, honesty, transparency, and a focus on lessons learned favorably impacts outcomes for patients and providers alike. The model allows for timely and objective care reviews, opens necessary dialogue with patients and families to prevent conjecture or the need to hire an attorney, shares lessons learned along with changes made to prevent recurrence, and promotes healing for all involved. Its implementation was associated with fewer lawsuits (2.13 to 0.75 per 100,000 patient encounters) and reduced costs for total liability (RR, 0.41 [CI, 0.26–0.66]) and patient compensation (RR, 0.41 [CI, 0.26–0.67]). Though future studies are required to determine the sources of heterogeneity in claim type and burden, as well as the impact on hepatology claims, the way an institution handles medical errors may affect the rate and type of liability claims that are opened.

Contextual Factors
These data must be interpreted in the context of the study design. First, regional variation in care delivery and quality may mean that the landscape of liability claims differs in regions not included in this study. Second, the lower granularity of the national claims data allows for fewer case-specific lessons regarding the sources of liability. Finally, claim data, including claim frequency and settlement values, are significantly influenced by jurisdiction, the existence of damage caps and other laws that vary from state to state, and insurance coverage; we lack data on the cities and states involved in the national database.
This study has provided the rates and reasons for medical malpractice claims in hepatology. Liability claims in hepatology indicate several areas for proactive attention to improve care quality: identification of patients with cirrhosis, linkage to care for liver cancer screening, safe prescribing of potentially hepatotoxic medications, and fall prevention for patients with cirrhosis and in the peri-transplant period.
Ontario, Quebec and Alberta lead record family medicine residency vacancies
“Med schools and the hidden curriculum, which teaches that family medicine is a less desirable choice,” bear some responsibility for this mismatch in supply and demand, according to Jane Philpott, dean of Queen’s University’s faculty of health sciences and former federal minister of health. “But more importantly, it’s about the primary care system, which needs a radical overhaul.” Transitioning to team-based primary care models, which provide better work–life balance for clinicians and enable them to delegate tasks that can be done by other health professionals, “is the fundamental fix that we need,” Philpott tweeted. With more than 6.5 million Canadians currently lacking family care providers, she called for a “loud public outcry” and political will at all levels of government to ensure everyone has a primary care team. According to Ontario College of Family Physicians president-elect Jobin Varughese, many medical trainees don’t see the predominant fee-for-service model as one that works for primary care. Medical training often takes place in team-based care environments where learners experience the benefits of collaborating with other professionals, making the transition to fee-for-service family practice a tough sell, Varughese said. “They’re used to seeing maybe four or five people an hour, and now they’re asked to see six to eight people an hour and expected to keep the same level of quality of care. Nobody’s going to raise their hand for that.” The Ontario government appears to recognize this problem, expanding and creating 18 new primary care teams in the latest provincial budget, Varughese said. “It’s a great first start, but we need to expand that. I’d like to see it five, 10 times higher.” Provinces struggling to fill family medicine residency positions could also look to British Columbia, which had the fewest vacancies, for different approaches to physician compensation. In February, B.C. introduced a new longitudinal payment model that compensates family physicians based on the number of patients they see in a day, the size and complexity of their patient panel, and the time they spend on both direct clinical care and administrative tasks such as reviewing lab results or updating patient records. “Nobody likes complete uncertainty when it comes to pay,” Varughese said. “What B.C. introduced was certainty. Having the ability to know what sort of income you’re going to have lets you take a breath.” Others cite the political climates in the provinces with the most family medicine residency vacancies as another factor driving trainees away. Quebec has seen primary care fallout from unpopular health care reforms in recent years. However, the province has also increased the overall number of family medicine residency positions. Meanwhile, health minister Christian Dubé’s office attributes the residency mismatch partly to the pandemic, which has limited trainees’ exposure to family medicine during medical school practicums. In Alberta, the number of family medicine vacancies after the first iteration of the match has been climbing steadily, from 11 under the New Democrats in 2019 to 42 under the United Conservative Party (UCP) this year. “This is a huge red flag for the sustainability and viability of family medicine in Alberta,” tweeted Liana Hwang, an Alberta-based family physician working in refugee health and obstetrics. Some physicians point to UCP hostility as a factor in declining interest in family medicine residencies in the province. 
Under former leader Jason Kenney, the UCP tore up Alberta’s master agreement with doctors in 2019 and imposed a new contract without consultation, prompting the Alberta Medical Association (AMA) to file a $255-million lawsuit against the province. The AMA later dropped the suit when the government repealed its power to terminate or replace physician compensation agreements at will. Kenney’s successor, Danielle Smith, has likewise proven controversial, starting her premiership by scrapping the Alberta Health Services board and firing Alberta’s chief medical officer of health. “Alarming, but not surprising, that bright young medical students do not want to train or practise in Alberta under this government,” tweeted Tehseen Ladha, an Edmonton-based pediatrician and assistant professor at the University of Alberta. AMA president Fredrykka Rinaldi told CMAJ the growing number of family medicine residency vacancies after the first iteration of the match is “very concerning.” “We are not competing well, and years of uncertainty and strain on primary care in this province have been a disincentive to learners looking for a place to train,” Rinaldi said. Noel DaCunha, president of the Alberta College of Family Physicians, said family medicine is becoming “increasingly complex, and seemingly undervalued, we think more so in Alberta’s health system.” DaCunha said significant system transformation is needed, but he is optimistic about proposed investments to modernize primary health care. “We strongly believe in working with health care partners in comprehensive teams in supported environments that attract the best physicians and other primary care providers.” Todd Anderson, dean of the Cumming School of Medicine at the University of Calgary, said B.C.’s new compensation model is one to watch. If the province remains an outlier with almost no empty family medicine residency spots in years to come, Anderson said, “then that would be something I think the other provinces have to take into consideration.” In the meantime, the University of Calgary is increasing the number of placements for international medical graduates and developing an admission pathway for students interested in primary care, exploring a model Philpott implemented at Queen’s where graduates can skip the match process and go straight into family medicine. The University of Calgary is also using a new medical school curriculum this year that is more generalist-taught and patient-centric. “What we want to do is have more role models for primary care,” explained Anderson. “Students will choose specialties where they see role models, and right now, we have a curriculum that’s taught by specialists.”
Assessment of the water sources for potential channels of faecal contamination within Vhembe District Municipality using sanitary inspections and hydrogen sulphide test
The demand for reliable water sources for domestic use is growing, especially among the poor in the majority of developing countries, and South Africa is no exception. It is estimated that 785 million people worldwide still use unimproved drinking water sources, which include unprotected wells, springs, and surface water. Most of the people depending on unimproved drinking water sources live in developing regions of sub-Saharan Africa and Southern Asia. Water utilities struggle to sustainably deliver clean water to millions of people, particularly in rural communities, although the level of faecal contamination in utility-supplied water is low due to close monitoring of those sources. In addition to lacking access to clean water sources, rural communities also lack access to improved sanitation. A report by the World Health Organization (WHO) revealed that 673 million people still practise open defaecation and 2 billion people lack access to proper sanitation worldwide. Open defaecation is practised in fields, bushes, bodies of water, and other open spaces. These 'faecal fields' potentially pose health problems to rural communities and place water sources at risk of flooding with faecal material from surrounding areas during heavy rains. Unsanitary practices such as defaecation in stream channels and riverbeds during dry seasons have been reported to contribute to faecal contamination at the boundaries of water bodies. Unimproved water sources have been found to harbour higher rates of faecal contamination, one of the main causes of waterborne diseases in households that use these sources for drinking. For example, in 2021, 23 countries reported cholera outbreaks, mainly in the WHO Regions of Africa and the Eastern Mediterranean, and this trend continued into 2022, with over 29 countries reporting cholera cases or outbreaks. Sub-Saharan Africa has remained the epicentre of cholera outbreaks from 1989 to 2022. Acute diarrhoeal outbreaks have been reported in Pakistan and Sudan. In 2016, diarrhoeal diseases brought on by poor access to water, sanitation, and hygiene resulted in 1.6 million fatalities and 105 million DALYs (disability-adjusted life years) in low- and middle-income countries. Most diarrhoeal deaths can be avoided through adequate management of water sources, sanitation, and better hygiene practices. In South Africa, pollution of rivers is well known. Nevertheless, most river health studies and programmes have concentrated on evaluating the water quality of large rivers, like the Great Letaba, Limpopo, Crocodile, Olifants, Thukela, Orange, Vaal, and Inkomati; the majority of the smaller rivers that feed into these larger rivers have been overlooked or forgotten. Monyai et al. evaluated the water quality of tributaries of the Luvuvhu River in the Limpopo Province, South Africa (Thulamela Local Municipality, Vhembe District), including the Dzindi, Mutshindudi, Mvudi, and Lukunde, and found high levels of total acidity in some water sources, indicating water pollution. Diffuse or non-point source pollution remains a significant barrier to meeting good water quality standards, especially in rural communities with limited resources. Furthermore, unlike point source pollution, which arises from a single source (for example, sewage or industrial effluent discharge points), non-point source pollution does not come from a single source and is difficult to manage.
Faecal pollution of water sources in rural communities is mainly caused by non-point sources such as human faeces and animal droppings, agricultural pollutants, and poor sanitation management. Therefore, using a sanitary inspection (SI) to identify potential sources and pathways of faecal pollution may assist in the management of high-priority risk concerns, thereby protecting public health. In rural communities, human and animal waste are the common sources of surface water and groundwater pollution. Domestic wastewater from on-site wastewater disposal systems (such as septic tanks and pit-latrines) contains a number of enteric pathogens that could pose health risks to groundwater. In addition, the excreta of humans and warm-blooded animals could potentially be utilised as fertilisers in agriculture, since they contain plant nutrients. In some cases, heavy rainfall events can have a significant impact on microbial water quality due to runoff that carries faecal matter into surface water sources, and water contaminated by enteric pathogens (especially water from unprotected sources) may pose risks of waterborne disease. Microbial water quality must therefore be monitored regularly. Where microbial contamination is detected, water should receive at least minimal treatment, such as boiling or the addition of bleach, before human consumption. Rural water sources are the least monitored, yet they have the highest levels of faecal contamination. Monitoring for faecal contamination of drinking water in rural areas is limited by the lack of laboratory resources, funding, and skilled personnel. Awareness of water contamination and its risks, and thus of the need for regular monitoring of microbial quality and minimal treatment of water, can be created even in rural communities with limited resources. There is, therefore, a need for an affordable test kit for field testing of water quality in rural communities. Hydrogen sulphide (H2S) detection tests, which are cost-effective, can be used to evaluate whether bacteria of faecal origin are present in the water. These bacteria can reduce organic sulphur compounds, releasing H2S gas. The test kit method relies on the detection of faecal coliform bacteria that produce hydrogen sulphide, rather than non-faecal coliform bacteria; these faecal coliforms are present in the intestines and faeces of warm-blooded animals. It is a method for examining the microbiological quality of drinking water on-site. The H2S strip test works as a presence/absence test: the solution changes colour to black in the presence of H2S-producing organisms. With the goal of enhancing the quality of drinking water and reducing the burden of water-associated diseases, the H2S strip test is an effective method that enables users to determine whether their water source is fit for consumption. For more than 30 years, H2S strip tests have been used successfully to identify faecal contamination in water around the world. The H2S tests are comparable to thermotolerant coliform tests while detecting a considerably wider variety of microorganisms; an average sensitivity of 87% (95% CI 80–92%) and specificity of 82% (95% CI 72–90%) have been reported. Hence, culture-based methods and molecular microbiology can be used to confirm the H2S assays for bacterial genera related to faecal contamination.
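To put the reported accuracy in context, Bayes' rule converts sensitivity and specificity into predictive values at a given contamination prevalence. The sketch below uses the averages reported above; the prevalences are assumed for illustration only and are not from the study.

```python
def predictive_values(sens: float, spec: float, prevalence: float):
    """Bayes' rule: turn test sensitivity/specificity into PPV and NPV."""
    ppv = sens * prevalence / (sens * prevalence + (1 - spec) * (1 - prevalence))
    npv = spec * (1 - prevalence) / (spec * (1 - prevalence) + (1 - sens) * prevalence)
    return ppv, npv

# Reported averages for the H2S test vs thermotolerant coliform tests:
sens, spec = 0.87, 0.82
for prev in (0.2, 0.5, 0.8):  # assumed contamination rates, for illustration
    ppv, npv = predictive_values(sens, spec, prev)
    print(f"prevalence {prev:.0%}: PPV {ppv:.0%}, NPV {npv:.0%}")
```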
For environments with limited resources, the H2S strip test is a particularly effective water quality monitoring instrument. The government of India has gradually approved the tests for initial monitoring at community water points, with positive results requiring laboratory-based testing for confirmation. However, the WHO Guidelines for Drinking Water do not currently recommend the H2S test. In spite of this, the H2S strip test has become an essential component of household-based rural water quality surveillance programmes. The technique has been promoted by UNICEF and is frequently used as a presence/absence test in developing nations and outlying regions. The H2S test kit is easy to use and affordable for the average family. Manja et al. proposed the hydrogen sulphide (H2S) method as a low-cost field test to identify faecal pollution of water in such settings. The approach of empowering communities by equipping them with these simple tools and training local facilitators has proven successful and has the potential to be replicated in rural communities. However, the role and efficacy of H2S tests for sanitary risk assessment and water quality testing at the community level have not yet been investigated across the diverse water sources used in rural communities with intermittent or no water supply. Methods such as modified sanitary inspections and the hydrogen sulphide test may be used in rural regions, where communities need to be aware of any possible contamination risk. Therefore, the goal of this study was to assess the relationship between observed sanitary risks and hydrogen sulphide strip test results in the identification of faecal contamination in various water sources. These two methods may be used as drinking water quality management tools to raise awareness among rural community members of the faecal contamination of their water sources. The following four objectives were pursued to achieve the main goal of the study: surveying; using sanitary inspections (SIs) to assess the risk of microbial contamination; evaluating the effectiveness of the use of H2S paper strips in the research area; and establishing a connection between the sanitary inspection and the H2S strip test for microbial risk categorisation. This study also serves as a foundation for future extensive research on water, sanitation, and hygiene in this region aimed at reducing the disease burden.

Demographic information of the study areas
The results in Table , already published by Murei et al., were considered to highlight the level of education, the employment rate, and the predominant waterborne diseases in the study areas. Briefly, most of the participants attained either primary, secondary, or tertiary level education, and very few did not go to school at all. The overall survey showed that almost 70.5% of the residents of the Vhembe District Municipality (VDM) are employed. Of all the participants, 15.1% reported that they experience diarrhoeal disease, with 40% of them indicating frequent episodes of diarrhoea.

Water sources used in the Vhembe District Municipality
As can be seen in Table , most of the households in the rural communities under study used piped water supplied by the municipality as their main water supply, and only about 8.4% used alternative water sources as their main supply. However, people frequently turn to alternative water sources because the piped supply is inconsistent.
Overall, most people rely on rainwater (n = 333, 47.1%) and boreholes (n = 123, 17.4%) for drinking, irrigation, and other domestic purposes. Other alternative water sources in use include springs (6.9%), dams (1.6%), hand-dug wells (1.3%), and rivers (7.5%), with females mainly fetching water for households. One hundred and forty-six respondents indicated that they treat this water before drinking, mostly using household treatment methods such as boiling (37%) and bleaching (26.0%). The water is used for agricultural (crop irrigation) and domestic purposes, which include drinking, cooking, washing clothes, house cleaning and bathing. Maize and other seasonal crops made up most agricultural production. Cattle (71.2%), donkeys (11.7%), goats (7.6%), and dogs (5.3%) are among the animals seen near water sources.
Sanitation-related status in rural communities
Table depicts the sanitation-related status in the target villages of the present study. Almost every household in the Vhembe District Municipality has a toilet, with 90.9% having pit-latrines and 3.8% having flush toilets connected to a septic tank. Some respondents (2.3%) stated that they still practise open defaecation due to a lack of access to toilets in their yards. About 17% of the respondents indicated that they dispose of soiled diapers in refuse bags with solid waste collected by the municipality, 9.5% inside the toilets, and 14.7% in open pits. For 29.7% of the households studied, the calculated distance between septic tanks/toilets and the water source was greater than 50 m, while 70.3% of households had the toilet/septic tank near the water source. The soil type in the study area was mainly loamy (76.2%), and only 23.3% was very fine sand.
Sanitary inspection
The potential for pollution of water sources and the degree of danger are determined by human activities near these water sources. Figure illustrates the various human and animal activities that cause water contamination in the VDM. Agriculture accounted for 20% of all observed activities, followed by the presence of pit latrines (18%) and evidence of open defaecation (16%), which were the activities most frequently encountered close to water sources. The Thohoyandou Wastewater Treatment Plant discharges its effluent into the Mvudi River. At half (4/8) of the surface water sites documented in the research region, diaper disposal sites were seen close to the water sources. With the exception of boreholes and protected springs, domestic animals were detected practically everywhere in areas surrounding water sources. Sources of faecal pollution identified in rural communities of the Vhembe District Municipality included faecal matter (e.g. from humans or warm-blooded animals) around water bodies, grazing animals, agricultural activities, and illegal dumping sites, depicting poor waste management, as can be seen in Fig. . The percentage sanitary risk scores and risk ratings (Table ) were determined according to the World Health Organization rating for water sources. The Luvuvhu River was identified as having the highest sanitary risk score, at 100%, followed by Nandoni Dam and Tshivhulani Spring, both of which had a sanitary risk score of 87.5%. These high sanitary risk scores are a cause for concern, as these water sources are used by community members for domestic purposes. The water source with the lowest risk score was Tshakhuma Spring (12.5%).
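For concreteness, the following minimal Python sketch shows how such scores can be derived from an eight-item WHO-style yes/no checklist; the example answers are invented for illustration, and the risk bands follow the WHO grading quoted in the Methods.

```python
# Convert yes/no sanitary inspection answers into a risk score, a percentage
# score and a WHO risk band. The example answers are illustrative only.
def sanitary_risk(answers: list) -> tuple:
    """answers: True = risk factor present (YES), False = absent (NO)."""
    score = sum(bool(a) for a in answers)   # number of YES responses
    pct = 100.0 * score / len(answers)      # percentage sanitary risk score
    if score >= 7:
        band = "very high risk"
    elif score >= 5:
        band = "high risk"
    elif score >= 3:
        band = "medium risk"
    else:
        band = "low risk"
    return score, pct, band

# Seven of eight risk factors observed -> 87.5%, as for Nandoni Dam above.
print(sanitary_risk([True] * 7 + [False]))  # (7, 87.5, 'very high risk')
```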
Water quality analysis using hydrogen sulphide test
The study revealed that during the wet season almost all the surface water samples from rivers and dams tested positive for H2S production, with the exception of the Mutshindudi River, where 75% of water samples tested positive (Fig. ). No H2S-producing bacteria of faecal origin were found in the water samples of the springs in Tshilapfene (during both wet and dry seasons), Tshivhulani, and Dididi (during the wet season). Only two springs yielded H2S-producing bacteria of faecal origin: 100% of the Tshidzini Spring water samples tested positive for H2S production in both dry and wet seasons, while 100% of the Tshivhulani Spring water samples tested positive during the dry season. None of the borehole water samples tested positive for H2S production in either season.
Correlation between sanitary risk score and H2S strip test results
The overall results showed a significant and strong positive correlation (r = 0.623, p = 0.003 in the wet season and r = 0.504, p = 0.017 in the dry season) between sanitary risk scores and H2S strip test results.
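A minimal sketch of this correlation analysis (Python with SciPy standing in for the SPSS computation described in the Methods; the paired values are invented placeholders, not the study data):

```python
# Pearson correlation between per-source sanitary risk scores and the share
# of H2S-positive samples. The paired values are illustrative placeholders.
from scipy.stats import pearsonr

risk_score_pct    = [100.0, 87.5, 87.5, 62.5, 50.0, 37.5, 25.0, 12.5]
h2s_positive_rate = [1.00, 1.00, 0.75, 0.75, 0.50, 0.25, 0.00, 0.00]

r, p = pearsonr(risk_score_pct, h2s_positive_rate)
print(f"r = {r:.3f}, p = {p:.3f}")
# The study itself reports r = 0.623 (wet season) and r = 0.504 (dry season).
```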
Public health concerns are brought on by unsafe water and poor sanitation and hygiene, and some of the resulting health conditions may even be fatal.
According to the Constitution of the Republic of South Africa, 1996, everyone has the right to safe drinking water and proper sanitation. However, this is a luxury in rural communities and a scarce resource in peri-urban areas and townships. Rural communities are isolated, making it challenging for national surveillance agencies to visit regularly and provide advice on concerns relating to the safety of drinking water. Water and sanitation in rural areas are less closely monitored, and contamination of drinking water stems from non-point pollution sources, which are difficult to manage. Therefore, community members must find a way of managing their water sources and equip themselves with water quality monitoring tools. Understanding the main causes of water contamination and how to mitigate them depends to a large extent on education. According to the demographic data from the research region, 97.7% of people attended school, and only a small percentage did not. Therefore, with the help of workshops and training on basic household water quality monitoring tools such as the H2S test and sanitary inspections, community members will be able to monitor their water quality and take appropriate action as needed. Our findings revealed that 70.5% of people are employed, meaning that if they prioritise their water safety, they will be able to purchase the inexpensive H2S test paper strips. According to current figures, environmental factors account for 94% of the burden of diarrhoeal illnesses, with contaminated water, inadequate sanitation and poor hygiene serving as the primary causes. The H2S strip test could be very helpful in our study area, since 40% of the surveyed people who experience diarrhoea indicated that it occurs frequently. In the present study, it was noted that every village in the VDM has a designated water scheme that supplies households with treated potable water; however, the supply is not reliable. This was confirmed by Murei et al., who pointed out that most of the water pipes in the VDM may be dry for weeks, months and even years. Residents therefore end up depending on untreated or contaminated surface water and groundwater. In this research area, various alternative water sources such as rainwater, rivers, dams, hand-dug wells, springs and boreholes were found to be in use. Unfortunately, various domestic animals such as cattle, donkeys, goats and dogs were found near water sources. Animal waste contamination of recreational waters likewise poses a risk to human health, since waterborne disease agents such as Campylobacter spp., Cryptosporidium parvum, and E. coli O157:H7 can be transmitted from animals to humans. People who use the water from these rivers for household purposes may be at risk of health problems if nothing is done. Sanitary practices should therefore consider local knowledge of probable animal disease sources and environmental pathways to people. The present findings also revealed that mainly females (78.3%) are responsible for collecting water from the water sources, and some of them reported purifying both surface water and groundwater before using it for drinking purposes. Therefore, women should be given priority during H2S and sanitary inspection workshops and training, since they are the primary caregivers in the households. This will help in reducing waterborne diseases across such communities.
None of the respondents mentioned using ceramic filters to treat their water, even though ceramic pots are produced by rural communities in the study areas; the majority either boil their water or use bleach. Therefore, thorough knowledge of household water treatment techniques should also be shared with community members to give them the opportunity to select the most appropriate and affordable option. Agricultural activities are common sources of freshwater contamination. Many people in the study areas are subsistence farmers with small gardens in their yards, and some use animal waste as fertiliser. The majority of these activities were found close to water sources. Cattle regularly deposit manure (faecal matter) on the banks of rivers, which contaminates the water supplies. This manure may enter water bodies during periods of heavy rain and could cause eutrophication, making the water unfit for human use and for aquatic life. A study of the Nandoni Dam on the Luvuvhu River found that eutrophication remained a threat to the water quality of this dam. Hence, action must be taken to prevent and reduce the incidence of eutrophication. Human actions such as inadequate sanitation management can also contaminate groundwater. Pit-latrines and flush toilets are the on-site sanitation systems most commonly utilised in the research region; 90.9% of homes use pit-latrines because of the insufficient water supply in the area. Additionally, most people with flush toilets use groundwater from private backyard boreholes. Depending on the kind of soil and the water table level, latrines and septic tanks are frequently connected to a soak pit (or soakaway), allowing contaminants to leach directly into groundwater sources. About 70.3% of pit-latrines in the study area are near water sources. The presence of toilets in close proximity to water sources poses a high risk of faecal contamination, as microorganisms can migrate from the latrines to the drinking water source. The local government should also enforce regulations specifying the minimum distance between a water source and a toilet, depending on the soil characteristics of the residential stand and the depth of the water table. In our study area, open defaecation is practised (2.3% of the respondents of our survey stated that they still regularly defaecate in the open), and human and animal excreta have been observed close to water sources. Another problem is the disposal of soiled baby diapers close to shrubs or water sources. These activities pose a health risk to the general public, as they create risks to water sources during hazardous events such as heavy rainfall causing floods, surface runoff, and seepage. The Luvuvhu River is one of the areas in South Africa where flooding has caused significant destruction, including the loss of life, property, and infrastructure. Hence, sanitary inspections, adequate sanitation management, and education of community members are needed to minimise this risk. Based on the overall findings of the sanitary inspection, agricultural operations (20%) were the activities most frequently seen close to water sources, followed by pit latrines (18%) and open defaecation (16%). In the study area, differences were observed between the information gathered from the household questionnaires and the sanitary inspections at the water sources.
About 2.3% of homes reported open defaecation, whereas 16% of sanitary inspections found open defaecation close to a water source. According to the WHO, a licensed professional must travel to the site of the water supply as part of a sanitary inspection and thoroughly inspect the neighbourhood for circumstances that could lead to contamination. This study therefore also recommends that sanitary inspections be done at water source locations, which will require inspectors or community members to visit the fields and make observations according to checklists, recording all hazards observed. Sanitary risk scores were computed in accordance with WHO guidelines. The present findings showed that drilled boreholes had a low risk, protected springs had a medium risk, and surface water from rivers or dams had a low to moderate risk. Unprotected springs and hand-dug wells were found to have high to very high risks. These findings show that protected springs and groundwater from drilled boreholes may be safer than other water sources. Similar results were obtained by Bindra; in addition, people typically view these sources as being of considerably higher quality than more common sources like ponds and streams. These findings demonstrate how important it is to safeguard water sources such as springs in order to prevent water contamination. It was found that the H2S strip test can accurately detect faecal contamination of drinking water. This strategy has proven to be a useful tool for monitoring water quality and for rapid screening of large numbers of water samples. Studies have shown that most rivers flowing through communities were highly contaminated compared to rivers exposed to less human activity. Similar results were obtained in the present study area, where samples collected at the Tshivhulani sampling points on the Mutshindudi River, which are situated far from any households, had negative H2S test values, indicating low risk. Conversely, positive H2S test results were recorded during both the dry and wet seasons for the samples collected at sampling points on the Mvudi River in Maniini and the Luvuvhu River in Dididi, which are in close proximity to households. This finding clearly shows that human activity has a greater impact on the degradation of river water quality. In rural areas, it is necessary to teach residents and household members how to use the H2S strip test for water quality assessment. The use of sanitary inspection combined with the H2S strip test could be very effective as a screening tool for faecal contamination of water sources in rural communities with low resources. This study identified the human and animal activities that may lead to water contamination, especially with faecal matter, using sanitary inspections, and also showed the effectiveness of the H2S strip test in the study area. These findings further showed a strong positive correlation (r) between the two methods. These results indicate that rural community members should be made aware of the affordable tools that are available to ensure the safety of their drinking water and should receive training in the use of these tools. Inconsistent microbial water quality testing in the VDM has been reported, with a worst-case scenario of testing once a year.
Hence, these tools can be used in local water treatment plants for regular and consistent monitoring of water quality; however, they should not be used as replacements for laboratory-based water quality tests. It is evident from the data presented in this paper that H2S-producing organisms are consistently associated with the sanitary risk in water sources. Combining a sanitary inspection with an H2S strip test in the identification of faecal contamination in various water sources can assist in detecting faecal pollution originating from humans and warm-blooded animals in springs, dams, boreholes, hand-dug wells, and rivers. Water quality assessment in rural areas could become more common and widespread thanks to the availability of affordable tools such as H2S paper strip testing and sanitary inspections to identify human and animal excrement and agricultural practices linked to water pollution. Knowledge of contamination risks will help prevent waterborne infections and reduce the number of diarrhoeal deaths. Effective water and sanitation management depends on a thorough understanding of the local water resources, as well as of their limitations and dangers. Rural communities need to be made aware of the risks associated with contamination of water sources and drinking water, and discussion forums should be set up. This study calls for disseminating knowledge and educating people in rural communities with limited resources about these cost-effective tools for water quality monitoring. Governmental organisations should also become engaged, provide alternatives, and assist community members in taking ownership of the management of their drinking water resources.
Study site
This study was conducted in the Vhembe District Municipality of the Limpopo Province, South Africa (Fig. ). The estimated population of the Vhembe District was approximately 1,393,949 people, 53.3% female and 46.7% male. The area has various water sources such as rivers, dams, springs, boreholes and hand-dug wells. It falls within the savannah biome and has a subtropical climate with hot, wet summers, cool winters and distinct rainy and dry seasons. The average annual temperature in the area is around 22–24 °C. During the dry season, which typically occurs from April to September, temperatures are cooler, with average highs ranging from 23 to 25 °C. Rainfall is highly seasonal, falling primarily during the summer months (October–March), when average highs range from 28 to 30 °C, and is heavily influenced by topography. The wettest months are January through March. The average annual precipitation ranges from 450 mm on the low-lying plains to more than 2300 mm in the mountains. Subsistence farming supports a large proportion of the population, and forestry and agriculture are two of the most important land-use activities. This study concentrated on three rivers in the Vhembe District Municipality. They were chosen because of their proximity to human communities that rely on them for water for drinking, cooking, bathing and the washing of clothes. The Luvuvhu River passes through three local municipalities of the Vhembe District Municipality: Makhado, Thulamela and Collins Chabane. Nandoni Dam is the major dam in the Luvuvhu River catchment. Some of the water sources in the Vhembe District Municipality are indicated in Fig. .
Ethical consideration
Ethical clearance approval was granted by the Faculty of Science Research Ethics Committee (FCRE) at the Tshwane University of Technology (TUT) (FCRE 2019/08/003 (FCPS 03) (SCI), 20 March 2020) and by the Vhembe District Municipality. All research methods were performed in accordance with the relevant guidelines and regulations. Informed consent to participate in the study was obtained from the borehole owners in the selected villages. The aim and objectives of the study were explained to the study participants, and sampling permission was granted.
Collection of demographic information and water and sanitation data
Data on water sources and sanitation facilities were collected from the Vhembe District Municipality in order to gain a general overview of the research area. Additionally, a general discussion about water and sanitation was held with municipal officials, local leaders, and community members. Briefly, a total of 35 rural villages in the Vhembe District Municipality were selected randomly from three different local municipalities. Water samples, demographic information (education, employment and diarrhoeal disease) and sanitation data (alternative water sources, purpose of water, water treatment method, and sanitation facility data) were collected between March 2020 and March 2021. Data from this preliminary inspection were published in 2022 by Murei et al. For the purpose of the present study, these data were used to assess the sanitary risk. It was noted in this preliminary study that community members of the Vhembe District Municipality reportedly lack the necessary information and understanding of the external risks linked to water resources and sanitation. Therefore, using sanitary inspections, an evaluation was conducted of the water resources available in the area and of the hazards related to those resources.
Sanitary inspection
A sanitary inspection was conducted to locate any risks and hazardous events that could affect the water resources. As part of the sanitary inspection, the location of each water source was visited and the local environment was thoroughly inspected for scenarios that could lead to contamination. A standardised questionnaire with a set of predetermined questions was used to conduct the sanitary inspections. Local languages were used for those who have difficulty speaking or understanding English. The most fundamental and common problems that could cause water system pollution were included in these surveys. Sanitary inspections were performed using a mixture of on-site inspection data and interviews with community members and water and wastewater operators. In general, the questions were written in such a way that only YES or NO could be used as a response (Table ). A risk factor is present when the response is YES and absent when the response is NO. The level of safety of the water supply was then graded using a risk score (very high risk: 7–8; high risk: 5–6; medium risk: 3–4; low risk: 1–2), determined by counting the number of YES responses, as described by the WHO.
Sample collection
In the three selected local municipalities of the Vhembe District Municipality, 816 water samples were collected: between March and April 2021 for the wet season and between June and August 2021 for the dry season.
Samples were collected from rivers, dams, springs, boreholes, and hand-dug wells. For each season, samples were collected at four-cycle intervals at each sampling point, resulting in 408 water samples per season (wet and dry). Table shows the total number of samples collected for each sampling site per water source. The water sampling points were in different areas, which included the Mutshindudi River (Tshivhulani area), the Ngwedi River (Tshidzini area), and the Luvuvhu River (Mhinga, Gandlanani, and Dididi/Maniini areas). The dams sampled were Nandoni Dam, Thathe Vondo Dam and Tshakhuma Dam. The springs located in Tshilapfene, Tshidzini, Dididi, Tshivhulani and Tshakhuma were sampled. A total volume of 1 L was collected in sterilised containers for microbiological water quality testing. Water samples were transported to the Microbiology Laboratory at Tshwane University of Technology in cooler boxes containing ice at 4 °C, and analysis was done within 6 h of collection.
Water quality analysis using hydrogen sulphide strip test
The hydrogen sulphide strip test was performed using H2S paper strips (Macherey–Nagel, Monitoring & Control Laboratories, Johannesburg, South Africa) according to the manufacturer's instructions, with slight modifications. Briefly, a test tube containing approximately 9 mL of tryptic soy broth (Thermo Fisher Scientific, Johannesburg, South Africa) was prepared and 1 mL of water sample was added; thereafter the H2S paper strip was inserted into the test tube and secured by a ball of cotton wool so as to maintain it at the top centre of the tube. Results of the H2S strip test are reported as positive or negative: a colour change of the paper strip from white to black indicates the presence of H2S gas, and thereby that the water is contaminated with bacteria of faecal origin, such as coliform bacteria.
Statistical analysis
Microsoft Excel 2019 and the Statistical Package for the Social Sciences (SPSS) Version 28 were used for statistical analysis. The correlation between sanitary risk scores and H2S gas production in the various water sources was analysed using Pearson's correlation coefficient (r).
Building a second-opinion tool for classical polygraph
aad4313f-fd7f-46c0-ad46-1aa9ff928cc2
10110587
Forensic Medicine[mh]
Safety of clients’ money and data (e.g. transactions) is at the heart of banking culture and reputation. As one of the instruments to safeguard clients, banks use polygraph screenings (PS). These are performed on job candidates to avoid hiring untrustworthy people, and, to detect infringements early, employees in sensitive roles are screened regularly. The PS topics include drug abuse, gambling addiction, insider trading, disclosure of confidential information, bribery, corruption, and misappropriation and fraud (sample screening questions are in Suppl. Table ). The finance industry is not alone in applying PS; other examples include critical sectors such as aviation, manufacturing companies, and federal law enforcement agencies throughout the world. The classical polygraph is a device that records cardiovascular activity (such as heart rate), thoracic and abdominal respiration, galvanic skin response (a.k.a. electrodermal activity, or EDA), and tremor. An examiner asks questions of, and accepts «yes» or «no» answers from, the person being screened (the examinee). There are many good overviews of classical polygraph and questioning methods. Unorthodox lie detection studies analyze video and audio (including facial expressions, pupil reaction, and delays between question and answer), electromyography (EMG), electroencephalogram (EEG), magnetic resonance tomography (MRT), or writing pattern (keystroke dynamics) in addition to, or instead of, classical polygraph data. Some of these studies have even been piloted in new fields, such as the iBorderCtrl lie detector pilot in EU airports, or the VeriPol deception detection pilot run by the Spanish police on written insurance claims. Yet, in the traditional fields, we are unaware of any cases where classical polygraphs have been substituted with unorthodox systems. The classical polygraph remains the instrument of choice in the traditional areas, such as hiring screenings and criminal and internal investigations. The polygraph has a long history of drawing criticism from psychology and law scholars, as well as from the public and the state. A major concern is that the method does not detect lies and truth reliably. And yet, “paradoxically, although Congress expressed deep concerns about the efficacy of the technology, the EPPA permits the use of lie detectors in circumstances in which the accuracy of the results is of paramount importance: national defense, security, and legitimate ongoing investigations”. Critical related work provides many arguments for why polygraph screening may fail to detect a lie or may mark a truthful answer as a lie. For example: “Polygraph tests do not assess deceptiveness, but rather are situations designed to elicit and assess fear”. A truthful junior manager may fear being called a corruptor more than a cold-blooded, corrupt senior manager fears being caught lying by a polygraph examiner. Another example of constructive critique is a grounded call for the standardization of polygraph screening procedures and examiner education. Of all these concerns, in this paper we tackle only one: the need for quality assessment (QA) of examiner work. Examiner errors happen, for example, when an examiner is inexperienced, exhausted, distracted, or biased. A simple QA solution exists: always have another examiner review the screening and confirm or disprove the conclusions of the original examiner.
To QA a polygraph examiner report, another examiner needs to review the recording of the screening, including the polygram (a graphical representation of the recorded sensor data coupled with the examiner’s questions and the examinee’s answers) and sometimes the audio and video recordings, and to compare their conclusions with the original report. In our experience, QA takes at least half the time it took to perform the screening, and an average screening takes at least two hours. Thus, QAs are costly in terms of both time and money. For this reason, and to the best of our knowledge, industrial internal security departments QA screenings infrequently or not at all. We also note that having other examiners QA all screenings is not a bulletproof solution. Some examiner mistakes come not from the examiner’s bias or fatigue, but from the fact that the case is hard; in hard cases, the second examiner may make the same mistake the original examiner did. The overview of our experimental framework is as follows. Our main approach is to train a binary classifier model and apply it to each of the 2094 real screenings in our possession to see if the model score contradicts the examiner conclusion. To avoid applying the model to screenings that it saw during training, we use standard stratified fivefold cross-validation, as sketched below. Here we hypothesize that the model will not learn to reproduce the errors that human examiners make, because the share of examiner errors is minor. We also decided to deviate from related work by not implementing polygraph examiner rules as features; we did this to avoid our model being trapped in the same way that human examiners are trapped when some rules are disputable and have exceptions. Our secondary experimental goals are as follows: (1) to ideate and test novel features that would uplift the AUC of the models, in particular features of a novel (not physiological) nature, such as job description and magnetic storms; and (2) to build models for each screening topic individually, to see if this uplifts the quality and if the AUCs of the models differ from topic to topic. The primary success benchmark of our experiments is success in finding real examiner errors in the screenings flagged by the models we trained; we implement this benchmark by handing the screenings flagged by the model as examiner errors to human examiners for verification. The secondary benchmark, used during the modelling process, is the AUC of the models. This benchmark reflects only indirectly how good the models are at catching examiner errors, because the AUC is calculated on noisy targets containing these unknown errors. The third benchmark is the best AUC in the most related work (0.85, by Slavcovic). This AUC can be considered an upper bound, because it was obtained on criminal investigation polygraph data, known to have much more predictive power than job screenings.
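The following minimal sketch (Python/scikit-learn) illustrates this out-of-fold scoring and flagging scheme; the gradient-boosting classifier, the feature matrix X and the 0.9 flagging threshold are illustrative stand-ins, not the production pipeline.

```python
# Out-of-fold scoring: every screening is scored by a model that never saw it
# during training. X: one feature row per screening (stand-in for the real
# feature pipeline); y: examiner conclusion (1 = DI, 0 = NDI).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import StratifiedKFold

def out_of_fold_scores(X: np.ndarray, y: np.ndarray, n_splits: int = 5) -> np.ndarray:
    scores = np.zeros(len(y))
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=0)
    for train_idx, test_idx in skf.split(X, y):
        model = GradientBoostingClassifier().fit(X[train_idx], y[train_idx])
        scores[test_idx] = model.predict_proba(X[test_idx])[:, 1]  # P(DI)
    return scores

# Candidates for examiner error: examiner concluded NDI, model strongly says DI.
# The 0.9 threshold is illustrative; in practice it is tuned to QA capacity.
# scores = out_of_fold_scores(X, y)
# flags = (y == 0) & (scores > 0.9)
```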
Here we report devising and field-testing an ML tool to QA the examiner reports for PS performed on a classical polygraph. A small number of reports, marked suspicious by this tool, will be handed to another examiner for QA. Such a tool allows for semi-automatic double-checking of all new and historical reports, without hiring additional examiners. An additional advantage is that if examiners are sure all their work will be QA-ed, they will make decisions more carefully. Our results neither justify nor solidify the practice of classical polygraph screenings. Rather, we consider our results a temporary and partial patch that helps to eliminate a specific type of error of this method, until better methods are devised and put into practice. More broadly, we believe we make a step towards rethinking classical polygraph practices. Below we describe the steps, from a basic model to the validation of the final model, that succeeded in exposing real examiner errors in historical field screenings.
Basic second-opinion model on examiner conclusions
We built a basic second-opinion model by training on the historical data of 2094 field polygraph screening recordings (PSRs), including the Deception Indicated (DI) attributes set by the examiners who conducted the screenings. The intended use is to raise a red flag whenever an examiner conclusion contradicts the conclusion inferred by the model. We present the quality metrics of the basic model in Table a, in the column «all topics». Our major quality metrics are ROC AUC and the TPR at an FPR of 0.05. We note that these are indirect quality metrics, because they measure how well the model mimics the conclusions of the examiners. In fact, counter-intuitively, and contrary to the goals of related work, we do not want a perfect model that predicts examiner conclusions in up to 100% of screenings, because then we would flag no candidates for examiner errors. In practice, we are interested in validating that the model detects erroneous examiner conclusions, of which at this stage we had absolutely no knowledge. Tables and present the importance of each of the features based on 10 raw polygraph signals and the age and sex of the examinees; the details of second-level feature construction are described in the section on “ ”. Figure depicts where the model and an examiner’s conclusions agree and disagree, depending on the model score. We also decided to measure the quality of the model applied to each of the seven screening topics (Table a). To the best of our knowledge, we are the first to report model quality on separate topics within a standard employee screening, and this approach will help us as shown below.
Using alternative data to improve the model quality
Measuring the performance of a model requires approx. 100 examiner hours, because a real examiner must thoroughly QA dozens of PSRs flagged by the model. In case of failure, i.e. in the case of finding no examiner errors in the flagged screenings, no second chance would be given to waste another 100 h of this limited resource. We tried to maximize our one chance by doing our best to increase the quality of the basic model before flagging suspicious conclusions for manual QA. We hypothesized that information about geomagnetic storms on Earth and weather conditions in the city on the date of the screening might help the model to predict. The intuition behind this assumption is that, during storms and under different weather conditions, humans might behave slightly differently, resulting in slightly different raw physiological measurements, or the sensors might provide slightly shifted measurements, or both. We also collected the examiner ID, hoping that these data might help the model, because different examiners might provoke slightly different physiological reactions in examinees, or the polygraph devices assigned to each examiner might have slightly different signal measurement deviations.
We also collected the roles (e.g. job positions) of the examinees, because people of different education and training may tell the truth and lie differently. Table b presents the basic model re-trained with these alternative data, and Table shows the importance of these features (a full description of all features is in Suppl. Table ). The uplift each data source provides over the model based only on physiological signals is shown in Table . All alternative data types uplifted the quality of the model; however, we decided to keep only age, sex and job roles for production (Table c). Weather showed anomalously high uplift and importance, and we feared that this was because, for technical reasons, our dataset is highly unbalanced in the percentage of DI labels per city. To exclude city bias, we cut the dataset to one city, but weather still ranked high in feature importance. Thus, we believe weather is significant alternative data. However, on the full dataset, weather could leak city information, and the model could pick up the city bias from the unbalanced dataset. While the examiner ID provided a moderate uplift, the nature of this feature requires further investigation before relying on it in production. For example, if it is not the examiner ID but the examiner’s polygraph device ID that helps inference, then scoring will be wrong when an examiner changes devices.
A model built for one topic performs marginally better
Here we investigate whether training a separate model for each topic results in even better quality compared to the basic model with job position. We had enough DI labels to train for one topic only, drug abuse (137 DI labels). Table shows that we gain +2% ROC AUC (a 6% relative uplift) if we train a model for the drug abuse topic only. This allows us to speculate that people may lie differently on different topics, and thus separating the topics makes it easier for the model to learn and to infer. More data and research are needed to confirm this hypothesis.
A model built on one topic can handle other topics with varying quality
We applied the Drug Abuse model from the previous paragraph to inference on the other six topics (Table ). Compared to the universal model, the performance of the Drug Abuse model varies from topic to topic. We conclude that a model trained on one topic can handle other topics, albeit with insignificant quality degradation for some topics.
Vague questions are hard not only for people and examiners, but for models too
In Table we observed that the basic model performs significantly better on some topics (such as drug abuse and criminal history) than on others (such as unreported income or IRD violation). This observation is in line with a long-standing issue in the screenings: people simply cannot confidently answer questions when they are not sure about the answer. At the bank, we have hundreds of IRDs, dozens of pages each, not to mention the versioning, so some people are not sure whether they have ever violated a single IRD. Similarly with unreported income: some people start asking themselves questions like «if I got cash from a relative, is this an income?», etc. As with any common knowledge lacking quantitative proof, there were heated debates about whether topics like «IRD violation» are effective or need to be more specific. Our finding helped to end this never-ending discussion in our organization. We could also use this observation to improve the quality of the basic model.
Since a couple of topics apparently confuse people, the basic model must also be confused by training on them. We therefore tried to remove these confusing labels from the trainset altogether. However, and counter-intuitively, Table shows that this idea did not improve the quality of the model significantly, and the quality of inference for the confusing topics dropped or did not change. We presume we did not observe a significant positive effect because the share of confusing-topic DI labels in the trainset is insignificant.
Topic as alternative data
The universal model we built and described above does not use topics as features for training and inference. The reason is that we sought a model that can score any screening topic, not just the seven topics we have training data for. In Suppl. Table we report how topics, used as additional data for building a universal model, affect the quality of the model. We can see that knowing the topic helps the model to better score the «confusing» IRD topic, whereas the quality on other topics remains unchanged. Before adding topic labels as alternative data, we balanced the dataset with regard to topics. This balancing resulted in cutting the drug abuse DI labels fourfold. We observed in Suppl. Table that this cut decreased the inference quality on the DA topic, which had always been an unexplained leader before. The quality of the DA topic became on par with a couple of the forerunning topics, such as corruption and criminal history. This observation explains the previous domination of the DA topic: it benefited from a significantly larger minority class (DI labels) than the other topics.
Ensembling and extra data
We measured the uplift from adding 189 fresh DIs and also experimented with various model ensembling architectures. The results are displayed in Table . Cumulatively, ensembling and extra data lifted the AUC by 5% on all topics, and by up to 11% on selected topics. Ensembling is explained in “ ”.
Validating ML-based second-opinion in the field
We now have two advanced models: a Universal model (an ensemble with alternative data) and a Drug Abuse model (a one-topic model with alternative data). Here we report the summary of the test to find examiner errors among the 2094 historical field screenings. We selected screenings where the examiner concluded NDI but a model voted strongly for DI. Based on the Drug Abuse model’s top scores, we selected 15 NDI examiner conclusions as candidates for examiner errors on the drug abuse topic. Similarly, based on the Universal model’s top scores, we selected 15/5/5 NDI conclusions on the corruption/confidential information leak/criminal history topics. Thus, we ended up with 40 conclusions (candidates for examiner errors) in 36 screenings. We handed these 36 screenings to two examiners for thorough blind QA. The examiners did not know the results of the screenings and did not share their QA results with each other. The reason for performing two QAs is that, in the case of a discrepancy between the original conclusions and a single QA, we would have the word of one examiner against the word of another, which would not constitute an original examiner error per se. One screening (one conclusion on the corruption topic) was later removed from the QA procedure for technical reasons. We have extremely experienced examiners, and there is a common assumption at the bank that the examiner error rate might be anywhere between 0.0 and 1.0% of all screenings.
An examiner error is an extraordinary and critical situation that nobody remembers ever happening during the QAs performed from time to time over the years. Thus, our test success criterion was to find at least one examiner error in the 39 conclusions inside 35 screenings. The summary of the two QAs is presented in Table . The distributions of scores for the two relevant topics are shown in Fig. . By QA-ing 39 examiner conclusions in 35 screenings (out of 2094 screenings), we identified 30 problematic conclusions, where either plain examiner errors were confirmed by both QAs (13 conclusions) or the QAs did not agree (17 conclusions). The remaining 9 conclusions are model errors, where the original examiner did not make a mistake, as confirmed by both QAs. We expected that there would be some cases of QAs not concurring, because some examiner mistakes can be hard calls where a decision is not obvious. For such hard cases, usually a concilium is called, where examiners discuss their conflicting conclusions and come to an agreement. In this context, we are satisfied that a significant portion of the test (17 of 39 conclusions) ended up in a concilium. It is dangerous to rely on hard-call conclusions that require a concilium, without a concilium. We do not publish the results of the concilium, since they do not contribute to the results of the paper. We note that in several problematic screenings (DI set by one or both manual QAs), the examiners who conducted the QA made side notes that an examinee practiced counter-measures. Thus we conclude that our models catch some counter-measures. Missing a counter-measure is an examiner error by definition, but we were not sure we would catch anything beyond trivial errors. We conclude that our models are fit for a one-year pilot, in which the full inflow of new screenings (approx. one hundred a day) will be scored. Manual QAs will be mandated in case of conflict between examiner conclusions and model scores on the topics. The exact model threshold will vary during the pilot, depending in part on the current load of the examiner team. The pilot will start at the end of 2022, after interfacing with a production polygraph report system is completed.
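For concreteness, the conclusion-selection step of this field test can be sketched as follows (Python/pandas; the DataFrame layout and column names are illustrative assumptions, not the production code):

```python
# Select, per topic, the NDI conclusions that a model disputes most strongly.
# df holds one row per (screening, topic) conclusion; columns are illustrative.
import pandas as pd

def flag_for_qa(df: pd.DataFrame, quotas: dict) -> pd.DataFrame:
    """Return the top-scored NDI conclusions per topic, up to each quota."""
    ndi = df[df["examiner_conclusion"] == "NDI"]
    picked = [grp.nlargest(quotas[topic], "model_di_score")
              for topic, grp in ndi.groupby("topic") if topic in quotas]
    return pd.concat(picked)

# Per-topic quotas used in the field test reported above.
quotas = {"drug abuse": 15, "corruption": 15,
          "confidential information leak": 5, "criminal history": 5}
# qa_queue = flag_for_qa(conclusions_df, quotas)
```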
Figure depicts where the model's and an examiner's conclusions agree and disagree, depending on the model score. We also decided to measure the quality of the model applied to each of the seven screening topics (Table a). To the best of our knowledge, we are the first to report model quality on separate topics within a standard employee screening, and this approach will help us as shown below. Measuring the performance of a model involves spending approx. 100 examiner hours, because it requires a real examiner to thoroughly QA dozens of PSRs flagged by the model. In case of failure, i.e. finding no examiner errors in the flagged screenings, we would be given no second chance to spend another 100 hours of this limited resource. We tried to maximize our one chance by doing our best to increase the quality of the basic model before flagging suspicious conclusions for manual QA. We hypothesized that information about geomagnetic storms on Earth and the weather conditions in the city on the date of the screening may help the model to predict. The intuition behind this assumption is that, during storms and under different weather conditions, humans might behave slightly differently, resulting in slightly different raw physiological measurements, or the sensors might provide slightly shifted measurements, or both. We also collected the examiner ID, hoping that these data may help the model, because different examiners might provoke slightly different physiological reactions in examinees, or the polygraph devices assigned to each examiner might have slightly different signal measurement deviations. We also collected the roles (e.g. job positions) of the examinees, because people of different education and training may tell the truth and lie differently. Table b presents the basic model re-trained with these alternative data, and Table shows the importance of these features (a full description of all features is in Suppl. Table ). The uplift each data source provides over the model based only on physiological signals is shown in Table . All alternative data types uplifted the quality of the model; however, we decided to keep only age, sex, and job roles for production (Table c). Weather showed anomalously high uplift and importance, and we feared that this was because, for technical reasons, our dataset is highly unbalanced in the percentage of DI labels per city. To exclude city bias, we cut the dataset to one city, but weather still ranked high in feature importance. Thus, we believe weather is significant alternative data. However, on the full dataset, weather could leak city information, and the model could pick up the city bias from the unbalanced dataset. While examiner ID provided a moderate uplift, the nature of this feature requires further investigation before relying on it in production. For example, if it is not the examiner ID but the examiner's polygraph device ID that helps inference, then we would score wrongly whenever an examiner changes their device. Here we investigate whether training a separate model for each topic results in even better quality than the basic model with job position. We had enough DI labels to train on one topic only, drug abuse (137 DI labels). Table shows that we gain +2% ROC AUC (a 6% relative uplift) if we train a model for the drug abuse topic only. This allows us to speculate that people may lie differently on different topics, and thus separating the topics makes it easier for the model to learn and to infer.
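The uplift measurement can be reproduced as a simple ablation: train the same classifier with and without the alternative-data columns and compare cross-validated AUC. The sketch below uses synthetic data and illustrative column names; it is not our production pipeline.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 400
# Synthetic stand-in for the training table: a few "signal" features
# plus alternative-data columns (names are illustrative only).
df = pd.DataFrame({f"sig_{i}": rng.normal(size=n) for i in range(8)})
df["age"] = rng.integers(20, 60, size=n)
df["sex"] = rng.integers(0, 2, size=n)
df["job_role"] = rng.integers(0, 5, size=n)
df["di_label"] = ((df["sig_0"] + 0.02 * df["age"] + rng.normal(size=n)) > 1.5).astype(int)

base_cols = [c for c in df.columns if c.startswith("sig_")]
ablations = {
    "signals only": [],
    "+ age/sex": ["age", "sex"],
    "+ job role": ["age", "sex", "job_role"],
}
for name, extra in ablations.items():
    auc = cross_val_score(
        GradientBoostingClassifier(), df[base_cols + extra], df["di_label"],
        cv=5, scoring="roc_auc",
    ).mean()
    print(f"{name}: mean ROC AUC = {auc:.3f}")
```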
More data and research are needed to confirm this hypothesis. We applied the Drug Abuse model from the previous paragraph to inference on the other six topics (Table ). Compared to the universal model, the performance of the Drug Abuse model varies from topic to topic. We conclude that a model trained on one topic can handle other topics, albeit with minor quality degradation on some of them. In Table we observed that the basic model performs significantly better on some topics (such as drug abuse and criminal history) than on others (such as unreported income or IRD violation). This observation is in line with a long-standing issue in screenings: people simply cannot answer questions confidently when they are not sure about the answer. At the bank, we have hundreds of IRDs, dozens of pages each, not to mention the versioning, so some people are not sure whether they have ever violated a single IRD. Similarly with unreported income: some people start asking themselves questions like «if I got cash from a relative, is this an income?», etc. As with any common knowledge lacking quantitative proof, there were heated debates about whether topics like «IRD violation» are effective or need to be more specific. Our finding helped to end this never-ending discussion in our organization. We could also use this observation to improve the quality of the basic model. Since we found a couple of topics that confuse people, the basic model must also be confused by training on them, so we tried removing these confusing labels from the trainset altogether. However, and counter-intuitively, Table shows that this idea did not improve the quality of the model significantly, and inference quality on the confusing topics dropped or did not change. We presume we did not observe a significant positive effect because the share of confusing-topic DI labels in the trainset is insignificant.
Topic as alternative data
The universal model we built and described above does not use topics as features for training and inference. The reason is that we sought a model that can score any screening topic, not just the seven topics we have training data for. In Suppl. Table we report how using topics as additional data when building a universal model affects the quality of the model. We can see that knowing the topic helps the model to better score the «confusing» IRD topic, whereas quality on the other topics remains unchanged. Before adding topic labels as alternative data, we balanced the dataset with regard to topics. This balancing cut the number of drug abuse DI labels fourfold. We observed in Suppl. Table that this cut decreased the inference quality on the DA topic, which had always been an unexplained leader before. The quality of the DA topic became on par with a couple of the forerunning topics, such as corruption and criminal history. This observation explains the previous domination of the DA topic: it benefited from a significantly larger minority class (DI labels) than the other topics.
Ensembling and extra data
We measured the uplift from adding 189 fresh DIs and also experimented with various model ensembling architectures. The results are displayed in Table . Cumulatively, ensembling and extra data lifted AUC by 5% on all topics, and by up to 11% on selected topics. Ensembling is explained in “ ”.
Validating ML-based second-opinion in the field
We now have two advanced models: a Universal model (an ensemble with alternative data) and a Drug Abuse model (a one-topic model with alternative data).
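In deployment terms, this means routing each (screening, topic) pair to one of the two models. A minimal sketch, with DummyClassifier stand-ins in place of the trained Universal ensemble and Drug Abuse model:

```python
import numpy as np
from sklearn.dummy import DummyClassifier

# Stand-ins for the two trained models; in reality these are the
# Universal ensemble and the one-topic Drug Abuse model.
X_toy, y_toy = np.zeros((4, 3)), np.array([0, 1, 0, 1])
universal_model = DummyClassifier(strategy="prior").fit(X_toy, y_toy)
drug_abuse_model = DummyClassifier(strategy="prior").fit(X_toy, y_toy)

def score_topic(features: np.ndarray, topic: str) -> float:
    """Route a (screening, topic) pair to the appropriate model
    and return its probability of DI."""
    model = drug_abuse_model if topic == "drug_abuse" else universal_model
    return float(model.predict_proba(features.reshape(1, -1))[0, 1])

print(score_topic(np.zeros(3), "corruption"))   # scored by the Universal model
print(score_topic(np.zeros(3), "drug_abuse"))   # scored by the Drug Abuse model
```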
Here we report the summary of the test to find examiner errors among 2094 field historical screenings. We selected screenings where the examiner concluded NDI but a model voted strongly for DI. Based on the Drug Abuse model's top scores, we selected 15 NDI examiner conclusions as candidates for examiner errors on the drug abuse topic. Similarly, based on the Universal model's top scores, we selected 15/5/5 NDI conclusions on the corruption/confidential information leak/criminal history topics. Thus, we ended up with 40 conclusions (candidates for examiner errors) in 36 screenings. We handed these 36 screenings over for thorough blind QA by two examiners. The examiners did not know the results of the screenings and did not share their QA results with each other. The reason for performing two QAs is that, if there were a discrepancy between the original conclusion and a single QA, we would have the word of one examiner against the word of another, which would not constitute an original examiner error per se. One screening (one conclusion on the corruption topic) was later removed from the QA procedure for technical reasons. We have extremely experienced examiners, and there is a common assumption at the bank that the examiner error rate might be anywhere between 0.0 and 1.0% of all screenings. An examiner error is an extraordinary and critical situation that nobody remembers ever happening during the QAs performed occasionally over the years. Thus, our test success criterion was to find at least one examiner error in the 39 conclusions inside 35 screenings. The summary of the two QAs is presented in Table . The distributions of scores for the two relevant topics are shown in Fig. . By QA-ing 39 examiner conclusions in 35 screenings (out of 2094 screenings) we identified 30 problematic conclusions, where either a plain examiner error was confirmed by both QAs (13 conclusions) or the QAs did not agree (17 conclusions). The remaining 9 conclusions are model errors, where the original examiner did not make a mistake, as confirmed by both QAs. We expected that there would be some cases of the QAs not concurring, because some examiner mistakes could be hard calls where a decision is not obvious. For such hard cases, usually a concilium is called where examiners discuss their conflicting conclusions and come to an agreement. In this context, we are satisfied that a significant portion of the test (17 of 39 conclusions) ended up in a concilium. It is dangerous to rely on hard-call conclusions that require a concilium, without a concilium. We do not publish the results of the concilium since they do not contribute to the results of the paper. We note that in several problematic screenings (DI set by one or both manual QAs), the examiners who conducted the QA made side notes that the examinee practiced counter-measures. Thus we conclude that our models catch some counter-measures. Missing a counter-measure is an examiner error by definition, but we were not sure we would catch anything beyond trivial errors. We conclude that our models are fit for a one-year pilot, where 100% of the inflow of new screenings (approx. one hundred a day) will be scored. Manual QAs will be mandated in case of conflict between examiner conclusions and model scores on the topics. The exact model threshold will vary during the pilot, in part depending on the current load of the examiner team. The pilot will start at the end of 2022, after interfacing with the production polygraph report system is completed.
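The candidate-selection step itself is simple to express: for each topic, take the top-scored DI probabilities among conclusions where the examiner said NDI. A sketch on synthetic data, with illustrative column names:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
topics = ["drug_abuse", "corruption", "confidential_leak", "criminal_history"]
# Synthetic stand-in: one row per (screening, topic) conclusion.
scores = pd.DataFrame({
    "screening_id": np.arange(2000) // len(topics),
    "topic": np.tile(topics, 500),
    "examiner": rng.choice(["NDI", "DI"], size=2000, p=[0.93, 0.07]),
    "di_score": rng.uniform(size=2000),
})

def flag_candidates(df, topic, k):
    """Top-k conclusions where the examiner said NDI but the model is
    most confident in DI -- candidates for examiner errors."""
    ndi = df[(df["topic"] == topic) & (df["examiner"] == "NDI")]
    return ndi.nlargest(k, "di_score")

quotas = {"drug_abuse": 15, "corruption": 15,
          "confidential_leak": 5, "criminal_history": 5}
candidates = pd.concat([flag_candidates(scores, t, k) for t, k in quotas.items()])
print(len(candidates), "candidate conclusions for blind QA")
```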
Slavkovic's work on analyzing raw polygraph data twenty years ago remains the most relevant to ours. The similarities are that: (i) both studies work with raw classical polygraph data, (ii) both sets of data were collected in the field, as opposed to data collected from volunteers instructed to lie, and (iii) the accuracies of our models are roughly equivalent. We differ in the following:
i. Slavkovic warned that the data may contain examiner errors; we aimed at finding such errors and succeeded at catching examiner mistakes in historical records.
ii. We experimentally drew the lower bound of the examiner error rate in the field (≈1.5%). To the best of our knowledge, this bound did not previously exist. We believe this finding will provide factual motivation for QA.
iii. We showed the promise of novel data sources for the accuracy of lie detection models, including examiner ID, examinee job role, wind, atmospheric pressure, and geomagnetic storms.
iv. We make part of the data accessible upon reasonable request to facilitate academic research.
v. The data are of a very different nature. Our data are hiring and regular screenings of civil personnel, as opposed to Slavkovic's data from army criminal investigations.
vi. The distributions of classes (lie detected/not detected) differ significantly; our number of records is an order of magnitude higher; the classical polygraphs are of different manufacturers and decades; the data sampling rates are 31 Hz vs 60 Hz (ours vs Slavkovic's).
vii. Neither Slavkovic nor any work we know of looked into differentiating topics at training and at inference time. We showed that one can profit from both.
Differences v and vi make it hard to compare model accuracies; the nature of the data is very different. Even so, we were surprised that we did not achieve significantly better accuracy twenty years later. We agree with related work that criminal investigations are easier to classify than routine civil personnel screenings . Thus, we may have a significantly better model whose AUCs are merely on par with Slavkovic's, because the emotions in our dataset are harder to classify and because civil screening questions are much broader than criminal investigation questions. Honts and Amato recruited 80 volunteers to mock lies and truthful answers in screenings . In half of the screenings, volunteers watched videotaped questions instead of an examiner asking questions, and a special algorithm (RI Score) scored the answers instead of an examiner's evaluation. Honts and Amato conclude that the automated screening scenario was more accurate than the one carried out by a human polygraph examiner. Honts and Amato neither aimed at finding, nor found, any examiner errors. We differ in that Honts and Amato automate the screenings, while we automate examiner conclusion verification. We do not substitute the examiner with automated scoring. Moreover, in our setup, we do not show the examiner the scores of our tool (to exclude the possibility of the tool's results influencing the conclusion of the examiner). The RI Score is rule-based and apparently requires additional markup by an examiner to calculate, whereas our ML models need no markup beyond the NCCA ASCII standard. Mambreyan et al. show that artificial bias in data with regard to sex leads to overestimating the quality of deception detection models running on video . They refer to a work that built a model on a dataset of videos where 65% of women and only 27% of men lied.
A model can learn sex from video and use it to infer the truth/deception label, disregarding any other data. We made sure that we do not have artificial bias in the alternative data we use (sex, age, roles). Particularly for the sex data, we demonstrate the balance in Suppl. Table . Abouelenien et al. measure the effectiveness of physiological, linguistic, and thermal features in deception detection on a laboratory dataset of size 149 with three synthetic topics (mock crime, attitude towards abortion, and best friend) . We explore other alternative data sources, using a field dataset and real topics, and in addition our main goal is a tool to hunt for examiner errors. This is the first detailed disclosure of building and testing a second-opinion tool for the classical polygraph. Yet the subject is immense, and we may have just scratched the surface. To start with, the manual field validation (double QA-ing candidates for examiner errors) covered only 39 conclusions but, as we explained, even this tiny test required approximately 80 examiner hours (not including a concilium to sort out discrepancies between the two QAs). We hope to grow these experimental statistics after putting the models into a live pilot. Our trainset is contaminated with a few unknown examiner errors and, at least until the field validation, we ran the risk of finding no examiner errors because the models might learn to make all the same mistakes examiners do. Making a gold-standard trainset involves double QA-ing hundreds of screenings. While the field validation has proven that our second-opinion tool catches some examiner errors, we still cannot exclude the risk that the models are confused by the most common examiner mistakes. Running our tool in production will slowly but surely grow the golden dataset of screenings QA-ed by three examiners (and a concilium in some cases), thus producing a first-ever gold standard accessible to academics. With manual QAs we validated errors where an examiner set NDI erroneously, but we did not investigate erroneous DI labels because we lack DI labels: less than 7% of the screenings in the archive contain them. Applying our work to detect this second type of error is a direction for future work. We decided not to implement examiner textbooks because in the examiner community we hear many discussions of exceptions to almost any textbook rule. To avoid being dragged into these heated, undocumented discussions, we decided to use features that do not depend on examiner textbooks or scoring methods. We started with plain and simple raw signal features (min, mean, max on a window). We also started with gradient boosting models. The plan was to up our feature and model game after having the baseline research pipeline built. When we obtained 0.85+ AUC examiner conclusion inference quality, and keeping in mind that we must avoid a perfect model as explained above, we decided that this is enough for a pilot. We believe that developing sophisticated raw signal features and employing neural networks better suited for time series (such as LSTMs) is a good avenue for future work. We tested several unorthodox data sources for uplift to conclusion prediction models. While there are some preliminary and promising results, most of these are inconclusive and need more data and investigation.
Ethics information
All methods were performed in accordance with relevant guidelines and regulations. This study neither required nor used any human participants.
The study analyzes legacy polygraph screening data that is collected as part of the standard screening process for hiring candidates and employees with critical roles. The hiring candidates and employees sign a written agreement to be screened, including informed consent for the Bank to store and to utilize the screening data. Internal Security of the Bank anonymized the data before handing it over to the authors of this study.
Dataset description
We possess an archive of 2094 field polygraph screening recordings (PSRs), including Deception Indicated (DI) attributes set by the examiners who conducted the screenings. These polygraph screenings (PS) were performed on bank personnel with critical roles before hiring, before promotion, or every year, depending on their role. A PS includes a subset of 14 topics, including drug abuse and corruption. PSRs store the physiological signals of the examinee, audio, and the questions as strings. Each question's data includes three timestamps for each repetition: the start of the question by the polygraph examiner (PE), the end of the question, and the moment of the answer. Each question has a type assigned to it (see Suppl. Table for the list of question types). In addition to the physiological signals (listed in Suppl. Table ), the examinee's sex, age, and job position are recorded. The screenings were performed on a Polyconius polygraph, model 7.
Feature engineering
The basic task was to infer examiner conclusions (DI or NDI) for a certain topic in a screening. To build a model, we presented the data in the following format: each row in the dataset is a record of a PS for a certain topic in a particular test for an examinee. The targets are DI or NDI. There may be a bias due to such target-setting, because an examinee may not lie in all tests on a topic during a screening. Physiological signals were extracted from a time window defined by the timestamps of the question's data. Thus, initially a row is a time series of the physiological signals for a given repetition on a given topic. For every repetition of the relevant and comparison questions we generate the basic statistics: minimum, maximum, mean, amplitude, and standard deviation. Further, we used minimum, maximum, mean, and standard deviation as aggregate functions at each step. We grouped the repetitions' data by question. An additional feature characterizes the difference between the first repetition and the subsequent ones. Similarly, we grouped the questions' data by topic for each test. In the end, each row in the dataset comprises 600 features extracted from a PSR for a certain topic in a particular test (Suppl. Fig. ) for an examinee, with the label (DI/NDI).
Models
We used gradient boosting with a two-level stacking ensemble to avoid the curse of dimensionality. The first-level model is trained on the 600 physiological features for a topic, inferring DI/NDI for each test inside a screening. The second-level model aggregates the output of the first-level model over all tests for a topic. At the second level we have the following features: pred_proba_max—the maximal probability of DI among the tests; pred_proba_mean—the mean probability of DI; pred_proba_min—the minimal probability of DI among the tests; pred_proba_diff—the difference between the maximal and mean values. These probabilities are concatenated with alternative data (biographical data, weather data, geomagnetic storm data). The resulting dataset is fed to the second-level model, which outputs the probability of DI for a topic of a screening.
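To illustrate the two levels, here is a minimal sketch of the per-repetition statistics and the second-level aggregates; the function names and the example signal are ours, not the paper's.

```python
import numpy as np
import pandas as pd

def repetition_stats(signal: np.ndarray, prefix: str) -> dict:
    """Basic statistics over one question-repetition window."""
    return {
        f"{prefix}_min": signal.min(),
        f"{prefix}_max": signal.max(),
        f"{prefix}_mean": signal.mean(),
        f"{prefix}_amp": signal.max() - signal.min(),
        f"{prefix}_std": signal.std(),
    }

def second_level_features(test_probas: pd.Series) -> dict:
    """Aggregate first-level DI probabilities over all tests for a topic."""
    return {
        "pred_proba_max": test_probas.max(),
        "pred_proba_mean": test_probas.mean(),
        "pred_proba_min": test_probas.min(),
        "pred_proba_diff": test_probas.max() - test_probas.mean(),
    }

# Toy usage: ~3 s of one signal at 31 Hz, and three first-level test scores.
window = np.sin(np.linspace(0, 3, 93))
print(repetition_stats(window, "sig0"))
print(second_level_features(pd.Series([0.7, 0.4, 0.55])))
```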
Basic model
This model does not receive information about topics during training and inference (Fig. ). Information about the topic type is saved for later aggregation by screening; for example, the drug abuse (DA) topics of each test are aggregated per screening. The second-level model's result is an estimate of the probability of DI for any given topic of a screening.
One-topic model
The logic of model construction and feature generation is the same as in the basic model. The difference is that for training we used the features of one topic only. The data is filtered to a single topic before the first-level model is applied. Figure shows how the ensemble is trained on the drug abuse topic while the other topics are filtered out.
Universal model
We decided to use the best sides of the models described above, so we built an ensemble of the existing architectures. After a series of experiments, the best-performing architecture averaged the confidences of the following models (Fig. ): a basic model built on boosting, using alternative data; a one-topic model built on boosting, using alternative data; and a basic model built on a random forest. This ensemble was applied to all topics except drug abuse. Aside from the traditional advantages of ensembles, the rationale for combining models of different architectures (e.g. gradient boosting and random forest) is that it will hopefully eliminate some pure model errors and highlight label (target) errors, which constitute examiner errors.
Training
Since we did not have much data (2094 files), we used stratified group K-fold validation to evaluate each historical screening. We set K to 5 and, using the framework of stacking standard models that we developed, obtained 5 models, each trained on 80% of the data. The custom values we used as hyperparameters for the standard classifiers of the open-source libraries are presented in Table . Standard hyperparameters can be found in the documentation of the open-source ML frameworks; links to the documentation are in Suppl. Table .
Validating
We evaluated the quality of the model using the test set of each validation step described in the Training subsection above. Thus, at this stage, we evaluate the success of the model in the classical machine-learning sense, i.e. as improvements in the main metrics (ROC AUC, TPR, FPR).
Testing
Our main focus was not to build a high-quality ML model for polygram classification, but to use an ML model to detect type I model errors (false positives) in polygraph screenings. Since a type I error from the model's perspective is equivalent to a type II error (false negative) from the examiner's point of view, this procedure allowed us to find potential labeling errors in our historical sample, where an examiner did not indicate deception when deception should have been indicated. After we obtained the desired results at the validation stage, we used an expensive resource, the examiners, to re-check polygrams that were likely to contain examiner errors, as explained above in the “ ” and “ ” sections.
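As a sketch of the training scheme, the following uses scikit-learn's StratifiedGroupKFold with K = 5, grouping rows by screening so that no screening is split across folds, and averages two members' confidences in the spirit of the Universal ensemble; the data and model settings are toy stand-ins.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedGroupKFold

rng = np.random.default_rng(0)
n = 500
X = rng.normal(size=(n, 20))            # toy stand-in for the feature table
y = rng.integers(0, 2, size=n)          # DI/NDI labels (toy)
groups = rng.integers(0, 100, size=n)   # screening id: a screening never
                                        # appears in both train and test

oof = np.zeros(n)                       # out-of-fold DI probabilities
cv = StratifiedGroupKFold(n_splits=5, shuffle=True, random_state=0)
for train_idx, test_idx in cv.split(X, y, groups):
    gbm = GradientBoostingClassifier().fit(X[train_idx], y[train_idx])
    rf = RandomForestClassifier(random_state=0).fit(X[train_idx], y[train_idx])
    # Ensemble-style averaging of the members' confidences.
    oof[test_idx] = (gbm.predict_proba(X[test_idx])[:, 1]
                     + rf.predict_proba(X[test_idx])[:, 1]) / 2

print("out-of-fold ROC AUC:", roc_auc_score(y, oof))
```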
Supplementary Information.
Building Resident Quality Improvement Knowledge and Engagement Through a Longitudinal, Mentored, and Experiential Learning-Based Quality Improvement Curriculum
After completing the curriculum, learners will be able to:
1. Understand key quality improvement (QI) concepts and principles, as measured by the Quality Improvement Knowledge Application Tool Revised.
2. Apply their QI knowledge by designing and implementing team-based QI projects using the Institute for Healthcare Improvement model for improvement.
To provide high-quality patient care and improve patient outcomes, quality improvement (QI) training has been increasingly recognized as an essential component of resident medical education. Two of the six core competencies for resident training put forth by the Accreditation Council for Graduate Medical Education (ACGME) are practice-based learning and improvement and systems-based practice. Residents must demonstrate the ability to analyze the care they provide and play an active role in system improvement projects using QI methods. In addition to the core competencies, the ACGME also lists internal medicine milestones for which residents are expected to create, implement, and assess sustainable QI initiatives at the institutional level. Residency QI curricula reported in the literature exist in varying forms, such as distinct QI rotations, asynchronous teaching via online modules, longitudinal experiential learning, chart audits, and participation in institutional QI initiatives. – Previous reviews have described characteristics of successful trainee QI curricula, including learner buy-in, adequate teacher expertise and coaching, mixed teaching methods, inclusion of a QI project with adequate time for project completion, and a supportive institutional culture. , There is currently no consensus, however, on the most effective components of QI curricula, and few studies have included objective measurements of curricular success. Conversely, challenges to engaging residents in QI previously described in the literature include competing demands, didactics not connected with meaningful work, suboptimal and incomplete experiential learning, lack of clear accountability, lack of timely and relevant data, and lack of a faculty coach and role model. Our internal medicine residency program aimed to develop a longitudinal, experiential, and multilayer mentored QI curriculum that would incorporate the key components of successful QI curricula, address some of the barriers identified in the literature, and meet ACGME core competencies and resident milestones. Here, we present our curriculum, which was developed using QI process improvement principles, and demonstrate objective improvement in residents' knowledge of QI principles as well as successful institutional changes.
Curriculum Overview
The QI curriculum was composed of a mandatory five-part longitudinal series presented in dedicated 3-hour educational sessions during ambulatory education blocks. Our internal medicine residency was divided into four large cohorts rotating through a dedicated 2-week ambulatory clinic every 8 weeks. Residents participated during their second year of residency, and each session was taught to 10–12 second-year residents in each block. Since 2016, the curriculum has been led by the internal medicine chief resident for quality and safety (CRQS) and overseen by their lead QI faculty mentor. The CRQS was an annual chief residency position sponsored by the Department of Veterans Affairs National Center for Patient Safety (NCPS).
Each chief resident had previously completed the residency QI curriculum and the Institute for Healthcare Improvement (IHI) Basic Certification in Quality and Safety. The chief resident also concurrently participated during their chief year in a national QI and patient safety curriculum, a longitudinal and experiential program organized by the NCPS CRQS program. Each of the five QI sessions had specific objectives developed following the IHI model for improvement and incorporated ACGME core competencies and milestones. Residents applied the QI tools they had learned by working on a concurrent longitudinal QI project with dedicated time during these sessions. Each of the five sessions is described below.
Session 1—Introduction to Basic QI Concepts
Residents were given an overview of the QI curriculum and the reasons for learning QI. To generate interest in QI training, residents were asked prior to the session to provide examples of challenging aspects of clinical care, referred to as pain points. During the session, the CRQS discussed how QI tools could be applied to address some of these pain points. Using a real-world internal medicine patient care vignette, residents learned common QI tools, including how to define a problem from a QI perspective by identifying key stakeholders, determining root causes through fishbone diagrams and the 5 Whys tool, and creating a problem statement based on their investigations ( and ).
Session 2—Applying QI Concepts to Resident-Initiated QI Projects
Residents learned how to write SMART (specific, measurable, attainable, relevant, and time-bound) aim statements and brainstormed interventions using IHI change concepts, intervention action hierarchies, and impact-versus-effort grids . The QI projects residents brainstormed were conducted in parallel with the curriculum in groups of two to four residents. Residents were given guidance on choosing a feasible QI project through discussion of the role of personal and stakeholder interest, project scope, and mentorship. Using a dedicated QI workbook with defined prompts , each project group applied the QI tools learned in session 1 to its project. The groups then presented their projects to their peers for feedback. Residents were expected to identify and meet with a faculty mentor prior to session 3.
Session 3—Data Collection and Interpretation
Residents learned the similarities and differences between clinical research and QI. The remainder of the session focused on data collection and analysis, including creating outcome, process, and balancing measures and learning how to interpret run charts to identify nonrandom variation in data . Each project team had dedicated time to develop its data collection plan and define project-related measures using the QI workbook.
Session 4—Project Presentation
The project teams presented their work-in-progress projects to a panel of faculty QI experts and clinical leaders for feedback. Components of this 10- to 15-minute presentation included project background, problem statement, process map, fishbone diagram, aim statement, intervention ideas, and a data collection plan. The faculty QI experts asked questions and provided feedback about the projects. Residents used a template to standardize their project presentations to the QI experts and institutional leaders.
Session 5—Spreading Change
Residents learned how to implement change via PDSA (plan-do-study-act) cycles, change management, and sustainability planning.
They were taught common pitfalls of QI projects and how to set their projects up for long-term success . They also completed their QI project charter and developed a road map for project planning for the following year, when they would be expected to continue working on their QI projects .
Multilayer QI Project Mentorship
All resident QI projects received threefold mentorship from a faculty mentor, the CRQS, and institutional QI and clinical leaders. Prior to the start of the QI curriculum each year, the CRQS recruited and developed a list of faculty mentors with prior QI training and/or ongoing institutional QI projects. Residents identified their project and faculty mentor either through this list or independently, based on their area of interest. The CRQS provided faculty mentors with information about mentorship expectations and project milestones . The CRQS also checked in with project groups every clinic block and provided added longitudinal mentorship via their QI expertise, including guidance regarding appropriate and feasible project scopes within the time constraints of residents' other clinical responsibilities. Residents presented their work-in-progress projects to local QI experts and clinical leaders for added feedback, organizational knowledge, and institutional support.
Resources
People and responsibilities
The CRQS and their lead QI faculty mentor met every other week for 30 minutes to plan and review the curriculum. The CRQS spent an additional 10–15 hours monthly developing and coordinating the curriculum and mentoring resident projects. Faculty mentors were expected to meet with their resident group at least once every 8 weeks to establish appropriate project scope, investigate the root causes of their project problem, and develop high-impact interventions. Faculty QI experts were additional faculty at our institution with QI expertise and/or leadership roles. They were invited to attend the work-in-progress sessions (four total across all ambulatory blocks) ad hoc. A faculty guide offered teaching objectives, team goals, and timelines for each session. The QI curriculum was dynamic and underwent real-time and annual changes based on our trainees' educational needs, feedback from trainees and faculty, and results from validated QI skill assessment surveys.
Materials
Materials for the sessions included the following:
• PowerPoint presentations for each session.
• Resident QI workbook.
• Faculty mentor expectations and milestones.
• Potential faculty mentors and project ideas.
• Cloud-based sharing technology so that resident teams and the CRQS could interface and review dedicated project workbooks on an ongoing basis.
Space
Before the COVID-19 pandemic, the sessions required a conference room with a projector. During and after the COVID-19 pandemic, sessions were conducted as virtual meetings on Zoom.
Data Collection and Analysis
Two resident cohorts completed the validated Quality Improvement Knowledge Application Tool Revised (QIKAT-R) prior to and at the completion of the five-part education series during the 2020–2022 academic years. A publicly available tool assessing the global application of core QI skills, the QIKAT-R has been used previously in this pre-post testing format. The QIKAT-R was graded on a scale of 0 (poor) to 9 (excellent). The CRQS distributed three unique QIKAT-R scenarios to each resident both before and after the curriculum. The CRQS was masked to the resident who completed each QIKAT-R but not to pre-post assessment.
One additional investigator also scored the QIKAT-R and was masked both to pre-post intervention and to the resident. For data analysis, pre- and postcurriculum QIKAT-R results were analyzed via unpaired t tests using GraphPad Prism 8. In the 2020–2022 academic years, residents were also offered optional surveys at the end of their QI sessions to gather their overall impressions and feedback. Resident projects were collated by theme, such as guideline concordance, communication, wellness, electronic medical record documentation, and more.
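For readers who want to reproduce the analysis outside GraphPad Prism, an unpaired t test takes a few lines of Python with SciPy; the scores below are hypothetical, not the study's data.

```python
from scipy import stats

# Hypothetical QIKAT-R totals (0-9 scale) before and after the curriculum.
pre = [4.0, 5.0, 4.5, 3.5, 5.5, 4.0]
post = [6.5, 7.0, 6.0, 5.5, 7.5, 6.5]

t, p = stats.ttest_ind(pre, post)  # unpaired (independent-samples) t test
print(f"t = {t:.2f}, p = {p:.4f}")
```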
The CRQS position was introduced to our residency program in 2016, and the curriculum was developed and piloted during the 2016–2017 academic year. Using feedback from residents and faculty, the curriculum underwent iterative revisions. The full curriculum and workbook were presented to the first cohort of second-year residents starting in the 2020–2021 academic year. Since 2016, a total of 234 internal medicine residents have completed our QI curriculum and developed 67 QI projects. Project themes have included improving transitions of care, reducing resident burnout, standardizing documentation, improving interdisciplinary communication, increasing age-appropriate cancer screening, and more. Our trainees have presented their QI work at various local, regional, and national venues.
Prior to the COVID-19 pandemic, all 11 of the QI projects in the 2018–2019 academic year were presented at the institution's annual Graduate Medical Education QI conference, three projects were presented at regional or national conferences, and one project was published as a journal article. Since the COVID-19 pandemic, nine QI projects have been presented at the institution's conferences and two projects at regional or national conferences. In addition, several QI projects have been adopted at a systems level, including the use of fecal immunochemical testing for colorectal cancer screening at one of the resident clinic sites and a process for designating surrogate decision makers on inpatient medicine services, the latter of which received an award for best abstract at our institutional quality conference. After initiation of the full curriculum in the 2020–2021 academic year, 59 residents (71%) completed the QIKAT-R precurriculum, and 65 residents (78%) completed the evaluation postcurriculum. The mean QIKAT-R scores were 4.56 precurriculum and 6.48 postcurriculum (p < .001). Two resident cohorts totaling 83 residents from the 2020–2022 academic years were also asked to rate the quality and effectiveness of the curriculum. Using a 5-point Likert scale (1 = poor, 5 = excellent), residents who completed anonymous surveys rated their satisfaction with the QI curriculum as 4.10 (n = 61, 73% response rate). On another 5-point Likert scale (1 = strongly disagree, 5 = strongly agree), self-assessed understanding of writing an effective problem statement was rated 4.73 (n = 15, 18% response rate), writing an effective aim statement 4.47 (n = 17, 20% response rate), stakeholder identification 4.67 (n = 15, 18% response rate), creating fishbone diagrams 4.07 (n = 42, 51% response rate), creating process maps 4.29 (n = 42, 51% response rate), and identifying interventions 4.47 (n = 17, 20% response rate). We developed and implemented a longitudinal, experiential, and mentored QI curriculum offered during dedicated ambulatory educational sessions for internal medicine residents. Our QIKAT-R results, resident feedback, and project output demonstrate that the curriculum led to improved trainee QI knowledge and systems-level improvements. Our curriculum contains many of the mechanisms associated with successful QI curricula and addresses many of the challenges to resident engagement identified in the previous literature, along with additional features that add to the richness of the curriculum and the resident experience. , The curricular strategy is a hybrid teaching model using just-in-time learning, combining didactic lectures on key QI concepts with experiential, hands-on learning to reinforce important QI principles. Our curriculum covers topics ranging from continuous process improvement to process mapping and change management; a prior study showed that only a quarter to a half of resident curricula contain these topics. Our curriculum is also highly structured, with clear learning objectives for each session and project deliverables provided through the QI workbook, work-in-progress presentations, and QI charters. Given competing demands with clinical workload, residents have protected time during embedded ambulatory education sessions for project work and operate in teams to facilitate project progress and completion.
Furthermore, adult learning theories show that individuals learn best when they need to acquire the knowledge and skills for goal fulfillment. , Therefore, we use real-world internal medicine cases and resident-generated examples of clinical challenges and pain points to increase resident buy-in and interest in QI. Moreover, during our curriculum, residents are given the opportunity to develop their own projects based on their clinical interests. Mentorship and bidirectional alignment of institutional projects and learner-selected QI projects are often cited as keys to successful QI curricula, but their impact has not been objectively measured. We believe the structured multilayer mentorship from faculty and the CRQS within our curriculum is a critical component for successful resident QI projects. We have found that proactively recruiting and providing a list of faculty mentors with ongoing institutional QI projects and interest in trainee participation eases the identification of an appropriate mentor and aligns resident work with institutional priorities. Faculty and CRQS mentorship also results in appropriate project scoping and more effective interventions because of the mentors' prior clinical and research experiences and institutional knowledge. Stakeholder and institutional support for resident-initiated projects also eases implementation of project interventions. Therefore, faculty mentorship aids in removing or minimizing barriers to QI project work that would otherwise lead to resident frustration and disengagement with QI if left unaddressed. The CRQS provides faculty mentors with milestones, which sets clear expectations (i.e., outcome and process measures) and facilitates project navigation. The residents receive additional feedback from other faculty QI experts and clinical leaders during the work-in-progress session, which helps to generate institutional interest and support for resident projects and demonstrates to the residents the importance of QI within our department. This added layer of mentorship and feedback by local QI experts and clinical leaders is unique to our curriculum and, to our knowledge, has not been previously described in the literature. We also feel that inclusion and leadership of the CRQS—who is also receiving mentorship and gaining additional higher-level QI skills—are critical components. Prior to implementation of this position in 2016, only a handful of residents had successfully completed a QI project, with most working individually with a faculty member. The limitations of our curriculum include the lack of a dedicated staff member to assist with access and retrieval of institutional data, which has been further exacerbated during the COVID-19 pandemic due to shifts in institutional priorities. We have addressed several of these delays through novel electronic health record education sessions that teach Epic self-reporting tools for timely data access and retrieval, given Epic's use at our institution. Our program's analysis and objective success were also affected by a loss of dedicated ambulatory time for QI education during the COVID-19 pandemic. The pandemic resulted in shifts in staffing requirements at our institution, which were reflected in decreased scholarly output due to competing clinical responsibilities.
Additionally, while we appreciate the importance of interdisciplinary engagement for successful QI projects, it remains challenging to coordinate this during ambulatory education sessions given competing clinical and educational demands. Rather, residents are encouraged to reach out to interdisciplinary stakeholders and meet outside of dedicated time when needed. Furthermore, the anonymous resident surveys had variable response rates and therefore may not fully capture resident understanding of key QI concepts. However, most residents had a favorable view of the QI curriculum, and approximately three-quarters of trainees completed the previously validated QIKAT-R with a significant increase in their scores postcurriculum. The cases provided in our curriculum are built on real-world experiences at our institution, which helps with resident engagement for our program but may not necessarily be applicable to other centers. Therefore, in our faculty guide, we have provided examples of how our cases can be updated for other specialties for increased engagement. Despite these challenges, we believe our curriculum is a comprehensive introduction to QI for residents. Through our longitudinal curriculum and with faculty and institutional support, our residents apply core QI concepts taught in didactics to their own QI projects and thereby implement meaningful change at our institution. Our curriculum adds a new dimension to the existing published QI educational content by introducing multiple avenues for QI mentorship, creating flexible time for project work, developing a process for resident accountability and mentor guidelines, incorporating flexible and standardized project work tools, and generating institutional support for residents to disseminate their QI work at various local, regional, and national meetings. We believe our curriculum fulfills the ACGME core competencies and can serve as a model for other clinical training programs.
• Session 1 Slides.pptx
• Session 1 Workbook.pptx
• Session 2 Slides.pptx
• Session 2 Workbook.pptx
• Session 3 Slides.pptx
• Session 4 Work-in-Progress Presentation Template.pptx
• Session 5 Slides.pptx
• QI Charter Template.docx
• Faculty Milestones.docx
• Faculty Guide.docx
• Resident Survey.docx
All appendices are peer reviewed as integral parts of the Original Publication.
How to diagnose COVID-19 in family practice? Usability of complete blood count as a COVID-19 diagnostic tool: a cross-sectional study in Turkey
Background As it is known, the novel COVID-19, first seen in Wuhan, China, was defined by the WHO on 31 December 2019 and declared a pandemic on 11 March 2020. It continues to spread around the world, with more than 300 million positive cases and more than 5.5 million deaths as of January 2022. This corresponds to roughly 1 out of every 25 people worldwide having tested positive for COVID-19 and shows that the struggle must continue. The diagnosis of COVID-19 is still performed worldwide by the real-time reverse-transcription PCR (RT-PCR) test. However, given the cost of test kits and long turnaround times, an inexpensive, practical and fast-resulting ancillary test for the diagnosis of COVID-19 would be very beneficial for physicians. The complete blood count test (CBC), found to be the cheapest test in COVID-19 research, can be easily performed in all health institutions, including family medicine, so its usability as a diagnostic tool in COVID-19 deserves investigation. There are some studies in the literature on the use of CBC test parameters in COVID-19. In these studies, parameter ratios such as the neutrophil/lymphocyte ratio (NLR) and platelet/lymphocyte ratio (PLR) were investigated, and it was determined that an increased NLR in particular was associated with poor prognosis and mortality. In addition, among these studies, a study reporting that the NLR is higher in COVID-19-positive patients at the time of first admission to the hospital is noteworthy for demonstrating its diagnostic usability. There are also some studies on the usability of machine learning models in detecting COVID-19, confirming that the CBC can be used diagnostically. These studies highlight the potential of the CBC as a diagnostic tool, and a comprehensive study of the use of CBC parameters in the diagnosis of COVID-19 is therefore needed for use in family medicine. Objective With the decrease in mortality rates in COVID-19 and the increasing tendency to regard COVID-19 cases as colds like influenza, more practical and cheaper methods are needed, especially in family practice. For this purpose, the use of CBC test parameters in the diagnosis of COVID-19 was investigated by comparing the admission CBC test results of patients with a positive or negative RT-PCR.
Study design This study was designed as a retrospective, cross-sectional study. Setting The study was carried out in a tertiary university hospital, one of the two large hospitals in one of the large cities in the Eastern part of Turkey, serving the surrounding provinces and approximately 4.5 million people. Informed consent Patients admitted to the hospital and undergoing CBC and RT-PCR tests for COVID-19 were included in the study. During these procedures, the necessary informed consents were obtained by the relevant health professionals. However, because of the retrospective design of the study, based on scanning the hospital archive, it was not possible to obtain informed consent from the patients for participation in our study. Patient and public involvement Patients or the public were not involved in the design, conduct, reporting, or dissemination plans of our research. Participants In our study, patients over the age of 18 who were diagnosed as U07.3 (COVID-19) or Z03.8 (observation for other suspected diseases and conditions) according to ICD-10 (International Classification of Diseases 10th Revision) codes during the study period were searched in the hospital archive. Those who had these diagnoses and had a CBC test with an RT-PCR test on the same day were included in the study. Patients over the age of 70 were excluded from the study because of the possibility of concomitant haematological problems in advanced age. The eligible patients were investigated for any disease or medication that could affect the CBC test result by searching their other previous diagnoses and chronic medication reports. Oncological cases, patients with malignant neoplasm, haematological or chronic disease, patients on relevant medications, and pregnant women were excluded from the study. The RT-PCR test results for COVID-19 of the patients included in the study were retrieved from the database of the Ministry of Health. Patients with missing test results or without a simultaneous CBC test were excluded. For patients with multiple hospital admissions and multiple RT-PCR test results within 3-month intervals, the subsequent tests were ignored because of the possible effects of previous COVID-19 positivity on the CBC.
For some patients whose initial RT-PCR test was negative, the test was repeated 48 hours later, and this second test result was positive. In these patients, the first negative test result was accepted as a false negative and rejected, and the second positive result was evaluated. If no simultaneous CBC test accompanied the second RT-PCR test in these patients, the first CBC test result was included in the study. Instruments RT-PCR tests were performed with the Bio-Speedy COVID-19 quantitative qPCR detection kit (Bioeksen Molecular Diagnostics, Istanbul, Turkey) in the reference laboratories of the Ministry of Health. CBC was performed in our laboratories on the automated haematology analyser Sysmex XN-10 (Sysmex, Kobe, Japan). Variables Patients who met the inclusion criteria were divided into two groups, COVID-19-positive and COVID-19-negative, according to RT-PCR test results, considering internationally accepted laboratory reference values. Among the CBC test parameters of both groups, white cell count (WCC), neutrophil, lymphocyte, monocyte, basophil, eosinophil, red cell count (RCC), haemoglobin, haematocrit, mean corpuscular volume (MCV), mean corpuscular haemoglobin (MCH), MCH concentration (MCHC), RCC distribution width (RDW), platelet, mean platelet volume (MPV), platelet distribution width (PDW), plateletcrit (PCT), platelet large cell ratio (PLCR) and immature granulocyte (IG) levels were investigated. From these parameters, the NLR, neutrophil/monocyte ratio (NMR), lymphocyte/monocyte ratio (LMR), PLR, IG/neutrophil ratio and IG/lymphocyte ratio were calculated and compared between groups. The values of these test parameters are given in internationally accepted units. Sample size The ethical approval was obtained in May 2020, but the study was conducted later, in February 2021. For the sample size, the aim was to include all patients who presented to our hospital for COVID-19 and met the inclusion criteria. The hospital archive records were searched and filtered in a retrospective design, from February 2021 back to the onset of the pandemic (March 2020), and the eligible patients, who were diagnosed with U07.3 (COVID-19) or Z03.8 (observation for other suspected diseases and conditions) according to ICD-10 codes, were included. Statistical analysis SPSS V.23.0 (IBM) was used for statistical analysis. Missing values were excluded before the analyses. Whether the data were normally distributed was investigated with the Kolmogorov-Smirnov test. Categorical data are presented with frequency and percentage; numerical data are given as mean±SD and as median and IQR. Student's t-test was used to compare two normally distributed independent groups, and the Mann-Whitney U test was used for data that did not show normal distribution. The χ 2 test was used in the analysis of categorical data. Binary logistic regression analysis was performed for the statistically significant parameters. Cut-off values were identified by receiver operating characteristic (ROC) curve analysis. A p<0.05 was accepted for statistical significance in the whole study.
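For illustration, the analysis plan above maps onto a short script. This is a minimal sketch using SciPy and scikit-learn rather than SPSS, and the data it generates are synthetic stand-ins with the study's group sizes, not the study data.

```python
# Hedged sketch of the stated analysis plan: normality check, group
# comparison, chi-squared test, and binary logistic regression.
import numpy as np
from scipy import stats
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
group_pos = rng.normal(1.4, 0.6, 385)  # e.g., lymphocyte counts, PCR-positive
group_neg = rng.normal(1.9, 0.7, 593)  # PCR-negative

# 1) Normality (Kolmogorov-Smirnov against a standard normal after scaling)
for g in (group_pos, group_neg):
    print(stats.kstest((g - g.mean()) / g.std(ddof=1), "norm"))

# 2) Student's t test if normal, Mann-Whitney U otherwise
print(stats.ttest_ind(group_pos, group_neg))
print(stats.mannwhitneyu(group_pos, group_neg))

# 3) Chi-squared test on a 2x2 contingency table (illustrative counts)
table = np.array([[199, 186], [341, 252]])
print(stats.chi2_contingency(table))

# 4) Binary logistic regression for a significant parameter
X = np.concatenate([group_pos, group_neg]).reshape(-1, 1)
y = np.concatenate([np.ones(385), np.zeros(593)])
model = LogisticRegression().fit(X, y)
print("odds ratio per unit:", np.exp(model.coef_[0][0]))
```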
Participants In the study, a total of 3267 patient records were retrieved between March 2020 and February 2021 for patients who were diagnosed with U07.3 (COVID-19) or Z03.8 (observation for other suspected diseases and conditions) and whose CBC tests were performed on the same date. According to the exclusion criteria of our study, patients under the age of 18 or over the age of 70 (n=636), oncological cases (n=187), organ and tissue transplant patients (n=75), patients with haematological disease (n=30), pregnant women (n=7), patients with chronic diseases and reported drug use (n=149), and patients with missing information (n=61) were excluded, a total of 1145 patients . The RT-PCR test results and simultaneous CBC test results for COVID-19 of the 2122 patients eligible for inclusion were checked one by one. A total of 1144 individuals whose RT-PCR test was absent or missing, or was not performed simultaneously with a CBC, were excluded from the study. Thus, a total of 978 people were included in the study. Descriptive data The mean age of the included 978 patients was 41.5±14.5 years, with 53.9% (n=527) male and 46.1% (n=451) female. Of the participants, 39.4% (n=385) were found to be COVID-19-positive and 60.6% (n=593) COVID-19-negative. The mean age of COVID-19-positive patients was 39.0±14.0 years, younger than that of the COVID-19-negative group (43.0±14.7), a significant difference (p<0.001). While the percentage of women was higher in the COVID-19-positive group (51.7%), the proportion of men was higher in the COVID-19-negative group (57.5%), which was statistically significant (p=0.005). Main results The comparison of CBC test parameters of patients with positive and negative COVID-19 test results is presented in . Accordingly, COVID-19-positive patients were found to have statistically significantly lower WCC, neutrophil, lymphocyte, monocyte, basophil, eosinophil, platelet, IG, MCH and MCHC, compared with negative patients (p≤0.001).
The percentage of neutrophils was found to be significantly lower in COVID-19-positive patients (p<0.001), while the percentages of lymphocytes, basophils and monocytes were significantly increased (p<0.001). Another significant result was that the PCT was lower in COVID-19-positive patients (p<0.001). No significant difference was found in the eosinophil per cent, RCC, haemoglobin, haematocrit, MCV, MPV, PDW and PLCR values of the patients (p>0.05). When the ratios of CBC parameters were examined, the NLR, neutrophil/monocyte ratio and IG/lymphocyte ratio were significantly decreased in the COVID-19-positive group (p<0.001), while the decrease in the platelet/lymphocyte and IG/neutrophil ratios was significant at the level of p<0.05 . Binary logistic regression analysis was performed for the statistically significant CBC test parameters. Accordingly, a low lymphocyte count (OR 0.695; 95% CI 0.597 to 0.809) and a low RDW-coefficient of variation (CV) level (OR 0.887; 95% CI 0.818 to 0.962) were significantly related to COVID-19 positivity . When ROC analysis was conducted on the lymphocyte levels, the area under the curve (AUC) was 0.566 , and the cut-off value for lymphocytes was 0.745, with 96.1% sensitivity and 90.6% specificity. For RDW-CV, the AUC was 0.526, and the cut-off value was 12.35, with 73.2% sensitivity and 72.7% specificity for the diagnosis of COVID-19.
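As a worked illustration of how such ROC cut-offs can be derived, the sketch below selects the threshold maximizing the Youden index (sensitivity + specificity - 1). The paper does not state which criterion was used, and the data here are synthetic; lower lymphocyte counts indicate positivity, so the score is negated before the ROC computation.

```python
# Hedged sketch: ROC analysis and cut-off selection via the Youden index.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(2)
y = np.concatenate([np.ones(385), np.zeros(593)])
lymph = np.concatenate([rng.normal(1.4, 0.6, 385),   # positives (lower counts)
                        rng.normal(1.9, 0.7, 593)])  # negatives

fpr, tpr, thr = roc_curve(y, -lymph)   # negate: low count -> high score
best = np.argmax(tpr - fpr)            # maximum Youden index
print("AUC:", roc_auc_score(y, -lymph))
print("cut-off:", -thr[best], "sens:", tpr[best], "spec:", 1 - fpr[best])
```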
In our study, 26 different CBC test parameters and 6 different ratios were compared between patients. Accordingly, in COVID-19-positive patients, some parameters were low (WCC, neutrophil, lymphocyte, monocyte, basophil, eosinophil, platelet and IG counts, neutrophil and IG percentages, RDW-CV and PCT), and some were higher (percentages of lymphocytes, monocytes and basophils). In addition, the calculated NLR, NMR, PLR, IG/neutrophil and IG/lymphocyte ratios were significantly decreased. However, according to the logistic regression analysis, only the decrease in lymphocyte count and lower RDW-CV values were significantly associated with COVID-19 positivity. There have been some efforts to develop and improve laboratory test results for the diagnosis and the detection of prognostic factors of COVID-19. In a systematic review, these models were collected and discussed with a view to a machine learning model using laboratory data, but the heterogeneity of the sample sizes, populations, algorithms, analytical methods and the use of different laboratory tests and clinical parameters were emphasised as barriers to the development of machine learning. CBC was found to be a frequent accompanying laboratory test in these studies. Since the CBC is a practical, easy and cost-effective test, there are some promising studies in the literature on using CBC results in the diagnosis of COVID-19. A prediction analysis was conducted by Joshi et al for the identification of PCR-negative patients by CBC and patient sex, and they concluded that CBC results were useful for classifying COVID-19 test results. In another study, Formica et al developed a criterion that included three CBC parameters with age and found that CBC-based scores have potential for the diagnosis of COVID-19. Likewise, some studies have focused on machine learning models, including CBC, and these models have been found useful for diagnostic purposes.
However, these machine learning models may involve complex calculations and require adaptation and integration into diagnostic systems, and they are therefore somewhat difficult to use and adapt to every population. To evaluate the utility of the CBC in the diagnosis of COVID-19, our study investigated 26 different parameters and 6 different ratios of the CBC in an in-depth, fully CBC-focused analysis, and among these, lymphocyte and RDW-CV values were prominent. In a study conducted by Sayed et al , hospitalisation CBC values of patients were investigated, and the NLR was found to be higher in COVID-19 patients. In that study, the authors focused on NLR values but also stated that neutrophil values were at low-to-normal levels (3.8 in COVID-19-positive and 4.4 in control), which in turn shows that the main reason for the high NLR is actually low lymphocyte levels (0.9 in COVID-19-positive and 2.1 in control), similar to our study. However, in our study, the NLR was found to be lower in COVID-19-positive patients. Looking deeper, it is clear that neutrophil values were lower in COVID-19 patients in both our study and Sayed et al 's study (3.55 vs 3.8, respectively), but the neutrophil values of the control groups were different (6.56 vs 4.4, respectively). The neutrophil values of the control group of our study were closer to normal, as expected. The reason for this difference should be attributed to our study design: the RT-PCR test can be negative at the onset of the disease and turn positive when retested within 2 days, so we accepted these cases as COVID-19-positive. Therefore, the classification of negative and positive cases can be considered more accurate in our study. Considering the studies investigating the CBC for purposes other than diagnosis, a study conducted on a large group of patients in Spain investigated the NLR, PLR and NPR ratios of COVID-19-positive patients to determine the risk of admission to the intensive care unit. They determined that a high NPR, in particular, was significantly associated with the risk of patients being admitted to the intensive care unit. In another study, in which the relationship of these same ratios with hospital mortality was investigated, it was found that these values were significantly higher at the time of admission to the hospital in patients who died. In a study conducted in Turkey, the relationship between admission CBC parameters and the severity of COVID-19 and intensive care indication was investigated, and it was found that higher NLR and MLR levels and lower PLR levels were significantly different. In another study, it was determined that low haemoglobin, WCC and lymphocyte values were associated with poor prognosis in terms of intubation, intensive care and death. In a cohort study with a smaller group of patients, the WCC value and the NLR and PLR ratios were found to be associated with mortality. In a study investigating the difference in CBC parameters compared with a healthy group, low lymphocytes and platelets and high WCC, neutrophil, NLR and PLR rates were found in COVID-19-positive patients. That study investigated the differences in the CBC parameters of COVID-19-positive patients compared with a healthy group in a smaller number of subjects than our study. In our study, unlike that study, WCC, neutrophil, NLR and PLR rates were found to be low in COVID-19-positive patients, and a lower lymphocyte value was found to be significantly associated with COVID-19 positivity.
In this study, which was carried out with a large patient group, the usability of CBC test parameters in the diagnosis of COVID-19 was investigated for practical use in family medicine. Most of the CBC parameters and ratios differed significantly in COVID-19-positive patients. Notably, a low lymphocyte count and a low RDW-CV level may be useful in the diagnosis of COVID-19.
Antimalarial Imidazopyridines Incorporating an Intramolecular Hydrogen Bonding Motif: Medicinal Chemistry and Mechanistic Studies
Synthetic protocols reported in the literature were adapted to access the target compounds. The general protocol followed an eight-step synthetic route from commercially available 2,4-dichloro-5-nitropyridine . A nucleophilic substitution was first performed to introduce a para -methoxybenzyl (PMB) amino-protecting group to form the 2-chloro- N -(4-methoxybenzyl)-5-nitropyridin-4-amine intermediate a in quantitative yield. This intermediate was subjected to a second nucleophilic substitution reaction under microwave irradiation with tert -butyl (2-aminoethyl)(ethyl)carbamate in the presence of triethylamine and N , N -dimethylformamide (DMF) to produce tert -butyl ethyl(2-((4-((4-methoxybenzyl)amino)-5-nitropyridin-2-yl)amino)ethyl)carbamate b . The nitro group was then reduced using palladium on carbon (10% Pd/C) under H 2 gas to deliver the corresponding aniline c . This intermediate was reacted with the appropriate carboxylic acids in the presence of 1-ethyl-3-(3-dimethylaminopropyl)carbodiimide hydrochloride (EDCI·HCl) and a catalytic amount of 4-dimethylaminopyridine (DMAP) in dichloromethane (DCM) to produce the amide intermediates d.1 – d.19 . The imidazole ring was allowed to form at this stage by heating in 2 M aqueous NaOH and absolute ethanol at 80 °C to yield intermediates e.1 – e.19 . The appropriate cyclized intermediate was subjected to boc-deprotection followed by reductive amination with 4-chloro-2-hydroxybenzaldehyde in the presence of sodium borohydride to yield the penultimate intermediates f.1 – f.19 . Finally, the PMB group was removed using neat trifluoroacetic acid (TFA) to afford the desired imidazopyridine analogues 1–19 in moderate yields (50–75%). To probe the subcellular localization of the target compounds within the parasite, a representative fluorescent probe of one of the compounds was designed and synthesized by attaching 7-nitrobenz-2-oxa-1,3-diazole (NBD) to the ethylenediamine side chain through an ethyl linker, as illustrated in . Compound 14-NBD was synthesized through an 11-step synthetic route from commercially available 2,4-dichloro-5-nitropyridine. A nucleophilic substitution reaction first introduced the para -methoxybenzyl (PMB) amino-protecting group to form the 2-chloro- N -(4-methoxybenzyl)-5-nitropyridin-4-amine intermediate a ; a second nucleophilic substitution of this intermediate with N -boc ethylenediamine in the presence of triethylamine and N , N -dimethylformamide (DMF) produced intermediate g . The nitro group was then reduced with Zn in acetic acid to deliver the corresponding aniline intermediate h . This intermediate was reacted with 3-trifluoromethylbenzoic acid in the presence of 1-ethyl-3-(3-dimethylaminopropyl)carbodiimide hydrochloride (EDCI·HCl) and a catalytic amount of 4-dimethylaminopyridine (DMAP) in dichloromethane (DCM) to produce the amide intermediate i . At this stage, the imidazole ring was allowed to form by heating in 2 M aqueous NaOH and absolute ethanol at 80 °C to yield intermediate j . The cyclized intermediate was subjected to boc-deprotection followed by reductive amination with 4-chloro-2-hydroxybenzaldehyde in the presence of sodium borohydride to deliver the intermediate k . The PMB group was removed using neat trifluoroacetic acid (TFA), resulting in intermediate l . Reductive amination with N -boc glycinal was then carried out in the presence of sodium cyanoborohydride to deliver the penultimate intermediate m .
Finally, removal of the boc group, followed by a nucleophilic substitution reaction with NBD-chloride, yielded the target fluorescent probe 14-NBD . In Vitro Asexual Blood-Stage Antiplasmodium Activity and Cytotoxicity All the compounds were evaluated for in vitro antiplasmodium activity against both the drug-sensitive NF54 and multidrug-resistant K1 strains of P. falciparum , and the SAR is discussed with respect to IC 50 values on the NF54 strain. Aromatic groups bearing small non-polar meta - or para - electron-withdrawing substituents displayed better antiplasmodium potency, with compound 14 (IC 50 = 0.08 μM) having the highest potency. Incorporation of heteroatoms into the saturated cyclic substituents, as exemplified by compounds 2 (IC 50 = 1.67 μM), 3 (IC 50 = 2.37 μM) and 13 (IC 50 = 1.03 μM), was detrimental to antiplasmodium activity. Further, electron-withdrawing substituents such as fluoro (-F) or trifluoromethyl (-CF 3 ) on the cyclohexane ring led to antiplasmodium activity comparable to that of their unsubstituted congeners, as shown by the matched pairs 9 (IC 50 = 0.67 μM), 10 (IC 50 = 0.86 μM) and 11 (IC 50 = 0.69 μM), 12 (IC 50 = 0.93 μM). However, the presence of the electron-withdrawing -CF 3 group on the cyclopropane was detrimental to activity, while the -CH 3 group was better tolerated compared with the unsubstituted analogue. Finally, changes in ring size did not have any significant effect on antiplasmodium activity, as exemplified by compounds 5 (IC 50 = 0.39 μM), 9 (IC 50 = 0.67 μM), 1 (IC 50 = 0.34 μM), 11 (IC 50 = 0.69 μM), and 15 (IC 50 = 0.67 μM). The cytotoxicity of these analogues was determined against the Chinese hamster ovarian (CHO) cell line using the 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) assay. All the compounds showed a favorable cytotoxicity profile on account of their selectivity indices (SI > 10), with compound 14 (SI = 466.25) exhibiting the highest SI. Metabolic Stability Studies of Selected Analogues Selected imidazopyridine analogues exhibiting sub-micromolar in vitro asexual blood stage antiplasmodium activity (IC 50 < 1 μM), suitable solubility (> 50 μM), and an acceptable selectivity profile relative to the mammalian Chinese hamster ovarian cell line (SI > 10) were evaluated for metabolic stability in mouse, rat, and human liver microsomes . These compounds were generally not stable across all three species of microsomes, although 1 and 5 were more stable in human liver microsomes than in rodent microsomes. The metabolic stability of these analogues was assessed via the hepatic extraction ratio ( E H ). Metabolite Identification Studies for Compound 14 Considering the generally poor microsomal metabolic stability displayed by the selected compounds, metabolite identification studies in mouse liver microsomes were undertaken on one of them, compound 14 . Four metabolites were identified from the metabolism of 14 in mouse liver microsomes, with the primary metabolites (P-28 and P-140) arising from dealkylation of the side-chain N -alkyl groups, although the exact structures are yet to be confirmed. Notably, the formation of the metabolites required NADPH, suggesting the involvement of microsomal CYP450 enzymes and indicating that they were products of metabolism and not chemical degradation.
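Two of the derived quantities above lend themselves to a short worked example: the selectivity index (SI), the ratio of the CHO IC 50 to the parasite IC 50, and the hepatic extraction ratio, which can be predicted from a microsomal half-life with the conventional well-stirred model. The sketch below is an assumption-laden illustration, not the authors' workflow: the scaling factors and hepatic blood flow are textbook human values, protein binding is ignored, and the half-life input is invented.

```python
# Hedged sketch: selectivity index and well-stirred-model extraction ratio.
import math

def selectivity_index(cho_ic50_um: float, pf_ic50_um: float) -> float:
    """SI = mammalian-cell IC50 / antiplasmodium IC50."""
    return cho_ic50_um / pf_ic50_um

def hepatic_extraction_ratio(t_half_min: float,
                             mg_protein_per_ml: float = 0.4,
                             mg_protein_per_g_liver: float = 45.0,
                             g_liver_per_kg: float = 25.6,
                             q_h_ml_min_kg: float = 20.7) -> float:
    """Well-stirred model (human defaults): EH = CLint / (Qh + CLint)."""
    clint = (math.log(2) / t_half_min) / mg_protein_per_ml        # mL/min/mg
    clint_scaled = clint * mg_protein_per_g_liver * g_liver_per_kg  # mL/min/kg
    return clint_scaled / (q_h_ml_min_kg + clint_scaled)

# CHO IC50 of 37.3 uM back-calculated from the reported SI of 466.25
print(selectivity_index(37.3, 0.08))    # ~466
print(hepatic_extraction_ratio(15.0))   # illustrative 15-min half-life, ~0.87
```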
β-Hematin Inhibition Assay and Docking Previously synthesized antimalarial imidazopyridines were shown to inhibit the formation of β-hematin in a cell-free assay and were subsequently confirmed as bona fide inhibitors of hemozoin formation in a cell-fractionation assay. , Based on this precedent, the potential of our imidazopyridine series to inhibit hemozoin formation was assessed using the β-hematin inhibition assay (BHIA) .
Using the discriminatory cut-off of <100 μM, only nine compounds ( 1 , 18 μM; 5 , 16 μM; 6 , 33 μM; 10 , 76 μM; 11 , 80 μM; 12 , 31 μM; 14 , 9 μM; 18 , 18 μM; and 19 , 65 μM) exhibited β-hematin inhibition activity in the preferred range, with compound 14 showing the highest potency. The frontrunner compound 14 , which exhibited sub-micromolar in vitro asexual blood-stage antiplasmodium activity, an acceptable cytotoxicity profile against the mammalian CHO cell line, good solubility, and potent β-hematin inhibition activity, also showed specific intermolecular interactions with the previously published crystal surface of β-hematin. The imidazopyridine core, the 3-trifluoromethylphenyl, and the 3-chlorophenol moieties of the compound interact through π–π stacking with the porphyrin ring of β-hematin. On the other hand, the basic nitrogen of the tertiary amine on the side chain of the compound, when protonated at pH 4.5, forms a hydrogen bond with the propionate group of β-hematin , further supporting the inhibition of β-hematin as a possible contributing mode of action. Fluorescence Drug Localization Studies Fluorescence drug-localization studies were employed as the starting point to probe the subcellular localization of the target compounds within the parasite. The representative compound, 14 (IC 50 Pf NF54 = 0.08 μM), was first assessed for its inherent fluorescence for imaging in P. falciparum using a fluorimeter. Excitation between 200 and 600 nm yielded no significant emission with reference to the blank solvent. This underscored the need to attach an external fluorophore with suitable photophysical properties and in vitro antiplasmodium activity comparable to the parent compound. 7-Nitrobenz-2-oxa-1,3-diazole (NBD) was selected as an appropriate extrinsic fluorophore based on its small size, commercial availability, and stability over a biologically relevant pH range. The point of attachment of the fluorophore was guided by the earlier SAR studies on the scaffold. The NBD-labeled probe retained nanomolar in vitro activity against P. falciparum ( 14-NBD Pf NF54 IC 50 = 0.049 μM; ). It also possesses photophysical properties that are suitable for live-cell imaging. Subcellular accumulation of 14-NBD in Pf -infected red blood cells was assessed through confocal microscopy. Commercially available organelle trackers, namely LysoTracker Red, MitoTracker Deep Red, ER-Tracker Red, DRAQ5, and Nile Red, aided the colocalization studies of 14-NBD . These dyes respectively illuminate the parasite's acidic organelles such as the digestive vacuole, the mitochondrion, the endoplasmic reticulum, the nucleus, and neutral lipids. − The results from the live-cell confocal microscopy showed partial colocalization between 14-NBD and LysoTracker Red, with regions of intense localization observed around the parasite's membrane structures. No significant accumulation was seen in the areas around the hemozoin crystals (Hz), suggesting that 14-NBD does not localize in the parasite's digestive vacuole ( A). It is noteworthy that while 14-NBD retained antiplasmodium activity comparable to the parent compound, the presence of the NBD fluorophore can influence the accumulation of the compound in the parasite. Similarly, no colocalization was observed between the nuclear marker, DRAQ5, and 14-NBD , thereby eliminating the parasite's nucleus as a site of action of the compound.
Although the biochemistry of hemozoin formation has not been fully elucidated, with many hypotheses in the literature regarding its formation, − one hypothesis that has gained popularity is that it is lipid-catalyzed. Neutral lipids, in particular, have been associated with hemozoin formation. Consequently, Nile Red was co-incubated with 14-NBD to identify and assess the interaction of 14-NBD with neutral lipids. Punctate structures believed to be neutral lipid droplets were observed close to the hemozoin crystals. These droplets are formed from the parasite's cytosol and transported into its food vacuole, where they aid in the conversion of heme to hemozoin. 14-NBD colocalized with Nile Red, indicating the compound's association with neutral lipid droplets ( B). Furthermore, 14-NBD interacted significantly with the parasite's mitochondrion, as shown by the colocalization between 14-NBD and MitoTracker Deep Red ( A). Heme Speciation Assay To augment the findings from live-cell confocal microscopy, the β-hematin inhibition assay, and docking studies that suggest hemozoin inhibition as a possible mode of action of this class of compounds, the frontrunner of the series, 14 , was tested in a cellular heme fractionation assay to evaluate the dose-dependent effect of the compound on various iron species in the parasite and to confirm the compound's ability to inhibit intracellular Hz formation in P. falciparum parasites according to methods previously described by Combrinck and co-workers. However, at increasing concentrations of 14 , no statistically significant change was observed in the levels of heme. Conversely, a significant decrease in the levels of hemozoin was observed between 0.5–2× IC 50 of 14 , suggesting that although compound 14 does not directly interfere with the conversion of heme to hemozoin, it could be targeting other processes in the parasite's digestive vacuole. At this juncture, it is noteworthy that a true hemozoin inhibitor causes a dose-dependent increase in "free" heme and a corresponding decrease in hemozoin.
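The colocalization readouts above are reported qualitatively; a common quantitative companion is the pixel-wise Pearson coefficient between two registered single-channel images. The sketch below uses synthetic stand-in arrays for the 14-NBD and tracker channels and is an illustration of the technique, not part of the study's methods.

```python
# Hedged sketch: pixel-wise Pearson colocalization between two channels.
import numpy as np

def pearson_colocalization(ch1: np.ndarray, ch2: np.ndarray,
                           background: float = 0.0) -> float:
    """Pearson r over pixels above background in either channel."""
    mask = (ch1 > background) | (ch2 > background)
    a, b = ch1[mask].astype(float), ch2[mask].astype(float)
    return float(np.corrcoef(a, b)[0, 1])

rng = np.random.default_rng(3)
mito = rng.random((256, 256))                      # tracker channel (synthetic)
probe = 0.7 * mito + 0.3 * rng.random((256, 256))  # partially colocalized probe
print(pearson_colocalization(probe, mito, background=0.1))
```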
With the goal of incorporating an intramolecular hydrogen bonding motif in known antimalarial chemotypes, we identified a set of potent antimalarial imidazopyridine analogues. The medicinal chemistry of the chemical series was explored with respect to antiplasmodium SAR profiles around an earlier-identified benzimidazole core, leading to the identification of the series' frontrunner 14 , which displayed the highest potency within the series. Furthermore, all compounds from this series showed a favorable cytotoxicity profile against the CHO cell line. Nonetheless, these compounds were metabolically labile and could not be progressed to in vivo efficacy studies. However, metabolite identification studies provided insight into the metabolic hotspots, which can be used to synthesize analogues that address this liability in future studies. Although 14 interacted favorably with the β-hematin surface through docking and showed potent β-hematin inhibition, no statistically significant effect on the levels of heme was observed after dose-dependent treatment of P. falciparum cells with 14 . Conversely, hemozoin levels decreased with increasing concentrations of 14 . Hence, we hypothesize that while 14 does not directly affect the conversion of heme to hemozoin, it may target different digestive vacuole processes. The interaction of 14-NBD with organelles other than the parasite's digestive vacuole may suggest the potential involvement of a novel target. All commercially available chemicals were purchased from either Sigma-Aldrich (Germany) or Combi-Blocks (United States). 1 H NMR (all intermediates and final compounds) and 13 C NMR (target compounds only) spectra were recorded on a Bruker spectrometer at 300, 400, or 600 megahertz (MHz). Melting points for all target compounds were determined using a Reichert-Jung Thermovar hot-stage microscope coupled to a Reichert-Jung Thermovar digital thermometer (20–350 °C range). Reaction monitoring using analytical thin-layer chromatography (TLC) was performed on aluminum-backed silica-gel 60 F 254 (70–230 mesh) plates with detection and visualization done using (a) a UV lamp (254/366 nm), (b) iodine vapors, or (c) ninhydrin spray reagent. Column chromatography was performed with Merck silica-gel 60 (70–230 mesh). Chemical shifts (δ) are reported in ppm downfield from tetramethylsilane (TMS) as the internal standard.
Coupling constants ( J ) were recorded in Hertz (Hz). Purity of compounds was determined on an Agilent 1260 Infinity binary pump, Agilent 1260 Infinity diode array detector, Agilent 1290 Infinity column compartment, Agilent 1260 Infinity standard autosampler, and Agilent 6120 single quadrupole mass spectrometer, equipped with an APCI/ESI multimode ionization source. All compounds tested for biological activity were confirmed to have ≥95% purity by HPLC. Solubility, biological assays, and any experimental data not shown below (e.g., NMR spectra of intermediates) are fully detailed in the Supporting Information . Preparation of 2-Chloro- N -(4-methoxybenzyl)-5-nitropyridin-4-amine ( a ) A mixture of p -methoxybenzyl amine (2.56 g, 18.65 mmol) and N , N -diisopropylethylamine (DIPEA) in tetrahydrofuran (THF) was added dropwise to a 0 °C solution of 2,4-dichloro-5-nitropyridine (2.00 g, 10.36 mmol) in THF. The solution was then warmed to 25 °C and stirred for an additional 30 min. Water was then added, and the resulting mixture was extracted with ethyl acetate. The combined organic layer was dried over anhydrous Na 2 SO 4 and concentrated under reduced pressure to produce the desired intermediate as a yellow solid in 98% yield. 1 H-NMR (600 MHz, DMSO- d 6 ): δ 8.96 (t, J = 6.1 Hz, 1H), 8.85 (s, 1H), 7.29 (d, J = 8.7 Hz, 2H), 6.94 (s, 1H), 6.89 (d, J = 8.7 Hz, 2H), 4.57 (d, J = 6.1 Hz, 2H), 3.71 (s, 3H). 13 C-NMR (151 MHz, DMSO): δ 159.91, 155.92, 150.57, 150.08, 130.29, 129.76 (2C), 115.42 (2C), 108.83, 56.41, 46.26. HPLC-MS (ESI): purity = 98%, t R = 2.457 min, m/z [M + H] + = 294.0. Preparation of tert -Butylethyl(2-((4-((4-methoxybenzyl)amino)-5-nitropyridin-2-yl)amino)ethyl)carbamate ( b ) A mixture of 2-chloro- N -(4-methoxybenzyl)-5-nitropyridin-4-amine ( a ) (5.00 g, 17.02 mmol), tert -butyl (2-aminoethyl)(ethyl)carbamate (4.81 g, 25.53 mmol), and triethylamine in N , N -dimethylformamide (DMF) was heated under microwave irradiation at 100 °C for 1 h. When the reaction was complete, water was added, and the mixture was extracted with ethyl acetate (4 × 30 mL). The combined organic layer was dried over anhydrous Na 2 SO 4 , concentrated under reduced pressure, and purified via column chromatography to give the product as a yellow solid. 1 H-NMR (600 MHz, chloroform- d ): δ 8.90 (s, 1H), 8.36 (s, 1H), 7.22 (d, J = 8.7 Hz, 2H), 6.85 (d, J = 8.7 Hz, 2H), 4.37 (s, 2H), 3.76 (s, 3H), 3.37–3.34 (m, 4H), 3.17 (q, J = 7.1 Hz, 2H), 1.41 (s, 9H), 1.05 (t, J = 7.1 Hz, 3H). 13 C-NMR (151 MHz, CDCl 3 ): δ 161.22, 159.20, 156.45, 150.82, 149.91, 128.71, 128.49 (2C), 124.63, 114.31 (2C), 83.65, 79.91, 55.26, 46.20, 45.70, 43.02, 41.73, 28.37 (3C), 13.87. HPLC-MS (ESI): purity = 99%, t R = 2.641 min, m/z [M + H] + = 446.2. Preparation of tert -Butylethyl(2-((5-amino-4-((4-methoxybenzyl)amino)pyridin-2-yl)amino)ethyl)carbamate ( c ) A mixture of tert -butyl ethyl(2-((4-((4-methoxybenzyl)amino)-5-nitropyridin-2-yl)amino)ethyl)carbamate ( b ) (6.00 g, 13.47 mmol) and 10% Pd/C in methanol was stirred for 16 h at 25 °C under hydrogen gas. After the reaction was complete, the mixture was filtered through a pad of Celite and concentrated in vacuo to obtain the product, which was used in the next reaction without any further purification.
1 H-NMR (600 MHz, chloroform- d ): δ 7.42 (s, 1H), 7.24 (d, J = 8.7 Hz, 2H), 6.84 (d, J = 8.7 Hz, 2H), 4.96 (s, 1H), 4.23 (s, 2H), 3.77 (s, 3H), 3.32–3.32 (m, 4H), 3.18 (q, J = 7.1 Hz, 2H), 1.41 (s, 9H), 1.04 (t, J = 7.0 Hz, 3H). 13 C-NMR (151 MHz, CDCl 3 ): δ 158.99, 155.78, 149.34, 130.21, 129.99, 128.75 (2C), 119.97, 114.10 (2C), 87.26, 79.42, 55.29, 55.23, 46.61, 46.21, 41.54, 29.65, 28.42 (3C), 13.86. HPLC-MS (ESI): purity = 97%, t R = 2.334 min, m/z [M + H] + = 416.2. General Procedure for the Synthesis of Intermediates d and e Amide Coupling (Intermediate d ) Intermediate c (1 eq) was dissolved in DCM with the appropriate carboxylic acid (1.3 eq) and 4-dimethylaminopyridine (DMAP, 0.1 eq). 1-Ethyl-3-(3-dimethylaminopropyl)carbodiimide hydrochloride (EDCI·HCl, 1.5 eq) was then added, and the reaction mixture was stirred at 25 °C for 16 h. Water was added, and the solution was extracted with ethyl acetate, dried over anhydrous Na 2 SO 4 , and concentrated under reduced pressure. The residue was used in the subsequent reaction without any further purification. Cyclization (Intermediates e.1 – e.19 ) The corresponding amide intermediate d was dissolved in ethanol (10 mL), and 2 M NaOH solution (10 mL) was added. The resulting mixture was heated at 80 °C for 24–72 h depending on the amide intermediate. When the reaction had gone to completion, the solvent was removed in vacuo, and saturated citric acid solution was added to the residue. Extraction was done with DCM (2 × 20 mL), and the combined organic extract was dried over anhydrous Na 2 SO 4 , filtered, and concentrated in vacuo. The residue was purified via column chromatography (DCM/MeOH) to obtain the corresponding product. tert -Butyl(2-((2-cyclopentyl-1-(4-methoxybenzyl)-1 H -imidazo[4,5- c ]pyridin-6-yl)amino)ethyl)(ethyl)carbamate ( e.1 ) Obtained from intermediate c (500 mg, 1.20 mmol) and cyclopentane carboxylic acid (0.16 mL, 1.56 mmol) as a wine-colored sticky solid (46%, 270.4 mg); Rf (DCM: MeOH, 9:1) 0.64; 1 H-NMR (600 MHz, chloroform- d ): δ 8.37 (s, 1H), 6.97 (d, J = 8.7 Hz, 2H), 6.80 (d, J = 8.7 Hz, 2H), 6.47 (s, 1H), 5.18 (s, 2H), 3.74 (s, 3H), 3.36–3.10 (m, 6H), 2.71 (p, J = 8.1 Hz, 1H), 2.05–1.92 (m, 2H), 1.89–1.78 (m, 3H), 1.71–1.51 (m, 3H), 1.39 (s, 9H), 1.03 (t, J = 7.1 Hz, 3H). 13 C-NMR (151 MHz, CDCl 3 ): δ 181.48, 159.28, 154.15, 127.56, 127.42 (2C), 114.35 (2C), 85.09, 79.49, 55.24, 46.30, 45.90, 44.64, 43.14, 41.82, 37.16, 32.08 (2C), 30.17, 28.39 (3C), 25.86 (2C), 25.76 (2C), 13.93. HPLC-MS (ESI): purity = 98%, t R = 2.598 min, m/z [M + H] + = 494.3. tert -Butylethyl(2-((1-(4-methoxybenzyl)-2-(tetrahydrofuran-3-yl)-1 H -imidazo[4,5- c ]pyridin-6-yl)amino)ethyl)carbamate ( e.2 ) Obtained from intermediate c (500 mg, 1.20 mmol) and tetrahydrofuran-3-carboxylic acid (0.14 mL, 1.56 mmol) as a wine-colored sticky solid (68%, 401.7 mg); Rf (DCM: MeOH, 9:1) 0.57; 1 H-NMR (600 MHz, chloroform- d ): δ 8.45 (s, 1H), 6.96 (d, J = 8.7 Hz, 2H), 6.81 (d, J = 8.7 Hz, 2H), 6.45 (s, 1H), 5.17 (s, 2H), 4.04–3.98 (m, 2H), 3.93–3.86 (m, 2H), 3.74 (s, 3H), 3.50–3.44 (m, 1H), 3.39–3.33 (m, 4H), 3.20 (q, J = 7.2 Hz, 2H), 2.37–2.30 (m, 1H), 2.24–2.16 (m, 1H), 1.40 (s, 9H), 1.05 (t, J = 7.1 Hz, 3H). 13 C-NMR (151 MHz, CDCl 3 ): δ 159.43, 156.14, 154.68, 137.64, 133.55, 127.48, 127.27 (2C), 114.48 (2C), 84.86, 79.53, 71.93, 70.92, 68.33, 55.27, 46.34, 45.97, 43.07, 41.98, 37.14, 31.86, 29.65, 28.40 (3C), 13.89. HPLC-MS (ESI): purity = 98%, t R = 2.448 min, m/z [M + H] + = 496.3.
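The reported yields can be cross-checked from the quantities given. For e.1 , taking intermediate c (1.20 mmol) as limiting and M ≈ 493.3 g/mol (from [M + H] + = 494.3), the arithmetic reproduces the stated 46%:

$$
\%\,\text{yield}=\frac{m_{\text{obtained}}}{n_{\text{limiting}}\times M_{\text{product}}}\times 100=\frac{270.4\ \text{mg}}{1.20\ \text{mmol}\times 493.3\ \text{mg/mmol}}\times 100\approx 46\%
$$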
tert -Butylethyl(2-((1-(4-methoxybenzyl)-2-(tetrahydro-2 H -pyran-4-yl)-1 H -imidazo[4,5- c ]pyridin-6-yl)amino)ethyl)carbamate ( e.3 ) Obtained from intermediate c (500 mg, 1.20 mmol) and tetrahydro-2 H -pyran-4-carboxylic acid (202.96 mg, 1.56 mmol) as a wine-colored sticky solid (98%, 599.12 mg); Rf (DCM: MeOH, 9:1) 0.56; 1 H-NMR (600 MHz, chloroform- d ): δ 8.44 (s, 1H), 6.93 (d, J = 8.7 Hz, 2H), 6.78 (d, J = 8.7 Hz, 2H), 6.34 (s, 1H), 5.15 (s, 2H), 4.02–3.96 (m, 2H), 3.71 (s, 3H), 3.42–3.36 (m, 2H), 3.34–3.30 (m, 4H), 3.16 (q, J = 7.3 Hz, 2H), 2.93 (tt, J = 11.5, 3.7 Hz, 1H), 2.08–1.99 (m, 2H), 1.68–1.63 (m, 2H), 1.37 (s, 9H), 1.01 (t, J = 7.1 Hz, 3H). 13 C-NMR (151 MHz, CDCl 3 ): δ 159.34, 157.79, 154.73, 127.39 (2C), 114.43 (2C), 84.91, 79.47, 67.50, 67.45 (2C), 55.24, 46.23, 45.96, 42.99, 41.99, 33.78, 31.26, 31.21 (2C), 29.62, 29.26, 28.47, 28.38 (3C), 13.86. HPLC-MS (ESI): purity = 97%, t R = 2.457 min, m/z [M + H] + = 510.2. General Procedure for the Synthesis of Intermediates f.1 – f.19 Boc-Deprotection The appropriate intermediate e.1 – e.19 was dissolved in 4 M HCl/dioxane, and the mixture was stirred at 25 °C for 2 h. When the reaction was complete, the solvent was removed in vacuo, and the residue was neutralized with Amberlyst A21 in a mixture of DCM and methanol. Amberlyst was filtered off, the solvent was removed in vacuo, and the residue was used in the next reaction without further purification. Reductive Amination A mixture of the crude product from step (a) above and 4-chloro-2-hydroxybenzaldehyde in methanol was stirred at 25 °C for 6 h. The mixture was cooled to 0 °C, and sodium borohydride (NaBH 4 ) was added portion-wise. After the addition, the reaction mixture was allowed to warm to room temperature (25 °C) and stirred for 2 h. The solvent was removed in vacuo, and the residue was diluted with deionized water. The compound was extracted with DCM and dried over anhydrous sodium sulfate. The solvent was removed in vacuo, and the residue was purified via column chromatography to obtain the desired product. 5-Chloro-2-(((2-((2-cyclopentyl-1-(4-methoxybenzyl)-1 H -imidazo[4,5- c ]pyridin-6-yl)amino)ethyl)(ethyl)amino)methyl)phenol ( f.1 ) Obtained from intermediate e.1 (160 mg, 0.41 mmol) and 4-chloro-2-hydroxybenzaldehyde (76 mg, 0.49 mmol) as a pale yellow sticky solid (53%, 116 mg); Rf (DCM:MeOH, 9:1) 0.50; 1 H-NMR (600 MHz, chloroform- d ): δ 8.44 (d, J = 1.0 Hz, 1H), 6.93 (d, J = 8.7 Hz, 2H), 6.82–6.80 (m, 3H), 6.73 (d, J = 2.1 Hz, 1H), 6.67 (dd, J = 8.0, 2.1 Hz, 1H), 6.03 (d, J = 1.0 Hz, 1H), 5.13 (s, 2H), 3.74 (s, 3H), 3.72 (s, 2H), 3.38 (t, J = 6.5 Hz, 2H), 3.14–3.08 (m, 1H), 2.72 (t, J = 6.5 Hz, 2H), 2.61 (q, J = 7.2 Hz, 2H), 1.98–1.92 (m, 4H), 1.86–1.79 (m, 2H), 1.64–1.57 (m, 2H), 1.03 (t, J = 7.2 Hz, 3H). 13 C-NMR (151 MHz, CDCl 3 ): δ 159.80, 159.29, 158.80, 154.09, 143.91, 138.46, 134.01, 133.98, 129.25, 128.58, 127.48, 127.39 (2C), 119.13, 116.42, 114.40 (2C), 85.32, 57.12, 55.27, 52.13, 47.60, 46.25, 40.48, 37.14, 32.10 (2C), 25.75 (2C), 10.91. HPLC-MS (ESI): purity = 98%, t R = 2.450 min, m/z [M + H] + = 534.2.
5-Chloro-2-((ethyl(2-((1-(4-methoxybenzyl)-2-(tetrahydrofuran-3-yl)-1 H -imidazo[4,5- c ]pyridin-6-yl)amino)ethyl)amino)methyl)phenol ( f.2 ) Obtained from intermediate e.2 (140 mg, 0.35 mmol) and 4-chloro-2-hydroxybenzaldehyde (66 mg, 0.42 mmol) as an orange-colored sticky solid (60%, 113 mg); Rf (DCM:MeOH, 9:1) 0.53; 1 H-NMR (600 MHz, chloroform- d ): δ 8.48 (d, J = 1.0 Hz, 1H), 6.93 (d, J = 8.7 Hz, 2H), 6.82–6.80 (m, 3H), 6.72 (d, J = 2.1 Hz, 1H), 6.66 (dd, J = 8.0, 2.1 Hz, 1H), 6.08 (d, J = 1.0 Hz, 1H), 5.13 (s, 2H), 4.02–3.95 (m, 3H), 3.93–3.84 (m, 2H), 3.74 (s, 3H), 3.74 (s, 2H), 3.42 (t, J = 6.5 Hz, 2H), 2.75 (t, J = 6.5 Hz, 2H), 2.64 (q, J = 7.2 Hz, 2H), 2.35–2.26 (m, 1H), 2.23–2.13 (m, 1H), 1.05 (t, J = 7.1 Hz, 3H). 13 C-NMR (151 MHz, CDCl 3 ): δ 159.44, 158.78, 155.83, 154.36, 143.95, 139.07, 134.00, 129.31, 127.39 (2C), 127.28, 120.34, 119.14, 116.41, 114.52 (2C), 85.13, 71.92, 68.32, 57.08, 55.28, 52.16, 47.66, 46.28, 40.41, 37.12, 31.87, 29.65, 10.91. HPLC-MS (ESI): purity = 98%, t R = 2.542 min, m/z [M + H] + = 536.2. 5-Chloro-2-((ethyl(2-((1-(4-methoxybenzyl)-2-(tetrahydro-2 H -pyran-4-yl)-1 H -imidazo[4,5- c ]pyridin-6-yl)amino)ethyl)amino)methyl)phenol ( f.3 ) Obtained from intermediate e.3 (90 mg, 0.22 mmol) and 4-chloro-2-hydroxybenzaldehyde (41 mg, 0.26 mmol) as an orange-colored sticky solid (54%, 66 mg); Rf (DCM: MeOH, 9:1) 0.44; 1 H-NMR (600 MHz, chloroform- d ): δ 8.49 (d, J = 1.0 Hz, 1H), 6.92 (d, J = 8.7 Hz, 2H), 6.84–6.79 (m, 3H), 6.72 (d, J = 2.1 Hz, 1H), 6.66 (dd, J = 8.0, 2.1 Hz, 1H), 6.05 (d, J = 1.0 Hz, 1H), 5.13 (s, 2H), 4.04–3.97 (m, 2H), 3.74 (s, 3H), 3.73 (s, 2H), 3.43–3.38 (m, 4H), 2.93 (tt, J = 11.5, 3.7 Hz, 1H), 2.74 (t, J = 6.4 Hz, 2H), 2.63 (q, J = 7.1 Hz, 2H), 2.09–2.00 (m, 2H), 1.70–1.63 (m, 2H), 1.04 (t, J = 7.1 Hz, 3H). 13 C-NMR (151 MHz, CDCl 3 ): δ 159.39, 158.79, 157.69, 154.31, 143.61, 139.03, 133.98, 129.32, 127.38, 127.33 (2C), 125.25, 120.35, 119.12, 116.39, 114.49 (2C), 85.35, 67.48 (2C), 57.04, 55.27, 52.15, 47.64, 46.22, 40.40, 33.80, 31.24 (2C), 10.90. HPLC-MS (ESI): purity = 98%, t R = 2.531 min, m/z [M + H] + = 550.2. General Procedure for the Synthesis of Target Compounds 1 – 19 The appropriate intermediate f.1 – f.19 was stirred in neat TFA (10 mL) at 100 °C for 16 h. Once the reaction was complete, TFA was removed under reduced pressure. The residue was dissolved in DCM/MeOH (9:1) and stirred with Amberlyst A21 for 1 h. The resin was filtered off, and the filtrate was concentrated under reduced pressure. The residue was purified via column chromatography to obtain the final product. 5-Chloro-2-(((2-((2-cyclopentyl-1 H -imidazo[4,5- c ]pyridin-6-yl)amino)ethyl)(ethyl)amino)methyl)phenol ( 1 ) Obtained from intermediate f.1 (116 mg, 0.22 mmol) as an off-white sticky solid (68%, 61 mg); Rf (DCM: MeOH, 9:1) 0.38; 1 H-NMR (600 MHz, DMSO- d 6 ): δ 8.18 (d, J = 1.0 Hz, 1H), 7.25 (d, J = 8.1 Hz, 1H), 6.81 (d, J = 2.1 Hz, 1H), 6.79 (dd, J = 8.1, 2.1 Hz, 1H), 6.44 (d, J = 1.0 Hz, 1H), 4.03 (s, 2H), 3.40 (t, J = 5.9 Hz, 2H), 3.20–3.13 (m, 1H), 2.98 (t, J = 5.8 Hz, 2H), 2.93 (q, J = 7.2 Hz, 2H), 2.02–1.95 (m, 2H), 1.84–1.77 (m, 2H), 1.73–1.66 (m, 2H), 1.63–1.56 (m, 2H), 1.13 (t, J = 7.2 Hz, 3H). 13 C-NMR (151 MHz, DMSO): δ 158.76, 154.94, 137.73, 132.39, 130.71, 130.00, 129.65, 128.84, 122.94, 118.83, 115.55, 86.67, 54.92, 52.22, 49.03, 47.10, 39.29, 31.92 (2C), 25.54 (2C), 11.31. HPLC-MS (ESI): purity = 98%, t R = 2.203 min, m/z [M + H] + = 414.1. 
5-Chloro-2-((ethyl(2-((2-(tetrahydrofuran-3-yl)-1 H -imidazo[4,5- c ]pyridin-6-yl)amino)ethyl)amino)methyl)phenol ( 2 ) Obtained from intermediate f.2 (113 mg, 0.21 mmol) as an off-white sticky solid (74%, 65 mg); Rf (DCM: MeOH, 9:1) 0.34; 1 H-NMR (600 MHz, DMSO- d 6 ): δ 8.21 (d, J = 1.0 Hz, 1H), 7.08 (d, J = 8.0 Hz, 1H), 6.73 (d, J = 2.1 Hz, 1H), 6.71 (dd, J = 8.0, 2.1 Hz, 1H), 6.32 (d, J = 1.0 Hz, 1H), 4.02–3.99 (m, 1H), 3.83 (t, J = 6.7 Hz, 2H), 3.78–3.73 (m, 1H), 3.71 (s, 2H), 3.56–3.49 (m, 1H), 3.35–3.31 (m, 2H), 2.65 (t, J = 6.7 Hz, 2H), 2.56 (q, J = 7.1 Hz, 2H), 2.28–2.17 (m, 2H), 0.98 (t, J = 7.1 Hz, 3H). 13 C-NMR (151 MHz, DMSO): δ 158.75, 155.54, 155.12, 142.76, 138.05, 134.83, 132.38, 130.70, 130.00, 122.96, 118.83, 115.55, 86.69, 71.63, 67.89, 54.93, 52.19, 49.03, 47.10, 31.48, 11.32. HPLC-MS (ESI): purity = 98%, t R = 0.298 min, m/z [M + H] + = 416.2. 5-Chloro-2-((ethyl(2-((2-(tetrahydro-2 H -pyran-4-yl)-1 H -imidazo[4,5- c ]pyridin-6-yl)amino)ethyl)amino)methyl)phenol ( 3 ) Obtained from intermediate f.3 (66 mg, 0.12 mmol) as an off-white sticky solid (80%, 41 mg); Rf (DCM: MeOH, 9:1) 0.29; 1 H-NMR (600 MHz, DMSO- d 6 ): δ 8.21 (d, J = 1.0 Hz, 1H), 7.15 (d, J = 7.9 Hz, 1H), 6.76 (d, J = 2.1 Hz, 1H), 6.74 (dd, J = 7.9, 2.1 Hz, 1H), 6.38 (d, J = 1.0 Hz, 1H), 3.91–3.86 (m, 2H), 3.83 (s, 2H), 3.44–3.39 (m, 2H), 3.36 (t, J = 6.4 Hz, 2H), 3.03–2.96 (m, 1H), 2.78 (t, J = 6.4 Hz, 2H), 2.71 (q, J = 7.2 Hz, 2H), 1.90–1.85 (m, 2H), 1.79–1.72 (m, 2H), 1.04 (t, J = 7.1 Hz, 3H). 13 C-NMR (151 MHz, DMSO): δ 158.55, 154.84, 142.80, 137.40, 133.97, 133.06, 131.58, 130.00, 129.68, 128.96, 119.01, 115.57, 66.96 (2C), 54.16, 52.89, 49.03, 47.44, 35.17, 31.12 (2C), 10.84. HPLC-MS (ESI): purity = 97%, t R = 0.430 min, m/z [M + H] + = 430.2. 5-Chloro-2-((ethyl(2-((2-methyl-1 H -imidazo[4,5- c ]pyridin-6-yl)amino)ethyl)amino)methyl)phenol ( 4 ) Obtained from intermediate f.4 (66 mg, 0.14 mmol) as an off-white sticky solid (71%, 36 mg); Rf (DCM: MeOH, 9:1) 0.26; 1 H-NMR (600 MHz, DMSO- d 6 ): δ 8.15 (d, J = 1.0 Hz, 1H), 7.08 (d, J = 8.0 Hz, 1H), 6.73 (d, J = 2.1 Hz, 1H), 6.71 (dd, J = 8.0, 2.1 Hz, 1H), 6.29 (d, J = 1.0 Hz, 1H), 3.70 (s, 2H), 3.32 (t, J = 6.4 Hz, 2H), 2.64 (t, J = 6.4 Hz, 2H), 2.55 (q, J = 7.2 Hz, 2H), 2.35 (s, 3H), 0.98 (t, J = 7.1 Hz, 3H). 13 C-NMR (151 MHz, DMSO): δ 158.75, 154.96, 151.25, 137.45, 132.35, 130.69, 130.01, 123.02, 118.83, 115.55, 86.62, 63.28, 54.92, 52.20, 49.04, 47.10, 15.01, 11.35. HPLC-MS (ESI): purity = 98%, t R = 0.219 min, m/z [M + H] + = 360.1. 5-Chloro-2-(((2-((2-cyclopropyl-1 H -imidazo[4,5- c ]pyridin-6-yl)amino)ethyl)(ethyl)amino)methyl)phenol ( 5 ) Obtained from intermediate f.5 (351 mg, 0.69 mmol) as an off-white sticky solid (68%, 181 mg); Rf (DCM: MeOH, 9:1) 0.30; 1 H-NMR (600 MHz, DMSO- d 6 ): δ 8.11 (d, J = 1.0 Hz, 1H), 7.10 (d, J = 8.0 Hz, 1H), 6.74 (d, J = 2.2 Hz, 1H), 6.72 (dd, J = 8.0, 2.1 Hz, 1H), 6.31 (d, J = 1.0 Hz, 1H), 3.74 (s, 2H), 3.32 (t, J = 6.7 Hz, 2H), 2.68 (t, J = 6.7 Hz, 2H), 2.59 (q, J = 7.2 Hz, 2H), 2.01–1.96 (m, 1H), 0.99 (t, J = 7.1 Hz, 3H), 0.95–0.93 (m, 4H). 13 C-NMR (151 MHz, DMSO): δ 158.69, 157.15, 154.78, 142.64, 137.07, 132.58, 130.97, 130.00, 122.52, 118.88, 115.56, 86.76, 54.69, 52.44, 49.03, 47.20, 11.18, 9.83 (2C), 8.93. HPLC-MS (ESI): purity = 98%, t R = 2.211 min, m/z [M + H] + = 386.1. 
5-Chloro-2-((ethyl(2-((2-(1-methylcyclopropyl)-1 H -imidazo[4,5- c ]pyridin-6-yl)amino)ethyl)amino)methyl)phenol ( 6 ) Obtained from intermediate f.6 (381 mg, 0.73 mmol) as an off-white sticky solid (86%, 250 mg); Rf (DCM: MeOH, 9:1) 0.29; 1 H-NMR (400 MHz, DMSO- d 6 ): δ 8.37 (d, J = 1.0 Hz, 1H), 7.40 (d, J = 8.2 Hz, 1H), 6.92 (d, J = 2.1 Hz, 1H), 6.87 (dd, J = 8.2, 2.1 Hz, 1H), 6.73 (d, J = 1.0 Hz, 1H), 4.30 (s, 2H), 3.66 (t, J = 6.1 Hz, 2H), 3.28 (t, J = 6.0 Hz, 2H), 3.22 (q, J = 7.2 Hz, 1H), 1.53 (s, 3H), 1.31–1.26 (m, 5H), 1.04–1.01 (m, 2H). 13 C-NMR (101 MHz, DMSO): δ 158.06, 152.70, 149.17, 143.15, 140.46, 135.35, 134.45, 131.71, 127.66, 116.35, 115.81, 108.67, 90.14, 51.45, 48.64, 38.07, 20.88, 17.89, 15.68 (2C), 9.19. HPLC-MS (ESI): purity = 98%, t R = 0.755 min, m/z [M + H] + = 400.2. 5-Chloro-2-((ethyl(2-((2-(1-(trifluoromethyl)cyclopropyl)-1 H -imidazo[4,5- c ]pyridin-6-yl)amino)ethyl)amino)methyl)phenol ( 7 ) Obtained from intermediate f.7 (59 mg, 0.10 mmol) as an off-white sticky solid (77%, 35 mg); Rf (DCM: MeOH, 9:1) 0.30; 1 H-NMR (400 MHz, DMSO- d 6 ): δ 8.50 (d, J = 1.0 Hz, 1H), 7.41 (d, J = 8.2 Hz, 1H), 6.93 (d, J = 2.1 Hz, 1H), 6.88 (dd, J = 8.1, 2.1 Hz, 1H), 6.74 (d, J = 1.0 Hz, 1H), 4.32 (s, 2H), 3.65 (t, J = 5.9 Hz, 2H), 3.30 (t, J = 5.9 Hz, 2H), 3.24 (q, J = 7.1 Hz, 2H), 1.60–1.55 (m, 4H), 1.29 (t, J = 7.2 Hz, 3H). 13 C-NMR (101 MHz, DMSO): δ 158.05, 152.50, 149.26, 146.22, 141.57, 135.40, 134.48, 129.93, 124.43, 119.59, 116.31, 115.81, 111.78, 51.57, 48.70, 38.22, 31.43, 23.37, 12.03 (2C), 9.18. HPLC-MS (ESI): purity = 98%, t R = 2.194 min, m/z [M + H] + = 454.1. 5-Chloro-2-(((2-((2-(cyclopropylmethyl)-1 H -imidazo[4,5- c ]pyridin-6-yl)amino)ethyl)(ethyl)amino)methyl)phenol ( 8 ) Obtained from intermediate f.8 (239 mg, 0.46 mmol) as an off-white sticky solid (88%, 162 mg); Rf (DCM: MeOH, 9:1) 0.35; 1 H-NMR (400 MHz, DMSO- d 6 ): δ 8.46 (d, J = 1.0 Hz, 1H), 7.41 (d, J = 8.2 Hz, 1H), 6.94 (d, J = 2.1 Hz, 1H), 6.87 (dd, J = 8.1, 2.1 Hz, 1H), 6.79 (d, J = 1.0 Hz, 1H), 4.31 (s, 2H), 3.68 (t, J = 6.0 Hz, 2H), 3.30 (t, J = 6.0 Hz, 2H), 3.24 (q, J = 7.2 Hz, 2H), 2.80 (d, J = 7.0 Hz, 2H), 1.30 (t, J = 7.2 Hz, 3H), 1.20–1.15 (m, 1H), 0.57–0.52 (m, 2H), 0.34–0.29 (m, 2H). 13 C-NMR (101 MHz, DMSO): δ 159.52, 158.11, 152.35, 146.06, 135.35, 134.43, 131.03, 125.99, 119.54, 116.36, 115.84, 89.98, 56.56, 52.04, 51.47, 48.64, 38.03, 33.06, 9.19, 4.94 (2C). HPLC-MS (ESI): purity = 98%, t R = 0.588 min, m/z [M + H] + = 400.2. 5-Chloro-2-(((2-((2-cyclobutyl-1 H -imidazo[4,5- c ]pyridin-6-yl)amino)ethyl)(ethyl)amino)methyl)phenol ( 9 ) Obtained from intermediate f.9 (311 mg, 0.60 mmol) as an off-white sticky solid (79%, 191 mg); Rf (DCM: MeOH, 9:1) 0.31; 1 H-NMR (400 MHz, DMSO- d 6 ): δ 8.45 (d, J = 1.0 Hz, 1H), 7.41 (d, J = 8.2 Hz, 1H), 6.94 (d, J = 2.1 Hz, 1H), 6.85 (dd, J = 8.2, 2.1 Hz, 1H), 6.78 (d, J = 1.0 Hz, 1H), 4.31 (s, 2H), 3.83–3.72 (m, 1H), 3.69 (t, J = 6.1 Hz, 2H), 3.29 (t, J = 6.0 Hz, 2H), 3.23 (q, J = 7.2 Hz, 2H), 2.46–2.34 (m, 4H), 2.15–2.02 (m, 1H), 2.00–1.89 (m, 1H), 1.30 (t, J = 7.1 Hz, 3H). 13 C-NMR (101 MHz, DMSO) δ 163.14, 159.51, 158.11, 151.94, 135.38, 134.37, 119.50, 116.27, 115.86, 89.93, 79.09, 51.89, 51.45, 48.65, 40.46, 38.02, 33.40, 27.37 (2C), 18.60, 9.19. HPLC-MS (ESI): purity = 98%, t R = 0.921 min, m/z [M + H] + = 400.2. 
5-Chloro-2-(((2-((2-(3,3-difluorocyclobutyl)-1 H -imidazo[4,5- c ]pyridin-6-yl)amino)ethyl)(ethyl)amino)methyl)phenol ( 10 ) Obtained from intermediate f.10 (274 mg, 0.49 mmol) as an off-white sticky solid (84%, 179 mg); Rf (DCM: MeOH, 9:1) 0.29; 1 H-NMR (600 MHz, DMSO- d 6 ): δ 8.51 (d, J = 1.0 Hz, 1H), 7.37 (d, J = 8.2 Hz, 1H), 6.88 (d, J = 2.1 Hz, 1H), 6.84 (dd, J = 8.2, 2.1 Hz, 1H), 6.80 (d, J = 1.0 Hz, 1H), 4.28 (s, 2H), 3.66 (t, J = 6.3 Hz, 2H), 3.64–3.60 (m, 1H), 3.27 (t, J = 6.2 Hz, 2H), 3.20 (q, J = 7.2 Hz, 2H), 3.10–2.94 (m, 4H), 1.26 (t, J = 7.2 Hz, 3H). 13 C-NMR (151 MHz, DMSO): δ 159.36, 158.02, 150.81, 147.81, 135.40, 134.50, 133.25, 122.12, 120.25, 119.58, 118.45, 116.14, 115.75, 89.93, 63.24, 51.32, 48.61, 37.86, 21.96 (2C), 9.11. HPLC-MS (ESI): purity = 99%, t R = 1.808 min, m/z [M + H] + = 436.1. 5-Chloro-2-(((2-((2-cyclohexyl-1 H -imidazo[4,5- c ]pyridin-6-yl)amino)ethyl)(ethyl)amino)methyl)phenol ( 11 ) Obtained from intermediate f.11 (274 mg, 0.50 mmol) as an off-white sticky solid (71%, 152 mg); Rf (DCM: MeOH, 9:1) 0.32; 1 H-NMR (600 MHz, DMSO- d 6 ): δ 8.44 (d, J = 1.0 Hz, 1H), 7.36 (d, J = 8.2 Hz, 1H), 6.88 (d, J = 2.1 Hz, 1H), 6.82 (dd, J = 8.2, 2.1 Hz, 1H), 6.76 (d, J = 1.0 Hz, 1H), 4.26 (s, 2H), 3.65 (t, J = 6.2 Hz, 2H), 3.25 (t, J = 6.2 Hz, 2H), 3.19 (q, J = 7.2 Hz, 2H), 2.89 (tt, J = 11.7, 3.7 Hz, 1H), 2.01–1.96 (m, 2H), 1.77–1.73 (m, 2H), 1.67–1.63 (m, 1H), 1.59–1.52 (m, 2H), 1.39–1.30 (m, 2H), 1.25 (t, J = 7.2 Hz, 3H), 1.23–1.17 (m, 1H). 13 C-NMR (151 MHz, DMSO): δ 164.77, 159.42, 158.04, 151.60, 146.44, 135.37, 134.44, 130.11, 126.08, 119.51, 116.14, 115.75, 110.92, 89.86, 51.33, 48.63, 37.63, 30.78 (2C), 25.69, 25.57 (2C), 9.09. HPLC-MS (ESI): purity = 97%, t R = 2.238 min, m/z [M + H] + = 428.2. 5-Chloro-2-(((2-((2-(4,4-difluorocyclohexyl)-1 H -imidazo[4,5- c ]pyridin-6-yl)amino)ethyl)(ethyl)amino)methyl)phenol ( 12 ) Obtained from intermediate f.12 (381 mg, 0.65 mmol) as an off-white sticky solid (83%, 250 mg); Rf (DCM: MeOH, 9:1) 0.37; 1 H-NMR (600 MHz, DMSO- d 6 ): δ 8.46 (d, J = 1.0 Hz, 1H), 7.36 (d, J = 8.2 Hz, 1H), 6.89 (d, J = 2.1 Hz, 1H), 6.81 (dd, J = 8.2, 2.1 Hz, 1H), 6.80 (d, J = 1.0 Hz, 1H), 4.27 (s, 2H), 3.76–3.70 (m, 1H), 3.67 (t, J = 6.3 Hz, 2H), 3.27 (t, J = 6.2 Hz, 2H), 3.19 (q, J = 7.1 Hz, 2H), 3.12–3.06 (m, 1H), 2.11–2.07 (m, 4H), 2.01–1.89 (m, 1H), 1.88–1.80 (m, 2H), 1.25 (t, J = 7.2 Hz, 3H). 13 C-NMR (151 MHz, DMSO): δ 158.09, 150.96, 135.35, 134.42, 129.70, 125.56, 123.97, 122.38, 119.46, 116.12, 115.78, 89.88, 62.47, 51.26, 48.62, 37.80, 35.06, 32.56, 27.12 (2C), 25.84 (2C), 9.08. HPLC-MS (ESI): purity = 98%, t R = 2.150 min, m/z [M + H] + = 464.2. 5-Chloro-2-((ethyl(2-((2-(1-methylpyrrolidin-2-yl)-1 H -imidazo[4,5- c ]pyridin-6-yl)amino)ethyl)amino)methyl)phenol ( 13 ) Obtained from intermediate f.13 (51 mg, 0.10 mmol) as a yellow sticky solid (71%, 31 mg); Rf (DCM: MeOH, 9:1) 0.20; 1 H-NMR (600 MHz, DMSO- d 6 ): δ 8.44 (d, J = 1.0 Hz, 1H), 7.38 (d, J = 8.2 Hz, 1H), 6.92 (d, J = 2.1 Hz, 1H), 6.83 (dd, J = 8.2, 2.1 Hz, 1H), 6.66 (d, J = 1.0 Hz, 1H), 4.77 (dd, J = 8.1, 4.3 Hz, 1H), 4.27 (s, 2H), 3.58 (t, J = 5.9 Hz, 2H), 3.24 (t, J = 5.9 Hz, 2H), 3.20 (q, J = 7.2 Hz, 2H), 2.94–2.86 (m, 2H), 2.60–2.53 (m, 2H), 2.27–1.98 (m, 2H), 1.72 (s, 3H), 1.24 (t, J = 7.2 Hz, 3H). 13 C-NMR (151 MHz, DMSO): δ 172.03, 159.23, 158.81, 158.11, 153.85, 135.31, 134.45, 120.46, 119.47, 118.48, 116.50, 116.33, 115.80, 114.52, 52.89, 51.50, 48.58, 38.13, 29.75, 22.88, 22.29, 9.14. HPLC-MS (ESI): purity = 97%, t R = 0.141 min, m/z [M + H] + = 429.2. 
5-Chloro-2-((ethyl(2-((2-(3-(trifluoromethyl)phenyl)-1 H -imidazo[4,5- c ]pyridin-6-yl)amino)ethyl)amino)methyl)phenol ( 14 ) Obtained from intermediate f.14 (231 mg, 0.38 mmol) as an off-white sticky solid (81%, 151 mg); Rf (DCM: MeOH, 9:1) 0.30; 1 H-NMR (600 MHz, DMSO- d 6 ): δ 8.53 (d, J = 1.0 Hz, 1H), 8.47 (dd, J = 2.1, 1.5 Hz, 1H), 8.44 (ddd, J = 7.8, 1.6, 1.5 Hz, 1H), 7.88 (ddd, J = 7.8, 2.1, 1.6 Hz, 1H), 7.79 (t, J = 7.8 Hz, 1H), 7.41 (d, J = 8.2 Hz, 1H), 6.90 (d, J = 2.1 Hz, 1H), 6.85 (dd, J = 8.2, 2.1 Hz, 1H), 6.75 (d, J = 1.0 Hz, 1H), 4.28 (s, 2H), 3.64 (t, J = 6.0 Hz, 2H), 3.26 (t, J = 5.9 Hz, 2H), 3.21 (q, J = 7.2 Hz, 2H), 1.27 (t, J = 7.2 Hz, 3H). 13 C-NMR (151 MHz, DMSO): δ 158.96, 158.75, 158.54, 158.32, 158.04, 135.32, 134.55, 131.14, 130.85, 130.47, 127.62, 125.27, 123.74, 120.50, 120.36, 119.56, 118.39, 116.28, 115.75, 51.39, 48.58, 38.16, 22.91, 9.16. HPLC-MS (ESI): purity = 98%, t R = 2.422 min, m/z [M + H] + = 490.1. 5-Chloro-2-(((2-((2-cycloheptyl-1 H -imidazo[4,5- c ]pyridin-6-yl)amino)ethyl)(ethyl)amino)methyl)phenol ( 15 ) Obtained from intermediate f.15 (76 mg, 0.14 mmol) as an off-white sticky solid (92%, 57 mg); Rf (DCM: MeOH, 9:1) 0.38; 1 H-NMR (600 MHz, DMSO- d 6 ): δ 8.41 (d, J = 1.0 Hz, 1H), 7.37 (d, J = 8.2 Hz, 1H), 6.89 (d, J = 2.2 Hz, 1H), 6.82 (dd, J = 8.2, 2.1 Hz, 1H), 6.74 (d, J = 1.0 Hz, 1H), 4.26 (s, 2H), 3.64 (t, J = 6.2 Hz, 2H), 3.25 (t, J = 6.2 Hz, 2H), 3.19 (q, J = 7.2 Hz, 2H), 3.07 (tt, J = 9.4, 4.4 Hz, 1H), 2.03–1.96 (m, 2H), 1.84–1.76 (m, 2H), 1.62–1.56 (m, 4H), 1.55–1.48 (m, 4H), 1.25 (t, J = 7.1 Hz, 3H). 13 C-NMR (151 MHz, DMSO): δ 172.01, 159.28, 159.06, 158.85, 158.64, 158.06, 135.31, 134.44, 119.49, 118.40, 116.22, 115.76, 89.77, 51.31, 48.60, 37.84, 32.67 (2C), 28.14 (2C), 26.17 (2C), 22.88, 9.12. HPLC-MS (ESI): purity = 99%, t R = 2.413 min, m/z [M + H] + = 442.2. 5-Chloro-2-((ethyl(2-((2-((1r,4r)-4-(trifluoromethyl)cyclohexyl)-1 H -imidazo[4,5- c ]pyridin-6-yl)amino)ethyl)amino)methyl)phenol ( 16 ) Obtained from intermediate f.16 (81 mg, 0.13 mmol) as an off-white sticky solid (88%, 57 mg); Rf (DCM: MeOH, 9:1) 0.38; 1 H-NMR (600 MHz, DMSO- d 6 ): δ 8.42 (d, J = 1.0 Hz, 1H), 7.37 (d, J = 8.2 Hz, 1H), 6.89 (d, J = 2.1 Hz, 1H), 6.82 (dd, J = 8.2, 2.1 Hz, 1H), 6.76 (d, J = 1.0 Hz, 1H), 4.26 (s, 2H), 3.64 (t, J = 6.2 Hz, 2H), 3.25 (t, J = 6.2 Hz, 2H), 3.18 (q, J = 7.2 Hz, 2H), 2.89 (tt, J = 12.1, 3.6 Hz, 1H), 2.36–2.27 (m, 1H), 2.16–2.12 (m, 2H), 1.98–1.94 (m, 2H), 1.66–1.58 (m, 2H), 1.45–1.35 (m, 2H), 1.25 (t, J = 7.2 Hz, 3H). 13 C-NMR (151 MHz, CDCl 3 ): δ 159.44, 158.78, 154.20, 143.67, 138.51, 134.04, 134.02, 129.32, 127.19, 120.34, 119.14, 116.42, 85.28, 57.08, 55.29, 52.10, 47.69, 46.36, 46.24, 41.06, 35.44, 29.99 (2C), 10.89. HPLC-MS (ESI): purity = 98%, t R = 2.413 min, m/z [M + H] + = 496.2. 5-Chloro-2-((ethyl(2-((2-((1r,4r)-4-(trifluoromethyl)cyclohexyl)-1 H -imidazo[4,5- c ]pyridin-6-yl)Amino)ethyl)amino)methyl)phenol ( 17 ) Obtained from intermediate f.17 (77 mg, 0.12 mmol) as an off-white sticky solid (78%, 47 mg); Rf (DCM: MeOH, 9:1) 0.36; 1 H-NMR (600 MHz, DMSO): δ 8.40 (d, J = 1.0 Hz, 1H), 7.36 (d, J = 8.2 Hz, 1H), 6.87 (d, J = 2.2 Hz, 1H), 6.83 (dd, J = 8.2, 2.1 Hz, 1H), 6.71 (d, J = 1.0 Hz, 1H), 4.26 (s, 2H), 3.25–3.22 (m, 4H), 3.18 (q, J = 7.2 Hz, 2H), 2.88 (tt, J = 12.3, 3.7 Hz, 1H), 2.37–2.28 (m, 1H), 2.17–2.12 (m, 2H), 1.99–1.94 (m, 2H), 1.66–1.58 (m, 2H), 1.45–1.36 (m, 2H), 1.24 (t, J = 7.2 Hz, 3H). 
13 C-NMR (151 MHz, CDCl 3 ): δ 159.44, 158.78, 154.20, 143.67, 138.51, 134.04, 134.02, 129.32, 127.19, 120.34, 119.14, 116.42, 85.28, 57.08, 55.29, 52.10, 47.69, 46.36, 46.24, 41.06, 35.44, 29.99 (2C), 10.89. HPLC-MS (ESI): purity = 97%, t R = 2.343 min, m/z [M + H] + = 496.2. 5-Chloro-2-((ethyl(2-((2-(4-methoxycyclohexyl)-1 H -imidazo[4,5- c ]pyridin-6-yl)amino)ethyl)amino)methyl)phenol ( 18 ) Obtained from intermediate f.18 (73 mg, 0.13 mmol) as an off-white sticky solid (77%, 46 mg); Rf (DCM: MeOH, 9:1) 0.50; 1 H-NMR (600 MHz, DMSO- d 6 ): δ 8.37 (d, J = 1.0 Hz, 1H), 8.13 (d, J = 8.1 Hz, 1H), 6.89 (d, J = 2.1 Hz, 1H), 6.83 (dd, J = 8.2, 2.1 Hz, 1H), 6.62 (d, J = 1.0 Hz, 1H), 4.25 (s, 2H), 3.64–3.53 (m, 1H), 3.22 (s, 3H), 3.12 (tt, J = 10.88, 4.11 Hz, 1H), 2.92 (t, J = 6.2 Hz, 2H), 2.81 (t, J = 6.2 Hz, 2H), 2.58 (q, J = 7.2 Hz, 2H), 2.08–2.02 (m, 2H), 1.88–1.77 (m, 2H), 1.75–1.71 (m, 2H), 1.63–1.47 (m, 2H), 1.23 (t, J = 7.2 Hz, 3H). 13 C-NMR (151 MHz, CDCl 3 ): δ 159.47, 153.50, 144.28, 136.87, 134.14, 129.50, 127.40, 119.22, 116.44, 85.54, 78.32, 73.57, 56.94, 55.77, 55.30, 52.01, 47.89, 40.43, 35.67, 31.57 (2C), 29.79 (2C), 10.87. HPLC-MS (ESI): purity = 97%, t R = 2.124 min, m/z [M + H] + = 458.2. 5-Chloro-2-((ethyl(2-((2-(6-(trifluoromethyl)pyridin-3-yl)-1 H -imidazo[4,5- c ]pyridin-6-yl)amino)ethyl)amino)methyl)phenol ( 19 ) Obtained from intermediate f.19 (55 mg, 0.09 mmol) as an off-white solid (81%, 36 mg); Rf (DCM: MeOH, 9:1) 0.38; 1 H-NMR (600 MHz, DMSO- d 6 ): δ 9.44 (d, J = 2.1 Hz, 1H), 8.70 (dd, J = 8.2, 2.1 Hz, 1H), 8.52 (d, J = 1.0 Hz, 1H), 8.09 (d, J = 8.2 Hz, 1H), 7.38 (d, J = 8.2 Hz, 1H), 6.89 (d, J = 2.1 Hz, 1H), 6.85 (dd, J = 8.2, 2.1 Hz, 1H), 6.67 (d, J = 1.0 Hz, 1H), 4.28 (s, 2H), 3.59 (t, J = 5.8 Hz, 2H), 3.25 (t, J = 5.9 Hz, 2H), 3.21 (q, J = 7.2 Hz, 2H), 1.27 (t, J = 7.2 Hz, 3H). 13 C-NMR (151 MHz, CDCl 3 ): δ 159.54, 158.60, 155.00, 149.46, 144.49, 140.74, 137.88, 134.69, 134.25, 129.57, 128.82, 126.84, 120.39, 119.88, 119.26, 114.79, 85.39, 56.84, 55.31, 52.20, 47.84, 41.84, 10.77. HPLC-MS (ESI): purity = 98%, t R = 2.352 min, m/z [M + H] + = 491.1. 5-Chloro-2-(((2-((7-nitrobenzo[c][1,2,5]oxadiazol-4-yl)amino)ethyl)(2-((2-(3-(trifluoromethyl)phenyl)-1 H -imidazo[4,5- c ]pyridin-6-yl)amino)ethyl)amino)methyl)phenol ( 14-NBD ) Obtained from intermediate m (339 mg, 0.56 mmol) and NBD-chloride (254 mg, 1.27 mmol) as a brick-red solid (50%, 187 mg); R f (DCM: MeOH, 9:1) 0.56; 1 H NMR (600 MHz, DMSO- d 6 ): δ 8.39 (s, 1H), 8.37 (d, J = 8.8 Hz, 1H), 8.35 (dd, J = 2.8, 2.2 Hz, 1H), 8.31 (d, J = 8.8 Hz, 1H), 7.82 (dd, J = 7.8, 2.2 Hz, 1H), 7.74 (ddd, J = 7.8, 2.8, 2.3 Hz, 1H), 7.13 (dd, J = 8.4, 7.8 Hz, 1H), 6.85 (d, J = 7.8 Hz, 1H), 6.59 (dd, J = 8.4, 2.3 Hz, 1H), 6.19 (d, J = 2.2 Hz, 1H), 6.04 (s, 1H), 3.70 (s, 2H), 3.53 (t, J = 6.3 Hz, 2H), 3.39 (t, J = 6.0 Hz, 2H), 2.82–2.76 (m, 4H). 13 C NMR (151 MHz, DMSO): δ 159.13, 158.92, 157.57, 155.85, 154.95, 149.82, 147.41, 143.12, 139.53, 138.17, 137.90, 135.58, 132.31, 131.80, 131.35, 129.01, 128.78, 126.30, 123.09, 118.92, 115.35, 99.50, 92.26, 86.51, 55.54, 53.70, 51.38, 44.39, 41.56, 36.89. HPLC-MS (ESI): purity = 98%, t R = 0.905 min, m/z [M + H] + = 668.2. In Vitro P. falciparum Assay Compounds were screened against multi-drug-resistant (K1) and sensitive (NF54) strains of P. falciparum in vitro using the parasite lactate dehydrogenase (pLDH) assay (the method is described fully in the Supporting Information ).
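In pLDH-type assays, reported IC 50 values (e.g., 0.08 μM for 14 against Pf NF54) are typically obtained by fitting percent-viability dose-response data to a sigmoidal model. The snippet below is a generic illustration of such a fit, not the analysis pipeline used in this study; the concentration and response values are placeholders.

```python
# Generic illustration (not this study's pipeline): extracting an IC50
# from pLDH dose-response data with a four-parameter logistic model.
# Concentrations and responses below are placeholder values.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(c, top, bottom, ic50, hill):
    """% parasite viability as a sigmoidal function of drug concentration."""
    return bottom + (top - bottom) / (1.0 + (c / ic50) ** hill)

conc = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0])    # µM (assumed)
viab = np.array([98.0, 90.0, 45.0, 15.0, 5.0, 2.0])  # % viability (assumed)

popt, _ = curve_fit(four_pl, conc, viab, p0=[100.0, 0.0, 0.1, 1.0])
print(f"Fitted IC50 ≈ {popt[2]:.3f} µM")
```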
In Vitro Cytotoxicity Assay In vitro cytotoxicity was performed on the Chinese Hamster Ovarian (CHO) cell line by measuring cellular growth and survival colorimetrically through the MTT assay. The reduction of the tetrazolium salt to a colored formazan was used as a measure of chemosensitivity and growth. (Details of this assay are described in the Supporting Information .) In Vitro Microsomal Stability Assay The in vitro microsomal stability assay was performed in duplicate in a 96-well microtiter plate using a single-point experimental design. The test compounds (1 μM) were incubated individually in human (pool of 50, mixed-gender), rat (pool of 711, male Sprague Dawley), and mouse (pool of 1634, male CD1) liver microsomes (final protein concentration of 0.4 mg/mL; Xenotech, Kansas, USA), suspended in 0.1 M phosphate buffer (pH 7.4). (Details of this assay are described in the Supporting Information .)
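Because the stability assay uses a single-point design, the readout is usually converted to kinetic parameters under a first-order depletion assumption. The sketch below shows the standard conversion; the 30 min time point and 20% remaining are illustrative assumptions, with only the 0.4 mg/mL protein concentration taken from the assay description above.

```python
# Minimal sketch, assuming first-order substrate depletion in the
# single-point microsomal assay: convert % parent remaining at time t
# into an in vitro half-life and intrinsic clearance (CLint).
import math

protein_mg_per_ml = 0.4    # microsomal protein, as stated in the assay
t_min = 30.0               # incubation time (assumed for illustration)
fraction_remaining = 0.20  # parent compound remaining (assumed)

k = -math.log(fraction_remaining) / t_min     # depletion rate constant, min^-1
t_half = math.log(2) / k                      # in vitro half-life, min
clint = (k / protein_mg_per_ml) * 1000.0      # µL/min/mg protein

print(f"t1/2 = {t_half:.1f} min, CLint = {clint:.0f} µL/min/mg")
```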
5-Chloro-2-((ethyl(2-((2-(1-methylpyrrolidin-2-yl)-1 H -imidazo[4,5- c ]pyridin-6-yl)amino)ethyl)amino)methyl)phenol ( 13 ) Obtained from intermediate f.13 (51 mg, 0.10 mmol) as a yellow sticky solid (71%, 31 mg); Rf (DCM: MeOH, 9:1) 0.20; 1 H-NMR (600 MHz, DMSO- d 6 ): δ 8.44 (d, J = 1.0 Hz, 1H), 7.38 (d, J = 8.2 Hz, 1H), 6.92 (d, J = 2.1 Hz, 1H), 6.83 (dd, J = 8.2, 2.1 Hz, 1H), 6.66 (d, J = 1.0 Hz, 1H), 4.77 (dd, J = 8.1, 4.3 Hz, 1H), 4.27 (s, 2H), 3.58 (t, J = 5.9 Hz, 2H), 3.24 (t, J = 5.9 Hz, 2H), 3.20 (q, J = 7.2 Hz, 2H), 2.94–2.86 (m, 2H), 2.60–2.53 (m, 2H), 2.27–1.98 (m, 2H), 1.72 (s, 3H), 1.24 (t, J = 7.2 Hz, 3H). 13 C-NMR (151 MHz, DMSO): δ 172.03, 159.23, 158.81, 158.11, 153.85, 135.31, 134.45, 120.46, 119.47, 118.48, 116.50, 116.33, 115.80, 114.52, 52.89, 51.50, 48.58, 38.13, 29.75, 22.88, 22.29, 9.14. HPLC-MS (ESI): purity = 97%, t R = 0.141 min, m/z [M + H] + = 429.2. 5-Chloro-2-((ethyl(2-((2-(3-(trifluoromethyl)phenyl)-1 H -imidazo[4,5- c ]pyridin-6-yl)amino)ethyl)amino)methyl)phenol ( 14 ) Obtained from intermediate f.14 (231 mg, 0.38 mmol) as an off-white sticky solid (81%, 151 mg); Rf (DCM: MeOH, 9:1) 0.30; 1 H-NMR (600 MHz, DMSO- d 6 ): δ 8.53 (d, J = 1.0 Hz, 1H), 8.47 (dd, J = 2.1, 1.5 Hz, 1H), 8.44 (ddd, J = 7.8, 1.6, 1.5 Hz, 1H), 7.88 (ddd, J = 7.8, 2.1, 1.6 Hz, 1H), 7.79 (t, J = 7.8 Hz, 1H), 7.41 (d, J = 8.2 Hz, 1H), 6.90 (d, J = 2.1 Hz, 1H), 6.85 (dd, J = 8.2, 2.1 Hz, 1H), 6.75 (d, J = 1.0 Hz, 1H), 4.28 (s, 2H), 3.64 (t, J = 6.0 Hz, 2H), 3.26 (t, J = 5.9 Hz, 2H), 3.21 (q, J = 7.2 Hz, 2H), 1.27 (t, J = 7.2 Hz, 3H). 13 C-NMR (151 MHz, DMSO): δ 158.96, 158.75, 158.54, 158.32, 158.04, 135.32, 134.55, 131.14, 130.85, 130.47, 127.62, 125.27, 123.74, 120.50, 120.36, 119.56, 118.39, 116.28, 115.75, 51.39, 48.58, 38.16, 22.91, 9.16. HPLC-MS (ESI): purity = 98%, t R = 2.422 min, m/z [M + H] + = 490.1. 5-Chloro-2-(((2-((2-cycloheptyl-1 H -imidazo[4,5- c ]pyridin-6-yl)amino)ethyl)(ethyl)amino)methyl)phenol ( 15 ) Obtained from intermediate f.15 (76 mg, 0.14 mmol) as an off-white sticky solid (92%, 57 mg); Rf (DCM: MeOH, 9:1) 0.38; 1 H-NMR (600 MHz, DMSO- d 6 ): δ 8.41 (d, J = 1.0 Hz, 1H), 7.37 (d, J = 8.2 Hz, 1H), 6.89 (d, J = 2.2 Hz, 1H), 6.82 (dd, J = 8.2, 2.1 Hz, 1H), 6.74 (d, J = 1.0 Hz, 1H), 4.26 (s, 2H), 3.64 (t, J = 6.2 Hz, 2H), 3.25 (t, J = 6.2 Hz, 2H), 3.19 (q, J = 7.2 Hz, 2H), 3.07 (tt, J = 9.4, 4.4 Hz, 1H), 2.03–1.96 (m, 2H), 1.84–1.76 (m, 2H), 1.62–1.56 (m, 4H), 1.55–1.48 (m, 4H), 1.25 (t, J = 7.1 Hz, 3H). 13 C-NMR (151 MHz, DMSO): δ 172.01, 159.28, 159.06, 158.85, 158.64, 158.06, 135.31, 134.44, 119.49, 118.40, 116.22, 115.76, 89.77, 51.31, 48.60, 37.84, 32.67 (2C), 28.14 (2C), 26.17 (2C), 22.88, 9.12. HPLC-MS (ESI): purity = 99%, t R = 2.413 min, m/z [M + H] + = 442.2. 5-Chloro-2-((ethyl(2-((2-((1r,4r)-4-(trifluoromethyl)cyclohexyl)-1 H -imidazo[4,5- c ]pyridin-6-yl)amino)ethyl)amino)methyl)phenol ( 16 ) Obtained from intermediate f.16 (81 mg, 0.13 mmol) as an off-white sticky solid (88%, 57 mg); Rf (DCM: MeOH, 9:1) 0.38; 1 H-NMR (600 MHz, DMSO- d 6 ): δ 8.42 (d, J = 1.0 Hz, 1H), 7.37 (d, J = 8.2 Hz, 1H), 6.89 (d, J = 2.1 Hz, 1H), 6.82 (dd, J = 8.2, 2.1 Hz, 1H), 6.76 (d, J = 1.0 Hz, 1H), 4.26 (s, 2H), 3.64 (t, J = 6.2 Hz, 2H), 3.25 (t, J = 6.2 Hz, 2H), 3.18 (q, J = 7.2 Hz, 2H), 2.89 (tt, J = 12.1, 3.6 Hz, 1H), 2.36–2.27 (m, 1H), 2.16–2.12 (m, 2H), 1.98–1.94 (m, 2H), 1.66–1.58 (m, 2H), 1.45–1.35 (m, 2H), 1.25 (t, J = 7.2 Hz, 3H). 
13 C-NMR (151 MHz, CDCl 3 ): δ 159.44, 158.78, 154.20, 143.67, 138.51, 134.04, 134.02, 129.32, 127.19, 120.34, 119.14, 116.42, 85.28, 57.08, 55.29, 52.10, 47.69, 46.36, 46.24, 41.06, 35.44, 29.99 (2C), 10.89. HPLC-MS (ESI): purity = 98%, t R = 2.413 min, m/z [M + H] + = 496.2. 5-Chloro-2-((ethyl(2-((2-((1r,4r)-4-(trifluoromethyl)cyclohexyl)-1 H -imidazo[4,5- c ]pyridin-6-yl)amino)ethyl)amino)methyl)phenol ( 17 ) Obtained from intermediate f.17 (77 mg, 0.12 mmol) as an off-white sticky solid (78%, 47 mg); Rf (DCM: MeOH, 9:1) 0.36; 1 H-NMR (600 MHz, DMSO): δ 8.40 (d, J = 1.0 Hz, 1H), 7.36 (d, J = 8.2 Hz, 1H), 6.87 (d, J = 2.2 Hz, 1H), 6.83 (dd, J = 8.2, 2.1 Hz, 1H), 6.71 (d, J = 1.0 Hz, 1H), 4.26 (s, 2H), 3.25–3.22 (m, 4H), 3.18 (q, J = 7.2 Hz, 2H), 2.88 (tt, J = 12.3, 3.7 Hz, 1H), 2.37–2.28 (m, 1H), 2.17–2.12 (m, 2H), 1.99–1.94 (m, 2H), 1.66–1.58 (m, 2H), 1.45–1.36 (m, 2H), 1.24 (t, J = 7.2 Hz, 3H). 13 C-NMR (151 MHz, CDCl 3 ): δ 159.44, 158.78, 154.20, 143.67, 138.51, 134.04, 134.02, 129.32, 127.19, 120.34, 119.14, 116.42, 85.28, 57.08, 55.29, 52.10, 47.69, 46.36, 46.24, 41.06, 35.44, 29.99 (2C), 10.89. HPLC-MS (ESI): purity = 97%, t R = 2.343 min, m/z [M + H] + = 496.2. 5-Chloro-2-((ethyl(2-((2-(4-methoxycyclohexyl)-1 H -imidazo[4,5- c ]pyridin-6-yl)amino)ethyl)amino)methyl)phenol ( 18 ) Obtained from intermediate f.18 (73 mg, 0.13 mmol) as an off-white sticky solid (77%, 46 mg); Rf (DCM: MeOH, 9:1) 0.50; 1 H-NMR (600 MHz, DMSO): δ 8.37 (d, J = 1.0 Hz, 1H), 8.13 (d, J = 8.1 Hz, 1H), 6.89 (d, J = 2.1 Hz, 1H), 6.83 (dd, J = 8.2, 2.1 Hz, 1H), 6.62 (d, J = 1.0 Hz, 1H), 4.25 (s, 2H), 3.64–3.53 (m, 1H), 3.22 (s, 3H), 3.12 (tt, J = 10.88, 4.11 Hz, 1H), 2.92 (t, J = 6.2 Hz, 2H), 2.81 (t, J = 6.2 Hz, 2H), 2.58 (q, J = 7.2 Hz, 2H), 2.08–2.02 (m, 2H), 1.88–1.77 (m, 2H), 1.75–1.71 (m, 2H), 1.63–1.47 (m, 2H), 1.23 (t, J = 7.2 Hz, 3H). 13 C-NMR (151 MHz, CDCl 3 ): δ 159.47, 153.50, 144.28, 136.87, 134.14, 129.50, 127.40, 119.22, 116.44, 85.54, 78.32, 73.57, 56.94, 55.77, 55.30, 52.01, 47.89, 40.43, 35.67, 31.57 (2C), 29.79 (2C), 10.87. HPLC-MS (ESI): purity = 97%, t R = 2.124 min, m/z [M + H] + = 458.2. 5-Chloro-2-((ethyl(2-((2-(6-(trifluoromethyl)pyridin-3-yl)-1 H -imidazo[4,5- c ]pyridin-6-yl)amino)ethyl)amino)methyl)phenol ( 19 ) Obtained from intermediate f.19 (55 mg, 0.09 mmol) as an off-white solid (81%, 36 mg); Rf (DCM: MeOH, 9:1) 0.38; 1 H-NMR (600 MHz, DMSO- d 6 ): δ 9.44 (d, J = 2.1 Hz, 1H), 8.70 (dd, J = 8.2, 2.1 Hz, 1H), 8.52 (d, J = 1.0 Hz, 1H), 8.09 (d, J = 8.2 Hz, 1H), 7.38 (d, J = 8.2 Hz, 1H), 6.89 (d, J = 2.1 Hz, 1H), 6.85 (dd, J = 8.2, 2.1 Hz, 1H), 6.67 (d, J = 1.0 Hz, 1H), 4.28 (s, 2H), 3.59 (t, J = 5.8 Hz, 2H), 3.25 (t, J = 5.9 Hz, 2H), 3.21 (q, J = 7.2 Hz, 2H), 1.27 (t, J = 7.2 Hz, 3H). 13 C-NMR (151 MHz, CDCl 3 ): δ 159.54, 158.60, 155.00, 149.46, 144.49, 140.74, 137.88, 134.69, 134.25, 129.57, 128.82, 126.84, 120.39, 119.88, 119.26, 114.79, 85.39, 56.84, 55.31, 52.20, 47.84, 41.84, 10.77. HPLC-MS (ESI): purity = 98%, t R = 2.352 min, m/z [M + H] + = 491.1.
5-Chloro-2-(((2-((7-nitrobenzo[c][1,2,5]oxadiazol-4-yl)amino)ethyl)(2-((2-(3-(trifluoromethyl)phenyl)-1 H -imidazo[4,5- c ]pyridin-6-yl)amino)ethyl)amino)methyl)phenol ( 14-NBD ) Obtained from intermediate m (339 mg, 0.56 mmol) and NBD-chloride (254 mg, 1.27 mmol) as a brick-red solid (50%, 187 mg); R f (DCM: MeOH, 9:1) 0.56; 1 H NMR (600 MHz, DMSO- d 6 ): δ 8.39 (s, 1H), 8.37 (d, J = 8.8 Hz, 1H), 8.35 (dd, J = 2.8, 2.2 Hz, 1H), 8.31 (d, J = 8.8 Hz, 1H), 7.82 (dd, J = 7.8, 2.2 Hz, 1H), 7.74 (ddd, J = 7.8, 2.8, 2.3 Hz, 1H), 7.13 (dd, J = 8.4, 7.8 Hz, 1H), 6.85 (d, J = 7.8 Hz, 1H), 6.59 (ddd, J = 8.4, 2.3 Hz, 1H), 6.19 (d, J = 2.2 Hz, 1H), 6.04 (s, 1H), 3.70 (s, 2H), 3.53 (t, J = 6.3 Hz, 2H), 3.39 (t, J = 6.0 Hz, 2H), 2.82–2.76 (m, 4H). 13 C NMR (151 MHz, DMSO): δ 159.13, 158.92, 157.57, 155.85, 154.95, 149.82, 147.41, 143.12, 139.53, 138.17, 137.90, 135.58, 132.31, 131.80, 131.35, 129.01, 128.78, 126.30, 123.09, 118.92, 115.35, 99.50, 92.26, 86.51, 55.54, 53.70, 51.38, 44.39, 41.56, 36.89. HPLC-MS (ESI): purity = 98%, t R = 0.905 min, m/z [M + H] + = 668.2.
P. falciparum Assay Compounds were screened against multi-drug-resistant (K1) and sensitive (NF54) strains of P. falciparum in vitro using a parasite lactate dehydrogenase (pLDH) assay (the method is described fully in the Supporting Information). In vitro cytotoxicity was assessed in the Chinese hamster ovary (CHO) cell line by measuring cellular growth and survival colorimetrically through the MTT assay. Formazan formation from the tetrazolium salt was used as a measure of chemosensitivity and growth. (Details of this assay are described in the Supporting Information.) The in vitro microsomal stability assay was performed in duplicate in a 96-well microtiter plate using a single-point experiment design. The test compounds (1 μM) were incubated individually in human (pool of 50, mixed-gender), rat (pool of 711, male Sprague Dawley) and mouse (pool of 1634, male CD1) liver microsomes (final protein concentration of 0.4 mg/mL; Xenotech, Kansas, USA), suspended in 0.1 M phosphate buffer (pH 7.4). (Details of this assay are described in the Supporting Information.)
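Single-point microsomal stability designs of this kind are conventionally reduced to a percent-parent-remaining value, from which a first-order half-life can be back-calculated. A minimal Python sketch under that standard first-order assumption; the incubation time and peak areas below are illustrative placeholders, not data from this study:

```python
import math

def percent_remaining(peak_area_t: float, peak_area_0: float) -> float:
    """Parent compound remaining (%) after incubation, from LC-MS peak areas."""
    return 100.0 * peak_area_t / peak_area_0

def half_life_min(pct_remaining: float, t_min: float) -> float:
    """Single-timepoint first-order half-life: t1/2 = t * ln 2 / (-ln fraction)."""
    return t_min * math.log(2) / -math.log(pct_remaining / 100.0)

# Illustrative only: 40% of parent left after a 30 min incubation.
pct = percent_remaining(4.0e5, 1.0e6)      # hypothetical peak areas -> 40%
print(round(half_life_min(pct, 30.0), 1))  # ~22.7 min
```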
Contributions of Women to Cardiovascular Science Over Two Decades: Authorship, Leadership, and Mentorship
62922491-9268-4ae4-b595-0f317ede06d5
10111442
Internal Medicine[mh]
Disparities Women's inclusion as authors of cardiology papers increased slightly over the past 2 decades, yet the proportions of women in first and last authorship roles have remained unchanged. Women are increasingly likely to mentor women first authors and lead diverse research teams. Women last authors are essential to increasing diversity of future independent investigators and inclusive research teams, both of which are associated with innovation and excellence in science. Robust, collective, and targeted efforts by both men and women are required to increase women authors in leadership roles and realize the full value that they bring in diversifying cardiovascular research. We created the most comprehensive sample possible by including all “cardiac and cardiovascular systems” journals according to the most recent Journal Citation Reports (Web of Science, Clarivate Analytics 2019) without establishing any initial search limits. Using the RISmed package version 2.2 in R version 4.0.3 (R Foundation for Statistical Computing, Vienna, Austria), we curated a list of all articles published in these journals since 2002, when PubMed began including first names instead of initials on its citations, making gender identification possible. Our analysis was conducted between January 1, 2002, and December 31, 2020, and publication type was limited to original research (journal articles), editorials, guidelines, randomized controlled trials, and clinical trials. Journals that did not publish original research were excluded (Figure ). The Genderize.io application programming interface was used to assign gender. Previous research using Genderize.io has demonstrated high reliability when the program predicted the gender of the author's first name with a probability of ≥60%. For the present analysis, authors whose gender was predicted at a probability of ≥80% were included in the analysis; these represented 78% of authors. We examined probabilities of 60% to 95% (Figure ), choosing to use a probability of ≥80% for greater accuracy. A sensitivity analysis was performed using a 95% probability threshold (Figure ). Percentage of women authorship was assessed overall and for each journal by year for each authorship role (first, last, and overall). Change in counts of women authors and the average percentage of women authors (first, last, and overall) according to journal was assessed using Poisson and linear regression models, respectively, with year as the independent variable. For temporal trends, the beta ( β ) coefficient represents change in average percentage of women authorship per year. The chi‐square test was used to assess the overall statistical association in mentorship between first author gender and senior author gender, and regression models were used to assess time trends. We used Spearman's rank‐order correlation to evaluate the association between gender pairing and authorship role. A detailed description of additional methodology used in this analysis, including the search strategy for subspecialty analyses, is included in Data S1 and S2. The present study is exempt from institutional review board review as the data analyzed are publicly available. All statistical analyses were performed in R using RStudio IDE ( www.rstudio.com , version 1.3.1093) and were based on a significance level of 0.05. The Journal Citation Reports lists 138 “cardiac and cardiovascular systems” journals.
We excluded 8 journals that published mostly author initials, precluding gender identification, and 8 journals whose content was limited to review articles. A total of 396 549 articles published in 122 cardiology journals from January 2002 to December 2020 were included (Figure ). In this collection, gender was confidently assigned for 78% of the authors included in the analysis. Among these, women represented 22.6% of all authors, 3.8% of first authors, and 2.4% of last authors (see Table for individual journal descriptive statistics). Over the 19‐year study period, the number of authors increased from 67 344 in 2002 to 229 198 authors in 2020 (Spearman's rank‐order correlation coefficient R S =0.99 [95% CI, 0.89–1.00]; P <0.001). There was a temporal increase in the absolute numbers of women authors in overall ( β =0.02 [95% CI, 0.02–0.03]; P <0.001), first ( β =0.03 [95% CI, 0.02–0.04]; P <0.001), and last ( β =0.03 [95% CI, 0.02–0.04]; P <0.001; Figure ) authorship roles when adjusted for total number of authors. However, changes in the percentage of women as first authors ( β =−0.03 [95% CI, −0.06 to 0.004]; P =0.09) and last authors ( β =−0.017 [95% CI, −0.04 to 0.006]; P =0.15) were not significant, indicating that most of the observed growth in authorship was among other co‐authorship roles (ie, middle author), with no significant increase in the proportion of women first and last authors over time (Figure ). When analyzed by journal, 72 (59%) journals had a significant increase in the proportion of women authors over the study period, with Congenital Heart Disease having the greatest increase ( β =2.03 [95% CI, 1.06–2.99]; P <0.001). The proportion of women first authors increased slightly in 18 journals (average β =0.13), while 13 journals had significant decreases (average β =−0.37) (Table ). Mentorship Teams led by women were more likely than teams led by men to have a woman first author (35.9% versus 19.6%; P <0.001). Conversely, 21.9% of articles with a woman first author also had a woman last author (Figure ). Women and men last authors increasingly mentored more women first authors (women mentoring women: +54.97%; R S =0.99 [95% CI, 0.95–0.99]; P <0.001; men mentoring women: +50.56%; R S =0.93 [95% CI, 0.78–0.98]; P <0.001) (Figure and ). Research Team Diversity On average, when research teams were led by women senior authors, 29.8% of all authors on the research team were women, while teams led by men included an average of 18.68% women authors ( P <0.001). Both women and men are increasingly leading research teams that include at least 1 nonlast woman author (both P <0.001). Although most papers are led by men, who therefore led the numerical majority of diverse teams, women are more likely to lead diverse teams, a gap that is not narrowing (Figure ). Subanalyses Impact Factor Journals with a higher impact factor had higher proportions of women authors (R S =0.205 [95% CI, 0.019–0.378], P =0.03). However, there were no significant associations between journal impact factor and percentage of women as first authors (R S =−0.035 [95% CI, −0.219 to 0.152], P =0.71), or last authors (R S =−0.012 [95% CI, −0.197 to 0.174], P =0.90), suggesting that women in middle author positions were driving the relationship.
Geography When comparing US journals versus non‐US journals, we found no significant difference in the percentage of overall women authors (22.5% versus 21.9%; P =0.71), women as first authors (3.8% versus 3.6%; P =0.64), or women as last authors (2.5% versus 2.3%; P =0.43) (Figure ). Subspecialty We found no difference in overall women authors (23.6% versus 22.1%; P =0.18), women first authors (3.7% versus 3.7%; P =0.92), or women last authors (2.5% versus 2.4%; P =0.78) when comparing basic science manuscripts versus clinical research manuscripts (Figure ). Over the narrow range of women authorship (Figure ), interventional cardiology manuscripts tended to have the lowest proportion of overall (18.4% [95% CI, 2.76–33.94]), first (2.7% [95% CI, 0–7.16]), and women last authors (1.7% [95% CI, 0–4.91]). Congenital heart disease manuscripts tended to have the highest overall percentages of women overall (25.7% [95% CI, 7.74–43.57]), first (4.5% [95% CI, 0–9.29]) and last authors (3.1% [95% CI, 0–7.62]).
The well‐documented value of diversity to scientific quality coupled with cardiology's underrepresentation of women represents an ongoing challenge in optimizing cardiovascular science. In examining cardiology peer‐review literature from the past 2 decades, we found (1) increases in overall numbers and proportion of women authors that parallel changes in the cardiology workforce; (2) unchanged proportions of women first and last authors such that women remain underrepresented in both these leadership roles; (3) women last authors are more likely to assume a mentorship role for women first authors; (4) having a woman in a last authorship position was associated with a more diverse research team when compared with men last authors (Figure ). We also found that inclusion of women authors was associated with a higher journal impact factor. Neither journal location, subspecialty, nor basic versus clinical science were significantly associated with authorship gender. To our knowledge, this study represents the largest possible bibliometric analysis of gender authorship, leadership, and mentoring in original cardiology research as it encompasses all cardiology journals from almost 2 decades, inclusive of the entire time period for which gender identification is available. The average yearly increase in the proportion of cardiology authors who are women was only 0.4%, rising from 16.6% in 2002 to 24.6% in 2020; a rate that predicts that >60 years would be required for equal representation. This slow rate of change is consistent with increasing diversity in the overall workforce. However, the lack of increase in women first authors (<4%) is particularly concerning as the proportion of women cardiology fellows more than doubled between 1991 (10.1%) and 2016 (21.3%). , First authors often represent junior investigators just entering the field, who need access to opportunities and mentorship to succeed. Further, this suggests that the underrepresentation is not a pipeline issue.
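The >60-year projection follows directly from the quoted figures: with women at 24.6% of authors in 2020 and an average gain of 0.4 percentage points per year, reaching parity (50%) requires

$$\frac{50\% - 24.6\%}{0.4\%\ \text{per year}} \approx 64\ \text{years.}$$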
The increases that we and others have shown in women's representation in any authorship position are fully accounted for by an increase in women in nonleading coauthorship positions. Prior studies examining authorship order and contributions have demonstrated that first authors make the broadest‐range and most notable contributions, followed by last authors, with middle authors making the smallest contributions. Moreover, authorship in peer‐reviewed manuscripts represents not only the intellectual contribution but also the hierarchical structure of the research team, with higher credit accorded to first and last authorships and lower merit for co‐authorships. Hence, our finding of increasing middle authorship suggests that women involved in scientific research may be considered to have contributed less. While certainly valid in some cases, this assessment may be prone to implicit or overt biases contributed to by subjective assessments and cultural stereotypes, which are well documented to impede the entry, retention, and advancement of women in academic cardiology. However, if the women in nonleading authorship roles were indeed less equipped to contribute sufficiently to earn a leading position, then this further underscores the crucial need for mentorship, sponsorship, and opportunity to promote career advancement and provide women the chance to gain the skills required to lead original research. We ascertained mentorship through the genders of last and first author pairs and research team diversity on the basis of at least 1 woman who was not the last author. While women were more likely to mentor a woman first author and to lead a more diverse team, we found that, over time, both women and men are increasingly likely to mentor women and to lead a gender‐diverse team. Others have reported similar findings, thereby indicating that senior women may disproportionately drive diversification of the research enterprise beyond their own personal presence. Importantly, shouldering these responsibilities, however welcome, may represent another “tax” paid by senior women already faced with surmounting well‐documented barriers to advancement. The excess burden for women must be recognized and supported by access to resources, academic credit, protected time, and other strategies. The reasons women are not well represented as authors are myriad, as are the potential solutions. The proportion of women in cardiology was much lower in the past, when today's senior researchers were training, thus limiting the currently available pool of more senior women cardiologists to mentor younger individuals. Even though this “demographic inertia” could in part explain the present shortage of women investigators, the expected temporal gradient is absent. Other possible contributors include the greater likelihood of women abandoning academic careers before progressing to senior positions, and that women progress to research leadership roles more slowly than men if at all; both may be attributable in part to added difficulties inside and outside the workplace. Among these added barriers, there are funding disparities favoring men among National Institutes of Health career development award recipients and the overall National Institutes of Health–funded research enterprise. Specific to cardiology training, while women value the role of mentorship and positive role models more highly than men, this asset is not always recognized or rewarded.
Regardless of the cause, the lack of change in the proportions of women first or senior authors, despite decades of effort, is disturbing and suggests the need to urgently reexamine strategies. Innovative, creative, and multipronged approaches involving both men and women are needed, including recruitment of women into cardiology and cardiology specialty training programs, ensuring opportunity and mentoring for women, advancement of women in academic institutions and professional societies, and leveling of the playing field for both federal and industry funding. Our data suggest that if we are able to succeed, this will not only improve opportunities for women, but it will also increase mentorship of men and foster more diverse research teams, enterprise‐wide benefits associated with greater research innovation and scientific impact. Our analysis has limitations that warrant mention. First, gender was assigned only when our prediction had a probability of ≥80%. Using this cutoff, we assigned gender to 78.3% of authors. While our prediction threshold is considerably higher than the 60% considered acceptable in prior work, it is not exempt from errors. Manual curation of author gender is not viable for our comprehensive set of articles and numbers of papers and authors included. Furthermore, gender was used as a binary variable (women or men) as the determination of nonbinary individuals was not possible. Second, this analysis examines only trends in article publication and is not able to examine trends in article submissions. In this regard, we cannot examine whether gender bias affects the submission or evaluation process, which could impact the proportion of publications from women authors. Third, our data count appearances as an author and do not represent unique authors, so we cannot address the actual numbers of men and women researchers publishing in each authorship position. Fourth, a more detailed analysis considering professional degree or discipline was not feasible and does not allow separation of authorship contributions by other health care professionals or physicians in other domains. Fifth, we acknowledge that the first/last author distinction of seniority and mentee/mentor relationship is not always followed and may not apply to many studies where both first and last authors can be considered senior authors. In addition, joint first or senior authorship roles were not accounted for in this analysis as these are not labeled in PubMed. Given the proven importance of diversity to scientific quality, the continuing underrepresentation of women in cardiology and as investigators represents a significant barrier to the field. Our study shows that, despite considerable efforts, there has been only a slight increase in overall women authorship over the past 2 decades. Moreover, the proportions of women in first and last authorship roles are unchanged and remained <5% during the entire study period. Importantly, women last authors are increasingly likely to assume a mentor role for women in first author positions and are increasingly leading gender‐diverse research teams. These are 2 important, previously poorly recognized advantages of greater leadership by women in cardiology research. Robust, collective, and targeted efforts by both men and women are required to increase women authors in leadership roles and realize the full value that they bring in diversifying cardiovascular research.
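The probability-threshold filtering and per-year trend fits described in the Methods are simple to express in code. A minimal Python sketch, assuming a table of author records with a Genderize.io-style probability column; the column names and toy rows are illustrative, while the 80% cutoff and the percent-per-year beta mirror the text (the study itself used R):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Toy author-level records: Genderize.io returns a predicted gender and a
# probability per first name; predictions with probability >= 0.80 are kept.
authors = pd.DataFrame({
    "year":        [2002, 2002, 2010, 2010, 2020, 2020],
    "gender":      ["female", "male", "female", "male", "female", "male"],
    "probability": [0.95, 0.99, 0.70, 0.92, 0.88, 0.85],
})
confident = authors[authors["probability"] >= 0.80].copy()
confident["is_woman"] = confident["gender"].eq("female")

# Percentage of women authors per year, then a linear trend in that percentage;
# the beta on `year` is the change in percent women per year.
pct = (confident.groupby("year")["is_woman"].mean().mul(100)
                .rename("pct_women").reset_index())
trend = smf.ols("pct_women ~ year", data=pct).fit()
print(trend.params["year"])
```

A Poisson fit of yearly author counts (smf.poisson) would parallel the count models described in the text in the same way.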
Watchman
ba973fb7-081d-42c6-a259-bbf90f91799e
10111480
Internal Medicine[mh]
None.
A modified primary culture method of rat pulmonary vein smooth muscle cells
c910d097-776c-4727-9145-65658cf140db
10111653
Anatomy[mh]
Pulmonary hypertension (PH) due to left heart disease (PH-LHD) is the most common type of PH. In PH-LHD, the left heart disease raises pulmonary vein pressure before pulmonary artery pressure, ultimately leading to PH. Currently, there are no effective treatments because the pathogenesis of PH-LHD remains unclear. Pulmonary vascular remodeling is a critical mechanism of PH, and pulmonary artery smooth muscle cells (PASMCs) have been widely studied in pulmonary vascular remodeling research in vitro. The pulmonary vein has a thin media and is easily contaminated during removal, making isolation of pulmonary vein smooth muscle cells (PVSMCs) rather challenging; hence, PVSMCs have been studied far less. The present study aimed to introduce a modified pulmonary vein isolation and primary PVSMC culture method to establish a foundation for studying pulmonary vein vascular remodeling in PH-LHD or other pulmonary diseases in the future. Rats Male Sprague-Dawley rats (body weight 300–400 g) were purchased from Shanghai SLAC Laboratory Animal Co. Ltd. (License No. SCXK (HU)2017 0005). All rats were housed in an environment with 12:12 h light-dark cycles at 22–24 ℃ and were given adequate food and water. All experimental protocols were consistent with the Institute of Laboratory Animal Resources, National Academy Press, Washington, DC, 1996. Drugs and materials Pentobarbital sodium was purchased from Shanghai Rongbai Biological Technology Co. Ltd. DMEM (11,995,073), fetal bovine serum (FBS, 10,100,147), phosphate buffered saline (PBS) and penicillin-streptomycin (15,140,163) were obtained from Gibco (Carlsbad, CA, USA). Rabbit monoclonal anti-rat alpha-smooth muscle actin (α-SMA, 1:500, ab124964) and goat polyclonal anti-rabbit IgG-H&L (1:1000, ab150080) antibodies were purchased from Abcam. Other experimental materials were purchased from Thermo. Pulmonary vein and cell isolation The rats were anesthetized with 2% pentobarbital sodium (50 mg/kg) by intraperitoneal injection and soaked in 75% alcohol for 10 min. Then, the animals were immobilized and disinfected, and the chest wall was incised at the midline to expose the heart and lungs. Because the other pulmonary veins were too small to admit a puncture needle cannula, we used the left superior, right superior, and right inferior pulmonary veins, which run from the lung into the left atrium. Puncture needle cannulas (18 G) were inserted into the left superior and right inferior pulmonary veins, and a puncture needle cannula (20 G) was inserted into the right superior pulmonary vein. The pulmonary vein was clipped along the puncture needle cannula (Fig. ). The connective tissue and adventitia of the pulmonary vein were carefully removed under a microscope until the vascular ring became transparent. The pulmonary vein was rubbed 3–5 times on the cannula to remove the intima. Then, the cannula and the pulmonary vein were cut lengthwise, and the pulmonary vein was placed in sterile PBS. The pulmonary vein was cut into 1 mm pieces and transferred to a culture flask (25 cm 2 , 9–12 tissue pieces per bottle). The time from rat thoracotomy to placing the tissue in a culture flask was < 20 min. The culture flask was inverted, 3–5 mL of DMEM with high glucose and 20% FBS was added, and the culture was incubated at 37 ℃ under 5% CO 2 ; the flask was turned upright after 1 h.
The cell growth around the tissue blocks was observed daily under an inverted microscope, and the culture medium was replaced every 3 days. At 80–90% confluency, the vascular tissue was removed, and the cells were serially passaged two times. To digest the cells, 1–2 mL of trypsin (0.25%, 37 ℃) was added to the culture flask and incubated for 3–5 min. When most of the cells were suspended or had rounded up, 5 mL of DMEM containing 20% FBS was added to terminate the digestion. After 15 min of incubation, the cell suspension was centrifuged (5 min, 1500 rpm), and the supernatant was discarded. Finally, DMEM with 20% FBS was used to culture the cells. For purification, PVSMCs were cultured to the second generation and trypsinized; 3–4 mL of culture medium was added immediately to terminate digestion, and the cells were gently and repeatedly pipetted off the flask wall to form a single-cell suspension. The suspension was incubated for 15 min to deplete contaminating cells, exploiting the faster adhesion rate of fibroblasts. Subsequently, the supernatant was collected and centrifuged (at 1500 rpm for 5 min), and the pellet was recultured. The traditional method involves isolating distal pulmonary veins under a microscope and culturing PVSMCs by the enzymatic digestion method described by Peng et al.; the pulmonary artery was removed simultaneously under the microscope. Hematoxylin and eosin (HE) staining The rats were anesthetized with 2% pentobarbital sodium (50 mg/kg) by intraperitoneal injection. The pulmonary veins and arteries were fixed in 4% paraformaldehyde for 24 h, dehydrated with ethanol, cleared with xylene, embedded in paraffin, and sliced into 4 μm thick sections. Then, the slices were dewaxed in xylene and stained with HE before observation under a microscope. The ratio of medial smooth muscle thickness to total vascular wall thickness (%) was measured under the microscope. Immunohistochemical (IHC) analysis The paraffin sections of the pulmonary vein were placed in an oven at 60 ℃ for 2 h, then dewaxed in xylene for 15 min and rehydrated through 100% alcohol for 5 min, 85% alcohol for 5 min, and 75% alcohol for 5 min. Antigen retrieval was carried out by heating the sections in citric acid buffer. Then, 50 µL of 3% hydrogen peroxide was added to the sections and incubated at 26 ℃ for 10 min. After blocking in goat serum for 30 min, the sections were incubated with rabbit monoclonal anti-rat α-SMA antibody (1:500) at 4 ℃ for 8 h, followed by goat anti-rabbit secondary antibody (1:1000) at 26 ℃ for 50 min. Finally, DAB was used for color development, and vascular smooth muscle cells were observed under a microscope (Nikon, Japan). Immunofluorescence analysis Third-generation cells were seeded in 96-well plates at a density of 3000–5000 cells/well and cultured for 1–2 days. Then, the cells were washed three times with PBS, fixed with 4% paraformaldehyde, permeabilized with 0.25% Triton X-100, blocked with goat serum at 26 ℃ for 1 h, and incubated with α-SMA and CD31 primary antibodies at 4 ℃ for 8 h, followed by goat anti-rabbit secondary antibody at 26 ℃ for 1 h. Subsequently, the cells were treated with nuclear dye 594 (1 µg/mL) for 45 s and observed under a confocal laser fluorescence microscope. Western blotting The cells were lysed in RIPA buffer containing protease and phosphatase inhibitors on ice for 30 min. The supernatant was collected by centrifugation (12,000 rpm, 4 ℃ for 10 min).
After the protein concentration was determined by the BCA method, the denatured proteins were separated by SDS-PAGE and transferred to a PVDF membrane over 45 min. The membrane was then blocked with 7% skim milk in TBS-T at 26 ℃ for 2 h and probed with rabbit monoclonal anti-rat α-SMA antibody at 4 ℃ for 8 h. Subsequently, the membrane was incubated with the secondary antibody for 2 h. Finally, the immunoreactive bands were visualized with a chemiluminescence imaging system (ChemiDoc™ Touch System, Bio-Rad, USA). GAPDH was used as the internal control.

Flow cytometry
PVSMCs were prepared as a single-cell suspension at a concentration of 1 × 10⁶ cells/mL and incubated with rabbit monoclonal anti-rat α-SMA antibody and staining buffer at 4 ℃ for 20 min. A FACSCalibur flow cytometer was used for detection, and the results were analyzed with FACS Diva software.

Statistical analysis
The experimental results are expressed as the mean ± standard error of the mean. A t-test was used for comparisons between two groups with homogeneity of variance, and one-way analysis of variance (ANOVA) was used for comparisons between multiple groups. p < 0.05 was considered statistically significant. SPSS 25.0 was used for data analysis and GraphPad Prism 8.0 for data plotting.
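As a concrete illustration of the two tests named above (Student's t-test between two groups and one-way ANOVA across several groups), here is a minimal sketch in Python. The densitometry values are hypothetical placeholders invented for illustration, not data from the study, which used SPSS 25.0.

```python
# Minimal sketch of the statistical comparisons described above
# (independent-samples t-test and one-way ANOVA). The densitometry
# values below are hypothetical placeholders, not study data.
import numpy as np
from scipy import stats

# Hypothetical normalized alpha-SMA band densities (alpha-SMA / GAPDH)
modified_method = np.array([1.32, 1.28, 1.41, 1.35])
traditional_method = np.array([1.02, 0.97, 1.10, 1.05])

# Two-group comparison assuming homogeneity of variance (Student's t-test)
t_stat, p_two_groups = stats.ttest_ind(modified_method, traditional_method)
print(f"t = {t_stat:.3f}, p = {p_two_groups:.4f}")

# Multi-group comparison (one-way ANOVA), e.g. with a third hypothetical condition
third_group = np.array([1.15, 1.20, 1.12, 1.18])
f_stat, p_anova = stats.f_oneway(modified_method, traditional_method, third_group)
print(f"F = {f_stat:.3f}, p = {p_anova:.4f}")
```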
HE staining and IHC staining
HE staining was performed on the pulmonary veins and pulmonary arteries of rats, and the results showed that the proportion of the media relative to the vascular wall was smaller in the pulmonary vein than in the pulmonary artery (Fig. a and b). The intima and adventitia of the pulmonary vein were also removed by this method (Fig. c and d). Immunohistochemical staining showed that the remaining vascular cells expressed α-SMA (Fig. ).

Cell culture
The tissue explant method was used to culture PVSMCs from second-generation intrapulmonary vein branches, and the cells began to proliferate after 3 days, as observed under a microscope. The cells were scattered and fusiform in morphology (Fig. a). PVSMCs overlapped as their number increased. At 70–80% confluency, some PVSMCs clustered, decreasing the PVSMC density elsewhere, such that the characteristic "peak-valley" growth pattern of smooth muscle cells appeared (Fig. b). The PVSMCs filled the flask within 3–5 days. The cells were cultured to the third passage for subsequent experiments.

Immunofluorescence
Immunofluorescence analysis was performed on PVSMCs obtained by our method. The fibers in PVSMCs were parallel to each other (Fig. ). Overlapping growth occurred when the cell density was high, and the fibers around the nucleus in PVSMCs stained strongly for α-SMA (shown in red, Fig. a). Nuclear staining with DAPI showed blue fluorescence (Fig. b). The merged images of the fibers and the nuclei revealed fibers corresponding to each nucleus (Fig. c). The negative control, which was not treated with the α-SMA antibody, showed no α-SMA signal in PVSMCs (Fig. d, e and f). Double-immunofluorescence labeling showed that the fibers surrounding the nucleus in PVSMCs stained strongly for α-SMA (shown in red, Fig. a) with low CD31 expression (shown in green, Fig. b). The nucleus was stained with DAPI (Fig. c), and merged images of the fibers and the nuclei are illustrated in Fig. d.

Western blotting
The Western blot results showed that α-SMA expression was higher in PVSMCs obtained by our method than by the traditional method ( p < 0.05).
However, no significant difference was observed in CD31 expression between PVSMCs obtained by our method and by the traditional method ( p > 0.05, Fig. ), indicating that the PVSMCs obtained by our method had high purity and that the degree of endothelial cell removal was similar to that achieved by the traditional method.

Flow cytometry
The flow cytometry results showed that > 99% of cells expressed α-SMA ( p < 0.05, Fig. ).

PH-LHD is the most common type of PH in patients. Pulmonary vein pressure is the first to increase, eventually causing PH . Most studies have focused on pulmonary arterial hypertension. However, a previous study showed earlier expression of inflammatory factors and TGF-β1 in the pulmonary vein than in the pulmonary artery of rats, and these expression levels were increased after mechanical stretch of the pulmonary vein . Frid et al. found that inflammatory reactions were related to PH . In addition, Ping et al.
demonstrated earlier vascular changes (thickened media and narrowed lumen) in the pulmonary veins than in the pulmonary arteries in the PH-LHD rat model . In this study, we developed a method to isolate PVSMCs. Anatomically, the pulmonary vein is connected to the left atrium, and it was separated from the second pulmonary vein branches with the aid of a puncture needle cannula. Presently, most methods obtain pulmonary veins or arteries by removing the distal pulmonary vessels , but not the main pulmonary vessels, especially via the enzymatic digestion method ; hence, the obtained vein is thin and of low weight. Compared to the vascular media of pulmonary arteries, that of the pulmonary vein is thinner, making it difficult to distinguish the intima from the adventitia . Therefore, our method used the second pulmonary vein branch to obtain more tissue and made the tissue explant method easy. In addition, HE staining showed that, owing to the use of the puncture cannula, the intima and adventitia of the pulmonary veins were removed more thoroughly than with the traditional method. The remaining vascular media clearly expressed α-SMA with low expression of CD31, as shown by IHC. Therefore, the remaining vascular media consisted of VSMCs. Moreover, the degree of endothelial cell removal by the two methods was similar. Our method of isolating the pulmonary vein greatly reduced its exposure time to non-sterile conditions (< 20 min), so that the probability of cell contamination was reduced. PVSMCs were obtained by the pulmonary vein tissue explant method, and fibroblasts were removed by the differential adhesion method. The obtained cells showed the typical morphological characteristics of VSMCs, and cell fusion appeared after the cells grew to a certain density. Finally, the characteristic "peak-valley" growth pattern of VSMCs appeared. Almost all PVSMCs obtained by our method expressed α-SMA by immunofluorescence. Moreover, the cell fibers were similar to those of VSMCs in previous studies, and the PVSMCs proliferated rapidly to near 100% confluency in 3–5 days. In addition, PVSMCs obtained by our method had higher α-SMA expression than those obtained by the traditional method. These results confirm that the cells acquired and cultured by our method were PVSMCs in good condition. In this study, puncture needle cannulas were used to guide pulmonary vein tissue isolation under a microscope, and the differential adhesion method was used to purify PVSMCs with strong proliferative ability and high purity. Although the preliminary outcomes are encouraging, further trials are necessary to evaluate the efficacy of this simple primary culture method.
Determinants of clinical outcome and length of stay in acute care forensic psychiatry units
The number of forensic mental health services increased steadily in many Western countries from the early 1990s as a consequence of a variety of factors, such as the significant burden of acute and long-standing psychiatric conditions among detained persons, decreased capacity in general psychiatry, the expansion of the types of psychiatric defences in courts of law, and public concerns about violent behaviour attributed to the mentally ill . Short stay forensic services are designed to admit inmates displaying acute symptoms associated with self- or others-threatening behaviour and a need for urgent psychiatric care [ – ]. Long stay services allow for forensic inpatient treatment, referred to as "psychiatric detention", "protective" or court ordered treatment (COT), which often exceeds the maximum length of a prison sentence that would be adjudicated for similar offenses committed by healthy perpetrators . In this latter case, the treatment requires extended time since it focuses on both mental health improvement and ensuring public safety. A main concern regarding these units is their efficiency, since they are usually high-cost and low-volume services. Moreover, disproportionately long and protracted stays in forensic institutions can lead to human rights violations, but premature discharge of unstable patients may be equally deleterious, leading to worse overall outcomes, poorer quality of life, and increased violence and re-admission risk . Several studies have addressed the determinants of length of stay (LoS) and treatment outcome in long stay forensic services, with some convergent observations but also conflicting data. These parameters may depend on the course of disease and severity of symptoms, compliance with previous treatments, and family support, but also on criminological characteristics. Previous criminal convictions, increased Historical Clinical Risk (HCR)-20 risk item scores, violent crimes, younger age, low social support and presence of pervasive psychotic symptoms (including diagnosis of schizophrenia and schizoaffective disorder), treatment resistance, as well as previous contacts with child and adolescent psychiatric services have been consistently related to longer LoS and poorer outcome. Among the other diagnoses, substance use disorders and cluster B personality disorders were associated with worse clinical outcomes, yet their impact on LoS remains less clear [ – ]. In addition, external factors related to the judicial system, criteria for admission, and allocation of resources may also impact on both LoS and clinical outcome . The determinants of clinical outcome in the particular case of high and medium security hospitals are less well established. Most previous studies focused solely on criminal recidivism and pointed to well known risk factors such as procriminal companions, attitudes/cognitions supportive of criminal behaviour, and antisocial personality, but also young age at first crime, early onset of mental disorder, and previous forensic treatments. In contrast, clinical variables (except antisocial personality) did not have much predictive value [ , , – ]. Unlike in long stay forensic services, the determinants of LoS and clinical response in short stay forensic units are rarely addressed. These units play a key role in detention since they have to manage disruptive behaviours and suicidality of inmates that may represent an acute reaction to incarceration or the expression of a long-lasting vulnerability.
It is thus crucial to assess the evolution and outcome of patients treated in these services, both to ensure effectiveness and quality of care and to define the profiles of users who may benefit from specialized acute care in forensic settings . The differences between detained individuals treated in short stay forensic units and patients admitted to acute care units of general psychiatry are still a matter of debate. According to some authors, the presence of a legal framework implies an artificial distinction between the two populations [ , , ]. However, other analyses showed significant differences in the demographic and clinical profile of detained persons admitted to inpatient forensic units in high and medium-security hospitals compared to patients treated in general psychiatry [ – ]. They are more often single, with higher suicide risk , more frequent psychotic beliefs , and lower educational attainment and occupational levels . The present study explores the determinants of LoS and clinical outcome in detained persons admitted to an acute care secure ward located in the central prison of Geneva, Switzerland. Our a priori hypothesis was that environmental factors (pre-trial versus sentence execution), criminological variables (HCR-20 risk score) and psychiatric diagnosis (personality disorders, schizophrenia, substance use disorders) impact on the duration and outcome of the clinical stay. We also postulated that this would not be the case for parameters usually affecting long term prognosis, such as young age at first crime and previous criminal convictions.

Subjects
The UHPP (Unité hospitalière de psychiatrie pénitentiaire) is a 15-bed unit specially designed for acute psychiatric care of detained persons from the French-speaking cantons and is part of a medium-security hospital located in prison. Admission to the UHPP was based on a need for urgent psychiatric care because of the presence of acute depressive or psychotic symptoms, or psychomotor agitation with self- or others-threatening behaviours. The health care team is composed of 4 medical doctors, 35 nurses and one nurse-auxiliary. Five nurses are present during each day shift (two between 9 pm and 7 am). Prison staff are continuously present, usually 2 to 4 prison guards per shift. They guarantee security during daily activities in the unit. Care programs are based on the integration of psychopharmacology and psychotherapeutic approaches. The vast majority of the patients receive psychotropic medication. They also systematically benefit from at least one clinical encounter with a nurse each day and 4 to 5 clinical encounters with the medical doctors each week. Group therapy, art-therapy and ergotherapy (a physical therapy aiming to reduce pain, discomfort and functional disability) are performed on a regular basis. Clinical activities take place within the prison timetable. Patients are allowed to spend time together in a common room (two hours in the morning and three hours in the afternoon). Following international penitentiary rules, patients are allowed to spend one hour in the yard of the unit, where a table tennis table is available. Five slots per day are scheduled for smoking patients, during which they have access to a limited part of the yard. We examined the psychiatric records of all cases admitted between January 1st and December 31st, 2020 (total number of admissions = 261). All cases with a duration of stay of less than 10 days (n = 31) were excluded.
In these cases, the hospital stay was interrupted because of rapid improvement of the symptoms and a formal request to return to prison, or by court decision, which did not allow for obtaining the information needed for clinical and forensic assessment. Multiple admissions were registered in 150 cases, whereas the remaining 80 cases were admitted only once during the period of reference. To prevent overrepresentation of those repeatedly admitted, we randomly selected 30 cases for each group (repeated and single admissions). The final sample included 60 cases (mean age: 34.8 ± 11 years, mean LoS: 24.8 ± 29.2 days). Each patient was assigned an identification number derived from the name and birth date and subsequently encrypted. Information on judicial status included pre-trial versus sentence execution, previous incarcerations, and age at first incarceration. Sociodemographic data included age, gender, marital status, and educational attainment. Psychiatric history, including inpatient stays prior to incarceration, was recorded. All ICD-10 clinical diagnoses were made at the time of admission by two independent, board-certified psychiatrists (during the hospital stay and after discharge), blind to the scope of the study. Only cases with concordant psychiatric diagnoses (the two independent diagnoses had to be identical) were considered in this sample. Psychiatric diagnoses included adjustment disorders, bipolar disorder, depressive disorders (ICD-10 codes F32-F33), personality disorders (antisocial and borderline disorders), psychosis (ICD-10 codes F20-F29) and intellectual disability. Presence of suicidal behaviour and substance use disorders were treated as binary variables.

Assessment tools
The HoNOS (Health of the Nation Outcome Scales) is one of the most widely used clinical outcome measures. It is a questionnaire of 12 scales covering areas of problems experienced by working-age adults in contact with specialised mental health services. It is systematically completed at the admission and at the discharge of every patient. This instrument covers domains of behaviour, impairment, symptoms and social functioning . We also used at admission the HoNOS-S (Health of the Nation Outcome Scales-Secure) , a measure that includes both clinical and security scales specially designed to assess the needs of individuals experiencing mental illness who have offended. It comprises seven security items added to the 12 traditional HoNOS items. The security items include risk of harm to adults or children, risk of self-harm, need for building security to prevent escape, need for a safely staffed living environment, need for escort on leave (beyond the secure perimeter), risk to the individual from others, and need for risk management procedures . HCR-20 (Historical Clinical Risk 20) version 2: composed of one static and two dynamic clinician-rated scales. This wide-ranging violence assessment tool was developed by Webster, Eaves, Douglas, and Wintrup in 1995 using a sample of institutionalized people who were followed for approximately 2 years after their discharge into the community . HCR-20 consists of 20 items and assesses past, present and future indicators of violence . According to Brunero & Lamont, it is the most widely used risk assessment tool.
It contains 20 items: 10 historical items (previous violence, young age at first violent incident, relationship instability, employment problems, substance use problems, major mental illness, psychopathy, early maladjustment, personality disorder and prior supervision failure); five clinical items (lack of insight, negative attitudes, active symptoms of major mental illness, impulsivity and unresponsiveness to treatment); and five risk management items (plans lack feasibility, exposure to destabilizers, lack of personal support, non-compliance with remediation attempts and stress) . PCL-R (Psychopathy Checklist Revised): a pivotal tool to identify psychopathic individuals in correctional settings . Hare designed this scale in 1991 to measure the clinical construct of psychopathy, and since then it has become the leading instrument to predict recidivism, violence and treatment outcome [ – ]. SAPROF (Structured Assessment of Protective Factors): a scale designed as a complement to risk assessment that considers protective factors. The SAPROF items are classed into three areas: internal, motivational, and external factors. Items 1 and 2 (internal factors) are considered static, whereas the other 15 factors are dynamic and therefore likely to change during treatment . The criminological assessment was routinely available for all cases admitted for the first time to the UHPP, irrespective of their judicial status (pre-trial versus sentenced).

Statistical analysis
Fisher exact, unpaired Student t and Mann-Whitney U tests were used to compare sociodemographic (age, gender, marital status, education), clinical (psychiatric inpatient care prior to incarceration, suicidal behaviour, psychiatric diagnosis, substance use disorders), outcome (length of stay and delta HoNOS-S (admission minus discharge)) and criminological (PCL-R, HCR-20 and its items, SAPROF scores) variables between pre-trial and sentence execution cases. Marital status (married, separated-divorced, single), education and presence of previous inpatient care were treated as ordinal variables. Suicidal behaviour was treated as a binary variable. Psychiatric diagnoses included adjustment disorders (ICD-10 code F43), bipolar disorder (ICD-10 codes F30-F31), depressive disorders (ICD-10 codes F32-F33), personality disorders (ICD-10 codes for antisocial and borderline personality), anxiety disorders (ICD-10 codes F40-F42) and psychosis (ICD-10 codes F20-F29). Correction for multiple comparisons in Table was performed using the Benjamini-Hochberg method. Cases with multiple diagnoses were considered in each diagnostic group separately. Stepwise forward multiple linear regression models predicting the logarithm of LoS (to obtain normally distributed data) and delta HoNOS-S, respectively, were built with all of the above-mentioned variables (included in Table ). The selected variables were then used in univariate and multivariable regression models. The significance level was set at p < 0.05. All statistical analyses were performed using Stata 17.0.
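To make the modelling step described above concrete, here is a minimal sketch in Python of a linear regression predicting log-transformed LoS. The data frame and the variable names (los_days, education, borderline) are hypothetical placeholders invented for illustration; the study's analyses were run in Stata 17.0.

```python
# Minimal sketch of a linear model predicting log-transformed length of stay
# (LoS), mirroring the approach described above. The data frame below is a
# hypothetical placeholder; the study's analyses were performed in Stata 17.0.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical records: LoS in days, education level (ordinal) and a
# borderline personality disorder indicator (binary)
df = pd.DataFrame({
    "los_days":   [12, 45, 18, 90, 30, 11, 60, 25, 14, 40],
    "education":  [1, 2, 2, 3, 1, 1, 2, 3, 2, 1],
    "borderline": [1, 0, 1, 0, 0, 1, 0, 0, 1, 0],
})
df["log_los"] = np.log(df["los_days"])  # log transform to approximate normality

# Multivariable model with the predictors retained by stepwise selection
model = smf.ols("log_los ~ education + borderline", data=df).fit()
print(model.summary())
print(f"R-squared: {model.rsquared:.3f}")  # share of LoS variance explained
```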
Cases in pre-trial detention displayed significantly lower HoNOS-S scores at baseline, delta HoNOS-S scores, as well as PCL-R and SAPROF scores, compared to those in sentence execution. Interestingly, the percentage of cases with psychotic disorders was significantly higher in sentence execution, yet this difference did not survive correction for multiple comparisons (Table ). The stepwise forward multiple linear regression models identified six candidate variables explaining the delta HoNOS-S score: education, male gender, HCR-20, LoS, number of previous inpatient stays, and pre-trial detention. In univariate models, four of them were significantly associated with the clinical outcome. Higher HCR-20 scores (historical and clinical items), longer LoS and a greater number of previous inpatient stays were related to higher delta HoNOS-S scores. In contrast, cases in pre-trial detention showed a worse clinical outcome, with smaller differences in HoNOS-S scores between admission and discharge. In multivariable models, the clinical item score of the HCR-20, longer LoS and pre-trial detention were the most significant predictors of the clinical outcome and explained as much as 30.7% of its variance (Table ). Only education and a diagnosis of borderline personality disorder were related to the LoS (logarithm) in the present study. In univariate models, secondary education was associated with longer LoS, whereas borderline personality disorder was the only diagnosis negatively associated with this variable. In multivariable models, these parameters remained significant predictors of the LoS and explained 12.6% of its variance (Table ).

Our data make it possible to determine the clinical and criminological profile, as well as patterns of outcome including symptom evolution and length of stay, in an acute care setting specialized in forensic psychiatry in Geneva. Compared to those in sentence execution, cases in pre-trial detention are less symptomatic at admission, show smaller changes in HoNOS-S scores at discharge and lower levels of psychopathy, and benefit from fewer protective factors. Our data reveal a better outcome for patients with a higher risk of violent behaviour according to the HCR-20, sentence execution status and a previous history of psychiatric inpatient care. Importantly, the clinical diagnosis is not a significant determinant of the outcome in acute psychiatric care of detained persons. Moreover, our data show that the LoS is independent of criminological and sociodemographic factors and was significantly lower only in cases with borderline personality disorder.
Changes in the severity of acute symptoms have frequently been used to assess clinical evolution in general psychiatry care settings. Greater severity of symptoms at admission was usually related to better outcomes [ – ]. Compulsory admission, mood and anxiety disorders, absence of personality disorders and substance use disorders, but also single stays, were all associated with more favourable clinical evolutions [ , – ]. Controlling for baseline HoNOS-S/HoNOS scores, our findings show that higher HCR-20 scores, longer LoS and the number of previous hospitalizations were positively associated with HoNOS-S/HoNOS score changes, indicating that patients with a more severe risk of violence who are familiar with psychiatric care prior to incarceration are more likely to benefit from longer stays in acute care forensic units. A strong positive association was found between the scores of the clinical and historical, but not risk management, HCR-20 items and improvement during the hospital stay. Importantly, this finding is diagnosis-independent and thus cannot be explained by the positive effect of hospitalization for detained persons with long-lasting psychosis. In contrast, pre-trial detention is associated with a smaller change in HoNOS-S/HoNOS scores upon discharge, pointing to the increased vulnerability of this population, which accumulates severe stress due to the uncertainty of the final sentence. Most importantly, and unlike what has been reported in general psychiatry settings, the type of diagnosis has no independent effect on HoNOS-S/HoNOS score evolution in our series. Taken together, these results suggest that the impact of acute care in forensic psychiatry settings depends more on criminological profile, previous exposure to psychiatric care and legal status than on the clinical diagnosis per se. The association of higher HCR-20 clinical item scores and sentenced status with higher delta HoNOS scores persisted in multivariable models, further supporting the relevance of these factors in predicting better clinical outcome in this care setting. We also found significant differences in the determinants of LoS in our context compared to both long-term stays in medium and high security hospitals and acute care units in general psychiatry. Criminological factors such as age at first incarceration and type of offense were consistently associated with LoS in medium and high security hospitals. Not surprisingly, older age at first admission, violent crimes and severe offenses (mainly sexual assaults) were related to longer LoS in these settings [ , , , , – ]. The only clinical determinant of longer LoS was the presence of psychosis [ , , ]. In general psychiatry, the number of previous hospitalisations, diagnosis of schizophrenia or mood disorders, and female gender determine longer LoS in acute care, whereas the opposite is true for substance use and borderline personality disorders . Controlling for all of these candidate predictors, and besides the effect of secondary education, which was independently associated with longer LoS and possibly reflects a better acceptance of care, the present study shows that only the diagnosis of borderline personality disorder is associated with a shorter duration of stay in forensic acute care settings. This sole parameter explained 12.6% of the LoS variance, a modest but still significant percentage considering the large number of clinical, demographic and environmental factors that impact on this variable.
As in general psychiatry, the rapid regression of disruptive behaviour and emotional disturbances may allow for rapid discharge even with modest changes in HoNOS-S/HoNOS scores. Of importance, although the risk of violence, assessed with the HCR-20 score, impacts on the clinical outcome, it seems unrelated to the LoS. This may be explained by the fact that, in contrast to medium and high security hospitals, where discharge implies the transition to more permissive settings (psychiatric hospital, residential care), the end of a stay in our unit was followed by a return to prison. In this context, the decision to discharge may be taken on the basis of the clinical evolution alone, since it does not imply an increased risk of recidivism. Along the same lines, and in contrast to both medium and high security hospitals as well as acute care in general psychiatry, the diagnosis of psychosis, present in 50% of our sample, does not lead to increased LoS. This may also be due to the absence of immediate consequences of a discharge made in prison on the risk of recidivism and psychosocial repercussions in the community. Unlike what has been reported in general psychiatry settings, the diagnosis of lifetime substance use disorders, identified in 70% of our sample, was not associated with shorter LoS. Without the legal constraints imposed by incarceration, a significant proportion of patients with comorbid substance use disorders interrupt their stay due to craving and intolerance of hospital rules .

Strengths and limitations
Strengths of the present study include the admission of all cases to the same unit of acute psychiatric care in prison, which decreases variability in the admission criteria, the multidimensional characterization of the sample including sociodemographic, clinical and criminological parameters, and the use of multivariable models controlling for the variables known to impact on clinical outcome and LoS in both general psychiatry and long-term forensic psychiatry settings. Several limitations should, however, be mentioned. Clinical diagnosis was carried out by two independent clinicians blinded to the aim of the study; standardized diagnostic questionnaires were not used in order to stay close to a real-life situation. Moreover, criminal records included prior convictions in Switzerland and countries of the European Union, whereas convictions in other countries (including the native country) were assessed solely on the basis of self-reports during the hospital stay. We cannot therefore exclude a declaration bias that could affect the quality of this variable. Similarly, the assessment of previous inpatient stays outside the canton of Geneva was also made by self-report and could be biased. The difference in time spent in prison may in itself impact on the clinical outcome. To partly address this limitation, our randomization process prevented the overrepresentation of cases with repeated admissions during the period of reference. The negative results regarding bipolar disorder, substance use disorders and antisocial personality should be interpreted with caution given the limited sample. This was also the case for some sociodemographic variables such as university education. Last but not least, these observations concern a specialized unit of forensic psychiatry located in a prison and not in a psychiatric hospital.
These latter settings may be radically different in the absence of prison staff, which implies an a priori selection of cases with better criminological profiles. Future studies in larger samples, using standardized assessment of clinical diagnosis, detailed assessment of previous convictions, and inclusion of forensic psychiatry units outside the prison, are needed to explore the determinants of clinical outcome and LoS in acute care forensic psychiatry settings.

From a clinical viewpoint, our results suggest that acute wards specialized in forensic psychiatry could be mainly useful for patients with prior inpatient care experience and higher violence risk during sentence execution. Importantly, these independent variables explain more than 25% of the delta HoNOS variance, a quite substantial percentage given the complexity of factors that impact on this clinical parameter. In contrast, these wards seem to be less effective for persons in pre-trial detention, who could benefit from less restrictive clinical settings. However, one should keep in mind that, in our sample, pre-trial patients less frequently displayed psychotic disorders, which are known to increase the risk of violence in clinical settings.
The present findings also indicate that, unlike in both general psychiatry and long-term forensic psychiatry settings, in the context of acute care the forensic parameters are more pertinent than the clinical diagnosis in predicting outcome measures.
Dental care for older adults in home health care services - practices, perceived knowledge and challenges among Norwegian dentists and dental hygienists
The proportion of the elderly retaining their own teeth is increasing, and epidemiological evidence suggests that the burden of caries and periodontal diseases will grow in aging populations, including the frail elderly . Maintaining good oral health is based on adequate oral hygiene and regular access to dental services, with oral health being an important determinant of overall health and wellbeing . Risk factors for oral diseases accumulate throughout life, and the majority of older people will therefore continue to have a need for both preventive and curative oral health care . Becoming older is associated with a higher incidence of illnesses and conditions that may lead to care dependency and increased vulnerability . However, caring for patients in their homes is becoming the preferred mode of health care delivery among the elderly population . Several studies have shown that oral health among frail and dependent elderly is inferior to that of the general population . When physical and cognitive functions are impaired, the capacity to perform oral hygiene is often reduced, and older people may thus become dependent on assistance with daily oral care . Moreover, older people in Western countries use dental health care services far less often than younger people , and frail people visit dental clinics less frequently than non-frail people . The frequency of dental visits and the relative proportions of diagnostics and prevention decrease with age, with older patients mainly visiting their dentist for restorative or prosthetic care . As preventive measures and treatment strategies for oral diseases are effective at all ages, the same standard of prevention and care should be provided across the entire life span . Providing dental services to dependent older people might be challenging due to reduced mobility, physical and cognitive decline, multimorbidity, and polypharmacy . Previous studies have assessed and identified barriers to continued dental service provision for the care-dependent elderly, pointing to the lack of suitable facilities and transportation, refusal of care, as well as the lack of adequate training and experience among clinicians . Elderly people in home care have been found to have poorer oral health than nursing home residents . Most international studies on the oral health of people in need of long-term care have focused primarily on the nursing home setting, and less is known about the provision of dental care to the elderly in domiciliary care . In addition, most studies exploring dentists' experience in delivering oral health care to older people are surveys focusing on oral health care in nursing homes. It has been shown that frail older people newly admitted to nursing homes often have poor oral health, which indicates that the deterioration has already taken place . More focus on the prevention of oral diseases and interventions to improve oral health and dental care among home care recipients is therefore needed. More research should be carried out to investigate current practices for dental care and preventive advice for older adults in home health care services (HHCS), as well as the challenges dental health personnel experience while delivering oral health care to this patient group . According to Statistics Norway, there were 4919 dentists and 1153 dental hygienists registered in Norway in 2021. For a population of 5.4 million, the number of inhabitants per dentist is approximately 1100, which indicates relatively good access to dental care .
Care-dependent elderly are entitled to free dental care in the Public Dental Service (PDS); however, only approximately 20% of those in HHCS use the PDS . The reasons for this have not been fully investigated. We lack knowledge about the preventive practices as well as the treatment procedures that dental professionals provide for HHCS patients. Moreover, dentists' and dental hygienists' experiences and challenges related to providing dental services for care-dependent older adults have not been explored. Thus, the aim of the present study was to explore current practices, knowledge, and experienced challenges related to the treatment of older HHCS adults among dentists and dental hygienists in Norway.

Study design and participants
This was an explorative nationwide survey among dentists and dental hygienists in Norway. For the PDS, the chief dental officers in all counties in Norway were contacted and asked to distribute the questionnaire by e-mail among clinicians working in public dental clinics. Invitations to dental hygienists were sent by e-mail via the Norwegian Dental Hygienist Association. Invitations to dentists in the private sector were distributed via newsletters and social media to members of the Norwegian Dental Association. Data collection was based on an electronic questionnaire distributed via QuestBack and started on 15 November 2021. Three reminders for participation were sent, and the data collection ended on 16 January 2022. Only dentists and dental hygienists who reported providing dental care for older HHCS adults (65+) were included.

Ethics approval and consent to participate
Participation was voluntary and written informed consent was obtained electronically from all participants. No compensation was given to the respondents. Anonymity of the respondents was ensured by QuestBack. The study was approved by the Norwegian Centre for Research Data (210679). All methods were performed in accordance with relevant guidelines and regulations. The manuscript was prepared according to STROBE guidelines .

Variables
The questionnaire consisted of four parts:
(1) Background characteristics (Table ).
(2) Current practices (dental care and preventive advice). Dentists were presented with 12 dental procedures and asked to rank up to four of their most common procedures when providing dental care for older HHCS adults (Figure ). Dentists and dental hygienists were asked how often they gave preventive advice concerning brushing technique, interdental cleaning, use of fluorides at home and diet, on a 5-point Likert scale (always, often, sometimes, seldom, never) (Figure ). Dentists and dental hygienists were also asked to state whether the treatments they provided were most often performed to relieve oral problems/symptoms, or to postpone, preserve, or improve the patient's oral health status.
(3) Self-perceived knowledge. The respondents were asked to evaluate their self-perceived knowledge regarding the treatment of older patients within three categories: patients with complex treatment needs, patients with dementia or other cognitive impairment, and patients with impaired physical functioning. Responses were given on a 5-point Likert scale (totally agree, agree, neither agree nor disagree, disagree, totally disagree) (Table ).
4. Experienced treatment-related challenges. Dentists and dental hygienists were presented 16 statements about challenging situations relevant to dental treatment of older patients and asked to report how often they experienced each of them on a 5-point Likert scale (always, often, sometimes, seldom, never) (Figure ).

The questionnaire was based on topics from the literature and was pilot-tested for face validity by ten dentists and dental hygienists in the public and private dental services to ensure respondents' comprehension of the questions and an appropriate length of the questionnaire. Content validity was also assessed with the same clinicians to determine whether the questionnaire captured the intended research objectives.

Statistical methods

Descriptive statistics in the form of frequency and percentage distributions were used to describe the background characteristics of the respondents. The chi-squared test was used to test bivariate associations. From the 16 items on experienced challenges, we conducted an exploratory factor analysis (EFA) using oblique rotation to identify and examine clusters of inter-correlated variables. We extracted three clusters of variables, called factors, which explained most of the observed variance of the original 16 items. The sampling adequacy of the data for EFA was assessed using the Kaiser-Meyer-Olkin (KMO) statistic, which was considered meritorious (KMO = 0.859). In constructing the factors, all items exhibiting factor loadings of ≥ 0.30 were considered. Structural equation models (SEMs) were then used to identify socio-demographic characteristics that were significantly associated with the extracted factors. IBM SPSS Statistics 27 was used to perform descriptive analyses, whereas Stata SE 17 was used for conducting the EFA and SEMs. The level of statistical significance was set at 5%.
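For readers who wish to reproduce this kind of factor-analytic workflow, the following is a minimal sketch in R (the original analyses used SPSS and Stata, so the R translation, the psych package functions, and all object and item names are illustrative assumptions, with simulated data standing in for the survey items):

    # Simulated stand-in for the 16 Likert-scored challenge items (1-5);
    # in the real study these would be the survey responses.
    library(psych)
    set.seed(1)
    challenges <- as.data.frame(matrix(sample(1:5, 200 * 16, replace = TRUE), ncol = 16))
    names(challenges) <- paste0("item_", 1:16)

    # Sampling adequacy: overall Kaiser-Meyer-Olkin statistic (reported as 0.859)
    KMO(challenges)$MSA

    # Exploratory factor analysis with oblique (oblimin) rotation,
    # retaining three factors (eigenvalues > 1, Kaiser's criterion)
    efa <- fa(challenges, nfactors = 3, rotate = "oblimin", fm = "ml")

    # Loadings, suppressing values below the 0.30 threshold used in the study
    print(efa$loadings, cutoff = 0.30)

    # Internal consistency (reliability) of the items clustering on one factor
    alpha(challenges[, paste0("item_", 1:6)])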
Results

The background characteristics of the respondents are presented in Table . The vast majority of the respondents were female, had graduated in Norway, worked in the PDS, and had a full-time position. All recorded background characteristics differed significantly between dentists and dental hygienists. Dental hygienists were generally older than dentists, had graduated before 2008, and were more often educated in Norway. In addition, there was a larger proportion of dental hygienists working in the private dental service and a larger proportion of dental hygienists with part-time positions.

Treatment practices and self-perceived knowledge

Of the respondents, 18% reported that older HHCS adults came to the dental clinic because of pain or acute problems (20% of dentists, 14% of hygienists, p < 0.025).
According to the respondents, treatment was most often aimed at relieving oral problems (49%), followed by preserving (28%) and improving (24%) oral health. Dental hygienists were more often focused on improving oral health than dentists (27% versus 23%, p = 0.009). The most frequent procedures performed by dentists are shown in Figure . Restorative treatments (fillings) were performed most frequently, followed by clinical and radiographical examinations and extractions. Less than 10% reported that periodontal treatment, endodontic treatment, or fixed prosthetic treatment was among their most frequently performed procedures, and less than 3% frequently performed oral surgery, implant dentistry, or temporomandibular joint treatment in this patient group.

Figure shows the frequency of preventive advice given by clinicians to older HHCS adults. The majority of respondents reported always/often giving preventive advice about brushing technique, interdental cleaning, and use of fluorides at home, but less than half reported always/often giving dietary advice. A larger proportion of dental hygienists than dentists reported giving preventive advice in all four categories.

There were statistically significant differences between dentists and dental hygienists regarding their self-perceived knowledge of patients with complex treatment needs and with cognitive or physical impairment (p < 0.001) (Table ). Slightly more than half of the dentists totally agreed or agreed that they had enough knowledge in all three areas (50.5%, 60.2%, and 55.5%, respectively), compared to less than half of the dental hygienists (39.5%, 46.9%, and 37.9%, respectively).

Experienced challenges

The vast majority of dentists and hygienists reported that they always/often had to use extra time to update the older HHCS patients' medication lists and patient history (Figure ). On the other hand, very few reported experiencing patients who resisted treatment or patients who wanted to continue in private dental practice but were hindered by economic reasons. Seven of the 16 statements revealed statistically significantly different experiences between dentists and dental hygienists (Table S1).

Using EFA, three factors with eigenvalues above Kaiser's criterion of 1 were extracted, together explaining 51.92% of the variance. Table shows the factor loadings ≥ 0.30. Six items clustered on Factor 1, which represents challenges related to the time needed to gather essential information about the patients (Time); another six items clustered on Factor 2, representing challenges related to resources and practical issues (Practical organization); and three items clustered on Factor 3, representing communication problems (Communication). Factor 1 had high reliability, while Factors 2 and 3 had moderately high reliabilities.

Standardized coefficients obtained from the SEMs showed that males were more likely than females to consider practical organization a challenge when providing dental treatment to older HHCS patients (Table ). Dentists and dental hygienists who graduated in 2008 or later were less likely to experience challenges with the time needed to gather essential information about the patient and with practical organization. However, they were more likely to experience communication problems than those who graduated before 2008. Clinicians who graduated in countries other than Norway were less likely to report problems regarding time.
Those who reported using 45 minutes or more per patient were less likely to experience challenges with time and practical concerns than those using less time per patient. Clinicians working in the private dental service were more likely to experience practical challenges and less likely to experience communication problems than those working in the PDS.
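To illustrate how such structural models can be specified, a minimal sketch in R with the lavaan package follows (the study itself used Stata, so this MIMIC-style formulation, the item-to-factor assignments, and all variable names are hypothetical, and the data are simulated only to make the example self-contained):

    # Latent challenge factors measured by their items and regressed on
    # respondent characteristics. Simulated placeholder data.
    library(lavaan)
    set.seed(2)
    n <- 300
    survey <- as.data.frame(matrix(sample(1:5, n * 6, replace = TRUE), ncol = 6))
    names(survey) <- paste0("item_", 1:6)
    survey$sex_male <- rbinom(n, 1, 0.3)
    survey$grad_2008_plus <- rbinom(n, 1, 0.5)

    model <- '
      # measurement part (hypothetical item-to-factor assignment)
      Time      =~ item_1 + item_2 + item_3
      Practical =~ item_4 + item_5 + item_6
      # structural part: factors regressed on background characteristics
      Time      ~ sex_male + grad_2008_plus
      Practical ~ sex_male + grad_2008_plus
    '
    fit <- sem(model, data = survey)
    standardizedSolution(fit)   # standardized coefficients, as reported in the study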
Discussion

In the present study, the current practices, self-perceived knowledge, and experienced challenges of Norwegian dentists and dental hygienists when providing dental care to older HHCS adults were explored. Our results demonstrated that dental care for this population was mainly aimed at relieving acute problems rather than preserving or improving patients' oral health. This is in accordance with the most frequently reported treatments offered by dentists, which, with the exception of clinical and radiographical examinations, were fillings, extractions, and removable prostheses. In contrast, fluoride varnish application and periodontal and endodontic treatment were performed less frequently. This agrees with previous studies showing that restorative treatments were the dominant procedures provided by dentists to older adults in Western countries, and that periodontal treatment was seldom performed.

Studies investigating the oral health of older patients have shown that the proportion of older adults retaining their natural teeth in the general population is increasing; however, the prevalence of edentulism among HHCS elderly is still high. In the present study, 95% of the dentists reported fillings to be one of their most frequently performed procedures, indicating that a substantial proportion of this population is dentate. Dentists were more likely than dental hygienists to experience patients contacting them because of pain (20% versus 14%). This is not unexpected, considering the different work responsibilities of dentists and dental hygienists.

As the frequency of dental visits has been shown to decline with age and worsening general health, it is reasonable to believe that the institutionalized elderly visit the dental clinic less often than those living at home. It has also been reported that older people who visit community dental practices are still relatively healthy, non-frail, and highly educated. In Norway, only a small proportion of older adults in HHCS use their right to free dental care, and the reason for this is not known. However, a questionnaire study among HHCS users revealed that only 49% were aware of the possibility of free dental care. It has been speculated that not all eligible older adults receive the information about free dental treatment or that some may simply forget having received it. In other countries, factors that influence dental service utilization in this patient group have been identified, such as the presence of pain, need for prosthetic treatment, education level, or financial situation.
Lack of suitable facilities for transportation or treatment, staff and time constraints, and lack of awareness and knowledge among nursing staff have been reported as barriers. In addition, individuals with no or only minor functional limitations and better mental health tended to use dental services more frequently.

Poor oral hygiene, a sugar-rich diet, and dry mouth are common risk factors for oral diseases. Earlier studies have pointed to a lack of awareness regarding oral health and hygiene among healthcare providers, patients, and relatives, resulting in poor oral health. Substantial treatment needs as well as inadequate daily oral hygiene have been revealed among older adults in HHCS. In addition to an increased emphasis on the assistance with daily oral care provided by caregivers in HHCS, it is important that dental health personnel focus on preventive measures when meeting patients and their caregivers. In the present study, both dentists and dental hygienists frequently recommended fluoride supplements for home use and gave advice on brushing technique and interdental cleaning. However, it is alarming that 25% of dentists and 15% of dental hygienists reported never giving dietary advice, and that less than half gave dietary advice often. Generally, dental hygienists provided more preventive services than dentists, which is in line with their different professional responsibilities.

In the present study, between 47% and 56% of all clinicians reported having enough knowledge about dental treatment of patients with complex treatment needs, dementia or cognitive impairment, or impaired physical functioning. Dentists were more confident than dental hygienists when treating these patients. Several studies have reported a demand for further training related to the care of older people, especially when managing patients with dementia. Lack of knowledge, adequate training, and experience have been reported to be barriers when providing dental care to dependent older patients.

Almost all respondents reported needing more time to gather essential information from patients. Limited resources and time have been described as barriers to providing the necessary dental care. In addition, physical limitations, multimorbidity, and polypharmacy may lead to medical complexity and unstable overall health, which may hamper treatment. The challenges experienced by dentists and dental hygienists in the present study were consolidated into three domains: time, practical organization, and communication. Respondents' work experience had an impact on all of the factors extracted from the EFA. Dentists and dental hygienists with more experience were more likely to experience challenges regarding time and practical organization than those with the shortest work experience. Almost all respondents highlighted the time needed to update individual patients' medication lists or general history, in line with previous evidence. These findings indicate the need for an increased focus on communication, information exchange, and interprofessional collaboration, as users of HHCS have health issues and comorbidities that often require multidisciplinary dialogue. In contrast to earlier studies, which found refusal of treatment and lack of suitable facilities to be main barriers to dental treatment of dependent older people, very few respondents in the present study reported these to be challenges.
In addition, suitable facilities for dental treatment and patient transportation have been found to be important for dentists to provide oral care for older patients. This contrasts with the present study, where very few respondents reported a lack of customized equipment to be a challenge for dental treatment. Nevertheless, a large proportion reported ergonomic issues for the clinician or the patient as challenging.

This study comes with a number of limitations. It was a voluntary questionnaire study, and participants self-selected to complete the survey. Selection bias related to the personal interests of respondents may have occurred. Furthermore, recall bias among participants as well as confirmation bias cannot be ruled out. Approximately 10% of the registered dentists and 20% of the registered dental hygienists in Norway responded to the questionnaire. Public dentists, females, and younger individuals were overrepresented among the respondents; thus, our findings should be generalized with caution.

To the best of our knowledge, the present study is the first in Norway to highlight dentists' and dental hygienists' perspectives by investigating current practices, self-perceived knowledge, and experienced challenges when providing dental care to older HHCS adults. In this study population, challenges related to dental care for older adults receiving HHCS could be categorized into time, practical organization, and communication. Variation in perceived challenges was associated with background characteristics and sector, but not with the professional category of clinicians. The results indicated that dental care for older patients is time-demanding and more often aimed at relieving symptoms than at preserving or improving oral status. A substantial proportion of dentists and dental hygienists in Norway lacked confidence in providing dental care for older HHCS adults. Thus, more emphasis in both the undergraduate and postgraduate curricula of dental personnel is needed in order to meet the complex, interprofessional care needs of this growing patient group.

Additional file 1: Table S1. Experiences and perceived challenges among dentists and dental hygienists.
Associated factors of prosthetic rehabilitation in specialized dental care in Brazil: a cross-sectional study
37970610-4295-4739-99b3-bc4f8b0144e4
10111834
Dental[mh]
Oral health conditions are a challenge for public health. As of 2017, approximately 3.5 billion people in the world had oral problems, and 267 million had tooth loss. In Brazil, the last epidemiological survey of oral health showed an increase in tooth loss with age: the missing component represented 5.8% of the decayed, missing and filled teeth (DMF-T) index among young people, 44.7% among adults, and 92% among the elderly. Tooth loss compromises the functionality of the dentition, leading to difficulties in speech and in the act of smiling and, consequently, affecting individuals' social and emotional interactions. Thus, prosthetic rehabilitation is necessary, since replacing lost teeth with prostheses restores chewing, promotes better nutrition, and provides well-being and facial aesthetics, which increases quality of life.

The prosthetic rehabilitation offered by the Unified Health System (SUS) is provided for by the National Oral Health Policy (PNSB) through the “Brasil Sorridente” program. The laboratory manufacture of dental prostheses, which include removable mandibular partial dentures, removable maxillary partial dentures, total mandibular dentures, total maxillary dentures, and fixed/adhesive coronary/intra-radicular prostheses, is the responsibility of the regional dental prosthesis laboratories (LRPD) or of private laboratories hired by the management of the services, as provided for in Ordinance No. 1,825 of 2012. The clinical phase of dental prostheses, including impressions, cementation, adaptation, and guidance to users regarding use, hygiene, and post-use adjustment, is carried out by dentists working in Primary Health Care (PHC) or by prosthodontists from the Dental Specialty Centers (DSC). The DSCs were created to increase the population's access to specialized procedures, continuing the care initiated by the PHC according to referral protocols established by the services.

Despite efforts to ensure a greater supply of prostheses in the SUS, in 2014 approximately 22,653 complete dentures and 10,070 removable partial dentures were delivered per month in the country, an insufficient number to meet the dental prosthesis needs of the 9,501,160 Brazilians aged between 65 and 74 years. Furthermore, the uneven distribution of public oral health facilities stands out. Of the 780 DSCs in the Brazilian regions in 2014, 325 had established working processes with the LRPD. Considering the proportion of capitals with DSCs, the North and Northeast regions were the least favored. Thus, in addition to the limited and uneven supply of dental prostheses in DSCs, other organizational factors and individual factors of SUS users may affect access to rehabilitation treatment. It is therefore necessary to investigate the factors that shape the prosthetic rehabilitation of users in specialized dental services, to inform the work process at this level of oral health care. This study aimed to analyze the individual and contextual factors associated with prosthetic rehabilitation in Dental Specialty Centers in Brazil.

Methods

This is a cross-sectional study with data extracted from the database of the second cycle of the External Evaluation of the National Program for Improving Access and Quality of Dental Specialty Centers (PMAQ-CEO, in Portuguese), available on the website of the Secretary of Primary Health Care of the Ministry of Health ( https://aps.saude.gov.br/ape/pmaq/ciclo2ceo/ ).
The second cycle of the PMAQ-CEO was carried out in Brazil in 2018 and comprised three stages of development. The last stage was the on-site verification of the quality standards established by the program (external evaluation). In this stage, a trained external evaluator, independent of the service, applied a questionnaire divided into three modules. Module I evaluated the structure, equipment, instruments, and supplies of the facility. Module II, which included data related to the work process, the organization of the service, and the care of users, was answered by the managers of the DSC and a dentist of any specialty. Module III, designed to collect data on user satisfaction and perceptions of specialized oral health services regarding access and use, was administered to users at the DSC. Detailed information about the program is available in the PMAQ-CEO 2nd Cycle Instruction Manual.

The present study extracted data from modules II and III of the PMAQ-CEO. The spreadsheets exported to Microsoft Office Excel 2010 were merged using the National Registry of Establishment number as a common identifier, as reported in a previous study. In this way, it was possible to link the data provided by the DSC managers and dentists to the users' data.

The sample of users participating in the external evaluation of the 2nd cycle of the PMAQ-CEO was a convenience sample. As an inclusion criterion, only users aged 18 and over were considered for the interview. Those present for the first time at the DSC were excluded, as provided in the Ministry of Health's PMAQ-CEO external evaluation manual. Each field evaluator was trained to apply the instrument to 10 users aged 18 and over who were present at the DSC on the day of the external evaluation.

The dependent variable was extracted from module III, based on the following question: “Where did you get your dental prosthesis?”. The response options were operationalized for analysis purposes as “In the DSC (in this or another)” or “Other” (Primary Health Unit, Private Clinic or Private Practice, Other).

The independent contextual variables were the region of the country (all regions considered) and the location of the DSC (urban or rural), in addition to the following questions from module II: referral for prosthodontic impressions at the DSC (yes/no); waiting list management (yes/no); presence of predefined places for referral of primary care users to the clinical prosthodontist (yes/no/no service in this specialty); estimated waiting time for the user to be seen by the clinical prosthodontist at the DSC (dichotomized into ≤ 2 months, > 2 months, or no information); number of people in the queue waiting to be seen for a prosthesis (dichotomized by the median into ≤ 123, > 123, or no information); suspension of DSC care in the past 12 months due to lack of supplies or instruments (yes/no); and average number of dentures delivered per month (dichotomized into ≤ 25 or > 25).

Questions from module III were considered as individual independent variables related to sociodemographic aspects, such as sex (male/female), age (categorized as < 44 years old, 45 to 64 years old, or ≥ 65 years old), self-reported ethnicity/skin color (yellow/indigenous, white, brown, black), marital status (single, married, divorced/widowed), retirement (yes/no), family income (≤ 1 minimum wage, > 1 minimum wage), family allowance benefits (yes/no), and level of education (up to complete secondary education/at least complete secondary education).
In addition, it was considered whether the respondent lived in the same municipality as the DSC (yes/no), whether the household was covered/accompanied by the Family Health Strategy (yes/no), the mean time to reach the DSC (dichotomized by the mean into ≤ 20 min and > 20 min), whether the opening hours met the respondent's needs (yes/no), the reception when seeking the DSC (very good, good, fair, poor, very poor), good conditions of use of the DSC facilities (yes/no), and the general opinion of the service received from the DSC (very good, good, fair, poor, very poor).

The sample size was determined by the PMAQ-CEO Coordination in accordance with the program manuals. Initially, frequency distribution tables were built. Then, the associations between prosthesis provision in a DSC and the individual and contextual variables were analyzed. For this, simple and multiple multilevel logistic regression models were used. Multilevel models were used to account for possible dependencies between observations of patients from the same DSC. Variables of the first level (individual) and the second level (contextual/DSC) were then considered in the model. Using the empty model, with only the intercept, it was possible to calculate the intraclass correlation coefficient, estimating the proportion of the total variance due to the context (DSC). In the multiple models, we selected the variables that presented p < 0.20 in the crude analyses. Next, the first-level variables were included in the model, remaining if p ≤ 0.05 after adjustment for the other first-level variables. After the second-level variables were included, those with p ≤ 0.05 remained in the final model after adjustment for the other variables. Finally, crude and adjusted odds ratios (OR) with 95% confidence intervals (95%CI) were calculated. Model fit was evaluated using the QIC. The analyses were performed using R and the Statistical Analysis System (SAS).

The study was approved by the Research Ethics Council under protocol 23458213.0.1001.5208, following Resolution 466/2012 of the National Health Council. All participants received and signed the Free and Informed Consent Term in two signed copies.
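As an illustration of the record-linkage step described above, the following is a minimal R sketch (all column and category names are hypothetical placeholders, and tiny in-line data frames stand in for the exported spreadsheets):

    # Service-level data (module II) joined to user-level data (module III)
    # by the National Registry of Establishment identifier ('cnes' here).
    module2 <- data.frame(cnes = c("001", "002"),
                          waitlist_mgmt = c("yes", "no"))
    module3 <- data.frame(cnes = c("001", "001", "002"),
                          prosthesis_where = c("DSC", "Other", "DSC"))

    linked <- merge(module3, module2, by = "cnes", all.x = TRUE)

    # Dependent variable: prosthesis made in a DSC (this or another) vs. elsewhere
    linked$prosthesis_dsc <- as.integer(linked$prosthesis_where == "DSC")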
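The two-level models themselves could be fitted along the following lines; this is a minimal sketch with the R package lme4 (the paper reports using R and SAS but does not name packages, so the functions, variable names, and simulated data below are assumptions for illustration):

    # Users (level 1) nested in DSCs (level 2); simulated placeholder data.
    library(lme4)
    set.seed(3)
    n <- 800
    dat <- data.frame(
      cnes              = factor(sample(1:40, n, replace = TRUE)),  # DSC identifier
      education_low     = rbinom(n, 1, 0.6),
      same_municipality = rbinom(n, 1, 0.9),
      countryside       = rbinom(n, 1, 0.85)   # DSC-level variable, simulated per user here
    )
    dat$prosthesis_dsc <- rbinom(n, 1, plogis(-1 + 0.2 * dat$education_low))

    # Empty (intercept-only) model: intraclass correlation coefficient, i.e.,
    # the proportion of total variance attributable to the DSC context
    m0  <- glmer(prosthesis_dsc ~ 1 + (1 | cnes), family = binomial, data = dat)
    v   <- as.data.frame(VarCorr(m0))$vcov[1]
    icc <- v / (v + pi^2 / 3)   # pi^2/3 is the level-1 latent-scale variance

    # Adjusted model with individual and contextual covariates
    m1 <- glmer(prosthesis_dsc ~ education_low + same_municipality + countryside +
                  (1 | cnes), family = binomial, data = dat)

    # Odds ratios with 95% confidence intervals (Wald approximation)
    exp(cbind(OR = fixef(m1),
              confint(m1, parm = "beta_", method = "Wald")))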
Results

Of the 10,391 respondents from 1,042 DSCs in the country, 24.4% (95%CI: 23.6%; 25.2%) used dental prostheses. Of the individuals who reported using dental prostheses, 26.0% had them made in a DSC and the rest elsewhere. The frequency distributions of patients who used dental prostheses according to the independent variables are presented in the tables. Most of the sample was female (68.3%), aged between 45 and 64 years (57.0%), married (54.1%), not retired (65.1%), with a family income of up to one minimum wage (55.6%), and with a level of education up to incomplete high school (67.2%). In addition, 93.2% lived in the municipality of the DSC, and 81.9% were covered by the Family Health Strategy (FHS). It was observed that 97.2% and 97.1% found the reception and the service provided by the DSC to be good, respectively, but for 96.7% the DSC's opening hours did not meet their needs. The majority were patients of DSCs located in the countryside (85.0%); 72.0% were patients of DSCs that perform dental prosthesis impressions, and 75.0% of DSCs that manage the waiting list.

When the variables were adjusted (final model), the prevalence of patients who had their prosthesis made in a DSC was significantly higher among those with a lower level of education [OR = 1.23 (95%CI 1.01; 1.50)], those living in the municipality where the DSC is located [OR = 1.69 (95%CI 1.07; 2.66)], and patients of a DSC in the countryside [OR = 1.41 (95%CI 1.01; 1.97)] (Table ).

Discussion

The present study evaluated the individual and contextual factors associated with prosthetic rehabilitation in DSCs in Brazil and identified that the prevalence of users who had dental prostheses produced by the DSC is low. At the individual level, users with a lower level of education and those who lived in the same city where the DSC is located, and at the contextual level, those who accessed a DSC in the countryside, were more likely to have their prosthetic needs met. There are financial, geographic, and organizational barriers that compromise users' access to the services offered by the DSC. However, few investigations explore access to the prosthetic rehabilitation provided by this service.
Regarding barriers to accessing dental prostheses in the DSC, the findings revealed that people with a lower level of education were more likely to have a prosthesis made in the DSC, a finding also reported in the literature. Although this seems encouraging, these data may reflect care that reaches only individuals who depend on the SUS and who wait for and obtain the prosthesis only because they have no other viable option. Thus, reducing the waiting list, in addition to making the service more resolutive, can minimize the problems of a population that, if not attended, has no other way of obtaining a dental prosthesis.

Obtaining a dental prosthesis was more likely among users who lived in the same municipality as the DSC. Therefore, despite all efforts to facilitate the inter-municipal regionalization of services through specialty referrals under the Consortium Agreed Programming (PPC) among the consortium entities, services still end up privileging the reference municipality. In the PPC, the state and the municipalities of a given health region agree financially to maintain the DSC and its operation according to the existing human resources, establishing the number of vacancies for the municipalities and the state that compose it. In general, services use the Regulation System (SISREG) to manage referrals between Primary Health Care and the Regional Reference Center for the Dental Specialty. However, in the first external evaluation of the PMAQ-CEO, carried out in 2014, it was observed that of the 876 DSCs distributed in Brazil, only 358 (38.5%) had clinical protocols for referring primary health care users to the dental prosthesis specialty, which also demonstrates a low level of agreement for specialized dental prosthesis services in Brazil. It can also be understood that the larger population size of the reference municipality for regional health makes the citizens of these municipalities more likely to be served in these establishments. Therefore, the relationship between user access and the decentralization and regionalization of dental care services at the secondary level has not been established. This seems to depend on the distribution characteristics of the DSCs, their coverage areas, transport logistics, protocols and work practices, and the demand profile for specialized care. Other studies should explore this theme, considering the specialty of dental prostheses. After all, strengthening the regionalization of services makes it possible to take specialized, more expensive technologies to the population of the municipalities associated with a region, optimizing resources and expanding the guarantee of oral health care.

It was also observed that users living in rural areas were more likely to solve their prosthetic problems at the DSC than users living in the urban areas of the capital. In general, the literature reports difficulties for prosthodontic services in achieving good productivity. Some specialties offered by the DSC perform better in terms of procedure targets, which may be related to the sociodemographic conditions of the communities, such as the Human Development Index, gross domestic product, illiteracy rate, poverty, and FHS coverage. Although one study showed that non-compliance with oral surgery targets was associated with the larger population size of the cities studied, no study with this kind of evaluation was found for the prosthodontic specialty.
Finally, it is worth clarifying that the PNSB established prosthetic rehabilitation in the SUS in 2004. Through this policy, the Dental Prosthesis Laboratories (LRPD) were structured. However, by 2013, 1,465 LRPD had been qualified, unequally distributed in the country, without considering epidemiological indicators and the population's prosthetic needs, and with production below the population's demands (rates of 15.81 total dentures delivered per month per 100,000 inhabitants). The low productivity and uneven distribution of the LRPD can affect the supply of the prosthesis specialty at the DSC, since these laboratories collaborate in the manufacture of prostheses requested by the oral health teams of primary care, by mobile dental units (UOM), and by DSCs. According to the Department of Health's Strategic Management Support Room, 2,524,403 dentures were provided in secondary care between 2010 and 2015. Although many dentures have been delivered over the years, there is a deficit in access to prosthetic rehabilitation at this level of care: the expansion of services has not kept pace with the demand. It is noteworthy that the prosthesis specialty is not included in the list of minimum specialties a DSC must offer, and its inclusion may be a local management decision.

The expansion of access to dental prostheses in the SUS has been discussed, including encouraging the provision of dental prostheses in PHC. However, an analysis of the performance of 18,114 oral health teams working in PHC in 2014 revealed that less than half of them (43%) delivered some type of dental prosthesis. Comparing the performance of the PHC oral health teams between 2011/2012 and 2013/2014, there was only a 0.8% increase in impression-taking for prostheses, indicating a low number of PHC teams performing the procedures for the prosthetic rehabilitation of users. It is also essential to consider that the manufacture of dental prostheses proceeds through several stages and requires supplies, material resources, and technical skills from the dentist. The characteristics of the dental practices included in the DSC, and the structure of these establishments, need to be considered. It is expected that the largest number of prostheses delivered to the SUS user population would come through this service, a hypothesis to be tested for better readjustment and reallocation of resources considering all access points of the oral health care network.

Individual factors, such as education level and living in the same municipality as the DSC, and contextual factors, such as accessing a DSC located in the countryside, are associated with prosthetic rehabilitation in the specialized dental care of the SUS. As a limitation, it is emphasized that the PMAQ-CEO data collection design approached only users present in the establishment, as informants by free adhesion, thus missing the data and perceptions of users who were not present. Thus, the convenience sample may not characterize the entire population assisted by dental specialties, and we cannot generalize our findings to the country.
Functional Potential of Soil Microbial Communities and Their Subcommunities Varies with Tree Mycorrhizal Type and Tree Diversity
e7d50f8f-c9a1-4482-a103-797a549980d7
10111882
Microbiology[mh]
Microorganisms, especially bacteria and fungi, contribute enormously to terrestrial ecosystem services: for example, by playing a vital role in soil nutrient cycling. In particular, the contribution of plant-symbiotic microbes to soil nutrient cycling has been well documented. For example, mycorrhizal fungi form symbiotic associations with around 90% of terrestrial plant species and take part in nutrient cycling by mobilizing nitrogen (N) and phosphorus (P) in soils. Similarly, plant-symbiotic bacteria belonging to Rhizobium and Frankia can fix nitrogen and thus essentially participate in N cycling. Moreover, at the community level, it is also important to consider the extensive contribution of free-living soil bacteria and fungi to soil nutrient cycling, as they constitute a major part of the soil microbiota. A few examples include carbon-fixing Actinobacteria, nitrogen-fixing Azotobacter, and phosphate-solubilizing Acidobacteria. Likewise, Penicillium, Aspergillus, and Trichoderma are free-living fungi known to be actively involved in the decomposition of soil organic compounds (C cycle), nitrification (N cycle), and P solubilization (P cycle), respectively.

Soil nutrient stoichiometry, such as C/N/P ratios, is known to affect soil microbial communities, depending on the organismal nutrient stoichiometric ratios of their constituent members. For example, it has been reported that high N and P abundances in soil favor the abundance of fast-growing bacteria (i.e., copiotrophic r-strategists) like Actinobacteria and Alphaproteobacteria while discriminating against slow-growing bacteria (i.e., oligotrophic K-strategists) like Acidobacteria. Also, previous research suggests that ectomycorrhizal fungi (EMF) preferentially associate with soils of high-C/N substrates, whereas saprotrophic fungi prevail in soils with low C/N ratios.

There has been a surge in recent studies showing the link between microbial diversity, community composition, and soil ecosystem multifunctionality. However, there is still a knowledge gap about how soil microbial communities vary in the stoichiometry of their nutrient cycling genomic potential, i.e., the relative combinations of genes coding for different nutrient cycling enzymes. In a study taking a genomic perspective on soil carbon cycling, Hartman et al. reported links between microbial community composition, the microbes' C, N, and P substrate utilization potential, and C turnover. This highlights the importance of studying the genomic potential of microbial communities to better understand soil nutrient cycling. Given that soil C, N, and P cycles are linked, it is essential to study the co-occurring bacterial and fungal communities together for their genomic potential in the cycling of the different major nutrients and their combinations (viz., C, N, P, CN, CP, NP, and CNP). For instance, the ability to decompose soil organic matter (SOM) with various nutrient ratios depends on the composition of soil microbial communities. Subsequently, the decomposed SOM would be available to bacteria and fungi conditional on their abilities to carry out either N fixation or denitrification, and/or could concurrently be available for P mineralization or solubilization.
This linkage between the different soil nutrient cycling processes and the different microbes involved can be viewed from a “microbial syntrophy” (microbial metabolic interrelationships) perspective, which is affected by many factors (for example, available nutrient ratios) but essentially depends on the genomic potential of the members of the microbial communities. The ecological processes and relationships within a microbial community can cumulatively emerge from the constituting microbial groups/clusters (i.e., taxa that are more strongly associated within that group than with other groups), which are also known as subcommunities. Based on network theory, the study of subcommunities, also known as modules, can provide key insights into the overall functioning of the microbial community, allowing us to assess the metabolic potential based on the individual microbes' functional roles, which otherwise remains a black box. In addition, knowledge of subcommunities sheds light on the ecological processes that shape and regulate community structure and organization, such as environmental filtering or niche differentiation. For example, recent studies in soil microbial ecology have taken advantage of subcommunity-based analyses to develop a deeper understanding of environment-specific relationships and of the functional roles of microbial communities.

One of the key factors influencing soil microbial communities in forests is the tree mycorrhizal type, which is also known to impact microbial functional genes and soil nutrient cycling. In addition, tree diversity has been reported to affect soil microbial communities and soil nutrient availability. Despite these efforts, there is still a great need to understand how tree mycorrhizal type and tree diversity affect the co-occurring soil bacterial and fungal communities at the subcommunity level and, in consequence, their genomic functional potential for nutrient cycling. Insight into these processes would provide a broader understanding of the intrinsic characteristics of the soil microbial groups operating in ecological processes and of the functional potential emerging at the community level. Such an in-depth mechanistic understanding would also be the basis for managing forest soil ecosystems to maintain or increase forest multifunctionality.

To fill this knowledge gap, this study was conducted at the BEF-China experimental research platform, using tree species of two mycorrhizal types, namely, ectomycorrhizal (EcM) and arbuscular mycorrhizal (AM), at different tree diversity levels. We employed a fungal-bacterial interkingdom co-occurrence network approach to derive the microbial subcommunities (here used interchangeably with “modules”) and used PICRUSt2 to predict the potential genomic functions with regard to nutrient cycling from the amplicon sequencing data. Our main objective was to understand how the stoichiometry in the genomic functional potential of soil microbial communities and their subcommunities with regard to the three major nutrient cycles and their combinations (C, N, P, CN, CP, NP, and CNP) varies between EcM and AM trees at different tree diversity levels. In particular, we asked the following research questions.

1. How do the EcM and AM tree species pair (TSP) soil bacterial and fungal community co-occurrence network structures differ across tree diversity levels, and which soil characteristics drive the composition of the subcommunities in these networks?
2. What are the effects of tree diversity and tree mycorrhizal type on the predicted genomic functional potential (in terms of the C, N, and P cycles and their combinations) of the co-occurring bacterial and fungal communities?

3. How do EcM and AM TSP soil microbial subcommunities differ in their genomic functional abundances in the three nutrient cycles and their combinations within the tree diversity levels, and which microbial taxa drive these differences?

Results

EcM and AM TSP soil microbial interkingdom network characteristics

The differences in the number of input bacterial taxa used for the construction of networks at each tree diversity level were minuscule between EcM and AM trees (ranging from 796 to 798 amplicon sequence variants [ASVs]). The fungal input varied most in two-tree-species mixtures, with 430 and 503 ASVs for the EcM and AM networks, respectively (see Table S1 in the supplemental material). Consistently, we found no contrasting differences in clustering coefficient and modularity; however, there were three more modules in the EcM than in the AM network at each of the monospecific-stand and two-tree-species diversity levels (Table S1). To assess the underlying network community organization and the importance of the community members, we tested the distributions of four important network centrality indices, namely, node degree (used to identify community hub taxa), betweenness (a measure of a taxon's influence in the network), closeness (a measure of the closeness of a taxon to all other members), and eigenvector centrality (a measure of a taxon's linkage to others, accounting for how connected those others are). We found significant differences (P < 0.05) in the distributions of these four centrality indices between EcM and AM networks at all tree diversity levels. AM networks had higher median values of these distributions except for betweenness centrality, for which EcM networks had higher values, especially at the monospecific-stand and two-tree-species diversity levels, indicating differences in the organization of microbial taxa in their respective communities.
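For orientation, the following is a minimal R sketch of how such network descriptors can be computed with the igraph package (the paper does not specify the software for this step here, so the package choice is an assumption, and a random graph stands in for the inferred co-occurrence network, whose edge-inference step is omitted):

    # Placeholder random graph standing in for a fungal-bacterial
    # co-occurrence network inferred from ASV abundance associations.
    library(igraph)
    set.seed(4)
    g <- sample_gnp(60, 0.10)

    # Global structure: clustering coefficient, modularity, and module sizes
    transitivity(g, type = "global")
    mods <- cluster_louvain(g)
    modularity(mods)
    sizes(mods)   # subcommunity (module) sizes

    # Node centralities whose distributions were contrasted between networks
    cent <- data.frame(
      degree      = degree(g),
      betweenness = betweenness(g),
      closeness   = closeness(g),
      eigenvector = eigen_centrality(g)$vector
    )
    summary(cent)
    # A distribution contrast between two networks could then use, e.g.,
    # wilcox.test(cent_ecm$degree, cent_am$degree)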
Total N and NH4+ were found to be significantly related to both EcM and AM modules in two-tree-species mixtures. In multi-tree-species mixtures, AM modules were significantly related to NO3− and moisture in addition to pH, which was the only significant soil variable associated with EcM modules. Collectively, this indicates the differential roles of the different subcommunities of AM and EcM networks at different tree diversity levels.

Tree mycorrhizal type and tree diversity-level effects on the predicted functional potential of co-occurring bacterial and fungal communities. In total, 57 nutrient cycling-related EC numbers known to be part of the C, N, and P cycles were used to filter the PICRUSt2-predicted gene family content for both the bacterial and fungal data sets that were used to construct the co-occurrence networks (Table S3). We found a total of 64 ECs (43 for bacteria and 21 for fungi), and the functional abundance matrix contained 45 unique ECs comprising 11, 16, and 18 enzymes related to C, N, and P cycling, respectively (Table S4). Significant effects of the tree mycorrhizal type were observed on the functional diversity of the co-occurring microbial community in all nutrient cycling combinations, except for C, N, and CN. In contrast, the effects of tree diversity and of its interaction with mycorrhizal type were not significant in any of the nutrient cycling combinations (Table S5). Moreover, post hoc analysis revealed that a tree mycorrhizal type effect was present only in monospecific stands (except for C) and was absent in two-tree-species and multi-tree-species mixtures ( ). Permutational multivariate analysis of variance (PERMANOVA) of the effects of tree mycorrhizal type and tree diversity level on the microbial community genomic functional potential of the nutrient cycling combinations showed a strong effect of tree mycorrhizal type on all combinations of genomic functional compositions (R2 value range, 5.5 to 12.8%). In addition, significant interaction effects of tree mycorrhizal type and tree diversity were found for the CP and CNP combinations ( ). Furthermore, post hoc analysis of the whole community revealed that the tree mycorrhizal type effect was not significant in multi-tree-species mixtures (Table S6). Comparative analysis of the functional compositions of the whole community with those of the significantly soil-responsive modules showed similar results, except for the additional significance of the interaction terms for CN and NP (Table S7). Similarly, the tree mycorrhizal type effect was not significant in multi-tree-species mixtures (Table S8).

Pairwise comparison of functional abundances of EcM and AM TSP soil microbial subcommunities. Principal coordinate analysis (PCoA) ordination based on the relative functional abundances showed that the significant subcommunities of the EcM and AM TSP soil microbial networks became progressively less distant from one another from monospecific stands to two-tree-species and multi-tree-species mixtures ( ). In addition, envfit analysis (P < 0.01) indicated that the differentiation of these subcommunities might be driven by different sets of nutrient cycling enzymes across the tree diversity levels, predominantly by enzymes of the P cycle (Table S9). In monospecific stands, the significantly correlated enzymes were predominantly related to P (n = 11), followed by N (n = 9) cycling, while in two-tree-species mixtures, they were related to P (n = 16), followed by C (n = 9) cycling.
In contrast, in multi-tree-species mixtures, fewer enzymes were correlated with the differentiation of modules, and these were mainly related to the C (n = 6) and P (n = 6) cycles (Table S9). Furthermore, pairwise comparisons across the significant subcommunities of the EcM and AM TSP soil microbial networks revealed that 25 module pairs differed significantly in their genomic potential for nutrient cycling. In all nutrient cycling combinations except C and N, we found a higher number of significantly abundant AM modules across the tree diversity levels ( ). Interestingly, no significant differences were found in N-cycling potential in multi-tree-species mixtures. Furthermore, for C-related gene families, only EcM modules were significantly abundant in monospecific stands, while for the C and CN combinations in multi-tree-species mixtures, AM modules were significantly abundant ( ). In addition, pairwise comparisons of significant modules within tree mycorrhizal type (i.e., AM versus AM and EcM versus EcM modules) indicated that the proportion of significant differences was higher in AM subcommunities than in EcM subcommunities in all combinations except CNP (equal proportions) (see Fig. S1 in the supplemental material).

Differentially abundant taxa behind the observed functional abundance differences of EcM and AM TSP soil microbial subcommunities. We tested the differences in the relative functional abundances of taxa between each pair of significantly soil-responsive EcM and AM modules within each tree diversity level and found a total of 995 unique differentially abundant ASVs. All ASVs were then aggregated at the class level, and we identified the two most differentially abundant classes of both bacteria and fungi that strongly contributed to the functional abundances of the EcM and AM TSP soil microbial communities at each tree diversity level for all nutrient cycling combinations ( ). These contributions ranged from 48% to 62% of the relative functional abundances. In monospecific stands, Agaricomycetes and Sordariomycetes were the predominant fungi contributing to the functional abundances of all nutrient cycling combinations in EcM modules. In AM modules, Sordariomycetes were the top fungal contributors, followed by Leotiomycetes for all nutrient cycling combinations except P (4.3%) and NP (5.4%), for which Eurotiomycetes were the second most important. In the case of bacteria, Acidobacteria and Alphaproteobacteria were the predominant contributors in both EcM and AM modules, except to the C cycle, not only in monospecific stands but also in two- and multi-tree-species mixtures. Interestingly, Actinobacteria were the second most important contributor to the C cycle across the tree diversity levels, except in EcM modules of two-tree-species mixtures, where Verrucomicrobia (10.2%) took that place. In two-tree-species mixtures, Agaricomycetes were the predominant fungal contributor to all nutrient cycling combinations in EcM modules, followed by Leotiomycetes in the C, N, CN, and CNP combinations, Sordariomycetes in CP (3%) and NP (2.3%), and Eurotiomycetes (1.9%) in the P cycle, while in AM modules, Eurotiomycetes, followed by Leotiomycetes, were the major contributors to most nutrient cycling combinations, except for C (15.1%) and CN (12.7%), where Agaricomycetes were predominant.
In multi-tree-species mixtures, Eurotiomycetes followed by Sordariomycetes were the main fungal contributors to all nutrient cycling combinations in EcM modules. This was also the case for AM modules, except for the C and CN combinations, wherein Leotiomycetes and Agaricomycetes were the second major contributors, respectively. Across the tree diversity levels, in both EcM and AM modules, bacteria outweighed fungi as the major differentially abundant contributors to the P cycle. Furthermore, compared to EcM, a higher fungal contribution in AM modules was found in monospecific stands and two-tree-species mixtures ( ).
EcM and AM TSP soil microbial interkingdom networks and their subcommunities differ in their ecological properties. Network topological parameters provide key insights into the associations between taxa and the influence of some taxa on particular modules or on the whole community. In our study, the observed significant differences between the EcM and AM TSP soil microbial co-occurrence networks revealed differences in taxon assembly and organization in the respective communities. Similarly, in a recent greenhouse experiment, Yuan et al. ( ) reported significant differences in co-occurrence network topology between arbuscular mycorrhizal fungal (AMF)-bacterial networks and nonmycorrhizal fungal (comprising saprotrophs, pathogens, endophytes, and unclassified taxa)-bacterial networks. Relatively high values of degree centrality and betweenness centrality may indicate stronger relationships among taxa and a powerful influence of some taxa on bridging or communicating between different parts of the network, respectively ( ). Our results show that EcM TSP soil microbial networks had relatively higher betweenness centrality than AM networks, especially in monospecific stands and two-tree-species mixtures, suggesting that some key taxa might exert control over other members of the network. A relatively higher abundance of ectomycorrhizal fungi (EMF), which are known to regulate other microbes in the community ( , ), in EcM TSP soils might be a possible reason for the higher betweenness centrality. In contrast, the higher degree centrality in AM networks, especially in monospecific stands and two-tree-species mixtures, could be attributed to the relatively higher abundance of saprotrophs in AM TSP soils ( ). Microbes belonging to a subcommunity/module may share similar ecological processes, such as nutrient cycling functions, or be affected by the same environmental filtering processes ( , ). In our analysis, we identified such modules: for instance, in AM monospecific stands, all of the modules had significant relationships with P, which is consistent with the fact that AM trees acquire P through arbuscular mycorrhizal fungi (AMF) and that P is a limiting nutrient for soil microbes in subtropical systems with AM-dominated stands ( ). Interestingly, the modules (both EcM and AM) in two-tree-species mixtures were strongly related to N or its inorganic forms, NO3− and NH4+. It is well known that N is a vital limiting nutrient for both plants and microbes ( ) and that EcM and AM tree-dominated systems have contrasting N acquisition and allocation strategies, with organic N preferred in EcM systems and inorganic N in AM systems ( ). One possible reason for the observed association of modules with N or the inorganic N compounds in two-tree-species mixtures could be the coexistence of trees of different mycorrhizal types in a plot (i.e., AM tree species with EcM trees and vice versa). This proportional addition of tree individuals with contrasting N acquisition strategies in one plot may have triggered mechanisms that limit the preferred source of N for the associated soil microbial subcommunities.
In multi-tree-species mixtures, all EcM and AM modules were significantly associated with pH, which is known to affect both bacterial and fungal communities ( , ) and has a subtle relationship with soil nutrients. For example, low pH has been reported to impede N mineralization and nitrification ( , , ), while P availability has been suggested to be highest at near-neutral pH, i.e., pH 6.5 to 7 ( [but see reference ]). Consequently, the microbial subcommunities in multi-tree-species mixtures might have dynamic functional roles in nutrient cycling.

Functional potentials of EcM and AM TSP soil co-occurring bacterial and fungal communities were strongly impacted by tree mycorrhizal type. As expected, we found a significant tree mycorrhizal type effect on the functional compositions of the co-occurring microbial communities. Our results are in line with a study of boreal and temperate regional sites by Bahram et al. ( ), who reported significant differences in the composition of microbial functional genes between sites dominated by EcM and AM mycorrhizal type plants. Through their specific mycorrhizal partners, trees can select associated microbial communities with the required functional abilities ( ). For example, given their genomic potential to release oxidative and hydrolytic extracellular enzymes that directly break down soil organic matter ( , ), EMF have been reported to outcompete and limit saprotrophs in the microbial communities of EcM tree-dominated systems ( ). In contrast, AMF are known to have a very limited genomic repertoire for the enzymatic degradation of soil organic matter. In consequence, they rely upon, and enrich, saprotrophic fungi and bacteria in soils under AM trees ( , ). Furthermore, we found significant interactive effects of tree diversity and tree mycorrhizal type in some nutrient cycling combinations (CP and CNP for whole communities and CN, NP, CP, and CNP for significant modules), wherein multi-tree-species mixtures neutralized the tree mycorrhizal type effect on the functional compositions of soil microbial communities. The greater number of co-occurring tree species and the inclusion of trees of different mycorrhizal types in multi-tree-species mixtures could explain the observed absence of significant differences in the functional compositions of soil microbial communities ( ). Similar to the functional composition analysis, we found a significant tree mycorrhizal type effect on the functional diversity of soil microbial communities. Nonetheless, this effect was relatively weak and was found only in monospecific stands. These results are in line with the significant effect of tree mycorrhizal type on the functional gene ortholog (GO) richness of fungi and bacteria reported by Bahram et al. ( ). We did not detect any significant tree diversity effect on the functional diversity of the co-occurring soil microbial communities, which contrasts with previous findings of positive effects of plant diversity on microbial community functions and activities ( ). Although this effect was not significant, we observed a tendency toward increased microbial functional diversity under EcM trees in multi-tree-species mixtures. One might expect the positive effect of tree diversity on the functional diversity of microbial communities to become significant in the long term ( , ).
Moreover, our findings revealed that high tree diversity that includes both AM and EcM mycorrhizal type trees can harbor rich and converging functional genomic potential, which, in turn, can have a positive effect on the ecosystem studied. This conforms to previous findings from our study site of higher stand-level productivity in multi-tree-species mixtures than in monospecific stands ( ). Hence, our study warrants further research on the detailed mechanisms by which soil microbial communities contribute to increased aboveground productivity in more-species-rich stands.

Insights into the functional abundance differences of EcM and AM TSP soil co-occurring microbial subcommunities. Furthermore, we investigated how EcM and AM TSP soil microbial subcommunities at each tree diversity level differ in their genomic functional abundances. The ordination, coupled with the fitting of the significantly contributing enzymes, showed that, in monospecific stands, all of the C-cycling and most of the P-cycling enzymes diverged in opposite directions of the ordination. These C-cycling enzymes, along with amidase and chitinase (N-cycling enzymes), might have similar functional roles in the community, which in this case could be the decomposition of complex carbohydrates for microbial utilization ( ). In the other direction, the P-cycling enzymes were broadly involved in inorganic P solubilization and organic P mineralization, along with a set of N-cycling enzymes that take part in nitrification (e.g., hydroxylamine reductase) and nitrate reduction (e.g., ferredoxin-nitrite reductase). These findings indicate that these subcommunities might have major functional roles in producing plant- and microbe-available forms of N and P ( ). This view was corroborated by the response of these modules to the soil chemistry, as seen from the dbRDA analysis. In contrast, in two-tree-species mixtures, the larger number of nutrient cycling enzymes did not show any distinct pattern, which might indicate that module differentiation there was driven by multiple functional differences. In multi-tree-species mixtures, fewer correlated enzymes were found, which might reflect that module differentiation was driven by fewer functional differences. As expected, P-cycle enzymes were predominantly correlated with module differentiation at all tree diversity levels; together with their relationship to soil nutrients in monospecific stands, this suggests that the soil microbial subcommunities at our study site are shaped by P limitation, in line with previous reports ( , , ). Intriguingly, our subcommunity-level functional analysis pointed to the selection of microbes whose functional potential suits the habitat, at both the community and subcommunity levels. Furthermore, we found differences in the functional abundances of nutrient cycling combinations at the module level between the EcM and AM TSP soil microbial communities. Overall, AM networks had a higher number of significantly abundant modules, except for the C and N cycles. In particular, significantly abundant EcM modules for the C cycle were encountered more often in monospecific stands, while not a single significantly abundant EcM module was found in multi-tree-species mixtures.
The higher abundance of such modules in monospecific stands can be explained by the fact that ectomycorrhizal fungi can efficiently sequester carbon from plants ( , ), influence the recruitment of co-occurring microbes, including bacteria ( , ), and then allocate the C to them ( ). In support of this interpretation, we observed a greater contribution of bacteria than of fungi to the nutrient cycling potential in EcM modules in monospecific stands. In monospecific stands, for the N cycle, we found three significantly abundant EcM modules and one significantly abundant AM module. A recent soil metagenomics-based study from temperate forests ( ) reported a larger estimated amount of N-cycling genes in AM than in EcM tree-dominated soils. In our study, we focused on those subcommunities that fulfill specific functional roles, which would explain the aforementioned observation. Nevertheless, in concordance with that study, we found a relatively higher number of significantly abundant AM modules in two-tree-species mixtures. It is known that soils under AM trees have more open and faster nutrient cycling than EcM systems ( , ), facilitated by the specifically associated fast-cycling versus slow-cycling microbes ( ). In agreement with this assumption, we found an overall higher number of significantly abundant AM modules for the remaining nutrient cycling combinations (P, CN, CP, NP, and CNP). Moreover, the number of modules that differed between EcM and AM was lower in multi-tree-species stands than in monospecific stands and two-tree-species mixtures. Taken together, these findings suggest converging genomic functional potential of EcM and AM soil microbiota at the subcommunity level with increasing tree species richness. Additionally, pairwise module analysis within tree mycorrhizal type revealed a higher proportion of significant differences within AM subcommunities than within EcM subcommunities in all nutrient cycling combinations, except for CNP, where equal proportions were observed. This might point to a higher functional equivalence in EcM subcommunities, probably facilitated by slow-cycling members such as ectomycorrhizal fungi, as reflected by members of the Agaricomycetes, which were the predominant differentially abundant fungal contributors to nutrient cycling in monospecific stands and two-tree-species mixtures. In contrast, a higher number of specialized functional units in the AM subcommunities might be promoted by fast-cycling microbes, such as saprotrophs, as reflected in their higher functional abundances in most of the nutrient cycling combinations and in their differentially abundant taxa. Higher functional abundance in their subcommunities might confer resilience to the AM TSP soil microbial communities. This expected functional resilience in AM and functional equivalence in EcM TSP soil microbial communities can foster soil microbiome stability, which would be most pronounced in multi-tree-species mixtures ( ).

Differentially abundant taxa and the top contributors to the functional abundance and nutrient cycling combinations. Finally, differential abundance analysis revealed the taxa behind the differences between each pair of significantly soil-responsive EcM and AM modules within each tree diversity level.
Agaricomycetes are a phylogenetically diverse group of fungi containing both biotrophs, such as ectomycorrhizal fungi, and saprotrophs ( , ), which explains their predominant contributions to the nutrient cycling combinations. Sordariomycetes were one of the major contributors to the nutrient cycling combinations in AM monospecific stands and also for both EcM and AM in multi-tree-species mixtures. Sordariomycetes are known to contain decomposers of wood and leaf litter ( , ). A recent study identified some Sordariomycetes taxa that function as connector hubs in soil microbial networks and that were positively correlated with the abundance of functional genes involved in C, N, and P cycling ( ). Eurotiomycetes and Leotiomycetes, which contributed to various nutrient cycling combinations in our study, have also been shown to have a significant link to the production of C-cycling enzymes ( ). In addition, Eurotiomycetes have been found to be involved in denitrification ( ). Acidobacteria and Alphaproteobacteria were the predominant contributors in all nutrient cycling combinations. Together with the Actinobacteria, which showed the second highest association with C in our study, all of these groups are known from the literature to be involved in the C cycle ( , ), N cycle ( ), and P cycle ( , ). We have also shown the functional potential of these groups for other nutrient combinations, including CN, CP, NP, and CNP. This information can be helpful in future studies on the relationship between microbial taxa and nutrient cycling. Although these top differentially abundant classes were common to both EcM and AM modules, it is worth noting that they differ in their roles at lower taxonomic levels, such as ASVs. Moreover, the top two contributing fungal and bacterial classes differed between EcM and AM modules at the different tree diversity levels, especially in two-tree-species mixtures. This indicates that the subcommunities recruit groups of different taxa depending on their functional roles and niche requirements.

Conclusions. Taken together, our study highlights the importance of interkingdom soil microbial co-occurrence networks and their subcommunities for understanding the factors that shape community composition and functional roles. We comprehensively characterized the predicted genomic functional potential of co-occurring EcM and AM TSP soil microbial subcommunities. Our analysis indicated that the nutrient cycling potential of the soil microbiota at the community level is a cumulative effect of its subcommunities. More importantly, functional potential differences, driven by differentially enriched taxa, were revealed among subcommunities that were not obvious at the community level. Our results highlight the key role of the tree mycorrhizal type in the recruitment and organization of these networks. Furthermore, higher tree diversity levels of coexisting AM and EcM mycorrhizal trees were found to foster microbial communities with rich and converging functional genomic potential, thereby promoting stable and better functioning of the forest soil ecosystem. These findings underline the versatility and significance of microbial subcommunities in different soil nutrient cycling processes, which contribute to maintaining multifunctionality and modulating tree-tree interactions in diverse forest ecosystems.
For detailed descriptions of the study site and design, sampling procedures, laboratory analyses, and data generation, please refer to the 2021 study by Singavarapu et al. ( ).

Study site, experimental design, and sampling. The BEF-China tree diversity experimental study site (site A) contains native subtropical tree species with a diversity gradient ranging from monospecific stands to 24-species mixtures ( ). The experimental site was planted in 2009 in the Chinese subtropics (Xingangshan, Jiangxi Province, Southeast China [29.08 to 29.11°N, 117.90 to 117.93°E]) on a total area of 18.4 ha. The plots have a size of 25.8 m by 25.8 m, with 400 trees each, spaced on a regular grid at 1.29 m. In our study design, two adjacent target trees were considered a tree species pair (TSP) ( ), and we focused on conspecific TSPs, including six EcM and six AM type TSPs, for this study. TSPs were randomly selected across 55 plots, with three replicates in each of the monospecific stands (denoted "Mono"), two-tree-species mixtures (denoted "Two"), and multi-tree-species mixtures (denoted "Multi"), which comprised plots with a tree species richness of ≥4. This resulted in a total of 108 TSPs with the following six combinations: EcM|Mono (n = 18), EcM|Two (n = 18), EcM|Multi (n = 18), AM|Mono (n = 18), AM|Two (n = 18), and AM|Multi (n = 18). For more details on the study site, design, and sampling, please refer to the 2021 study by Singavarapu et al. ( ) (see Table S1 and Fig. S1 therein).
Four soil cores (diameter, 5 cm; depth, 10 cm) were collected from the tree-tree interaction zone (i.e., the horizontal axis between the two partner trees of a TSP) at distances of 5 cm from the center of the TSP (first two cores) and a further 20 cm away (other two cores). A composite soil sample was made from the four soil cores after pooling, mixing, and removal of root fragments by sieving the mixed soil through a 2-mm-mesh sieve. Soil samples for microbiota analyses (30 g) were freeze-dried ( ) and stored at −80°C until further analysis.

Soil characteristics. Soil samples were divided into two parts for the measurement of soil moisture and of the other soil variables. Soil moisture was measured by drying the soil at 105°C for 24 h. Soil pH was measured in a 1:2.5 soil-water suspension with a Thermo Scientific Orion Star A221 pH meter after air drying of the soil at 40°C for 2 days. Soil total organic carbon (TOC) was measured using a TOC analyzer (Liqui TOC II; Elementar Analysensysteme GmbH, Hanau, Germany). Soil total nitrogen (TN) was measured using an autoanalyzer (SEAL Analytical GmbH, Norderstedt, Germany) by the Kjeldahl method ( ). Soil total phosphorus (TP) was measured following wet digestion with H2SO4 and HClO4 using a UV-visible (UV-Vis) spectrophotometer (UV2700; Shimadzu, Japan). NH4+ and NO3− were measured using the colorimetric method with a Smart Chem 200 discrete autoanalyzer (AMS, Italy) after extraction with 2 M KCl ( ).

Sequencing of microbial communities. Briefly, soil microbial genomic DNA was extracted using the PowerSoil DNA isolation kit (Mo Bio Laboratories, Inc., Carlsbad, CA, USA) and quantified using a NanoDrop spectrophotometer (Thermo Fisher Scientific, Dreieich, Germany). Bacterial amplicon libraries were prepared by amplification of the V4 region of the bacterial 16S rRNA gene using the universal primer pair 515f and 806r ( ) with Illumina adapter sequence overhangs. Fungal amplicon libraries were prepared by seminested PCR, first amplifying the internal transcribed spacer 2 (ITS2) ribosomal DNA (rDNA) region using the ITS1F ( ) and ITS4 ( ) primers, followed by a second amplification round with the primer pair fITS7 ( ) and ITS4 containing the Illumina adapter sequences. Both amplicon libraries were purified with AMPure XP beads (Beckman Coulter, Krefeld, Germany); Illumina Nextera XT indices were then added to the libraries by indexing PCR, followed by another round of purification with AMPure XP beads. The indexed amplicon libraries were quantified by PicoGreen assay and pooled equimolarly to a final concentration of 4 nM each for fungi and bacteria. The final pooled library of fungal and bacterial amplicons was sequenced (paired-end sequencing of 2 × 300 bp with the MiSeq reagent kit v.3) on an Illumina MiSeq platform (Illumina, Inc., San Diego, CA, USA) at the Department of Environmental Microbiology, UFZ, Leipzig, Germany.

Bioinformatics analysis. Bioinformatics analysis was performed using the Quantitative Insights into Microbial Ecology (QIIME 2 2020.2) software ( ). Raw reads were demultiplexed, and primer sequences were trimmed, followed by sequence denoising and grouping into amplicon sequence variants (ASVs) using cutadapt ( ) (q2-cutadapt) and DADA2 ( ) (q2-dada2), respectively.
Taxonomy assignment was made using the q2-feature-classifier ( ) with a classify-sklearn naive Bayes taxonomy classifier against the silva-132-99-515-806-nb-classifier and the unite-ver8-99-classifier-04.02.2020 for bacteria and fungi, respectively. The resulting fungal and bacterial ASV matrices, taxonomic tables, and representative sequences were transferred to R software (v.4.0.2) using the phyloseq package ( ). The ASV matrices were rarefied to 16,542 and 28,897 reads per sample for fungi and bacteria, respectively, to control for differential sequencing depths. To identify the microbial taxa that are faithfully represented in each of the tree mycorrhizal type and tree diversity combinations (viz., EcM|Mono, EcM|Two, EcM|Multi, AM|Mono, AM|Two, and AM|Multi), stringent filtering steps were applied to the fungal and bacterial data sets prior to further analyses. First, all taxa with an abundance of >3% of mean total sequencing reads were filtered, resulting in 798 bacterial and 728 fungal taxa. Next, in each of the tree mycorrhizal type and tree diversity combinations, the taxa were further filtered for a frequency of presence in at least 1/3 of the samples (≥33%) in their respective data sets. These filtered data sets from each combination were merged into one bacterial and one fungal data set each and were used as input to the PICRUSt2 (Phylogenetic Investigation of Communities by Reconstruction of Unobserved States) software for the prediction of metagenome functional abundances ( ). Briefly, in PICRUSt2, the ASV representative sequences of bacteria and fungi were first aligned with the 16S and ITS reference genome database files using hidden Markov models (HMMER tool). For bacteria, we used default settings, and for fungi, we used a minimum-alignment option of 0.5 (default, 0.8) to include in the output all of the taxa that were classified to the genus level. These aligned sequences were then placed into the reference phylogenetic tree by the maximum likelihood phylogenetic placement method using the EPA-ng ( ) and Gappa ( ) tools. Next, gene family content was predicted for both bacterial and fungal ASVs based on EC (Enzyme Commission/Classification) numbers ( ) using the castor package ( ). We then filtered the predicted EC content tables of bacteria and fungi for the carbon, nitrogen, and phosphorus nutrient cycling-related EC numbers (enzymes) based on previously available literature (Table S3). Finally, these filtered EC content tables were used to determine the gene family abundances per sample with respect to nutrient cycling for both the bacterial and fungal data sets. One ASV in each of the bacterial and fungal data sets was removed because it exceeded the default NSTI (nearest-sequenced-taxon index) value, the metric that identifies ASVs that are far from all reference sequences, thus allowing us to exclude less reliable predictions.
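To make these filtering steps concrete, the following is a minimal R sketch using phyloseq. The object and file names, the metadata column "combination", and the exact reading of the 3% abundance threshold are illustrative assumptions, not the code used in this study.

```r
library(phyloseq)

# 'ps' is assumed to be a rarefied phyloseq object; the file name is a placeholder.
ps <- readRDS("ps_bacteria_rarefied.rds")

# Step 1: abundance filter across the whole data set; the threshold is
# interpreted here relative to the mean total reads per sample (an assumption).
abund_cutoff <- 0.03 * mean(sample_sums(ps))
ps <- prune_taxa(taxa_sums(ps) / nsamples(ps) > abund_cutoff, ps)

# Step 2: prevalence filter within one mycorrhizal type x diversity combination;
# 'combination' is a hypothetical sample metadata column.
ps_sub <- subset_samples(ps, combination == "EcM|Mono")
counts <- as(otu_table(ps_sub), "matrix")
if (!taxa_are_rows(ps_sub)) counts <- t(counts)
prevalence <- rowMeans(counts > 0)          # fraction of samples containing each taxon
ps_sub <- prune_taxa(prevalence >= 1/3, ps_sub)

# Restricting the PICRUSt2 EC predictions to nutrient cycling enzymes; the EC
# list would come from Table S3 (both file names again hypothetical).
ec_table  <- read.delim("pred_metagenome_unstrat.tsv.gz", row.names = 1)
cycle_ecs <- readLines("nutrient_cycling_EC_numbers.txt")
ec_cycle  <- ec_table[rownames(ec_table) %in% cycle_ecs, ]
```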
SpiecEasi limits spurious co-occurrences by accounting for the lack of independence (compositionality) in normalized count data, which otherwise inflates the number of edges in network-based analyses of amplicon data sets. Networks were estimated with the Meinshausen and Bühlmann graph inference method; the minimum λ ratio was 10−3, and network assessment was done over 100 values of λ, each with 50 cross-validation replicates. Network structural and topological properties, including edge counts, centrality indices, and modularity, were calculated using the igraph package ( ). Modules, which are considered subcommunities within each network, were determined with a hierarchical agglomeration algorithm with modularity optimization using the "cluster_fast_greedy" function. Differences in the distribution of four network centrality measures (degree, betweenness, closeness, and eigenvector centralities) between EcM and AM TSPs' soil microbial networks were tested by bootstrapping with 10,000 iterations, followed by a two-sample Kolmogorov-Smirnov test using the "ks.test" function in R. These distributions were visualized with sinaplots using the ggforce and ggplot2 packages. Network modules significantly associated with soil chemical properties were determined using dbRDA (distance-based redundancy analysis) models based on the Bray-Curtis distance with the "capscale" function in the vegan package ( ); for this, modules with a size of ≥40 were considered. Soil variables (C, N, P, C/N, C/P, N/P, TOC, SOM, NH4+, NO3−, pH, and moisture) were standardized to a mean of zero and a standard deviation of 1 (the "decostand" function in vegan). Multicollinearity was checked using the "vifstep" function in the usdm package ( ). Important soil variables were then selected using stepwise model selection (the "ordistep" function in vegan), and the selected variables were included in the final model for each subcommunity. Variables that were significant in the final model were considered the significant soil characteristics, and the subcommunities associated with at least one of these significant soil variables were treated as soil-responsive subcommunities, hereafter called "significant" modules. The predicted gene family abundance matrices from the PICRUSt2 output were merged per EC number to yield the co-occurring community enzyme/gene family abundance (functional abundance) matrices. These functional compositions were categorized into nutrient cycling combinations (C, N, P, CN, CP, NP, and CNP) based on their constituent EC numbers. Shannon diversity of these functional abundance matrices was calculated as a measure of functional diversity and tested for the effects of tree diversity and tree mycorrhizal type using two-way analysis of variance (ANOVA) with the "aov" function in R. Within each tree diversity level, pairwise comparison of tree mycorrhizal types was done with t tests followed by Benjamini-Hochberg (BH) multiple-testing correction. The effects of tree diversity and tree mycorrhizal type on the functional compositions were tested with Bray-Curtis distance-based permutational multivariate analysis of variance (PERMANOVA) using the vegan package. Moreover, the functional composition of the whole community was compared with those of the soil-responsive modules, and consequently, all subcommunity-based analyses were rerun using only the soil-responsive subcommunities.
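A condensed R sketch of the filtering, network construction, and module analysis described above is given below. The object names (ps_bac, ps_fun, comm_mod, soil) are placeholders, and while the SpiecEasi settings mirror the reported ones (MB method, minimum λ ratio of 10−3, 100 λ values, 50 replicates), this is a schematic outline rather than the authors' exact script.

library(phyloseq)
library(SpiecEasi)
library(igraph)
library(vegan)

# Prevalence filter: keep taxa present in at least 2/3 of the samples of one
# mycorrhizal type x tree diversity combination (the abundance filter is analogous)
prevalence <- function(ps) {
  m <- as(otu_table(ps), "matrix")
  if (!taxa_are_rows(ps)) m <- t(m)
  rowMeans(m > 0)
}
ps_bac_f <- prune_taxa(prevalence(ps_bac) >= 2/3, ps_bac)
ps_fun_f <- prune_taxa(prevalence(ps_fun) >= 2/3, ps_fun)

# Interkingdom (bacteria + fungi) network via Meinshausen-Buhlmann neighborhood selection
se <- spiec.easi(list(ps_bac_f, ps_fun_f), method = "mb",
                 lambda.min.ratio = 1e-3, nlambda = 100,
                 pulsar.params = list(rep.num = 50))
g <- adj2igraph(getRefit(se))  # sparse adjacency matrix -> igraph graph

# Modules (subcommunities) by greedy modularity optimization, plus node centralities
mods <- cluster_fast_greedy(g)
cent <- data.frame(degree = degree(g), betweenness = betweenness(g),
                   closeness = closeness(g), eigen = eigen_centrality(g)$vector)
# e.g., ks.test(cent_ecm$degree, cent_am$degree) after bootstrapping each distribution

# dbRDA linking one module (>= 40 taxa) to standardized soil variables
soil_std <- as.data.frame(decostand(soil, "standardize"))
db_null <- capscale(comm_mod ~ 1, data = soil_std, distance = "bray")
db_full <- capscale(comm_mod ~ ., data = soil_std, distance = "bray")
db_sel <- ordistep(db_null, scope = formula(db_full))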
To derive subcommunity relative functional abundances, mean taxon relative abundances of the subcommunities in each network were first calculated using the normalized bacterial and fungal ASV abundances from the PICRUSt2 output. Next, matrix multiplication was applied to the mean taxon relative abundances of the subcommunities and the predicted EC content (gene family numbers) matrix of the taxa, as shown in the exemplary equation (1). In equation (1), the left-hand matrix is a module (mod1, mod2) by taxon (t1, t2, t3) matrix holding each taxon's mean relative abundance in the modules, and the right-hand matrix is a taxon (t1, t2, t3) by enzyme (e1, e2) matrix holding the number of enzyme gene families per taxon. The result is a matrix of gene family abundances of enzymes (i.e., functional abundances) in each module (mod1, mod2):

$$\underset{\text{modules}\times\text{taxa}}{\begin{bmatrix} 0.10 & 0.19 & 0.07 \\ 0.02 & 0.03 & 0.06 \end{bmatrix}} \times \underset{\text{taxa}\times\text{enzymes}}{\begin{bmatrix} 1 & 7 \\ 2 & 1 \\ 4 & 3 \end{bmatrix}} = \underset{\text{modules}\times\text{enzymes}}{\begin{bmatrix} 0.76 & 1.10 \\ 0.32 & 0.35 \end{bmatrix}} \tag{1}$$

The obtained subcommunity functional abundances across tree diversity levels were visualized by ordination with PCoA using the ape package ( ). Enzymes related to C, N, and P cycling were fitted to the ordination using the "envfit" function in vegan; enzymes with a P value of <0.01 were considered significantly associated with the differentiation of modules. Furthermore, pairwise comparisons of subcommunity functional abundances at each tree diversity level were done with Wilcoxon signed-rank tests followed by BH multiple-testing correction (significance threshold of P < 0.01) using the rstatix package, and the results are presented as a heat map generated with the ComplexHeatmap package ( ). In addition, taxon differential abundance tests were performed for all pairs of EcM and AM modules that differed significantly, based on the overall CNP relative functional abundance of each ASV per subcommunity; the latter was obtained by multiplying the relative abundance of each ASV by its predicted EC content. Pairwise Wilcoxon rank sum tests (BH multiple-testing correction, significance threshold of P < 0.01) were used to determine the differentially abundant ASVs between subcommunity pairs, and these significant ASVs were aggregated at the class taxonomic level. The relative functional abundance proportions of the top two fungal and the top two bacterial classes per tree diversity level in the subcommunities of each of the EcM and AM TSPs' soil microbial networks were visualized as Sankey diagrams using the networkD3 package ( ).

Data availability. The data sets generated for this study can be found in the National Center for Biotechnology Information (NCBI) Sequence Read Archive (SRA) under BioProject no. PRJNA702024 .
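As a closing numerical check of equation (1) above, the module-level functional abundances are simply a matrix product; a short R sketch (with the subsequent ordination step appended, using hypothetical object names) is:

# Left matrix of eq. (1): mean relative abundances of taxa t1..t3 in modules mod1, mod2
M <- matrix(c(0.10, 0.19, 0.07,
              0.02, 0.03, 0.06), nrow = 2, byrow = TRUE,
            dimnames = list(c("mod1", "mod2"), c("t1", "t2", "t3")))

# Right matrix of eq. (1): predicted gene family counts per taxon for enzymes e1, e2
E <- matrix(c(1, 7,
              2, 1,
              4, 3), nrow = 3, byrow = TRUE,
            dimnames = list(c("t1", "t2", "t3"), c("e1", "e2")))

M %*% E
#        e1   e2
# mod1 0.76 1.10
# mod2 0.32 0.35

# PCoA of a module x enzyme functional abundance matrix (F_mod, hypothetical),
# with enzymes fitted post hoc as described in the text
library(ape); library(vegan)
pc <- pcoa(vegdist(F_mod, method = "bray"))
fit <- envfit(pc$vectors[, 1:2], as.data.frame(F_mod), permutations = 999)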
Trainees’ perceptions of course quality in postgraduate General Practice training in Ireland
589bdd81-23ce-47a0-ba91-783f95790bdb
10113123
Family Medicine[mh]
GP training in Ireland is provided through fourteen training schemes under the auspices of the Irish College of General Practitioners (ICGP), the national GP training body. Trainees spend four years on their training programme: the first 2 years are spent in hospital-based rotations working in NCHD (non-consultant hospital doctor) roles, and the final 2 years are spent in year-long training attachments with an individual GP trainer in a dedicated training practice. Owing to the heterogeneous nature of medical practice and the varied working and business arrangements in each GP surgery, there is considerable variation in the approaches taken to training. As well as the formal curriculum and core competencies expected of trainees, which are set out by WONCA (World Organization of Family Doctors) and the ICGP, a significant part of GP education is achieved through experiential workplace learning within the master-apprentice relationship. This is often described as the "hidden curriculum", and it covers a range of implicit lessons learnt through practice and interpersonal interactions in areas such as cultural competence, achieving medical professionalism, and dealing with uncertainty . There is no formal national annual trainee survey dedicated to GP trainees and their views. In Ireland, the acceptability of postgraduate education to each GP trainee, and their perceptions of training, are assessed for the most part on an individualised basis at local GP training scheme level, through one-to-one meetings between each trainee and their trainer, where they receive feedback and discuss their progress every 6 months. At these meetings the trainee gives an assessment of the trainer's performance while reciprocally receiving feedback on their own progress in training. Trainees' reported feedback and perceptions are also the principal means of assessing both the quality of training delivered and trainer competency in the programme . In effect, there is a link between trainees giving their perceptions of training and the receipt of competency accreditation. It is the authors' concern that this may create a bias towards positive trainee feedback on the quality of the learning environment and militate against expressing negative feedback, as trainees may believe that providing negative commentary could preclude them from progressing on their training scheme . This research study examines the experiences of trainees in the third and fourth years of postgraduate training, as these years are specific to GP practice-based training. It is hoped that this survey will shed light on how GP trainers interact with their trainees by allowing trainees an anonymised, protected forum to express their opinions and, in so doing, reveal how GP training is conducted in Ireland at present.

Study aim and design
The aim of this study is to assess what the trainee population think of their training environment and to analyse the factors which help to create that environment. This was carried out using a mixed methods approach deployed through a cross-sectional research survey.

Study population and sampling
A questionnaire was distributed to all third- and fourth-year GP trainees (N = 404). Prior approval for the study was requested from and granted by the ICGP. All trainees were included; no exclusion criteria were applied. Participants were incentivised to take part by entry into a raffle for a voucher. The typical expected response rate for this type of research, notwithstanding incentives, is 24% .
Study instrument
The Manchester Clinical Placement Index (MCPI) was chosen as the instrument for the current study . The tool required adaptation, as the original MCPI was validated in an undergraduate setting; this was carried out by expert panel review. The tool is henceforth termed the MCPI adapted (see Table ).

Ethical considerations and data protection
Ethical approval was sought from and granted by the ICGP Clinical Ethics Committee and the Dundee University Ethics Committee prior to commencement.

Data analysis methods
Qualitative data was coded using Microsoft Excel Version 16.43 (Microsoft Inc, USA), and thematic analysis was done in accordance with published guidelines using NVivo release 1.0 (QSR International, Australia) . The descriptive statistics and statistical analysis were performed with SAS 9.4 (SAS Software, USA). For each multiple-choice question, responses were summarized as frequencies and proportions. Summary statistics were calculated for the total dataset and across the following strata: year of training, practice type, and practice location. The final 10 questions in the questionnaire (Supervision, Reception and Induction, People, Entrustment, Monitoring, Modelling, Dialogue, Feedback, Facilities, and Structure of the post) formed a 10-item measurement index. For the original questionnaire, the authors subdivided items into "training" and "learning environment", estimated according to the calculations:

$$\text{Learning environment} = (\text{Leadership} + \text{Reception} + \text{People} + \text{Facilities} + \text{Organization}) \times 100/30\%$$

$$\text{Training} = (\text{Instruction} + \text{Observation} + \text{Feedback}) \times 100/18\%$$

This subdivision is less applicable to postgraduate training, and no such subdivision was suggested in the adaptations made by the expert panel. The scoring for the current study was subsequently clarified through communication with the original study author, which supported this adapted calculation . The development of a GP trainee is the product of both the training they receive and the environment within which they receive it; hence the index was calculated as a score across all answers from the current 10-item questionnaire . The MCPI adapted percentage score for each respondent was therefore calculated as follows:

$$\text{MCPI}_{\text{adapted}} = \sum_{i=1}^{10} \text{Question}_i \times 100/60\%$$

The MCPI adapted is an overall score for each respondent and represents their satisfaction with the training environment. It is expressed as a percentage to aid direct side-by-side comparison of each answer and, in future, to provide a means of trending overall annual scores.
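As a worked example of this scoring (illustrative numbers only, not drawn from the survey data), a respondent rating all ten items at 5 ("agree") on the 0–6 scale would score

$$\text{MCPI}_{\text{adapted}} = (10 \times 5) \times 100/60 \approx 83.3\%,$$

while a respondent answering 6 ("strongly agree") throughout would reach the maximum of 100%. The descriptive statistics were computed in SAS 9.4; purely for illustration, an equivalent sketch in R is shown below, where the column names q1–q10, year, practice_type, and location are hypothetical.

# df: one row per respondent; q1..q10 hold the ten index items scored 0-6
df$mcpi_adapted <- rowSums(df[, paste0("q", 1:10)]) * 100 / 60

# Frequencies and proportions for one multiple-choice item
prop.table(table(df$q1))

# Index summarized across the reported strata
aggregate(mcpi_adapted ~ year + practice_type + location, data = df,
          FUN = function(x) c(mean = mean(x), sd = sd(x)))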
The current study's results are presented in four sections. First, an overview of the descriptive quantitative results is presented. Second, the quantitative analysis is subdivided into figures by year of training, practice type, and practice location. Third, the MCPI adapted index score distributions are presented. Finally, the qualitative figures and comments are shown; these are categorised by respondent attributes, and overarching themes noted across the responses are described.

Quantitative findings
There was a response rate of 30.94% (N = 125). Questions 1 to 7 outlined demographic data and provided a description of the characteristics of the study population. 35.2% of the population were male and 64.8% were female.
There was a broad range in the age of respondents:
15.2% aged 25–29 years
46.4% aged 30–34 years
32% aged 35–39 years
4% aged 40–44 years
2.4% aged 44 years and over

43.2% were from year 3 and 56.8% were from year 4. 74.4% were working in a group practice and 25.6% were based in single-handed practices. 34.3% worked in a practice in a rural setting; 41.6% worked in an urban setting; 24% worked in a geographical location where the patient population was spread across both setting types. There was a broad time gap between graduation from medical school and enrolment in GP training, with years of graduation ranging from 2003 to 2018. 66.4% were not enrolled on a previous medical training scheme. Quantitative data for each individual question was assessed on a 7-point Likert scale. For questions 8 to 26, all responses were weighted towards positive responses (strongly agree, agree, or somewhat agree). The full data summary table is available in the appendices. An example of the data is presented for Question 8 here (see Table ).

Subdivided quantitative findings
The quantitative findings given in the broad overview above were further analysed by subdividing them by year of training, practice location, and practice type.

Responses divided by year of training
The quantitative findings for year of training (Q3) were subdivided by each question and plotted on bar charts. The figures are presented as percentages of the row total and are available in the appendix. The figures were closely matched for each year. An example of the responses is outlined for the area of Supervision below (see Figs. ).

Responses by practice type
Similarly, the quantitative findings for practice type were subdivided by each question. Again, these were closely matched for each type of practice. An example is depicted below for Supervision where, owing to the higher number of group practices surveyed, the percentages appear higher for group practices but remain proportional overall across both categories (see Fig. ).

Responses divided by practice location
The quantitative findings for practice location (Q5) were also subdivided according to each question. The figures, which are available in the appendix, are presented as percentages of the row total. An example for Supervision is shown below; again, the results are closely matched (Fig. ).

Distribution of MCPI adapted scores
The MCPI adapted can be scored based on the response format, which was a 7-point Likert scale (disagree-agree scores from 0–6); higher scores therefore represent more positive responses. These results were again subdivided by year of training, practice type, and practice location. When depicted graphically, the results reflect the distribution of responses mentioned above and are closely matched, as shown below. The absolute figures are described in the appendix (see Figs. , , , ).

Qualitative findings
Individual qualitative answers were coded to give an overview of the nature of the responses. Broadly, the codes depicted positive and negative responses as outlined in the study instrument. These are displayed as bar charts depicting the number of positive and negative codes for each question. Selected examples of quotes which best represented the broader set of responses are displayed to give a greater sense of the nature of the qualitative data. The data was coded and further analysed for common themes where these appear, and this is presented below.
In cases where codes overlapped between positive/negative answers and an overarching theme, they were coded twice so as not to detract from either count. Cases in each category (year, practice location) are expressed in parentheses.

Responses to individual questions
Respondents were asked their opinions on a variety of areas of training. They were given the prompts "Strengths of ____ were … Weaknesses or ways ____ could be improved …". For reference, the supplied definition of the term describing each area is included below. A summary bar chart for each question outlines how the distribution of responses was weighted. Selected examples of quotes from the population are also presented for each question.

Question 9. Supervision
There is supervision if one or more senior doctors take responsibility for your education and training (see Fig. and Table ).

Question 11. Reception and induction
An appropriate reception and induction is a welcome that includes an explanation of how the post can contribute to your learning (see Fig. and Table ).

Question 13. People
The support to your learning from people (like doctors, secretaries, receptionists, nurses, and others) you worked with in this post (see Fig. and Table ).

Question 15. Entrustment
Appropriate entrustment is being allowed to undertake clinical activities from which you can learn (activities at your level of competence, or slightly beyond it) (see Fig. and Table ).

Question 17. Monitoring
Monitoring is when your work is observed directly or indirectly in order to provide you with feedback and to ensure patient safety (through discussion of cases you've seen, checking notes you've written or clinical decisions you've made, etc.) (see Fig. and Table ).

Question 19. Modelling
Modelling requires having the opportunity to observe senior doctors and other members of the healthcare team with patients (see Fig. and Table ).

Question 21. Dialogue
Dialogue is discussing patient care and other aspects of practice with senior doctors and the healthcare team (see Fig. and Table ).

Question 23. Feedback
Receiving feedback on how you performed clinical tasks (see Fig. and Table ).

Question 25. Facilities
Your learning environment may include such things as space for students (to write notes, read, and be taught) and resources (books, computers or other materials) that support your learning (see Fig. and Table ).

Question 27. Structure of the post
An appropriately structured post is one whose activities are organized in a way that supports your learning (see Fig. and Table ).

Overarching themes throughout the data

COVID-19 effects
The impact of COVID-19 was cited as an inhibitory factor on aspects of training throughout the survey, at low levels. Unsurprisingly, its biggest influence was interference with modelling, as the opportunity to share experiences was limited by decreased patient contact and decreased contact time between trainer and trainee during the pandemic. The code counts for each question are shown in the appendix and selected quotations are shown below (see Table ).

Workload effects
The effects of having an onerous workload were mentioned by respondents in a number of areas, namely supervision, support, entrustment, modelling, and, most notably, post structure. It is apparent that in instances where the training was deemed poorly structured, the workload burden was referenced by trainees most often. The code counts for each question are in the appendix and selected quotations are shown below (see Table ).
Time constraints
The effects of being constrained in terms of time were noted as a factor in training in many areas, but were most marked in supervision, dialogue, and post structure. Time management is a key skill in all areas of GP, but the management and utilisation of time in an efficient manner can have a bearing on training, as is noted by trainees in their commentary below. The code counts for each question are shown in the appendix with sample quotes below (see Table ).
In this study, it is shown that poor training is unlikely to be a factor in GP emigration. Overall MCPI adapted scores of 83% with an encouraging distribution towards positive responses compares favorably with international scores on training from the UK, where overall National Trainee Survey respondents had an 83% rate of satisfaction as either good or excellent . The current GP trainer-trainee feedback mechanisms are adequate In terms of overall feedback 81.6% of respondents had a positive response, whereas only 15.2% recorded negative responses. However, the qualitative results diverge somewhat from the quantitative findings in this area. In particular, negative views on feedback outnumbered the codes for positive views for fourth-year single-handed practice-based trainees. It was also stated in the comments that in such situations it might be difficult to be negative in a one-to-one relationship and that some trainers were difficult to approach. Solutions to this issue may be to introduce more widespread use of anonymised instruments but also through the introduction of training for all parties on the use of feedback as a structured standard training tool . Trainees’ suggestions to address areas that are deficient The questionnaire did not mandate the trainees to make suggestions to address deficiencies in training. However, aspects requiring improvement were either implied or, in some cases, specifically suggested by trainees in their responses. These were most notable in four specific areas of the questionnaire. First, in supervision some trainees noted a lack of oversight and support in a few aspects including while working in out of hours settings; also, some annual leave was taken by the trainer at inappropriate times, such as when the trainee was starting the post. Secondly, in induction trainees noted a lack of a formal induction where in some cases there was no introduction to other staff members, no IT set-up or passwords ready to access the patient records system, and a lack of explanation or discussion around local care pathways or practice policies. Thirdly, some trainees reported that some resources were found wanting and basic infrastructure which is mandated by the ICGP to be available for the trainee was not in place. The final and most notable area was in the definition of roles and responsibilities. Some trainees noted that clear goal setting was not addressed and in one case the trainee was unsure even what the standard for passing was. Are there differences in training standards based on location or practice type? Any differences noted in location or practice type were marginal and in effect would not amount to a significant practical difference in training standards. The lack of a significant difference in scores represents the excellent work being done in all locations and in all practice types in this regard. Qualitative themes Outside of the broadly strong praise for the excellent efforts being made by trainers, three common themes were developed throughout the study within the coded qualitative responses: the impact of COVID19, time constraints, and excessive workload. These were unsurprising as they are challenges which were encountered by all healthcare professionals. As far back as 2007 stress management, team working, and workload were all factors which were identified as a challenge to GP training and that remains the same today in the midst of a pandemic . 
Most of the codes in relation to COVID19 are noted under the modelling section as the usual side by side working arrangements which allow effective monitoring were disrupted. Similarly, excessive workload comments centred on poorly structured posts and trainees broadly commented that at times they felt they were overloaded. These effects were also mentioned by respondents in relation to supervision, support, entrustment, and modelling. Time constraints were most notable in the areas of supervision, dialogue, and post structure with many stating that it was a ‘busy practice’ which is reflective of real-world general practice today. Study design and methodology Study strengths The study structure and methodology were appropriate. The structure addressed the research questions stated and the methodology for this was appropriate. This is the first study of its kind to employ a mixed methodology in studying the perceptions of GP trainees’ learning environment in postgraduate training in Ireland. The research tool chosen was short yet comprehensive which allowed it to be more flexible and accessible to trainee respondents. Study limitations There were limitations to the study design and procedures. The study tool used was an adaptation of the original version. Although adapted after an expert panel review, it was not validated for the study population in postgraduate GP training itself. The study instrument used was somewhat rigid in nature. Respondents were forced to think a response framed by the nature of the questions rather than develop their own personal rhetoric. The structure was somewhat limited in gaining more depth to the study responses. By design, this survey type takes a snapshot of the sample population and cannot determine correlations. The survey did not include the opinions of GP trainers as it was beyond the scope of the current work. It would be interesting and worthwhile to include their views in future research to give a more global perspective and to help investigate deficiencies in the training environment from the trainers’ perspective. The restrictive nature of the ethical approval process meant that it was not possible to contact any respondents after the survey was completed and thus precluded member checking which would have been a valuable tool to explore the credibility of the qualitative results. The study population was predominantly female (64%) and aged over 30 years old. More respondents were based in group practices (74%) and in urban settings (41%). Only 33% had enrolled on a prior training scheme. This implies that GP remains a first-choice long-term career for the majority of trainees. Overall, the current study demonstrates a high level of satisfaction by GP trainees with GP training as it is carried out in Ireland today. Trainees’ growth is founded on a strong but flexible and well-supported learning environment which incorporates a non-hierarchical relationship between trainers and trainees . Accordingly, there is a common thread in the positive observations noted in the qualitative section in the positive language used to depict this type of trainer. A number of areas of training noted in the results section were of particular interest to the current study. It is revealing to look at the pattern of results in these areas, which are listed below. GP training quality is not likely to be a factor in GP emigration A lack of training quality among NCHDs was seen as one of the major causative systemic factors contributing to doctors deciding to emigrate . 
The current research findings were broadly positive and supportive of the good work being done in GP training and by trainers in Ireland today. Further research will be needed to validate the study instrument and to further refine some aspects of its configuration. The implementation of such a survey on a regular basis may have merit as part of the quality assurance process in GP education alongside existing feedback structures.
Free-living bacteria stimulate sugarcane growth traits and edaphic factors along soil depth gradients under contrasting fertilization
Globally, sugarcane is one of the main economic crops and is valued for its high sugar content and bioenergy potential. China is the third-largest sugarcane-producing country worldwide, with Guangxi province accounting for approximately 60% of the total sugar production in China. A consecutive sugarcane monoculture farming system is widely practiced in China owing to insufficient land and inadequate adoption of judicious planting practices. However, long-term sugarcane continuous cropping can deteriorate essential soil nutrients in sugarcane rhizosphere zones as well as induce the proliferation of soil-borne diseases, which may eventually impede the overall productivity of sugarcane. These phenomena have also been observed in other crops, such as soybean and banana. In a recent study, Pang et al. demonstrated that continuous sugarcane cultivation had profound negative impacts on sugarcane agronomic parameters, soil fertility, and the soil microbial community. Fertilization is generally carried out to improve crop productivity. For instance, sugarcane growers in Guangxi province apply nitrogen (N) fertilizer at the rate of 600–800 kg N ha−1, which is 6–8 times more than the average N application rate in Brazil. On the other hand, the utilization of high doses of N fertilizer in sugarcane continuous cropping fields may not only negatively influence soil fertility and health but also adversely alter soil microbial communities and crop growth. Hence, there is mounting pressure to enhance agricultural productivity safely. Organic fertilization has an obvious positive effect on soil microbial biomass, functional diversity, and soil enzyme activities compared with mineral fertilizer. Francioli et al. reported that bacterial diversity improved significantly under organic fertilization. In our previous study, biochar (BC) amended soil significantly increased the stalk weight and height of sugarcane; improved soil NO3−, NH4+, OM, TC, and AK; and had a profound impact on the abundance of diazotroph genera. Additionally, environmental concerns and the desire to produce food using an eco-friendly approach have led farmers to seek more suitable N management strategies. Interestingly, opting for biological N-fixation (BNF) is an ameliorative strategy because it can provide nutrients for crops, thus boosting crop production capacity, while also maintaining a sustainable terrestrial ecosystem. BNF is the major biological mechanism by which N becomes available to plants, and it is performed by prokaryotes known as diazotrophs. Free-living N-fixing bacteria inhabiting soils contribute considerably to the N budgets of many ecosystems and are vital for the growth and development of crops. However, the soil N cycle has been disturbed to an unprecedented degree by the excessive use of synthetic fertilizers, shifting a diverse range of microbial activities and communities. For instance, Tan et al. and Berthrong et al. mentioned that the utilization of N fertilizers diminished the diazotrophic community. In a related study, Feng and colleagues pointed out that long-term chemical fertilizer utilization significantly altered soil diazotrophic community structure and led to a decrease in diazotroph diversity.
However, little is known about diazotrophs' N-fixation abilities and how their contributions to N budgets impact plant growth and yield, including C and N cycling enzymes, in a long-term consecutive sugarcane monoculture farming system under contrasting amendments along different soil horizons (0–20, 20–40, and 40–60 cm). To fill these knowledge gaps, we leveraged high-throughput sequencing (HTS) to investigate these questions.

Effects of different fertilization methods on sugarcane agronomic traits

We noticed that the BC, OM, and FM treatments did not improve the sucrose content, stem diameter, or stalk height compared with the CK treatment (Fig. A,B,D). On the other hand, the BC and FM treatments significantly increased (p < 0.05) sugarcane stalk weight and ratoon weight compared with the CK and OM treatments (Fig. C,E), while chlorophyll content peaked significantly (p < 0.05) under the BC, FM, and OM treatments compared with the CK treatment (Fig. F).

Effects of different fertilization methods on edaphic factors

Here, the BC and OM treatments significantly improved (p < 0.05) ammonium (NH4+-N) in the 0–20 cm soil depth compared with the CK treatment, whereas the FM treatment significantly decreased (p < 0.05) NH4+-N in this depth (Fig. A). Moreover, soil nitrate (NO3−-N) significantly increased (p < 0.05) under all treatments in the 0–20 cm soil depth compared with the CK treatment (Fig. B). In the 0–20 cm soil depth, soil organic matter content (SOM) significantly increased (p < 0.05) under the BC, FM, and OM amended soils compared with the CK treatment; however, SOM was not influenced by the OM amendment across the 20–40 and 40–60 cm soil depths (Fig. C). Furthermore, soil total carbon (TC) was enhanced significantly (p < 0.05) under all the amended soils compared with the CK treatment in the 0–20 cm soil depth (Fig. D), whereas soil total nitrogen (TN) and the TC/TN ratio were not significantly impacted under the BC, FM, and OM treatments, especially in the first soil depth (0–20 cm) (Fig. E,F). Soil available potassium (AK) was significantly higher (p < 0.05) in the 0–20 and 20–40 cm soil depths under the BC treatment compared with the CK treatment, but significantly decreased (p < 0.05) in the 0–20 cm soil depth under the FM and OM treatments. We also observed that soil AK significantly peaked (p < 0.05) under the FM and OM treatments in the 20–40 and 40–60 cm soil depths compared with the CK treatment (Fig. G). On the other hand, soil available phosphorus (AP) revealed no significant change at any soil depth in the BC treatment compared with the CK treatment, whereas the FM treatment significantly increased (p < 0.05) soil AP in the 0–20 and 20–40 cm soil depths. Soil AP significantly increased (p < 0.05) in the 0–20 cm soil depth under the OM treatment but was not significantly impacted in the 20–40 and 40–60 cm soil depths compared with the CK (Fig. H). Additionally, soil pH was not affected under the BC treatment at any soil depth compared with the CK treatment, and it was significantly reduced (p < 0.05) under the FM treatment in the 20–40 and 40–60 cm soil depths.
Under the OM treatment, soil pH was significantly reduced (p < 0.05) in the 0–20 and 20–40 cm soil depths but remarkably higher (p < 0.05) in the 40–60 cm soil depth compared with the CK treatment (Fig. I). Besides, soil EC increased significantly (p < 0.05) under the BC treatment in the 0–20 cm soil depth and under the FM treatment in the 0–20 and 20–40 cm soil depths compared with the CK treatment, while the OM treatment significantly diminished soil EC in the 0–20 cm soil depth (Fig. J). Soil SWC showed no significant difference under any treatment (Fig. K). Meanwhile, β-glucosidase was significantly reduced (p < 0.05) in the BC and OM treatments in the 0–20 cm soil depth compared with the FM and CK treatments (Fig. L). Soil acid phosphatase was considerably improved (p < 0.05) in the 0–20 cm soil depth under both the FM and OM treatments, while the BC amended soil revealed no difference relative to the CK treatment (Fig. M). Moreover, urease activity under the BC, FM, and OM treatments was significantly higher (p < 0.05) in the 20–40 cm soil depth compared with the CK treatment (Fig. N). In addition, cellulase activity decreased with increasing soil depth under the BC and FM treatments relative to the CK treatment; under the OM treatment, cellulase activity was significantly reduced (p < 0.05) in the 0–20 cm soil depth compared with the CK treatment (Fig. O).

Effects of different fertilization methods on nifH gene copies and alpha diversity

Both the BC and OM amended soils significantly diminished nifH gene abundance in the 0–20 cm soil depth compared with the CK treatment, whereas the FM treatment significantly increased it in the 20–40 and 40–60 cm soil depths. Regarding the different soil depths, nifH gene abundance was stable across the entire soil profile in the BC amended soil, but higher in the 0–20 cm soil depth than in the 20–40 and 40–60 cm soil depths in the CK treatment. Furthermore, nifH gene abundance was significantly higher (p < 0.05) in the 20–40 cm soil depth compared with the 0–20 cm depth under the FM treatment, but decreased with soil depth in the OM treatment (Fig. A). Diazotroph community diversity was analyzed using diversity estimators (Shannon and Simpson) and richness estimators (ACE and Chao1). The analysis revealed that diazotroph diversity and richness under the various soil amendments exhibited no significant change at any soil depth compared with the CK treatment (Table ).

Dominant diazotrophs phyla and genera response to different soil amendments

The relative abundance of the dominant diazotrophs was examined in the different soil depths (0–20, 20–40, and 40–60 cm) at the phylum and genus levels. We observed that the 0–20 cm soil depth was dominantly occupied by the diazotroph phyla Proteobacteria (71.1–80.2%) and Cyanobacteria (8.6–15.3%). Moreover, the 20–40 cm soil depth was characterized by Proteobacteria (88.6–94.4%) and Cyanobacteria (0.0–2.8%), while the 40–60 cm depth was dominated by Proteobacteria (82.9–88.4%) (Fig. A). However, the FM, OM, and BC amended soils had little impact on diazotroph phyla compared with the CK treatment at any soil depth (Fig. B–I). At the genus level, Geobacter (89.8–94.3%), Anaeromyxobacter (3.2–5.1%), Burkholderia (0.8–2.2%), Azotobacter (0.1–1.7%), Desulfovibrio (0.3–1.5%), Anabaena (0.4–1.0%), and Enterobacter (0.1–0.5%) were the dominant genera in the 0–20 cm soil depth. Furthermore, Geobacter (90.6–94.0%) and Anaeromyxobacter (4.7–6.6%) were the dominant genera in the 20–40 cm soil depth.
In the 40–60 cm soil depth, Geobacter (83.7–89.5%) and Anaeromyxobacter (10.0–16.1%) were abundant (Fig. B). Further analysis showed that a vast majority of diazotroph genera were altered significantly in the different soil depths under the different soil amendments (Fig. J–S). Noticeably, Anabaena was significantly (p < 0.05) higher in the 0–20 cm soil depth in the BC amended soil than in the other treatments (Fig. J). In addition, Burkholderia, Desulfovibrio, and Enterobacter in the 0–20 cm soil depth performed better under the FM and BC treatments compared with the OM and CK treatments (p < 0.05) (Fig. M–O), whereas Methylomonas in the 20–40 cm soil depth peaked significantly (p < 0.05) under the BC treatment relative to the CK, OM, and FM treatments (Fig. Q). However, Geobacter diminished in the 0–20 and 40–60 cm soil depths, while Stenotrophomonas was promoted in the 0–20 cm soil depth under all treatments (Fig. P,S). The unique and overlapping N-fixing genera between the different treatments and soil depths were explored using a Venn diagram. It was observed that one genus was unique to each of the CK and BC treatments, three to the FM treatment, and none to the OM amendment (Fig. C). Moreover, eight genera were unique to the 0–20 cm soil depth and one each to the 20–40 and 40–60 cm soil depths (Fig. D).

Diazotrophs alpha diversity, nifH gene, and edaphic factors response to soil depths and fertilizations

Multivariate ANOVA was leveraged to test the effects of soil depth gradient and fertilization on different soil parameters relating to diazotrophs, namely OTUs, Shannon, Chao1, coverage, and nifH gene copies, and on edaphic factors such as urease, cellulase, β-glucosidase, and acid phosphatase (Table ). It was revealed that soil depth significantly (p < 0.05) impacted the diazotroph richness index (Chao1) and diversity index (Shannon), followed by diazotroph coverage; however, soil depth had no impact on nifH gene copy number. Furthermore, both soil depth and treatment had a significant impact on bacterial OTUs, while the various treatments had little impact on diazotroph coverage. Moreover, the soil enzyme activities urease, β-glucosidase, and acid phosphatase, followed by cellulase, were affected to a greater extent by soil depth than by treatment (Table ). Likewise, the edaphic factors soil pH, AP, AK, TC, TN, NH4+-N, and NO3−-N were more strongly influenced by soil depth than by treatment, while the interaction of treatment and soil depth had little impact on soil TC/TN. However, neither treatment nor soil depth revealed an impact on SOM (Table ).
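For readers who wish to reproduce this kind of test, a minimal R sketch of the two-way ANOVA with Tukey comparisons is given below; the data frame and column names (soil, shannon, treatment, depth) are hypothetical placeholders, not the authors' actual objects.

# Minimal sketch of the two-way ANOVA described above (hypothetical names).
soil <- read.csv("soil_data.csv")    # assumed: one row per sample
soil$treatment <- factor(soil$treatment, levels = c("CK", "BC", "FM", "OM"))
soil$depth     <- factor(soil$depth, levels = c("0-20", "20-40", "40-60"))

fit <- aov(shannon ~ treatment * depth, data = soil)  # main effects + interaction
summary(fit)    # F tests for treatment, depth, and their interaction
TukeyHSD(fit)   # pairwise comparisons at the 5% significance level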
Diazotrophs community compositions under contrasting fertilizations along different soil depths

Principal coordinates analysis (PCoA) was adopted to assess diazotroph community composition in the different soil depths and under the different soil amendments. The analysis demonstrated that diazotroph community composition across the entire soil profile and the different treatments exhibited distinct distribution patterns (Fig. A,B). Later, redundancy analysis (RDA) was employed separately in two soil depths (0–20 and 20–60 cm) to assess the impact of edaphic factors on diazotroph community composition at the phylum level. The analysis showed that soil AP (R2 = 1.1860, p < 0.05), EC (R2 = 1.0933, p < 0.05), NH4+-N (R2 = 1.0915, p < 0.05), TN (R2 = 1.9840, p < 0.05), and SOM (R2 = 1.8575, p < 0.05), followed by soil pH (R2 = 1.5793, p < 0.01) and AK (R2 = 1.5232, p < 0.01), had a significant impact on diazotroph community composition, whereas soil TC (R2 = 1.5702, p < 0.05) was the minor factor influencing diazotroph community composition in the 0–20 cm soil depth (Fig. C). In the 20–60 cm soil depth, soil AP (R2 = 0.4968, p < 0.001), AK (R2 = 0.4273, p < 0.001), and NO3−-N (R2 = 0.7832, p < 0.001) were the major factors shifting diazotroph community composition, while TC (R2 = 0.2532, p < 0.01) and EC (R2 = 0.2184, p < 0.01) were the minor drivers (Fig. D).

Correlation between edaphic factors and diazotrophs community composition

Network correlation analysis was used to test the possible interactions between edaphic factors and diazotroph genera community composition in each soil depth (Fig. A–C, Tables – ). It was noticed that the total numbers of nodes and edges decreased with increasing soil depth, with the 0–20 cm depth recording the highest numbers (125 and 58, respectively), followed by the 20–40 cm soil depth (76 and 50, respectively) and the 40–60 cm soil depth (55 and 43, respectively). Notably, the association of diazotrophs with edaphic factors recorded the highest proportion of positive associations (72.8%) and the lowest proportion of negative associations (27.2%) in the 0–20 cm soil depth, whereas the 20–40 and 40–60 cm soil depths accounted for 52.63% and 49.09% positive associations, and 47.37% and 50.91% negative associations, respectively (Fig. A–C, Table ). Moreover, the patterns in the network structure demonstrated that diazotroph genera belonging to Proteobacteria exhibited a significant and positive (p < 0.05) association with a vast majority of edaphic factors, especially soil EC, AP, and TN, followed by SOM, in the 0–20 cm soil depth (Fig. A, Table ). Similarly, Proteobacteria and Bacteroidetes exhibited a strong and positive (p < 0.05) association with soil pH and β-glucosidase in the 20–40 cm soil depth (Fig. B, Table ), whereas diazotroph genera belonging to Proteobacteria exhibited a strong and positive (p < 0.05) correlation with edaphic factors including β-glucosidase and AP in the 40–60 cm soil depth (Fig. C, Table ).
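As an illustration of how such a network can be assembled, the R sketch below pairs the 'psych' package (named in the methods for significance testing) with 'igraph' (our assumption for building the graph); 'genera' and 'edaphic' are hypothetical samples-by-variables matrices for one soil depth, with column names set.

library(psych)   # corr.test() for correlations with p-values
library(igraph)  # an assumed choice for assembling the network

ct <- corr.test(genera, edaphic, method = "spearman", adjust = "none")

idx <- which(ct$p < 0.05, arr.ind = TRUE)  # keep significant pairs as edges
edges <- data.frame(from = rownames(ct$r)[idx[, 1]],
                    to   = colnames(ct$r)[idx[, 2]],
                    r    = ct$r[idx])
g <- graph_from_data_frame(edges, directed = FALSE)
E(g)$sign <- ifelse(edges$r > 0, "positive", "negative")  # edge sign for colouring

# Share of positive vs negative associations, as reported per depth in the text
prop.table(table(E(g)$sign))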
To gain a comprehensive understanding of the relationships among diazotroph genera, edaphic factors, and sugarcane traits, we adopted the Mantel test using diazotroph OTUs. The analysis demonstrated that the taxonomic composition of nifH OTUs showed a significant and positive correlation (p < 0.05) with a vast majority of the edaphic factors, including SOM, TN, EC, and NH4+-N, followed by soil pH, TC, and AK (Fig. D). Later, Pearson's correlation coefficients were employed separately in the various soil depths to further broaden our understanding of how edaphic factors affected the community composition of diazotroph phyla (Fig. A–C) and genera (Fig. D–F). It was observed that the phylum Bacteroidetes responded strongly and positively (p < 0.05) to soil EC, TN, AP, SOM, and NO3−-N, whereas Firmicutes exhibited a strong and positive association with soil AP and SOM, and Proteobacteria demonstrated a strong and positive (p < 0.05) relationship with soil AK in the 0–20 cm soil depth (Fig. A, Table ). In the 20–40 cm soil depth, Proteobacteria was significantly and positively (p < 0.05) associated with soil EC, AP, and pH, whereas Firmicutes had a strong and positive (p < 0.05) correlation with soil NH4+-N and acid phosphatase, and the phylum Euryarchaeota responded strongly and positively (p < 0.05) to β-glucosidase and soil AP. Besides, Cyanobacteria and Verrucomicrobia were significantly and positively (p < 0.05) correlated with soil pH and EC, respectively (Fig. B, Table ). In the 40–60 cm soil depth, Proteobacteria exhibited a strong and positive (p < 0.05) association with cellulase, soil pH, and AK (Fig. C, Table ). Notably, more diazotrophic genera than diazotrophic phyla were significantly and positively (p < 0.05) associated with edaphic factors, especially in the 0–20 cm soil profile (Fig. D–F, Tables – ). To evaluate the association between sugarcane agronomic traits and diazotroph genera, regression analysis was adopted; it suggested that some potential N-fixing bacteria, including Burkholderia, Azotobacter, Anabaena, and Enterobacter, exhibited a strong and positive (p < 0.05) association with sugarcane agronomic traits. For instance, Azotobacter and Burkholderia exhibited a strong and positive association with stalk weight (Fig. I,J, Table ), whereas Enterobacter had a significant and positive correlation with sugarcane height, ratoon weight, and chlorophyll content (Fig. G, Table ). The analysis also showed that Anabaena was significantly and positively associated with sugarcane ratoon weight and chlorophyll content (Fig. H, Table ).
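A minimal sketch of one such trait-genus regression panel is shown below, using ggpmisc (the package named in the methods); the data frame 'df' and its columns are hypothetical pairings of a genus's relative abundance with one trait per plot.

library(ggplot2)
library(ggpmisc)  # stat_poly_eq(), as named in the statistical methods

ggplot(df, aes(x = Azotobacter, y = stalk_weight)) +
  geom_point() +
  geom_smooth(method = "lm", se = TRUE) +                 # linear fit with CI band
  stat_poly_eq(aes(label = paste(after_stat(rr.label),
                                 after_stat(p.value.label), sep = "~~~")),
               formula = y ~ x, parse = TRUE) +           # annotate R2 and p
  labs(x = "Azotobacter relative abundance", y = "Stalk weight (kg per stalk)")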
In the current study, we aimed at unraveling diazotrophs' N-fixation abilities and how their contributions to N budgets impact plant growth and yield, including C and N cycling enzymes, in a long-term consecutive sugarcane monoculture farming system under contrasting amendments along different soil depths. Li et al. and Orndorff et al. revealed that organic fertilization enhanced sugarcane growth parameters compared with mineral fertilizers. Similarly, we observed that sugarcane stalk and ratoon weight significantly increased under the BC and FM treatments, whereas sugarcane chlorophyll content peaked significantly under the various organic amendments compared with the CK treatment. We therefore assumed that the accumulation of organic materials in the surface soil, which could in turn be made available to plants, explains the mechanism underpinning this phenomenon. Furthermore, this result could be ascribed to the large presence of potential N-fixing bacteria detected in the surface soil, as they are known to play a significant role in increasing available plant nutrients. Organic fertilization is considered an alternative to inorganic fertilization, with the benefit of enhancing soil nutrients. Likewise, we found that edaphic factors such as NO3−-N, TC, and OM contents increased significantly under the BC, OM, and FM treatments, which conforms with previous studies by Yang et al. and Gopinath et al., who reported that edaphic factors such as the soil N and C accumulation rate, soil pH, and oxidizable organic carbon peaked considerably under organic amendments.
We therefore hypothesized that the substrates applied may have triggered the proliferation of N-fixation activities, which, in turn, enhanced available soil nutrients. It has also been reported that edaphic factors decrease with increasing depth. In the current study, soil NH4+-N, NO3−-N, TC, TN, AK, SOM, and EC were significantly higher in the upper soil depth compared with the subsoil, consistent with our previous study, in which soil NH4+-N, NO3−-N, TC, and TN in the upper soil depth performed better than in the subsoil. Soil enzyme activities are considered important indicators of soil fertility due to their pivotal role in soil biochemical reactions and in the maintenance and sustenance of soil fertility and health. Akhtar et al. and Zhao et al. documented that OM amended soils increase soil enzyme activities in the topsoil, and that these activities tend to decrease with increasing soil depth. Similarly, we observed that β-glucosidase and acid phosphatase were significantly higher in the 0–20 cm soil depth in all treatments compared with the subsoil, which may, in part, suggest that surface soil shows a more significant improvement in β-glucosidase and acid phosphatase turnover than subsoil. The increase in these soil enzyme activities in the topsoil in all treatments could be associated with the different soil amendments used. Environmental gradients such as soil management practices and soil depth are major factors influencing the density of soil microorganisms. For instance, Seuradge et al. reported that soil depth was the primary environmental gradient affecting the bacterial community. In a related study, it was revealed that bacterial abundance was profoundly altered in different soil horizons under straw retention farming systems. Likewise, a vast majority of diazotroph genera under the various treatments were considerably altered. However, Proteobacteria accounted for a substantial share of the bacterial phyla, especially in the 0–20 and 20–40 cm soil depths. Proteobacteria are Gram-negative, with outer membranes largely consisting of lipopolysaccharides, and are widely known as plant growth promoters. Although not many studies have linked Proteobacteria to N-fixation activities compared with cyanobacterial populations, evidence has emerged that they are capable of fixing N. For instance, Delmont and co-workers documented the first genomic evidence for non-cyanobacterial diazotrophs with N-fixing potential, belonging to Proteobacteria, inhabiting surface ocean waters. They also mentioned that the detected diazotrophs were remarkably abundant and widespread in both the Atlantic and Pacific Oceans, which partly agrees with our finding. We therefore inferred that the significant proportion of Proteobacteria detected in the soil may have played a vital role in promoting soil nutrients such as TN, NH4+-N, and NO3−-N, which, in turn, promoted the crop traits. Additionally, Geobacter accounted for a substantial portion of the total bacterial genera across the entire soil depth, which is roughly consonant with previous reports by Liu et al. and Liao et al., in which Geobacter was established as one of the most abundant soil microbes detected in soil amended with BC. Moreover, recent discoveries have pointed out that Geobacter is a newly identified N-fixing genus dominant in paddy soils. For example, Masuda et al.
demonstrated that soil N-fixing activity peaked significantly after adding ferrihydrite and ferric iron oxides to the soil, driven primarily by Geobacter and Anaeromyxobacter. In a related study, it was reported that G. sulfurreducens was capable of fixing N, contingent upon anode respiration, as evidenced by the increase in TN, NH4+-N, and NO3−-N. The genera Anabaena and Enterobacter were significantly enhanced in the 0–20 cm soil depth under the BC and FM treatments. Studies have revealed that Anabaena, a filamentous cyanobacterial genus, is capable of fixing N. Our findings corroborate the study conducted by Chen et al., wherein it was reported that the utilization of BC improved soil microbial abundance in the 0–15 cm soil depth. The significant amount of Anabaena detected in the surface soil (0–20 cm) may have led to the increase in soil N-related nutrients such as NH4+-N, NO3−-N, and TN. Enterobacter is widespread in the environment, including soil, plants, water, vegetation, and human feces, and is considered both a nosocomial pathogen and a plant growth promoter. For instance, Ji and colleagues documented that the Enterobacter cloacae HG-1 strain isolated from saline-alkali soil exhibited high N-fixation activity and produced plant hormones, iron carriers, and 1-aminocyclopropane-1-carboxylic acid deaminase. They also established that inoculation with this strain could enhance crop agronomic traits, including plant height, root length, dry weight, and fresh weight, by 18.83%, 19.15%, 17.96%, and 16.67%, respectively. We therefore theorized that the increase in Enterobacter in the 0–20 cm soil depth under the BC and FM treatments may have led to the increase in NH4+-N and NO3−-N, which, in turn, could be used by sugarcane plants, thus triggering the growth of sugarcane traits. Soil microbial communities have been reported to be very responsive to soil environmental variables. In a study conducted by Pang and co-workers, it was reported that a vast majority of edaphic factors exhibited a strong and positive regulatory effect on bacterial community composition; for instance, some potential N-fixing bacteria such as Bacteroidetes and Verrucomicrobia exhibited a strong and positive correlation with AN, AK, and OM, while Actinobacteria, and Proteobacteria and Cyanobacteria, revealed a significant and positive relationship with soil AN and AK, respectively. Similarly, Lian et al. established that soil organic carbon, NH4+-N, NO3−-N, dissolved organic carbon, and soil pH were the principal factors influencing rhizosphere bacterial dissimilarities under sugarcane-soybean intercropping. Here, RDA analysis showed that soil AP, EC, NH4+-N, TN, and OM were the major factors shifting diazotroph genera community composition, particularly in the 0–20 cm soil depth. This phenomenon was further validated by network analysis, where diazotrophic bacteria belonging to Proteobacteria demonstrated a significant and positive association with soil EC, AP, and TN, followed by SOM, especially in the 0–20 cm soil depth. These results were reinforced by the Mantel test and Pearson's correlation coefficient analyses, in agreement with the findings of Pang et al., wherein it was pointed out that nitrifying flora and N-fixing flora were significantly associated with soil NO3−-N, pH, and C/N.
A number of studies have investigated plant-microbiome interactions in a quest to identify plant growth-promoting strains, with the aim of promoting more eco-friendly agricultural activities. For example, Kifle et al. established that the utilization of diazotrophic bacterial strains significantly increased maize seed germination rate, root length, seed vigor index, leaf chlorophyll, and dry weight. Correspondingly, we observed that some potential N-fixing bacteria, including Burkholderia, Azotobacter, Anabaena, and Enterobacter, exhibited a strong and positive (p < 0.05) association with sugarcane agronomic traits, namely sugarcane biomass and ratoon weight. We therefore postulated that this phenomenon was responsible for the marked increases observed in sugarcane stalk weight, ratoon weight, and chlorophyll content. Our study demonstrated that organic soil amendments such as the BC, FM, and OM treatments can enhance crop agronomic traits as well as edaphic factors, including β-glucosidase, acid phosphatase, NH4+-N, NO3−-N, OM, TN, and TC, especially in the first soil depth (0–20 cm). Moreover, our findings suggested that the abundance of Proteobacteria, Geobacter, Anabaena, Enterobacter, and Desulfovibrio helped promote crop growth traits as well as vital nutrients, including TN, NH4+-N, NO3−-N, OM, and TC, particularly in the upper soil depth (0–20 cm), as evidenced by the strong and positive associations detected between diazotrophic bacteria and edaphic factors. Taken together, our findings are likely to further enhance our understanding of diazotrophs' N-fixation abilities and how their contributions to key soil nutrients such as N impact plant growth and yield, including C and N cycling enzymes, in a long-term consecutive sugarcane monoculture farming system under contrasting amendments along different soil horizons.

Experimental design

This study was conducted from March 2018 to December 2020 at the Fujian Agriculture and Forestry University Sugarcane Research Center, Fuzhou, Fujian Province, China (26°05′00.0″N, 119°13′47.0″E). The site has a clay loam texture soil, with an annual temperature of 20 °C and annual rainfall of 1369 mm. The experiment was laid out in a randomized block design consisting of four treatments replicated three times. The treatments were: control (CK), organic matter (OM), biochar (BC), and filter mud (FM). The experimental site measured 100 m2 (25 m × 4 m), with each replicate covering an area of 25 m2 (5 m × 5 m). On March 20, 2018, BC was applied at the rate of 30 t ha−1, organic matter at 25.5 t ha−1, and filter mud at 20.5 t ha−1. The FM and BC utilized during the study were purchased from Nanjing Qinfeng Crop Straw Technology Company, China. The BC was produced from sugarcane straw at 550–650 °C, the OM used during the research was composed of pig manure, and the FM was obtained from the precipitated impurities that are removed by filtration when sugarcane juice is processed, as mentioned by Orndorff et al. and Elsayed et al. The basic soil properties were measured before the application of the various amendments (Table ). The different soil amendments were surface applied and immediately incorporated into the ploughed soil at a depth of 0–30 cm using rotary tillage before the sugarcane was planted. Sugarcane stalks were cut to about 10–15 cm in length, with two buds on each sett. Fifteen setts were planted in each row, with 0.3 m plant-to-plant spacing and 0.5 m row-to-row spacing.
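As a minimal illustration of this layout, the R snippet below randomizes the four treatments within each of the three replicate blocks; the seed and plot numbering are arbitrary and not taken from the study.

# Illustrative randomization of a 4-treatment x 3-block design (names from the text).
set.seed(1)  # arbitrary seed for reproducibility
treatments <- c("CK", "OM", "BC", "FM")
layout <- data.frame(
  block     = rep(1:3, each = 4),
  plot      = 1:12,
  treatment = as.vector(replicate(3, sample(treatments)))  # shuffle within each block
)
layout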
Soil sampling

Surface soil (0–20 cm), subsoil (20–40 cm), and the deeper soil depth (40–60 cm) were sampled in December 2020. Sampling was conducted at five different spots in each plot, and the samples were homogenized and mixed accordingly. A portion of each soil sample was air-dried, ground, and sieved through a 2 mm mesh. The sieved soil (2 mm) was used to analyze soil enzyme activities, while the other portion was stored at −20 °C for DNA extraction and determination of ammonium (NH4+-N) and nitrate (NO3−-N).

Assessment of sugarcane agronomic traits

Sugarcane heights were determined in centimeters (cm) using a meter rod from the soil surface to the top of the plant, and mean sugarcane height was calculated as the average of three replicates. We used the Legendre approach, milling the juice and measuring it for pol and Brix, on thirty sugarcane stalks randomly sampled from each row. The weight of each individual sugarcane stalk (kg stalk−1) was measured as fresh weight. Plants were harvested in December 2020, and yield parameters were estimated. A portable chlorophyll meter was used to record the chlorophyll content of ten mature and healthy leaves close to the top of the plant in each plot. All methods adopted in this study were performed according to relevant rules and guidelines.

Measurement of edaphic factors under contrasting amendments

Soil edaphic factors, namely total nitrogen (TN), total carbon (TC), available phosphorus (AP), and available potassium (AK), were determined as described by Bao. A glass-electrode pH meter was used to estimate soil pH. Fresh soil samples were used to extract NH4+-N and NO3−-N with 2.0 M KCl, which were measured using a continuous flow analyzer (San++, Skalar, the Netherlands). Soil OM was assessed using the Walkley–Black approach, in which soil OM is oxidized by K2Cr2O7 in H2SO4 and the remainder titrated with FeSO4. Soil electrical conductivity (EC) was measured in a 1/5 (w/v) aqueous suspension using a conductivity meter (Crison mod. 2001, Barcelona, Spain). Soil water content (SWC) was estimated gravimetrically by drying the soil samples in an oven at 105 °C for 12 h and weighing the dried samples. Soil enzyme activities were estimated following the methods reported by Tayyab and Sun et al. In brief, cellulase activity (glucose, mg/g, 24 h, 37 °C) was estimated colorimetrically from the reducing sugars reacting with 3,5-dinitrosalicylic acid after the soil was incubated with buffered sodium carboxymethylcellulose solution. Acid phosphatase activity was measured using a disodium nitrophenyl phosphate substrate (phenol, µg/g, 1 h, 37 °C). β-glucosidase activity was assessed using a colorimetric p-nitrophenol assay after incubating the soil with buffered p-nitrophenyl-β-glucopyranoside (p-nitrophenol, μg/g, 1 h, 37 °C). The buffered method of Kandeler and Gerber was employed to measure soil urease activity using urea as the substrate.
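The gravimetric calculation behind SWC can be written compactly; the small R sketch below assumes the conventional dry-mass basis (our assumption) and uses made-up sample masses.

# Gravimetric soil water content, per the 105 degC oven-drying step above:
# SWC (%) = (fresh mass - oven-dry mass) / oven-dry mass * 100
swc_percent <- function(fresh_g, dry_g) {
  (fresh_g - dry_g) / dry_g * 100
}
swc_percent(fresh_g = 52.4, dry_g = 44.1)  # hypothetical sample: about 18.8%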
Soil DNA extraction

Genomic DNA was extracted from all samples using the FastDNA™ SPIN Kit, which is designed for soil DNA isolation, according to the manufacturer's guidelines (MP Biomedicals, Santa Ana, CA, USA). DNA purification was performed using DNA purification kits according to the manufacturer's instructions (Tiangen Biotech Co., Ltd., Beijing, China). A NanoDrop spectrophotometer was used to measure DNA quality, and the extracts were stored at −20 °C for further analysis.

Quantitative real-time PCR assay

Real-time quantitative PCR was used to determine the abundance of the nifH gene, following the MIQE guidelines (Minimum Information for Publication of Quantitative Real-Time PCR Experiments). The qPCR was performed using the SYBR Premix Ex Taq™ (Perfect Real Time) kit with a 7500 Fast Real-Time PCR system. The reaction was carried out in a 25 µL volume containing 12.5 µL of SYBR Premix Ex Taq™ (2×, TaKaRa Biotechnology Co.), 0.5 µL ROX Reference Dye II (50×, TaKaRa Biotechnology Co.), 10 µL ddH2O, 1 µL (10–30 ng) DNA template, and 0.5 µL (5 µM) of each primer (PolF and PolR). The nifH gene PCR protocol consisted of an initial activation step at 95 °C for 30 s, followed by 40 cycles of 95 °C for 5 s and 60 °C for 34 s. Fragments of the nifH gene were cloned into the pMD19-T plasmid, and correct inserts were selected. Potential PCR inhibition in the extracted DNA samples was assessed using serial dilutions; no major inhibition was observed. To develop the standard curve, the plasmid was serially diluted to final concentrations of 10^8–10^2 gene copies µL−1. The qPCR efficiency for nifH was 98%, and the R2 of the standard curve was higher than 0.99.
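The standard-curve arithmetic implied here is straightforward to reproduce; in the R sketch below the dilution series follows the text, while the Cq values and the unknown sample are hypothetical.

# Sketch of the nifH standard-curve calculation (Cq values are hypothetical).
std <- data.frame(
  log_copies = 8:2,                                       # 10^8 down to 10^2 copies/uL
  cq         = c(10.1, 13.5, 16.8, 20.2, 23.6, 27.0, 30.3)
)
fit   <- lm(cq ~ log_copies, data = std)
slope <- coef(fit)[["log_copies"]]

efficiency <- 10^(-1 / slope) - 1     # E = 10^(-1/slope) - 1; about 0.98 here
r_squared  <- summary(fit)$r.squared  # should exceed 0.99, as reported

# Back-calculate copies per microlitre for an unknown sample from its Cq
unknown_cq <- 18.6
10^((unknown_cq - coef(fit)[["(Intercept)"]]) / slope)  # roughly 3e5 copies/uL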
nifH gene sequencing

We conducted high-throughput sequencing on the Illumina MiSeq platform to investigate diazotroph community composition. nifH gene amplification was conducted using the primer pair PolF and PolR, merged with barcode sequences and Illumina adaptor sequences. Sample libraries were prepared from the purified PCR products. We used the MiSeq 300-cycle kit to conduct paired-end sequencing on a MiSeq benchtop sequencer (Illumina, San Diego, CA, United States). Raw nifH gene sequences were separated by sample based on their barcodes, permitting up to one mismatch, and quality trimming was carried out using Btrim. FLASH was used to merge the forward and reverse reads into full-length sequences, and sequences with short bases were eliminated. We randomly resampled 10,000 sequences per sample. UCLUST was adopted to cluster operational taxonomic units (OTUs) at the 97% similarity level, and singletons were removed. Frameshifts caused by insertions and deletions in the DNA sequences were checked and corrected with RDP FrameBot. Valid nifH gene sequences (300–320 bp) were then translated into protein sequences, and taxonomic assignment was carried out using the RDP FrameBot tool. Finally, the raw data were submitted to the NCBI Sequence Read Archive (accession no. PRJNA815949).
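The resampling and diversity steps can be mirrored in R with the vegan package (used by the authors for other analyses); the OTU table file name is a placeholder, and samples with fewer than 10,000 reads would need to be dropped before rarefying.

library(vegan)  # rrarefy() for even resampling; diversity estimators

# 'otu' is assumed to be a samples-x-OTUs count matrix after quality filtering.
otu <- as.matrix(read.csv("nifH_otu_table.csv", row.names = 1))  # hypothetical file
otu <- otu[, colSums(otu) > 1]            # remove singletons, as in the text

set.seed(42)                              # arbitrary seed for reproducibility
otu_rare <- rrarefy(otu, sample = 10000)  # 10,000 sequences per sample

# Alpha diversity and richness metrics reported in the paper
shannon <- diversity(otu_rare, index = "shannon")
simpson <- diversity(otu_rare, index = "simpson")
rich    <- estimateR(otu_rare)            # rows include S.chao1 and S.ACE
rich["S.chao1", ]
rich["S.ACE", ]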
β-glucosidase activity was assessed using a colorimetric p-nitrophenol assay after incubating the soil with buffered p-nitrophenyl-β-glucopyranoside (p-nitrophenol, µg/g, 1 h, 37 °C). The buffered method of Kandeler and Gerber was employed to measure soil urease activity, using urea as the substrate.

Genomic DNA from all samples was extracted using the FastDNA™ Spin Kit (MP Biomedicals, Santa Ana, CA, USA), which is designed for soil DNA isolation, according to the manufacturer's guidelines. DNA purification was performed using DNA purification kits according to the manufacturer's instructions (Tiangen Biotech Co., Ltd., Beijing, China).
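For the absolute quantification described in the qPCR section, the standard curve from the serially diluted plasmid (10⁸–10² copies µL⁻¹) amounts to regressing Ct on log₁₀ copy number. A minimal R sketch with invented Ct values follows; the numbers are illustrative only, not the study's data:

```r
# Hypothetical Ct values for the serial plasmid dilutions (10^8 to 10^2 copies/uL)
std <- data.frame(
  copies = 10^(8:2),
  ct     = c(11.2, 14.6, 18.0, 21.4, 24.9, 28.3, 31.7)
)

# Standard curve: Ct regressed on log10(copy number)
fit   <- lm(ct ~ log10(copies), data = std)
slope <- coef(fit)[["log10(copies)"]]

# Amplification efficiency E = 10^(-1/slope) - 1 (a slope near -3.37 gives ~98%)
efficiency <- 10^(-1 / slope) - 1
r_squared  <- summary(fit)$r.squared  # should exceed 0.99 for a usable curve

# Interpolate copy numbers of unknown samples from their measured Ct values
unknown_ct <- c(19.3, 22.8)
copies_est <- 10^((unknown_ct - coef(fit)[["(Intercept)"]]) / slope)
```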
Association between circulating leukocytes and arrhythmias: Mendelian randomization analysis in immuno-cardiac electrophysiology
Introduction

Cardiac arrhythmia, a condition in which the heartbeat is irregular, too fast, or too slow, is a relatively common heart disease. Some arrhythmias are brief and asymptomatic, while others are persistent and can lead to hemodynamic instability, thromboembolic events, and even sudden cardiac death, imposing a significant burden on healthcare systems. However, evidence on how to effectively prevent and treat arrhythmias has so far been limited. Although exactly how leukocytes participate in arrhythmogenesis is not fully understood, it is generally accepted that leukocytes might contribute to arrhythmia either directly, through coupling to cardiomyocytes, or indirectly, by producing cytokines and antibodies. Neutrophils, which are rarely found in the healthy myocardium, are rapidly recruited to the heart in response to stress signals, and they exert an arrhythmogenic effect by releasing myeloperoxidase and lipocalin, promoting oxidative stress and interstitial fibrosis. Monocytes/macrophages are the most numerous leukocytes in the heart and can effectively clear dysfunctional mitochondria, apoptotic cells, and debris, thus preventing ventricular tachycardia and fibrillation after myocardial infarction. However, macrophages can also secrete cytokines such as IL-1β, which prolongs the action potential duration of cardiomyocytes and induces arrhythmias. Recent research has uncovered a non-canonical leukocyte function for macrophages in cardiac electrical conduction, demonstrating that they can directly couple to conducting cardiomyocytes via gap junctions containing Cx43, altering their electrical properties. T lymphocytes elicit cell-mediated immunity, and subsets of T lymphocytes may produce cytokines such as IFN-γ, IL-2, or IL-17, exacerbating neutrophilic inflammation and promoting micro-scar formation in myocardial tissue, leading to insulating fibrosis. B lymphocytes can promote cardiac arrhythmias by means of autoantibodies targeting specific calcium, potassium, or sodium channels on the surface of cardiomyocytes. Finally, less is known about the function of basophils and eosinophils in arrhythmias, but recent experimental data highlight that basophil-derived IL-4 plays an essential role in the heart by balancing macrophage polarization, while eosinophils may play an anti-inflammatory and cardioprotective role after myocardial infarction, reducing cardiomyocyte death and inflammatory cell accumulation. Because of these pioneering works, researchers have attempted to integrate electrophysiology and immunology, and a new term, "immuno-cardiac electrophysiology," was introduced to highlight the emerging essential role of immune cells in arrhythmias. Circulating leukocytes are crude markers of an individual's systemic immunological status, and they can modulate local inflammatory responses. Cell numbers are the most critical parameter for the homeostasis of circulating immune cells. So far, some cross-sectional clinical surveys have linked circulating leukocyte counts to the incidence of cardiac arrhythmias. In the CALIBER study of 775,231 individuals, high neutrophil count, low eosinophil count, and low lymphocyte count were associated with ventricular arrhythmia. In the Framingham Heart Study, white blood cell counts correlated with the risk of atrial fibrillation. Other studies have linked the risk of atrial fibrillation to eosinophil count and the proportion of monocyte subsets.
However, that literature does not definitively establish a role for leukocyte counts in the pathogenesis of arrhythmias, because observational studies are prone to residual unmeasured confounding and reverse causation. Of particular concern is the potential for reverse causation: atrial fibrillation itself might promote systemic inflammation during atrial remodeling and induce a spurious inverse association. In addition, observational studies have come to conflicting conclusions about the association of leukocyte counts with supraventricular tachycardia. Therefore, evidence from observational studies alone is insufficient. The causal effect of leukocyte counts on the risk of arrhythmias remains unknown, and additional studies are needed to characterize the role of each immune cell subtype in different types of arrhythmias. Addressing these causal questions can accelerate the discovery of mechanisms underlying disease and open new prevention and treatment avenues. Mendelian randomization (MR) is an epidemiologic approach that strives to address some key limitations of observational studies, such as confounding and reverse causation. It uses genetic variants, usually single-nucleotide polymorphisms (SNPs), as proxies for clinical interventions (as a result of exposure) in order to assess whether the genetic variants are associated with the outcome. In this way, MR supports inferences about causality, placing it at the interface between traditional observational epidemiology and interventional trials. MR should be robust to confounders, given that alleles are randomly distributed at conception, and it should be robust to reverse causation, since an individual's genetic code is fixed at birth, before the outcome of interest. In the present study, two-sample MR was used to estimate whether leukocyte counts cause changes in arrhythmia risk, based on summary data from genome-wide association studies (GWAS).

Methods

2.1 Study design

For the current study, we conducted two-sample MR analysis of circulating leukocyte counts on arrhythmias using data from publicly available GWAS. Five leukocyte subtypes were considered: neutrophils, eosinophils, basophils, monocytes, and lymphocytes. Arrhythmia was defined either as all types in aggregate or as one of the following five specific types: atrial fibrillation, atrioventricular block, left bundle branch block (LBBB), right bundle branch block (RBBB), and paroxysmal tachycardia. All study procedures were performed in accordance with the World Medical Association Declaration of Helsinki ethical principles for medical research. Ethics approval was considered unnecessary for the present study because the included GWAS reported appropriate ethical approval from their respective institutions, and the present analyses were performed only on summary-level data.

2.2 Selection of genetic instruments for circulating leukocyte counts

We extracted summary statistics from the largest meta-analyzed GWAS data provided by the Blood Cell Consortium. The Blood Cell Consortium Phase 2 includes 563,946 European participants from 26 GWAS cohorts, after excluding patients with blood cancer, acute medical/surgical illness, myelodysplastic syndrome, bone marrow transplant, congenital/hereditary anemia, HIV, end-stage kidney disease, splenectomy, cirrhosis, or extreme blood cell counts. An overview of the data sources is provided in , and more detail is available in the original article.
SNPs associated with each of the five leukocyte counts were selected at the genome-wide significance level (P < 5×10⁻⁸) and defined as genetic instruments. To ensure that SNPs were independent, a clumping procedure was performed, and the SNPs were pruned at a stringent linkage disequilibrium (LD) threshold of R² < 0.001 within a 10,000-kb window. The proportions of variance in the respective leukocyte counts explained by the selected SNPs were estimated, and F-statistics were calculated as measures of instrument strength. The F value for all genetic instruments was > 10, ensuring that weak-instrument bias would be < 10% at least 95% of the time.

2.3 Data sources for arrhythmia

To evaluate the association of leukocyte counts with the risk of arrhythmias as thoroughly as possible, we aimed to include all eligible GWAS of arrhythmias by extensively searching the public Integrative Epidemiology Unit (IEU) GWAS database ( https://gwas.mrcieu.ac.uk/ ). We selected the GWAS with the largest samples, leading to seven GWAS whose summary statistics for different types of arrhythmias were used in the present study. Genetic association estimates for the outcome of all types of arrhythmia were obtained from the UK Biobank (UK Biobank field ID 20002, value 1077), based on the UKB GWAS pipeline set up for the MRC IEU. We restricted the analytical cohort to individuals of European descent. Individuals with cardiac arrhythmia were identified via self-report during a face-to-face interview with a trained nurse. The GWAS dataset on atrial fibrillation was obtained from a meta-analysis comprising 1,030,836 participants of European ancestry. Cases of atrial fibrillation were defined as patients with paroxysmal atrial fibrillation, permanent atrial fibrillation, or atrial flutter. Summary data for the other four types of arrhythmias (atrioventricular block, LBBB, RBBB, and paroxysmal tachycardia) were retrieved from the FinnGen project (release 2), where cases were defined as those assigned the corresponding ICD-10 diagnosis codes. Specifically, cases of atrioventricular block were defined as patients with first-degree (ICD-10: I440), second-degree (I441), or third-degree atrioventricular block (I442), or other/unspecified atrioventricular block (I443). LBBB included left anterior fascicular block (I444), left posterior fascicular block (I445), other fascicular block (I446), and unspecified LBBB (I447), while RBBB included right fascicular block (I450) and other RBBB (I451). Paroxysmal tachycardia referred to re-entry ventricular tachycardia, supraventricular tachycardia, ventricular tachycardia, and unspecified paroxysmal tachycardia (I47). The FinnGen project included 102,739 Finnish participants and combined genetic data from Finnish biobanks with health records from Finnish health registries. Further details on data sources are included in . Prior to the MR analyses, we harmonized the SNPs identified from the exposure GWAS with SNPs in the outcome GWAS in order to align alleles on the same strand.

2.4 Statistical analyses

We used the inverse-variance weighted (IVW) method as the primary analysis. We then applied a range of sensitivity analyses to assess the robustness of the IVW findings against potential violations, including MR-Egger, weighted median, MR-PRESSO, and multivariable MR (MVMR) analyses. Although these methods have relatively low statistical efficiency on their own, they have different theoretical properties to control for different types of biases, and they are robust to certain assumption violations.
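In practice, this two-sample workflow maps onto a handful of calls in the TwoSampleMR R package. The sketch below uses placeholder GWAS dataset IDs, not the actual IEU accessions analyzed in the study:

```r
library(TwoSampleMR)

# 1. Instruments: genome-wide significant SNPs for an exposure (e.g., lymphocyte
#    count), clumped at r2 < 0.001 within a 10,000-kb window
exposure_dat <- extract_instruments(
  outcomes = "ieu-b-XXXX",   # placeholder ID for a leukocyte-count GWAS
  p1 = 5e-08, clump = TRUE, r2 = 0.001, kb = 10000
)

# 2. Look up the same SNPs in the outcome GWAS (e.g., atrioventricular block)
outcome_dat <- extract_outcome_data(
  snps = exposure_dat$SNP,
  outcomes = "finn-a-XXXX"   # placeholder ID for a FinnGen arrhythmia GWAS
)

# 3. Harmonize so effect alleles refer to the same strand
dat <- harmonise_data(exposure_dat, outcome_dat)

# 4. Primary IVW (multiplicative random effects) plus sensitivity estimators
res <- mr(dat, method_list = c("mr_ivw_mre",
                               "mr_egger_regression",
                               "mr_weighted_median"))
generate_odds_ratios(res)
```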
The IVW method (random-effects model) can provide the greatest statistical power, assuming all genetic instruments are valid. This method is equivalent to a weighted linear regression of the SNP–outcome effects on the SNP–exposure effects, with the intercept constrained to zero. Owing to this constraint, it can lead to a relatively high rate of false positives in the presence of horizontal pleiotropy. Cochran's Q statistic from the IVW analysis was used for global heterogeneity testing; based on the notion that pleiotropy is one of the main sources of heterogeneity, low heterogeneity (Cochran's Q p > 0.05) implies a low likelihood of pleiotropy. MR-Egger regression is performed similarly to IVW, except that the intercept is not fixed to zero. The slope coefficient of MR-Egger regression therefore gives an adjusted causal estimate even when pleiotropy is present, and the intercept is an indicator of the average pleiotropic effect across the genetic variants. An intercept of zero with P > 0.05 was considered evidence for the absence of pleiotropic bias. The weighted median method is a consensus approach that takes the median of the ratio-estimate distribution as the overall causal estimate. It provides unbiased estimates when more than 50% of the weight comes from valid variants, is less affected when a few genetic variants have pleiotropic effects, and can be viewed as an implicit outlier-removal approach. MR-PRESSO, a more recently proposed MR method, is a variation on the IVW method. The MR-PRESSO global test is used to assess the presence of overall horizontal pleiotropy. If pleiotropy is detected, the MR-PRESSO outlier test allows the detection of individual pleiotropic outliers through calculation of the residual sum of squares; the causal estimate is then obtained by applying the IVW method to the genetic variants remaining after exclusion of outliers. Steiger filtering, which computes the amount of variance each SNP explains in the exposure and in the outcome variable, identifies instruments that are likely to reflect reverse causation. When significant horizontal pleiotropy was detected, we also used Cook's distance to identify outliers: Cook's distance flags SNPs that exert disproportionate influence on the overall estimates. MVMR, an extension of the standard MR approach, considers multiple correlated exposures within a single model, allowing the disentanglement of the independent association of each exposure with the outcome. MVMR was performed while including associations of SNPs with diabetes mellitus (DM), hypertension, and coronary artery disease (CAD) as covariates, in order to estimate the direct effects of leukocyte counts independently of risk factors known to influence arrhythmia risk. Given the strong correlations between leukocyte subtypes, we also performed MVMR to determine the effect of each of the five leukocyte subtypes on arrhythmia separately, after adjusting for the effects of the other four subtypes. We performed reverse-direction MR analysis to evaluate whether there is genetic evidence for the possibility that arrhythmia alters circulating leukocyte counts. Because we detected few genome-wide significant SNPs for arrhythmias (defined as p < 5×10⁻⁸), we used a less stringent statistical threshold (p < 1×10⁻⁵) to select genetic instruments.
In fact, we were unable to detect eligible SNPs associated with the aggregated occurrence of all types of arrhythmia, even at the suggestive level of p < 1×10⁻⁵, so this outcome was not included in the analysis. In the reverse-direction analysis, IVW, MR-Egger, and weighted median analyses were performed as described above. All statistical analyses were conducted using the TwoSampleMR, MendelianRandomization, and MR-PRESSO packages in R (version 4.0.3). Effect estimates for dichotomous outcomes were reported as odds ratios (ORs) with corresponding 95% confidence intervals (CIs).

2.5 Interpretation of results

Normally, Bonferroni-corrected p values are used to adjust for multiple testing. However, given the large number of arrhythmia outcomes and leukocyte counts in the study, we judged this correction procedure to be unnecessarily conservative. Therefore, we applied the conventional p value threshold of 0.05, and we interpreted p values near 0.05 with caution. We considered causal associations to be strongly supported if the following four criteria were satisfied. (1) The primary IVW analysis gave a statistically significant causal estimate (p < 0.05). (2) All sensitivity analyses yielded concordant estimates, despite making different assumptions. (3) No evidence of unbalanced horizontal pleiotropy was observed, defined as p > 0.05 for Cochran's Q statistic, the MR-Egger intercept test, and the MR-PRESSO global pleiotropy test. (4) No evidence of reverse causation from arrhythmias to leukocyte differential counts was observed, defined as p > 0.05 in the IVW, MR-Egger, and weighted median analyses in the reverse MR analysis.
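Criteria (3) and (4) correspond to standard diagnostics in the same packages. A rough sketch, reusing the hypothetical harmonized `dat` object from the earlier sketch (not the study's actual scripts):

```r
# (TwoSampleMR loaded as in the previous sketch)

# Cochran's Q heterogeneity test for the IVW and MR-Egger fits
mr_heterogeneity(dat)

# MR-Egger intercept test for directional (unbalanced) pleiotropy
mr_pleiotropy_test(dat)

# MR-PRESSO: global pleiotropy test plus outlier detection and correction
presso <- run_mr_presso(dat, NbDistribution = 1000, SignifThreshold = 0.05)

# Steiger filtering flags SNPs explaining more outcome than exposure variance,
# i.e., instruments likely to reflect reverse causation
dat_steiger <- steiger_filtering(dat)
table(dat_steiger$steiger_dir)
```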
Results

3.1 Circulating leukocyte counts and heart arrhythmias: Primary results

First, we investigated the causal effect of each leukocyte subtype count on arrhythmias using the IVW method with multiplicative random effects. The IVW approach is recommended as the primary method in MR analysis because it is optimally efficient when all genetic variants are valid. The results of the IVW analysis are presented in . We did not find clear evidence supporting causal effects of any leukocyte subtype count on the overall occurrence of all-type arrhythmia. Nevertheless, there was evidence that different leukocyte subtype counts causally affected three specific types of arrhythmias. A genetically estimated 1-standard-deviation increase in lymphocyte count was associated with a 46% higher risk of atrioventricular block (OR 1.46, 95% CI 1.11–1.93, p=0.0065). We also found moderate evidence for causal effects of basophil count on atrial fibrillation (OR 1.08, 95% CI 1.01–1.58, p=0.0237) and of neutrophil count on RBBB (OR 2.32, 95% CI 1.11–4.86, p=0.0259). No significant associations were observed for the other outcomes.

3.2 Sensitivity analyses of positive results

We assessed the robustness of the significant causal estimates from the above IVW analysis using sensitivity analyses. These sensitivity analyses are generally considered less powerful than the conventional IVW approach but robust to different forms of bias (see Methods). We therefore conducted MR-Egger, MR-PRESSO, weighted median, Steiger filtering, and multivariable MR analyses on the following three combinations of exposure and outcome: (1) lymphocyte count and atrioventricular block, (2) neutrophil count and RBBB, and (3) basophil count and atrial fibrillation.

3.2.1 Lymphocyte count and atrioventricular block

Sensitivity analyses supported the causal link between lymphocyte count and atrioventricular block: the MR-Egger approach indicated an OR of 1.95 (95% CI 1.12–3.39; p=0.019), and the weighted median approach indicated an OR of 1.76 (95% CI 1.20–2.78; p=0.015). With respect to pleiotropy detection, Cochran's Q test gave a p value of 0.586, suggesting no evidence of heterogeneity between genetic instruments and therefore no pleiotropy.
Similarly, bias due to pleiotropy was not detectable in the IVW analyses, based on a p value of 0.249 for the MR-Egger intercept test and a p value of 0.566 for the MR-PRESSO global pleiotropy test. Additionally, the absence of outliers detected through Steiger filtering reinforced this conclusion. Using MVMR analysis, we confirmed this causal relationship after adjusting for risk factors of arrhythmia (CAD, DM, and hypertension) and for effects from the other four subtypes of leukocytes.

3.2.2 Neutrophil count and RBBB

The weighted median method (OR 3.40, 95% CI 1.02–11.28; p=0.049) and the MR-Egger method (OR 3.13, 95% CI 0.64–15.32; p=0.158) produced results similar to those of the primary IVW analysis. However, these CIs were wide and the p values were near or above 0.05, likely due to a lack of statistical power. There was no indication of heterogeneity or pleiotropy in the corresponding Cochran's Q test (p=0.200), MR-Egger intercept test (p=0.674), or MR-PRESSO global pleiotropy test (p=0.209), and Steiger filtering did not detect any outliers. In MVMR analysis, accounting for the counts of lymphocytes and eosinophils abolished the direct effect of neutrophil count on RBBB. Together, these analyses suggest no direct, independent effect of neutrophil count on the risk of RBBB.

3.2.3 Basophil count and atrial fibrillation

For basophil count and atrial fibrillation, the issue of horizontal pleiotropy is a particular concern. Although the intercept estimated from the MR-Egger regression was centered around zero (−0.0004, p=0.801) and Steiger filtering did not identify any outliers, we detected overall horizontal pleiotropy among the genetic instruments using MR-PRESSO (global pleiotropy p<0.001). After removing five outlier SNPs, the causal estimate of basophil count on atrial fibrillation no longer achieved statistical significance (MR-PRESSO outlier correction p=0.106). Similarly, effect estimates from the MR-Egger and weighted median analyses were not significant. In conclusion, these analyses suggest that the estimate from the IVW analysis may be strongly affected by pleiotropy and that no compelling evidence exists in support of a causal association between basophil count and atrial fibrillation.

3.3 Sensitivity analyses of negative results

To reduce the incidence of false-negative findings, sensitivity analyses (MR-Egger, weighted median, Steiger filtering, MR-PRESSO) were also performed to assess the validity of the negative results. Empirically, we focused on the causal relationships of lymphocyte count or neutrophil count with atrial fibrillation. For lymphocyte count and atrial fibrillation, both the MR-Egger and MR-PRESSO methods gave negative, non-significant estimates similar to those of the IVW analysis. Only the weighted median analysis showed a significant, albeit small, effect. Steiger filtering identified one outlier SNP, but the results of the above analyses remained essentially unchanged after removing the outlier. The finding from the weighted median analysis alone is insufficient evidence; overall, we conclude that there is no strong evidence for a causal association between lymphocyte count and risk of atrial fibrillation. For neutrophil count and atrial fibrillation, the MR-Egger and weighted median analyses showed a statistically significant causal estimate, which was inconsistent with the IVW analysis. These two methods are perceived as having natural robustness to pleiotropy.
Meanwhile, we found evidence of pleiotropy based on the p values for the MR-Egger intercept test (p=0.013) and the MR-PRESSO global pleiotropy test (p<0.001), as well as evidence of substantial heterogeneity based on the p value for Cochran's Q statistic (p<0.001). We suspect that pleiotropy biased the effect estimate towards the null in the IVW analysis, even though pleiotropy more often biases estimates away from the null. To remove potential pleiotropy as much as possible, we applied two additional methods (the MR-PRESSO outlier test and Cook's distance) to identify and exclude potential outliers. Using MR-PRESSO and Cook's distance, we identified 9 and 19 outliers, respectively. After removing the outlier SNPs, the causal estimates still did not reach statistical significance; in fact, the estimates were even smaller than before. Taken together, our analyses indicate no compelling evidence for a causal effect of neutrophil count on atrial fibrillation. Sensitivity analyses of the other 29 exposure–outcome combinations yielded negative findings similar to those of the IVW analyses.

3.4 Reverse MR analysis to assess the effect of arrhythmias on leukocyte counts

To examine the possibility that reverse causation could be driving our findings, we performed extensive reverse MR analysis in which the risk of arrhythmia was the exposure and the counts of the five leukocyte subtypes were the outcome. Although the IVW analysis showed that atrial fibrillation, paroxysmal tachycardia, LBBB, and RBBB all had effects on the differential leukocyte counts, the effect sizes were so small that their practical significance is highly questionable. Moreover, these causal effects did not achieve statistical significance in either the MR-Egger or the weighted median analysis. Therefore, we did not find any robust evidence of reverse associations. In particular, we did not observe a causal effect of atrioventricular block on lymphocyte count in the IVW analysis (OR 1.001, 95% CI 0.998–1.004; p=0.44). Similar results were observed in the MR-Egger and weighted median analyses.
Discussion

In this study, using large publicly available genomic datasets, we conducted MR analyses to investigate the causal effects of leukocyte counts on different types of arrhythmias. Our principal finding is that a genetically determined high lymphocyte count increases the risk of atrioventricular block. In contrast, we did not detect a significant causal effect of either neutrophil or lymphocyte count on the risk of atrial fibrillation. Although sparse observational studies have reported relationships between leukocyte counts and some types of arrhythmias, the unique contribution of the present study is that we precisely investigated the association of each differential leukocyte count with five specific types of arrhythmias. In addition, we used MR methods, which help to minimize bias due to confounding factors and reverse causation, allowing us to draw conclusions about causal relationships, not merely associations. Diversity is an intrinsic characteristic of the immune system and exerts an important influence on an individual's risk of developing immune-mediated diseases. Although the abundance of circulating immune cells is particularly prone to change in the context of infection or injury, it has been demonstrated to be highly variable even among "healthy" individuals.
Moreover, evidence suggests that immune cell composition is associated with the risks of cancer and cardiovascular disease among healthy people without prior corresponding diseases, although the exact causal relationship between changes in immune cell composition and disease remains unclear. The analyses in the present study were carried out on data from the Blood Cell Consortium, for which mean leukocyte counts were within the normal range. Thus, our results may support the potential of leukocyte counts for assessing arrhythmia risk in disease-free individuals. High-degree atrioventricular block is the leading reason for pacemaker implantation. First-degree atrioventricular block, previously thought to be associated with a favorable prognosis, may actually be linked to adverse cardiovascular outcomes and increased mortality. However, because the mechanism of atrioventricular block is unknown, prevention and non-invasive treatment strategies are largely lacking in clinical practice. In particular, whether changes in circulating leukocyte components affect the risk of developing atrioventricular block remains unclear, as does the question of which types of leukocytes exert greater influence on atrioventricular block. Macrophages have been implicated in the disorder: they are abundant at the atrioventricular node and affect its physiological function through electrical coupling with cardiomyocytes. However, the current study did not find evidence supporting a causal effect of circulating monocyte count on atrioventricular block. We assume that this discrepancy stems from the fact that most cardiac macrophages, especially those resident in the atrioventricular node, populate the heart during embryogenesis and self-maintain locally with minimal exchange with the population of circulating monocytes. On the other hand, our results revealed that a genetically determined high lymphocyte count increases the risk of atrioventricular block. To the best of our knowledge, data on the impact of lymphocytes on atrioventricular block are scarce. The etiology of atrioventricular block is related to fibrosis of the conduction system, electrical remodeling of atrioventricular node myocytes, and elevated vagal tone. Depending on the types of cells involved, it is speculated that lymphocytes may affect atrioventricular conduction in various ways. For instance, by secreting cytokines, lymphocytes can regulate monocyte/macrophage recruitment and differentiation. As previously mentioned, macrophages can directly affect the action potential of cardiomyocytes through gap junctions. Additionally, lymphocytes can promote fibroblast activation by secreting inflammatory mediators, leading to fibrosis in the atrioventricular node area and subsequent electrical isolation. Moreover, it is possible that during cardiac injury, endogenous antigens in the conduction system are exposed, triggering the proliferation of autoreactive T and B cells and subsequent damage to atrioventricular node myocytes. Finally, it is worth investigating whether lymphocytes can directly couple to cardiomyocytes or produce autoantibodies that cross-react with ion channels in cardiomyocytes and ultimately affect their action potential. In conclusion, our results justify detailed studies into the role of lymphocytes in the pathogenesis of atrioventricular block, as well as their utility as a biomarker in disease risk assessment.
Atrial fibrillation is the most common arrhythmia, and it increases the risk of stroke, heart failure, and mortality. Previous observational studies have reported links between the disorder and high ratios of circulating neutrophils to lymphocytes, and animal studies further support that atrial fibrillation involves atrial infiltration by neutrophils. However, we did not find any significant association between genetically predicted neutrophil or lymphocyte counts and atrial fibrillation. In particular, although our effect estimates for neutrophil counts were directionally concordant with the results from observational studies, the effect sizes were small and the CIs wide. These findings, coupled with inconsistent estimates from our various sensitivity analyses, lead us to conclude that genetically determined neutrophil counts do not substantially influence the risk of atrial fibrillation. One potential reason for the differences between our work and previous epidemiological studies is that our MR analysis evaluated how lifelong exposure to increased leukocyte counts affects the risk of atrial fibrillation, whereas observational studies typically have limited follow-up and may focus on short-term effects of leukocyte counts on the risk of postoperative atrial fibrillation. This study has limitations worth considering. First, it was restricted to a population of European descent for the sake of genetic homogeneity, so its generalizability to other ethnic groups is unclear. Second, lymphocytes are a diverse population of cells with distinct phenotypic and functional properties; the aggregated count of all lymphocytes falls far short of fully representing the heterogeneous changes of lymphocyte subpopulations. Future studies should examine specific subsets of circulating lymphocytes, for example through fluorescence-activated cell sorting. Third, we were unable to distinguish subtypes within each kind of arrhythmia in our analysis, owing to the lack of detailed original GWAS data. Fourth, no MR analysis can entirely exclude the influence of pleiotropic effects; nevertheless, the observed consistency of effect estimates across multiple sensitivity analyses implies minimal confounding and bias. Fifth, this study did not encompass ventricular tachycardia or ventricular fibrillation, as large-scale population-based GWAS summary statistics on ventricular arrhythmias are currently unavailable. Recruiting patients with ventricular fibrillation in the setting of acute myocardial infarction is challenging compared with the ease of recruiting atrial fibrillation patients. Existing GWAS primarily focus on electrophysiological parameters that are highly correlated with ventricular tachyarrhythmia, such as the PR interval or QT interval, or on specific diseases like Brugada syndrome or long QT syndrome, which are predominantly characterized by ventricular arrhythmias. Sixth, the "all types of arrhythmias" outcome analyzed in the study represents a collection of phenotypes, which may introduce composition bias: the proportion of each arrhythmia in the dataset is unknown, and changes in that proportion can significantly affect the causal estimates, thereby reducing reproducibility. Furthermore, if the causal effects of leukocytes on different types of arrhythmias run in opposite directions, they may cancel each other out, resulting in inaccurate findings.

Conclusion

In conclusion, our study provides strong evidence of a causal effect of a genetically high lymphocyte count on the risk of atrioventricular block.
We failed to find evidence supporting a causal effect of lymphocyte or neutrophil count on atrial fibrillation. Our results provide insights into the role of systemic immune changes in the pathogenesis of arrhythmias. The original contributions presented in the study are included in the article/supplementary material. Further inquiries can be directed to the corresponding authors. Ethical review and approval was not required for the study on human participants in accordance with the local legislation and institutional requirements. Written informed consent for participation was not required for this study in accordance with the national legislation and the institutional requirements. YuC, JY and SH contributed to conception and design of the study. YuC, LJ, XZ, YaC, LC, FZ, ZL and TF performed the statistical analysis. YuC and LL wrote the first draft of the manuscript. YuC, LL, XZ and YaC wrote sections of the manuscript. All authors contributed to the article and approved the submitted version.
AAPM task group report 275.S: Survey strategy and results on plan review and chart check practices in US and Canada
bbcecb05-f751-4017-be0b-86059cac4c23
10113700
Internal Medicine[mh]
INTRODUCTION Effective physics plan and chart review in radiation therapy is an integral component of patient safety. Initial plan, on‐treatment (i.e., weekly), and end‐of‐treatment chart checks are critical physics activities that act as safety barriers to detect and prevent suboptimal or erroneous treatments. Following a risk‐based process proposed by the American Association of Physicists in Medicine (AAPM) Task Group (TG)‐100, AAPM TG‐275 was formed to develop national benchmarks and recommendations for the type and extent of checks to be performed for an effective physics plan and chart review in radiation therapy. TG‐275 was also charged with conducting a survey of the medical physics community with the goal of understanding and mapping practices and clinical processes in performing plan and chart reviews. To date, only two other initiatives have provided an overview of physics plan review and chart check practices. In 2015, the results of the AAPM Safety Profile Assessment demonstrated that initial plan review and on‐treatment physics chart checks were conducted in 82 and 87 of the 114 responding institutions, respectively. While this study provided some initial evidence of heterogeneity across practices with respect to plan and chart reviews, it did not provide additional information about what these clinical processes include or how they are performed. Also in 2015, the Medical Physics Community of Practice Chart Checking Practices Working Group (CCPWG) conducted a 36‐question survey of 15 cancer centers in Ontario, Canada. The CCPWG survey provided information about province‐wide practices related to workload, workflow, dose verification, and patient‐specific Quality Assurance (QA). The CCPWG survey also investigated items checked and intra‐center variability of checks performed, troubleshooting, consultation, and documentation as part of plan review and on‐treatment chart checks. Although these two initiatives offer insight into some aspects of physics plan and chart review processes, they do not provide a full picture of the diverse practices across the medical physics community. This work serves as a complement to the TG‐275 report, providing a detailed review of the design, implementation, and results of the TG‐275 survey. Results of the survey are summarized using overall descriptive statistics. In addition, four demographic questions were selected to perform statistical tests of association to better understand differences and trends across practices. MATERIALS AND METHODS 2.1 Survey scope, development, and design The main aim of the TG‐275 survey was to gather information about how physicists perform initial plan, on‐treatment, and end‐of‐treatment chart checks for photon, electron, and proton treatment modalities across a wide variety of clinics and institutions. Brachytherapy was outside of the scope of this survey. The survey was designed and organized using information from clinical process maps developed as part of the AAPM white paper on consensus recommendations for incident learning database structures in radiation oncology. The clinical process maps provided an organized structure to capture the checks performed during the plan review and chart check processes. A comprehensive list of items to be checked was created through an iterative process of using task group members' clinical experience combined with the premise that each step in the clinical process maps was an item to be checked.
In addition to the list of check items, the survey also captured general demographic information about the participant's institution and details on how plan and chart review processes are implemented at the participant's practice. Structured data and multiple‐choice questions were used throughout the survey. Figure provides an overview of the survey structure, which consisted of 100 questions divided into four main sections. The first section contained 18 demographic questions characterizing the participant's facility infrastructure and general aspects of the clinical practice such as safety culture, staffing levels, patient load, and institution type (e.g., community, academic‐affiliated, etc.). The three remaining sections captured data related to the initial plan, on‐treatment, and end‐of‐treatment chart checks, respectively. Each of these three sections contained two types of questions: Process‐Based questions, designed to determine how checks are performed, and Check‐Specific questions, designed to ascertain what information is evaluated when the checks are performed. Nested questions tailored the survey to each participant's practice; for example, participants were only presented with proton‐specific questions if they indicated their center provides proton therapy and that they were familiar with the proton plan review and chart check processes. Figure provides an overview of the construction of nested questions and examples of the Process‐Based and Check‐Specific questions used in the survey. Survey questions and corresponding multiple‐choice options were reviewed by TG‐275 members to ensure clarity of wording and content and to minimize survey completion time. The pre‐deployment survey completion time was estimated to range between 15 and 30 min depending on practice complexity and the number of nested questions displayed to the participant. The complete survey is provided in the supporting information so readers can review the questions and corresponding response options. 2.2 Study sample and survey participation incentives With the support of AAPM headquarters, the survey was deployed using QuestionPro ( www.questionpro.com ), a web‐based survey platform. An invitation to participate was sent to all full members of the AAPM who self‐reported working, at least partially, in the radiation oncology subspecialty (approximately 4500 members). The survey was open for participation for approximately 7 weeks, from 10 February to 31 March 2016. To motivate participation and survey completion, survey respondents were enrolled in a raffle for a complimentary registration to either the AAPM Annual or Spring Clinical Meeting. Additionally, respondents who completed the survey had the opportunity to fulfill the Part 2 Maintenance of Certification (MOC) requirement for up to 15 h of Self‐Assessment Continuing Education (SA‐CE) and the Part 4 MOC requirement for Participatory Quality Improvement Activity as defined by the American Board of Radiology (ABR). Instructions on how to earn the credits were shared with all survey respondents who completed the survey. A Self‐Directed Educational Project template from the ABR, and an educational plan developed by TG‐275 members, were also provided. Since participants' information was required for the raffle, the survey was not anonymous. However, the data were treated confidentially, and all responses were de‐identified for survey summary and statistical analysis.
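As a rough illustration of the nested-question gating described above (for example, proton-specific questions shown only to respondents who report proton experience), the following Python sketch models one way such branching can be expressed. The `Question` class, question IDs, and gating field are all hypothetical and are not drawn from the actual QuestionPro implementation.

```python
from dataclasses import dataclass


@dataclass
class Question:
    """One survey item; `gate` names a prior question whose answer must be truthy."""
    qid: str
    text: str
    gate: str | None = None  # None means the question is always shown


def visible_questions(questions: list[Question], answers: dict[str, bool]) -> list[Question]:
    """Return only the questions a participant should see, honoring nesting."""
    return [q for q in questions if q.gate is None or answers.get(q.gate, False)]


survey = [
    Question("q_proton", "Does your center provide proton therapy?"),
    Question("q_proton_checks", "Which proton-specific checks do you perform?", gate="q_proton"),
]

# A respondent without proton experience never sees the nested question.
print([q.qid for q in visible_questions(survey, {"q_proton": False})])  # ['q_proton']
```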
2.3 Survey summary and statistical analysis Since the TG‐275 survey used structured data and multiple‐choice questions as the main strategy to collect information, data aggregation was easily achieved. Descriptive statistics, which included the total number of responses and the percentage relative to the total number of contributions to each question, were used for general demographics and plan review and chart check processes. For the purpose of this publication and further statistical analysis, only contributions from participants in the United States and Canada were included. This data sample is consistent with the one used in the TG‐275 report to cross‐correlate the results of the failure mode and effects analysis (FMEA) risk assessment with the TG‐275 survey. To further explore the landscape and study possible differences across clinical practices, tests of association were performed using data grouped by the following four demographic questions: (1) Institution Type : Academic and Non‐Academic clinics, where the Non‐Academic group represents participants who reported belonging to a free‐standing clinic, community hospital, or government hospital. Following other AAPM survey standards, three other options were available to choose under the institution type question: consulting groups, vendors, or other. However, participants who reported belonging to these groups were not included in the statistical analysis. (2) Average number of patients treated daily : Low (≤ 50 patients), Medium (51 to 100 patients), and High (> 100 patients) volume practices. (3) Radiation Oncology Electronic Medical Record (RO‐EMR) : ARIA and MOSAIQ. (4) Perceived Culture of Safety : Always, Usually, and ≤ Sometimes, where the "≤ Sometimes" group includes participants who selected Sometimes, Rarely, or Never when asked the following question: "Do you feel that there is a culture of safety in your institution where deviations and errors can be communicated amongst the groups openly and without any repercussions?" Statistical tests of association were used to evaluate differences between the pre‐defined groups for each of the four selected demographic questions above. The statistical tests were performed for each of the 40 Process‐Based questions as well as each of the 218 checks that were common across all external beam treatment modalities. For example, statistical analysis based on Institution Type was used to assess whether there were statistically significant differences in processes and items checked between participants from Academic and Non‐Academic clinics. In all cases, only univariate analysis was performed; thus, possible interdependencies between the four demographic groups were not accounted for in the determination of statistical significance (e.g., the potential interdependent relationship between Institution Type and Average number of patients treated daily was not considered). A chi‐squared test of association was performed for survey questions with discrete responses, while an analysis of variance (ANOVA) test was utilized for survey questions with continuous responses. When performing the test of association on the demographic questions with three groups, such as the Average Number of Patients Treated Daily (Low: ≤ 50; Medium: 51–100; High: > 100) and Perceived Culture of Safety (Always, Usually, and ≤ Sometimes), the test reports whether there are statistically significant differences across the three groups. Throughout this work, the threshold for statistical significance was P < 0.05.
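To make the two tests of association concrete, here is a minimal Python sketch using SciPy; the contingency table and group values are fabricated placeholders for illustration, not survey data.

```python
import numpy as np
from scipy.stats import chi2_contingency, f_oneway

# Discrete responses (e.g., "Yes"/"No" to performing a check), tabulated by group:
# rows = Academic / Non-Academic, columns = Yes / No. Counts are made up.
table = np.array([[120, 60],
                  [150, 140]])
chi2, p_chi, dof, expected = chi2_contingency(table)
print(f"chi-squared = {chi2:.2f}, dof = {dof}, P = {p_chi:.4f}")

# Continuous responses compared across the three patient-volume groups with a
# one-way ANOVA. Values are made up.
low, medium, high = [20, 25, 30, 28], [18, 22, 27, 31], [15, 19, 24, 21]
f_stat, p_anova = f_oneway(low, medium, high)
print(f"ANOVA F = {f_stat:.2f}, P = {p_anova:.4f}")

# The same significance threshold used throughout the analysis.
ALPHA = 0.05
print("significant" if p_chi < ALPHA else "not significant")
```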
RESULTS 3.1 General demographics A total of 2030 entries were collected during the seven‐week period that the survey was open for participation. Upon review of the raw data, we found multiple entries originating from the same participant as well as non‐attributable entries with no participant demographics. As part of the data clean‐up process, all non‐attributable and duplicate entries were removed, keeping the entry with the most completed survey response. The clean data set contained 1526 non‐duplicate entries (one entry per participant), representing a 33% response rate relative to the estimated 4500 full AAPM members working in radiation oncology. Participants from 37 countries contributed to the survey: 1310 (85.8%) from the United States, 60 (3.9%) from Canada, and 107 (7.1%) from 35 other countries. Forty‐nine participants (3.2%) did not provide a country of origin. While the clean data set included contributions from respondents in other countries, the analysis performed in this work is based only on responses from participants located in the United States and Canada (nTotal = 1370). This data sample is consistent with that used in the TG‐275 report. Forty‐seven participants reported both working at a center with a proton facility and having experience in proton treatment delivery. Only responses from these participants contributed to the additional proton section of the survey. However, it should be noted that due to the low participation rate (47), statistical analysis of the proton‐specific checks was excluded from this work. Figure summarizes the descriptive statistics for the general demographic questions, which included, for example, questions about the use and type of incident learning systems and participants' perceived level of safety culture at their institution. The distribution of participants based on institution type was: 31% from academic‐affiliated hospitals, 39% from community hospitals, 19% free‐standing clinics, 7% government hospitals, 2% consulting groups, 0.1% vendors, and 1.6% other. Participants who reported belonging to the vendors group were excluded from statistical analysis because they primarily do not provide clinical services. Additionally, participants who reported belonging to consulting and other groups were excluded from the analysis to avoid possible data ambiguities given that, in the majority of instances, they could support multiple centers with different practices.
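The clean-up rule described above (drop non-attributable entries, then keep each participant's most complete response) might look like the following pandas sketch; the column names and toy records are hypothetical.

```python
import pandas as pd

raw = pd.DataFrame({
    "participant_id": ["a1", "a1", None, "b2"],
    "q1": ["Yes", "Yes", "No", None],
    "q2": [None, "No", "Yes", "Yes"],
})

# Drop non-attributable entries (no participant demographics).
attributed = raw.dropna(subset=["participant_id"]).copy()

# Keep the most completed entry per participant: count answered questions,
# sort so the most complete entry comes first, then drop later duplicates.
answer_cols = [c for c in attributed.columns if c != "participant_id"]
attributed["n_answered"] = attributed[answer_cols].notna().sum(axis=1)
clean = (
    attributed.sort_values("n_answered", ascending=False)
    .drop_duplicates(subset="participant_id")
    .drop(columns="n_answered")
)
print(len(clean))  # one entry per participant -> 2
```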
A summary of the descriptive statistics for the initial plan process checks is shown in Figure , while the on‐treatment and end‐of‐treatment process chart checks are shown in Figure . To account for experimental mortality and increase accuracy, contributions to a given question were normalized using the total number of participants (n) that answered the question. For questions where participants were asked to select more than one choice, the total number of answers exceeds the number of participants and responses sum to more than 100%. 3.2 Differences across practices: process‐based questions Tests of association were performed for 36 of the 40 Process‐Based questions among the four demographic dimensions: Average Number of Patients Treated Daily, Institution Type, Perceived Culture of Safety, and RO‐EMR. Four questions related to individual workload (i.e., average and number of initial plans or on‐treatment checks performed per day) were excluded from this analysis. Figures summarize the differences within each demographic dimension that were found to be both statistically significant ( P < 0.05) and where the magnitude of the intragroup difference was greater than 10%. For each question, the highest percentage is bolded and the lowest is underlined. If one option for a given question was found to be significant and with a difference > 10%, all options for that question are shown for completeness, with no bolding or underlining for nonsignificant options. Figure shows process differences based on typical daily patient volume in the participant's clinic, where "Low" indicates a clinic treating ≤ 50 patients, "Medium" indicates a clinic treating between 51 and 100 patients, and "High" indicates a clinic treating > 100 patients daily. Figure shows process differences based on participant clinic type. "Non‐Academic" represents participants who reported belonging to a free‐standing clinic, community hospital, or a government hospital. The Perceived Culture of Safety demographic question asked participants to indicate whether deviations and errors could be communicated openly and without repercussions. Intragroup comparisons, shown in Figure , were made based on those who responded "Always," "Usually," or "≤ Sometimes," where "≤ Sometimes" grouped those who answered Sometimes, Rarely, or Never. Only a single process question had an intragroup difference > 10% that was also statistically significant ( P < 0.05) as a function of the RO‐EMR environment, shown in Figure . 3.3 Differences across practices: summary of check‐specific questions In this analysis, we assessed statistically significant differences in the performance of 218 items from the initial plan check (151 items), on‐treatment chart check (52 items), and end‐of‐treatment chart check (15 items) across practices based on the four selected demographic questions: Institution Type, Average Number of Patients Treated, RO‐EMR, and Perceived Culture of Safety. For example, 66.9% of participants from Academic clinics responded "Yes" to performing a check to confirm that a plan conforms to clinical trial guidelines compared to 51.0% of participants from Non‐Academic clinics ( P < 0.05). Figure shows the percentage of checks with statistically significant differences between groups relative to the total number of checks (n_chk) for each of the plan review and chart check survey sections. The Perceived Culture of Safety demographic had the largest number of statistically significant differences between groups, with P < 0.05 for 111 checks.
The RO‐EMR demographic question had 102 checks with significant differences. However, on further investigation, 27 of the checks with statistically significant differences for the RO‐EMR group were related to the verification of data transfer between third‐party systems (Figure ). The Average Number of Patients Treated Daily and Institution Type demographics had 97 and 71 checks with statistically significant differences, respectively. 3.4 Differences across practices: risk‐based assessment of initial plan check The risk‐based summary is intended to evaluate differences among the four demographic questions for those checks associated with the highest‐risk Failure Modes (FMs) identified by TG‐275. We selected the 10 FMs with the highest Risk Priority Number (RPN) and the corresponding checks from the survey (Figure ). A total of 40 checks were associated with the top 10 FMs. Eliminating duplicate checks that are applicable across multiple FMs resulted in a total of 28 unique checks. Twenty‐two out of the 28 unique checks met our inclusion criteria for the risk‐based summary with a statistically significant difference ( P < 0.05) and an intra‐group difference in the performance of the checks ≥ 5% for at least one of the four demographic questions. It is important to point out that the ≥ 5% threshold is different than the > 10% threshold used in the previous section. The lower threshold of ≥ 5% was chosen to improve the sensitivity of this high‐risk‐based analysis. The results of the risk‐based analysis are summarized in Figure . Each row represents one of the 22 checks associated with a top 10 FM that met our inclusion criteria. For each of the demographic questions, the percentage of survey participants performing the check is shown. Checks where the intra‐group differences in the performance were < 5% or the differences lacked statistical significance ( P > 0.05) are left blank. Based on the risk‐based summary, the Perceived Culture of Safety demographic had the largest number of high‐risk checks (19 of 22) that were both statistically significant ( P < 0.05) and had an absolute intra‐group difference ≥ 5%. The Institution Type, Average Number of Patients Treated Daily, and the RO‐EMR had 9, 7, and 4 checks, respectively, that met the inclusion criteria.
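The selection logic of this risk-based summary — rank FMs by RPN, take the top 10, de-duplicate the associated checks, then apply the P < 0.05 and ≥ 5% inclusion criteria — can be sketched as follows; the failure modes, check names, and statistics below are invented for illustration only.

```python
# Hypothetical inputs: failure modes with RPNs and their associated checks,
# plus per-check statistics from the tests of association.
failure_modes = [
    {"fm": "wrong-isocenter", "rpn": 312, "checks": ["isocenter", "shift-note"]},
    {"fm": "wrong-contour",   "rpn": 290, "checks": ["contours", "isocenter"]},
    # ... remaining failure modes ...
]
check_stats = {
    "isocenter":  {"p": 0.01, "max_group_diff_pct": 7.5},
    "shift-note": {"p": 0.30, "max_group_diff_pct": 2.0},
    "contours":   {"p": 0.04, "max_group_diff_pct": 12.0},
}

# Top-10 failure modes by Risk Priority Number.
top10 = sorted(failure_modes, key=lambda fm: fm["rpn"], reverse=True)[:10]

# Unique checks across the top-10 FMs (duplicates across FMs collapse).
unique_checks = {check for fm in top10 for check in fm["checks"]}

# Inclusion criteria: statistically significant AND intra-group difference >= 5%.
included = sorted(
    c for c in unique_checks
    if check_stats[c]["p"] < 0.05 and check_stats[c]["max_group_diff_pct"] >= 5
)
print(included)  # -> ['contours', 'isocenter']
```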
DISCUSSION The TG‐275 survey successfully captured practices on initial plan, on‐treatment, and end‐of‐treatment checks across a wide variety of clinics and institutions. The response rate estimate of 33% is within the normal response rate range of widespread population surveys. External validity was evaluated by comparing the TG‐275 survey distribution to that of AAPM members who have opted to self‐report their area of practice through the organization's website. The two distributions are nearly identical, with 31% and 32% of respondents working in an academic‐affiliated hospital based on the TG‐275 survey and the AAPM website, respectively, while 39% of participants reported working in a community practice in both data sources. Additionally, smaller clinics were well represented in the overall survey data, with 70.9% of respondents reporting four or fewer treatment machines in their institution and 40.1% of survey participants indicating that their clinic treats ≤ 50 patients per day. We selected four demographic questions on which to perform a more detailed statistical analysis. These demographic questions were selected due to their relevance to the community in general, and included: Institution Type (i.e., Academic and Non‐Academic), RO‐EMR system used in the clinic (i.e., ARIA and MOSAIQ), a proxy for practice workload (i.e., average number of patients treated daily), and perceived culture of safety. The analysis showed a varying number of statistically significant differences across both processes and initial plan, on‐treatment, and end‐of‐treatment physics check items. The demographic question with the largest number of checks with statistically significant inter‐group differences was the Perceived Culture of Safety with 111 out of 218 checks, followed by the RO‐EMR with 102, Average Number of Patients Treated Daily with 97, and Institution Type with 71 checks. Overall trends in the checks performed among the tested demographic groups were identified and are shown in Figure . However, this overview may not provide the full picture, as some of the observed differences are due to inherent process‐ or system‐based differences across the tested groups. For example, 27 out of the 102 checks with statistically significant differences in the RO‐EMR comparison were related to the verification of data transfer between the treatment planning system and a third‐party record‐and‐verify system. It is also recognized that not all checks have the same level of clinical significance. The cross‐correlation exercise between the survey results and the FMEA performed by TG‐275 provides an excellent framework to classify and prioritize checks based on their risk and impact on quality and safety. Using the results of the TG‐275 cross‐correlation exercise, we were able to highlight the checks from the initial plan check process associated with the top 10 RPN failure modes. For example, while the RO‐EMR comparison demonstrated an overall high number of checks (102 checks) with statistically significant inter‐group differences, 27 of those checks are intended to verify data transfer between the treatment planning system and a third‐party RO‐EMR. Based on the TG‐275 cross‐correlation exercise, these checks tend to have a low RPN due to the high detectability of data transfer errors.
Furthermore, the risk‐based summary demonstrates that the RO‐EMR group had only four checks associated with the top 10 FMs that met our inclusion criteria ( P < 0.05 and an intra‐group difference ≥ 5%); all four checks were related to contour verification. Results from the survey seem to indicate that participants with an ARIA RO‐EMR more consistently check contouring‐related items when compared to survey participants in MOSAIQ RO‐EMR environments. Based on the results of the TG‐275 survey, participants utilizing MOSAIQ reported working across third‐party systems at higher rates than participants utilizing ARIA. As a result, an important portion of the initial plan check time in the non‐integrated environment may be used to verify data transfer, potentially leaving insufficient time to perform contouring verification checks. In contrast, survey participants from integrated environments, where the RO‐EMR and the treatment planning system share the same database (e.g., ARIA‐Eclipse), may be able to dedicate more of their time to the verification of contours. Efficient access between the RO‐EMR and the treatment planning system in an integrated environment may also be a contributing factor to this observed difference. The risk‐based summary showed that the Perceived Culture of Safety demographic group had both the largest number of statistically significant differences overall and the largest number of checks associated with the top 10 FMs (19 out of 22 checks in Figure ). In every instance, those who answered "Always" to whether deviations and errors could be communicated openly and without repercussions performed the checks at statistically significantly higher rates than those who answered "Usually" or "≤ Sometimes." This result seems to indicate that the ability (or inability) to openly discuss errors may be an indication of the larger, overall safety culture of a clinic. Institution Type and Average Number of Patients Treated Daily had a similar number of statistically significant intra‐group differences associated with the top 10 FMs, with 9 and 7 checks out of 22, respectively. Unlike the RO‐EMR and the Perceived Culture of Safety, no common trends were identified for these two groups. The statistical analysis and corresponding results across the four demographic questions (i.e., Institution Type, Average Number of Patients Treated Daily, RO‐EMR, and Perceived Culture of Safety) provide a benchmark for clinics to evaluate their own practice against other clinics as a function of demographic categories. One potential limitation of this work is that each participant's response was treated individually, and no attempt was made to aggregate data as a function of the participant's institutional affiliation. This approach could potentially lead to over‐sampling bias from larger institutions. However, this data analysis approach was important to maintain participant confidentiality. Additionally, analysis of independent responses also helped ensure that individual variations in the plan review and chart check process were considered. The fact that 22.8% of participants responded that plan checks were conducted using a combination of personal and institutional checklists suggests that there can be variations between individual plan review processes within a single institution.
Another limitation of the study is that a univariate analysis was utilized to provide descriptive statistics and identify trends as a function of each of the four demographic questions, without evaluation of the interdependence between the variables. The TG‐275 survey results highlight the differences in plan and chart checking practice that exist across our profession. Some of the observed variations may be due to inherent characteristics, such as differences in the technology utilized in our clinics. For example, it is unclear whether the observed differences in checks of motion management instructions are due to the check not being performed or due to a lack of motion management technology. The performance of initial plan check items appears to be only moderately dependent on institutional features, such as the clinic type or daily patient volume. However, the question of whether errors and deviations can be openly discussed without fear of repercussions may be indicative of a larger safety culture issue, as 86% of high‐risk items showed significant intra‐group differences in the performance of these checks. The TG‐275 survey provided an overall picture of physics plan review and chart check practices, while the results of this statistical analysis provide a more detailed picture by which an individual clinic can evaluate its own processes. CONCLUSION The TG‐275 survey captured a baseline of plan review and chart check practices from a large and diverse population sample across the AAPM membership. In addition to capturing data on specific checks and practices, the demographic questions allowed for further understanding of the context of those practices. In this review of United States and Canadian practices, four demographic questions were chosen for an in‐depth statistical analysis. Results of the tests of association showed that there is heterogeneity in the performance of these tasks as a function of demographics. The Perceived Culture of Safety had the largest number of checks with statistically significant differences, including among high‐risk checks. The underlying causes of the observed differences cannot be fully explained by the survey data. However, they do provide a rationale for future research to investigate practice variations, which may pose barriers to implementing TG‐275 recommendations, particularly those recommendations associated with high‐risk failure modes. The TG‐275 survey also contains other demographic questions, which can form the basis of future analysis. The authors confirm contribution to the paper as follows: Study conception and design : Deborah L. Schofield, Leigh Conroy, Jennifer L. Johnson, Michelle C. Wells, Lei Dong, Luis E. Fong de los Santos; Data acquisition : Deborah L. Schofield, Jennifer L. Johnson, Michelle C. Wells, Lei Dong, Luis E. Fong de los Santos; Analysis and interpretation of results : Deborah L. Schofield, Leigh Conroy, William S. Harmsen, Luis E. Fong de los Santos; Draft manuscript preparation : Deborah L. Schofield, Leigh Conroy, William S. Harmsen, Luis E. Fong de los Santos; Final approval of the version to be published : Deborah L. Schofield, Leigh Conroy, William S. Harmsen, Jennifer L. Johnson, Michelle C. Wells, Lei Dong, Luis E. Fong de los Santos. All authors reviewed the results and approved the final version of the manuscript. The authors have no conflicts of interest to declare arising from the publication of this manuscript. Supporting information is available as an additional data file.
Tumor Area Positivity (TAP) score of programmed death-ligand 1 (PD-L1): a novel visual estimation method for combined tumor cell and immune cell scoring
666777f8-de3e-4135-871f-7cd09daaa6e1
10114344
Anatomy[mh]
The discovery of immune checkpoints has led to a paradigm shift toward immunotherapy treatment in cancer. One such checkpoint is the programmed cell death protein 1 (PD-1)/programmed death-ligand 1 (PD-L1) axis, which is responsible for inhibiting an immune response of immune cells (IC) to foreign antigens. Tumor cells (TC) can also express PD-L1, leading to activation of the PD-1/PD-L1 pathway, which subsequently allows TC to evade the immune response and results in tumor growth. Increased PD-L1 expression in tissue from patients with cancer is positively correlated with clinical response to immunotherapy; this highlights the need for scoring methods to accurately quantify PD-L1 protein expression. Optimal scoring methods should be accurate, precise, and help simplify workflow for practicing pathologists. Currently, United States Food and Drug Administration (FDA)-approved PD-L1 immunohistochemistry (IHC) assays/algorithms include scoring methods that consider TC positivity and/or IC positivity (Table ). Combined Positive Score (CPS) is the only FDA-approved method that combines TC and IC; however, it is an approach based on cell counting, which is time consuming and not intuitive to practicing pathologists. In this study, we introduce the Tumor Area Positivity (TAP) score, a simple, visual-based method for scoring TC and IC together, which addresses the limitations of a cell-counting approach with comparable efficacy and reproducibility. Institutional review board approval was obtained by the Roche Tissue Diagnostics Clinical Operation Department. The two reader precision studies used commercial samples. For the samples used in the comparison study, which were collected as part of a BeiGene study, consent was obtained in compliance with requirements. Each pathologist received training on the TAP scoring algorithm:

$$\mathrm{TAP}\ (\%)=\frac{\text{Area of PD-L1 positive TC and IC}}{\text{Tumor area}}\times 100$$

Pathologists were then required to pass a series of tests before participation in the studies (see the Pathologist training section). Samples from gastric adenocarcinoma, gastroesophageal junction (GEJ) adenocarcinoma and esophageal squamous cell carcinoma (ESCC) (including both resections and biopsies) were stained using the VENTANA PD-L1 (SP263) assay (Ventana Medical Systems, Inc., Tucson, AZ, USA). Between- and within-reader precision studies were performed for the TAP score among three internal (Roche Tissue Diagnostics) pathologists (internal study) and six pathologists from three external organizations (external study). After successful completion of the reader precision studies, TAP score was compared to CPS retrospectively for concordance and time efficacy. TAP scoring method description and approach Identification of tumor area To determine the TAP score, a hematoxylin and eosin-stained slide is first examined to identify the tumor area (the area occupied by all viable TC and the tumor-associated stroma containing tumor-associated IC) (Fig. ). If tumor nests are separated by non-neoplastic tissue, they are included as part of the tumor area as long as the tumor nests are bordered on both sides of a 10x field; the intervening non-neoplastic tissue is also included in the tumor area (abbreviated as the 10x field rule in the text below; Fig. ). Necrosis, crush, and cautery artifacts are excluded from the tumor area. For gastric and GEJ adenocarcinoma, the following must be considered: Pools of mucin and glandular luminal spaces in the presence or absence of viable TC are included as part of the tumor area.
Tumor nests within the lymphovascular spaces are included in the tumor area. Tumor area determination in lymph nodes For lymph nodes with multiple nests of tumor metastasis, apply the 10x field rule. In lymph nodes with focal or discrete tumor metastases, the tumor area includes the tumor nests and the areas occupied by the IC immediately adjacent to the leading edge of the metastatic tumor nests. Determination of tumor-associated IC Tumor-associated IC are intra- and peri-tumoral, including those present within the tumor proper, between tumor nests, and within any tumor-associated reactive stroma. In lymph nodes with focal or discrete tumor metastases, only IC immediately adjacent to the leading edge of the metastatic tumor nest are defined as tumor-associated IC. Determination of TAP score The TAP score is determined on the IHC slide by visually aggregating/estimating the area covered by PD-L1 positive TC and tumor-associated IC relative to the total tumor area. Both circumferential and partial/lateral membrane staining of TC at any intensity is regarded as positive PD-L1 staining, while cytoplasmic staining of TC is disregarded; membranous, cytoplasmic, and punctate staining of tumor-associated IC at any intensity is regarded as PD-L1 positive staining (Fig. ). For gastric and GEJ adenocarcinoma, staining of IC in the germinal centers of lymphoid aggregates is included in the TAP score if they are located within the tumor area. Intra-luminal macrophage staining is not included in the TAP score unless the macrophages completely fill the luminal space and are in direct contact with the TC. Staining of multi-nucleated giant cells, granulomas, and IC located within blood vessels and lymphatics is not included in the TAP score. Off-target staining (e.g., fibroblasts, endothelial cells, neuroendocrine cells, smooth muscle, and nerves) should not be confused with specific PD-L1 staining, and is not included in the TAP score. Pathologist training The training included review of an interpretation guide via a Microsoft PowerPoint (Microsoft Corporation, Redmond, WA, USA) presentation, and review of a set of training glass slides using multi-headed microscopes in conjunction with the training pathologist. During the training session, PD-L1 biology, staining characteristics of TC and IC (Fig. ), and acceptability of system-level controls were reviewed, among other topics. For gastric/GEJ adenocarcinoma, the test and training sets were designed to train the pathologists to accurately score PD-L1 expression status around the 5% cutoff (Fig. ). The tests included a self-study set of 10 cases with consensus scores, a mini-test of 10 cases, and a final test of 60 cases. To pass the final test, the trainee pathologist had to achieve 85% agreement with reference scores on either an initial or a repeat test. The training on ESCC scoring was conducted using different training and test sets. Internal reader precision study Three internal pathologists were trained and qualified for this study.
This study evaluated: i) between-reader precision: across qualified readers individually evaluating the same set of randomized gastric or GEJ adenocarcinoma samples ( N = 100, with an equal distribution of PD-L1 expression level for positive [ n = 50] and negative [ n = 50] samples, spanning the range of the TAP score); and ii) within-reader precision: within individual readers evaluating the same set of gastric or GEJ adenocarcinoma samples over two assessments, separated by a wash-out period of at least 2 weeks, and re-randomized and blinded prior to the second read. Between- and within-reader precision were assessed by evaluating the concordance of PD-L1 expression level of samples among the three readers from their first round of reads and within individual readers from their first and second rounds of reads, respectively. In the between-reader precision analysis, there were three pair-wise comparisons for each sample (reader 1 vs. reader 2, reader 1 vs. reader 3, and reader 2 vs. reader 3). With N = 100 samples, there were a total of 300 pair-wise comparisons. In the within-reader precision analysis, with N = 100 samples, there were 100 comparisons between the two reading rounds for each reader. All samples were commercially obtained formalin-fixed paraffin-embedded specimens. A cutoff of 5%, using the TAP score, was used to determine whether the PD-L1 expression in the sample was considered positive or negative. The sample set included 90% resection samples and 10% biopsy samples, and 10% of the samples showed a borderline range of PD-L1 expression. A sample was considered negative borderline if the TAP score was 2–4%, and positive borderline if the TAP score was 5–9%. The average positive agreement (APA), average negative agreement (ANA), and overall percent agreement (OPA) between and within readers were then calculated, along with 95% confidence intervals (CIs). The acceptance criteria for between-reader precision were ≥ 85% ANA and APA. The acceptance criteria for within-reader precision were ≥ 90% OPA, and ≥ 85% ANA and APA. The assay was required to produce acceptable levels of non-specific staining on BenchMark ULTRA instruments (Ventana Medical Systems Inc.) in at least 90% of samples. External reader precision study Three external organizations participated in an inter-laboratory reproducibility study using a cutoff of 5% TAP. At each site, two trained and qualified pathologists were selected to score the slides originating from the same sets of blocks. Specifically, 28 commercially obtained gastric or GEJ adenocarcinoma formalin-fixed paraffin-embedded specimens spanning the range of the TAP score were used in the external study. There was an equal distribution of PD-L1 expression level for positive ( n = 14) and negative ( n = 14) samples using the TAP score at the 5% cutoff. Ten percent biopsy samples and 10% borderline cases were included in the sample set. The 28 cases were stained on five non-consecutive days over a period of at least 20 days at three sites, generating a total of five sets of slides for evaluation by the two pathologists at each site. The APA, ANA, and OPA were calculated across the three sites. Comparison of TAP and CPS Gastric or GEJ adenocarcinoma and ESCC samples ( n = 52) from a BGB-A317 trial carried out by BeiGene (Beijing, China) were used to compare the TAP and CPS scoring algorithms for evaluation of PD-L1 expression in a retrospective manner. Of the 52 samples, n = 10 were resection samples and n = 42 were biopsies.
All samples were stained with the VENTANA PD-L1 (SP263) assay. The samples were distributed among eight internal pathologists and were scored using both methods. All eight pathologists were trained and qualified to evaluate PD-L1 expression using both the TAP and CPS scoring algorithms. The concordance of the TAP score at a 1% and 5% cutoff was assessed against a CPS score of 1 (equivalent to 1%), the FDA-approved cutoff for gastric or GEJ adenocarcinoma. The time spent on scoring for each method was also assessed.
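For reference, the agreement statistics reported in these studies can be computed from a 2x2 cross-tabulation of two reads using the standard APA/ANA/OPA definitions. The Python sketch below uses invented counts, and the confidence intervals reported in the studies are omitted for brevity.

```python
def agreement(a: int, b: int, c: int, d: int) -> dict[str, float]:
    """Pairwise agreement from a 2x2 table of two reads:
    a = both positive, d = both negative, b and c = discordant pairs.
    Standard definitions: OPA = (a+d)/N, APA = 2a/(2a+b+c), ANA = 2d/(2d+b+c)."""
    n = a + b + c + d
    return {
        "OPA": (a + d) / n,
        "APA": 2 * a / (2 * a + b + c),
        "ANA": 2 * d / (2 * d + b + c),
    }

# Invented counts for one reader pair at the 5% TAP cutoff (100 samples).
print(agreement(a=45, b=3, c=2, d=50))
# -> {'OPA': 0.95, 'APA': 0.947..., 'ANA': 0.952...}
```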
Pathologist training

The training included review of an interpretation guide via Microsoft PowerPoint (Microsoft Corporation, Redmond, WA, USA) presentation, and review of a set of training glass slides using multi-headed microscopes in conjunction with the training pathologist. During the training session, PD-L1 biology, staining characteristics of TC and IC (Fig. ), and acceptability of system-level controls were reviewed, among other topics. For gastric/GEJ adenocarcinoma, the test and training sets were designed to train the pathologists to accurately score PD-L1 expression status around the 5% cutoff (Fig. ). The tests included a self-study set of 10 cases with consensus scores, a mini-test of 10 cases, and a final test of 60 cases.
To pass the final test, the trainee pathologist had to achieve 85% agreement with the reference scores on either an initial or a repeat test. The training on ESCC scoring was conducted using different training and test sets. Three internal pathologists were trained and qualified for this study.
Internal reader precision study

As shown in Table , for between-reader analyses (including borderline cases), the pre-defined acceptance criteria were met for APA (296/298 [99.3%]; 95% CI, 98.0–100.0), ANA (300/302 [99.3%]; 95% CI, 98.0–100.0), and OPA (298/300 [99.3%]; 95% CI, 98.0–100.0). For within-reader analyses (including borderline cases), the pre-defined acceptance criteria were met for APA (296/299 [99.0%]; 95% CI, 98.0–100.0), ANA (298/301 [99.0%]; 95% CI, 98.0–100.0), and OPA (297/300 [99.0%]; 95% CI, 98.0–100.0). The background acceptability rate (600/600 [100.0%]; 95% CI, 99.4–100.0) also met the pre-defined acceptance criterion.

External reader precision study

Table shows that site A achieved the lowest agreement rates for APA (88/109 [80.7%], 95% CI, 63.6–93.5), ANA (144/165 [87.3%], 95% CI, 78.0–95.7), and OPA (116/137 [84.7%], 95% CI, 73.2–94.9), while sites B and C produced identical results for APA (140/140 [100.0%], 95% CI, 97.3–100.0), ANA (140/140 [100.0%], 95% CI, 97.3–100.0), and OPA (140/140 [100.0%], 95% CI, 97.3–100.0). Overall, high agreement levels were demonstrated across the three sites (APA, 368/389 [94.6%], 95% CI, 90.8–98.0; ANA, 424/445 [95.3%], 95% CI, 91.5–98.5; OPA, 396/417 [95.0%], 95% CI, 91.2–98.3).

Correlation of TAP and CPS

The percentage agreement between TAP (1% cutoff) and CPS (cutoff of 1) was 39/39 samples (100%; 95% CI, 91.0–100.0) for positive percent agreement (PPA), 11/13 samples (84.6%; 95% CI, 57.8–95.7) for negative percent agreement (NPA), and 50/52 samples (96.2%; 95% CI, 87.0–98.9) for OPA (Table ). For TAP (5% cutoff) vs CPS (cutoff of 1), the percentage agreement was 35/39 samples (89.7%; 95% CI, 76.4–95.9) for PPA, 13/13 samples (100%; 95% CI, 77.2–100.0) for NPA, and 48/52 samples (92.3%; 95% CI, 81.8–97.0) for OPA (Table ). The average time spent on scoring was 5 min for the TAP score and 30 min for the CPS scoring algorithm.
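Unlike the between-reader APA/ANA above, the TAP-versus-CPS comparison treats CPS as the reference, so agreement is summarized with reference-based PPA and NPA. A minimal sketch with hypothetical calls, not trial data:

```python
def reference_agreement(test_calls, ref_calls):
    """PPA/NPA/OPA of binary test calls (e.g., TAP >= 5%) against a
    reference method (e.g., CPS >= 1)."""
    tp = sum(t == 1 and r == 1 for t, r in zip(test_calls, ref_calls))
    tn = sum(t == 0 and r == 0 for t, r in zip(test_calls, ref_calls))
    fn = sum(t == 0 and r == 1 for t, r in zip(test_calls, ref_calls))
    fp = sum(t == 1 and r == 0 for t, r in zip(test_calls, ref_calls))
    ppa = tp / (tp + fn)              # agreement among reference-positives
    npa = tn / (tn + fp)              # agreement among reference-negatives
    opa = (tp + tn) / len(ref_calls)  # overall percent agreement
    return ppa, npa, opa
```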
Understanding of immune checkpoint inhibitors has revolutionized treatment options for cancer patients, and PD-L1 has thus far been the focus of that paradigm shift. However, different scoring systems were introduced in rapid succession, which may have burdened practicing pathologists, who were left consistently playing catch-up. This study aimed to provide a simple, visual, estimate-based scoring method that combines TC and IC to identify the intended patient population of interest. On-market FDA-approved PD-L1 scoring algorithms can be classified into TC- or IC-only scores, sequential TC and IC scores, or combined TC/IC scores (Table ). In general, TC-only scoring methods have been favorably adopted by the pathology community , whereas IC scoring or sequential TC/IC scoring has been perceived as challenging. CPS is the only FDA-approved method that combines TC and IC. It is a cell counting-based approach in which the number of PD-L1-stained cells (TC, lymphocytes, and macrophages) is divided by the total number of viable TC, multiplied by 100 . Cell counting can be time-consuming and is out of step with pathology practice, which classically uses a gestalt approach based on visual pattern recognition and estimation. Our study found that the average time spent on scoring was 5 min for the TAP score and 30 min for the CPS scoring algorithm, with one large resection case taking up to 1 h using CPS. Accordingly, pathologists must develop strategies to cope with CPS scoring during busy practice periods due to the time-consuming nature of the cell counting process. From communicating with practicing pathologists in the field, these strategies include piecemeal scoring approaches for large tumor resection specimens with heterogeneous staining patterns, eyeballing when applying 20x rules that provide estimated tumor cell numbers, and using a standard cellularity table for TC numbers. An added complexity of CPS scoring is assessing the type of IC to be included in the count, which requires the pathologist to select only mononuclear IC . The TAP scoring method is inclusive of all types of IC; pathologists therefore need not exhaust themselves under high magnification confirming a cell type.
Increasingly, research has shown that granulocytes are part of the adaptive tumor immune response ; we have also observed weak to moderate PD-L1 expression in neutrophils around TC (Supplementary Fig. ). This evidence led to the inclusion of granulocytes in the development of the TAP method. Put simply, the TAP method is essentially "the percentage of relevant brown (positive cells) over blue (the entire tumor area on the IHC slide)". In this study, we compared the percentage agreement between TAP (1% and 5% cutoffs) and CPS (cutoff of 1) in gastric/GEJ adenocarcinoma and ESCC samples using the VENTANA PD-L1 (SP263) assay, to investigate whether the two scoring methods were interchangeable, and if so, at what cutoff. The PPA, NPA, and OPA of the two comparisons were equal to or greater than 85%, with the TAP score at the 1% cutoff having better concordance with CPS 1 than the TAP score at 5%. This suggests that the two algorithms, when used at different cutoffs, could potentially identify the same population of patients. In theory, samples in which the tumor stroma does not comprise a large portion of the tumor area, such as mucosal biopsy specimens, have even greater potential for high concordance between the two scoring methods (TAP and CPS). In fact, a study evaluated associations and potential correlations with clinical efficacy of the PD-L1 SP263 assay scored with the TAP algorithm (referred to as TIC [Tumor and Immune Cell]) at a 5% cutoff and the PD-L1 22C3 assay scored with the CPS algorithm at a 1% cutoff in gastroesophageal adenocarcinoma. Both the SP263 assay (TAP scoring) and the 22C3 assay (CPS scoring) aided in the identification of patients with gastroesophageal adenocarcinoma likely to benefit from tislelizumab . A potential limitation of TAP scoring is in defining the tumor area when specimens have complicated histology, with various non-neoplastic cells present between tumor cells. However, this becomes less problematic as a pathologist reviews more cases and gains experience. The introduction of another PD-L1 scoring method (TAP) to an already crowded field could be perceived as a limitation. However, as we have demonstrated, this method can help reduce confusion by providing a viable path toward simplifying and standardizing pathology practice without compromising the accuracy of patient selection. The data in this study show that the TAP scoring method is as effective as the CPS method in detecting patients with positive PD-L1 expression, but substantially less time-consuming. In addition to being highly reproducible among different pathologists, it can potentially standardize the existing scoring methods that evaluate both TC and IC. Additional file 1: Supplementary Fig. 1. Neutrophils with weak cytoplasmic staining.
Global prevalence of
67bd1627-deb3-4449-89ef-a2b978d72e46
10114346
Microbiology[mh]
In the mid-1970s, the gram-positive, anaerobic bacterium Clostridioides difficile (formerly known as Clostridium difficile ) was identified as a common cause of nosocomial infection and a major cause of antibiotic-associated diarrhea [ – ]. Through its resistant spores and its ability to produce toxins, C. difficile is responsible for a diverse group of infections, from mild and self-limiting gastrointestinal infections to severe, life-threatening disease such as toxic megacolon . C. difficile infection (CDI) is associated with significant mortality and increased healthcare costs worldwide [ – ]. C. difficile is primarily a nosocomial pathogen, but the prevalence of community-acquired CDI appears to be increasing . The prevalence of C. difficile contamination in food is high, and a wide range of foods are contaminated by C. difficile . Consumption of C. difficile -contaminated food is therefore a risk factor for transmission of this infection in the community, and food contaminated with C. difficile spores may be one of the most important routes of transmission . The presence of C. difficile in sewage-treatment plants might be a major contributor to its community acquisition, transmission to food, and ultimately food contamination . This issue demands more attention to this health-threatening pathogen. The main aims of this systematic review and meta-analysis were (i) to investigate the prevalence of C. difficile in different types of food and compare them with each other, (ii) to determine the frequency of toxin genes, (iii) to assess the relationship of toxin genes with the prevalence of C. difficile , and (iv) to evaluate the phenotypic and genotypic diagnostic methods, across 17,148 food samples.

Literature search

Published studies from January 2009 to December 2019 were retrieved from four main databases (Web of Science, Scopus, PubMed, and Google Scholar) by applying the following keywords: "clostridia", " Clostridium spp.", " Clostridium difficile ", " Clostridioides difficile ", " C. difficile ", "antibiotic resistance", "food contamination", "toxinotype", "ribotype", and "toxin genes", alone or combined with "AND" and/or "OR" operators. The study was conducted in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guideline .

Inclusion/exclusion criteria

All cross-sectional studies focusing on the prevalence of C. difficile contamination in food samples were included. Short communications, cohort studies, clinical trials, letters to the editor, narrative or systematic reviews, and non-English articles were excluded.

Selection of studies and data gathering

The full text of all included studies was read by two independent authors, and any discrepancy was discussed with the other authors until resolved. The following characteristics of each study were collected: first author, year of publication, sampling year, study location, detection methods, sample type, sample size, number of detected C. difficile , toxinotypes, ribotypes, toxin genes, antibiotics used, number of resistant isolates, and the antibiotic susceptibility assay method.

Data analysis

Data analyses were performed using Comprehensive Meta-Analysis software, V2.2.064. The C. difficile prevalence in different food samples, the prevalence of toxinotypes and toxin genes, and the antibiotic resistance rate in the C. difficile isolates are reported as event rates with 95% confidence intervals (CIs).
The random-effects model was chosen for the meta-analyses, and several subgroup analyses were conducted to evaluate sources of heterogeneity based on continent, country, sample type, and sampling period. Using a random-effects model, risk ratios for each sample type were calculated to quantify differences and rank the sample types by risk. The Q test and I2 statistic were applied to measure possible heterogeneity between studies. Publication bias was evaluated with Egger's weighted regression test. In all analyses, the significance threshold was p < 0.05.

Search results

In total, 2202 studies were recovered after searching the databases using the aforementioned keywords. Among them, 1026 were non-duplicate articles and were considered in the study. After title/abstract screening, 116 studies remained, of which 79 were assessed for eligibility by full-text reading. Sixty studies remained for the final qualitative analysis and meta-analysis. The diagram of our search strategy is given in Fig. , and the extracted characteristics of the studies are shown in Table .
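As a concrete illustration of the pooling described under Data analysis, the sketch below pools study-level event rates under a DerSimonian-Laird random-effects model on the logit scale and reports Cochran's Q and the I2 statistic. It is a minimal re-implementation of the general approach, with made-up counts; the exact computations in the Comprehensive Meta-Analysis software may differ.

```python
import numpy as np

def pool_prevalence(events, totals):
    """DerSimonian-Laird random-effects pooling of logit event rates.
    Returns pooled prevalence, 95% CI, Cochran's Q, and I^2 (%)."""
    events = np.asarray(events, dtype=float)
    totals = np.asarray(totals, dtype=float)
    # 0.5 continuity correction guards against 0% or 100% studies
    p = (events + 0.5) / (totals + 1.0)
    y = np.log(p / (1.0 - p))                                 # logit rates
    v = 1.0 / (events + 0.5) + 1.0 / (totals - events + 0.5)  # logit variances
    w = 1.0 / v                                               # fixed-effect weights
    y_fixed = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - y_fixed) ** 2)                        # Cochran's Q
    df = len(y) - 1
    I2 = 100.0 * max(0.0, (Q - df) / Q) if Q > 0 else 0.0     # I^2 statistic
    tau2 = max(0.0, (Q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_re = 1.0 / (v + tau2)                                   # random-effects weights
    y_re = np.sum(w_re * y) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    back = lambda x: 1.0 / (1.0 + np.exp(-x))                 # inverse logit
    return back(y_re), (back(y_re - 1.96 * se), back(y_re + 1.96 * se)), Q, I2

# Made-up counts for three studies: contaminated samples / samples tested
print(pool_prevalence([12, 3, 40], [150, 90, 400]))
```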
The pooled prevalence of C. difficile in food samples

To analyze the pooled prevalence of C. difficile in food samples, 60 studies were used in a random-effects model. The event rate, i.e., the number of C. difficile cases over the number of samples, was applied as the effect size index. The overall pooled prevalence of C. difficile in food samples was estimated to be 6.3% (95% CI: 4.8–8.2) (Fig. ). The lowest and highest C. difficile prevalence were observed in the reports of Shaughnessy et al. and Romano et al., at 0.1% and 66.7%, respectively (Fig. ). The Q-value was 1049.1, much higher than the number of studies minus 1 (60 – 1 = 59), rejecting the null hypothesis and showing significant heterogeneity between studies. The I2 statistic indicated that 94.4% of the variance reflects true variance between studies.

Subgroup analysis of C. difficile prevalence based on continent, sampling year, and sample type

Subgroup analysis of C. difficile prevalence in food samples by continent divided the 60 studies into the following subgroups: Africa (two studies), Asia (20 studies), Central/North America (20 studies), Europe (17 studies), and South America (one study). Differences in the prevalence of C. difficile isolated from food samples across continents were not significant (Table ). For subgroup analysis by sampling year, three time frames were used: TF1 (2004 to the end of 2008), TF2 (2009 to the end of 2013), and TF3 (2014 onward). Considering these time frames, 44 studies were used for a random-effects subgroup analysis; no statistically significant difference was observed between the time frame subgroups (Table ). For subgroup analysis by sample type, the following subgroups were used: raw meat (R-meat), cooked meat/hamburger (C-meat/Ham), raw poultry meat (R-poultry), cooked poultry (C-poultry), raw seafood/fish (Seafood), cooked seafood/fish (C-seafood), vegetables (Veg.), ready-to-eat meat (RTE meat), milk/dairy, salad, soy, side dishes (S-dishes), and pet food. The prevalence of C. difficile in each sample type is presented in Table . The highest and lowest prevalence, 10.3% and 0.8%, were seen in the Seafood and S-dishes sample types, respectively (Table ). Although there were some differences in C. difficile prevalence between sample types, no significant heterogeneity was observed between groups (Q-value: 10.657, p value: 0.557) (Table ). For better presentation of the results, the studies were also divided into more general groups based on sample type: meat, poultry, seafood, vegetables, salad, milk/dairy, and others (S-dishes, soy, pet food) (Fig. ). Further subgroup analyses were performed to present each sample type in each country; the summary results are shown in Fig. . Risk ratios were also obtained using the extracted data. Based on the risk ratio ranking, S-dishes served as the reference and was the lowest-risk source of C. difficile, while seafood, RTE meat, C-poultry, salad, R-poultry, and R-meat carried the highest risks. Compared with S-dishes, the probability of contamination of seafood with C. difficile was 12.88 times higher, and the risks of contamination of RTE meat, C-poultry, salad, R-poultry, and R-meat were 9.75, 7.75, 7.63, 7.63, and 7.0 times higher than S-dishes, respectively (Fig. ).
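For reference, a risk ratio of one sample type against another can be computed directly from the contamination counts, with a log-normal 95% CI. The counts below are placeholders, not the study's data:

```python
import math

def risk_ratio(pos1, n1, pos0, n0):
    """Risk ratio of sample type 1 vs a reference type 0 (e.g., seafood
    vs side dishes), with a log-normal 95% confidence interval."""
    rr = (pos1 / n1) / (pos0 / n0)
    se_log = math.sqrt(1/pos1 - 1/n1 + 1/pos0 - 1/n0)  # SE of log(RR)
    lo = math.exp(math.log(rr) - 1.96 * se_log)
    hi = math.exp(math.log(rr) + 1.96 * se_log)
    return rr, (lo, hi)

# Placeholder counts: 31/300 contaminated seafood vs 2/250 side dishes
print(risk_ratio(31, 300, 2, 250))
```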
Prevalence of C. difficile ribotypes, toxinotypes, and toxin genes

Because the reported ribotypes were highly diverse, it was not possible to analyze their pooled prevalence; this parameter is presented in Additional file : Table S1 without further analysis. The most frequent toxinotypes of C. difficile were toxinotypes 0, III, and V. As shown in Table , toxinotype V was more prevalent than the other two toxinotypes, and there was significant heterogeneity between the toxinotypes (Q-value: 9.725, p value: 0.008) (Table ). The toxin genes reported in more than one study included A, B, CDT, tcdC, tcdC18, tcdC39, tcdC117, and cdtA. The toxin genes A and B were the most frequent, and tcdC18 and tcdC117 were the least frequently studied genes (Table ). There was also significant heterogeneity between the studied genes (Q-value: 58.9, p value: 0.000) (Table ). As shown in Table , toxinotype 0, in which pathogenic strains are located, showed a higher prevalence in seafood samples, whereas the prevalence of toxinotypes III and V was higher in RTE meat and R-poultry. As shown in Table , the highest prevalence of toxin genes A, B, and CDT was observed in RTE meat samples. Compared with other samples, milk/dairy and salad ranked after RTE meat in terms of the prevalence of toxin genes A and B.

Publication bias

Publication bias was checked based on the pooled prevalence of C. difficile isolates in food samples. Egger's linear regression test showed significant publication bias in the included studies (p value < 0.0001).
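Egger's test, used above, regresses each study's standardized effect on its precision; an intercept that differs from zero suggests funnel-plot asymmetry consistent with publication bias. A minimal sketch with statsmodels (illustrative inputs; the implementation in CMA may differ):

```python
import numpy as np
import statsmodels.api as sm

def egger_test(effects, variances):
    """Egger's weighted regression for funnel-plot asymmetry.
    Returns the regression intercept and its two-sided p-value."""
    se = np.sqrt(np.asarray(variances, dtype=float))
    y = np.asarray(effects, dtype=float) / se   # standardized effects
    X = sm.add_constant(1.0 / se)               # precision, plus intercept
    fit = sm.OLS(y, X).fit()
    return fit.params[0], fit.pvalues[0]        # intercept and its p-value
```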
Consumption of raw and cooked foods contaminated with C. difficile spores may be an important route of transmission [ – ]. Food contamination has played an important role in the epidemiology of some infectious diseases, but little information is available about the global frequency of C. difficile in food products . The present study analyzed the distribution of C. difficile in 60 studies published from 2009 to 2019, covering 17,148 food samples. The results showed that the overall prevalence of C. difficile across all food samples was 6.3%, with the lowest and highest reported prevalence being 0.1% and 66.7%, respectively. In a systematic review, Rodriguez-Palacios and colleagues reported a 4.1% prevalence of C. difficile in human diet samples from 1981 to 2018 . Compared with the results presented here, the reported prevalence in the two studies is quite similar. Taken together, the overall C. difficile prevalence in food samples worldwide appears to be less than 10%, but it is still considerable and should not be underestimated. Significant heterogeneity was observed between the studies, indicating different prevalence of C. difficile in different parts of the world. However, in addition to real differences in C. difficile prevalence, the observed heterogeneity may be due to differences in sampling season, temperature and geographical conditions, study quality, the sensitivity of detection methods, etc. . Although the frequency of C. difficile varied among food samples from different continents, these differences were not statistically significant. The prevalence of C. difficile in Asia and Europe was almost the same, but it was lower in Africa and North/Central America compared with similar reports . This difference could be attributed to the high consumption of seafood in the diets of Asia and Europe, and to the large number of seafood samples studied. The lowest prevalence of C. difficile was observed in South America. Most of the studies were on meat and meat products, and contamination of undercooked and prepared foods was evident . The prevalence of C. difficile in meat products in this study was the same as in a report by Usui in 2020 , but lower than in a study reported from Canada by Warriner in 2017 . It must be noted that the prevalence of C. difficile isolated from food samples was highly variable, ranging from 1.6% in the Netherlands to 42% in the USA . The prevalence of C. difficile in chicken and poultry meat was 6.2%, similar to a previous study (6.7%) . However, the isolation rate of C. difficile ranged from 0% in chicken meat samples [ – ] to 44.4% in turkey meat samples . It seems that chicken with skin is more vulnerable to contamination than skin-less chicken samples . Seafood and oysters are well-known carriers of C. difficile . In the present meta-analysis, the overall contamination rate of seafood was 10.3%, and seafood had the highest risk ratio (12.88). In another meta-analysis, the pooled prevalence of C. difficile in seafood was slightly higher than our pooled estimate (seafood risk ratio of 14.3) . This difference may reflect that study's longer time span and larger number of included studies. Wide variation in the prevalence of C. difficile isolated from seafood has been seen in many studies from around the world, ranging from 3.9% to more than 40% [ – ].
The first report of root vegetable contamination with C. difficile was in 1996 . In this study, the overall prevalence of C. difficile in vegetables was 5.7%, which was less than in another meta-analysis (12% on average). This may be due to improved hygiene in the production and transport of vegetables . Regardless of the type of food product, the most important issue for C. difficile strains is detecting their ribotypes and toxinotypes . Although we could not statistically analyze the C. difficile ribotype data because of the wide divergence of the reported information, it is evident that ribotypes 027 and 078 were the most predominant, followed by ribotypes 001 , 010 and 020/014 . The results of the present study showed that the most common toxinotypes were V, 0, and III, respectively. In a review study, the presence of toxin genes in food samples was estimated at 3.5% to 100% [ , , ]. The toxinotypes can be important in the development of molecular diagnostic tests and vaccines . As reported by many studies, C. difficile strains harboring the tcdA and tcdB toxin genes were more prevalent than other strains . The contamination risk analysis showed that seafood and RTE meat are the highest-risk foods. In the Rodriguez-Palacios study (21), vegetables and seafood were ranked as the high-risk food items; in both studies, seafood is among the high-risk items. This information can be useful for determining preventive food safety measures (cooking food and not consuming raw food) to minimize the possibility of further food contamination. This study showed that a variety of foods, especially seafood, carry a potential risk of C. difficile . The frequency of C. difficile varied among food samples from different continents, a difference that can be attributed to the high consumption of seafood in the diets of Asia and Europe. These results suggest that consumption of raw and undercooked foods is a route of further transmission of C. difficile to humans. Therefore, thorough cooking of food, suitable washing of animal carcasses during slaughter, and prevention of carcass contamination with animal feces play an important role in increasing food safety. Additional file 1. The ribotypes of the studies.
Tracing the path of 37,050 studies into practice across 18 specialties of the 2.4 million published between 2011 and 2020
2a515150-e416-439e-b71c-0733b09ac3a5
10115455
Internal Medicine[mh]
Two key elements of the underperformance of national healthcare systems are that: (a) many patients do not receive recommended services and (b) many receive treatment that is neither necessary nor appropriate for them . The Institute of Medicine (IOM) Roundtable on Value and Science-Driven Healthcare argues, however, that the challenge is not a matter of overuse (or underuse) of services, but the absence of evidence to assess the appropriateness of treatment approaches . With more than 1 million medical research articles published in the past year alone, the adoption of clinical studies into practice is one critical aspect of this challenge, further compounded by a limited understanding of how the wave of biomedical literature reaches the shores of clinical practice. A few case studies and case series have attempted to understand this block in clinical adoption using surrogate markers, such as submission to the Food and Drug Administration (FDA) , number of citations , or incorporation into society-specific clinical guidelines . However, these studies are often too coarse and indirect for a real-time and practical understanding of how clinicians read, synthesize, and integrate the literature into their everyday practice. Furthermore, these studies often conflate translation of basic science with translation of clinical studies to practice, which the IOM has identified as two separate and distinct translational blocks . In addition, citation in consensus documents or society recommendations is too slow and often too limited in scope to answer the questions defined here. Focusing on the translation of clinical studies into practice, we capitalize on the electronic resource UpToDate, which provides current evidence-based clinical information at the point-of-care and is used by over a million clinicians across 32,000 organizations in 180 countries . While the relevance of UpToDate varies, it serves as a reliable and regularly updated source of specialty-focused, clinician-driven curation of the broader literature. Thus, we use citation in UpToDate as one metric to assess translation, especially given its quantifiable impact on patient care . Leveraging a dataset of more than 10,000 UpToDate articles, sampled every 3 months for the past decade (2011–2020), we provide the first thorough and comprehensive characterization of the factors that influence the adoption of clinical research by tracing the path of 37,050 newly added references from 887 journals, as well as valuable insight into the variation of adoption across 18 non-surgical specialties by clinical topic, article type, geography, and over time.

What fraction of the published literature is eventually cited in point-of-care resources?

Among the 18 specialties included in our analysis, neurology had the highest citation rate; of the 85,843 research articles published in clinical neurology journals during our sampling window, 2057 (2.4%) were eventually cited at least once in UpToDate. Rheumatology (1442 cited of 62,681 published; 2.3%), hematology (2506 of 110,055; 2.3%), and pediatrics (2678 of 119,486; 2.2%) had similar citation rates. Three specialties had sub-percent citation rates: radiology (1214 cited of 165,985 published; 0.7%), geriatrics (64 of 9781; 0.6%), and pathology (317 of 69,343; 0.4%). All remaining specialties, including internal medicine, had between 1 and 2% of all published research eventually cited in UpToDate. The proportion of citations also varied substantially by article type.
Practice guidelines represented the article type most likely to be cited, with 9 of the 18 specialties citing >13% (interquartile range [IQR] of 5.1–14.5%) of all practice guidelines published in their respective journals. Although clinical trials (especially phase III trials) were the second most likely article type to be cited (9 of 18 specialties citing >9.5% of all phase III clinical trials published during our sampling window [IQR 3.0–13.0%]), citation of trials was also the most variable (SD of 8.7%). In 9 of the 18 specialties, we observed that less than 1 in 10 phase III clinical trials were ever cited at the point-of-care. Among the top-performing specialties, the citation rate of clinical trials was distinctly high in internal medicine (299 cited of 822 phase III clinical trials published; 36.3%), pediatrics (8 of 48; 16.7%), and infectious diseases (15 of 99; 15.1%). Notably, none of the 43 equivalence trials published across all 18 specialties was ever cited. Comparatively, pragmatic clinical trials were only cited in 5 of the 18 specialties: oncology (50% of published pragmatic clinical trials cited), internal medicine (20.3%), endocrinology (11.1%), cardiology (8.0%), and pediatrics (7.7%); the remaining 13 specialties had a 0% citation rate for pragmatic clinical trials. Similarly, case reports were unlikely to be cited at the point-of-care across all specialties, with only 3111 case reports (0.8%) cited of the 403,043 published in specialty journals during our sampling window.

Which are the predominant article types cited in point-of-care resources?

Despite a cumulative citation rate of <1%, case reports still represented the most common cited article type in 2 of the 18 specialties. Among the 1506 citations added from dermatology journals over the past decade, 501 (32.0%) were case reports. Similarly, of the 317 citations from pathology journals, 49 (15.5%) were case reports. Strikingly, case reports were also consistently among the three most commonly cited article types in all but six specialties (median of 7.1% of added citations were case reports; IQR 5.6–12.3%). By comparison, phase III clinical trials represented less than 1.0% of added citations in 9 of 18 specialties (IQR 0.2–1.9%). Of the 18 specialties, anesthesiology, cardiac and cardiovascular systems, critical care, geriatrics, internal medicine, and oncology tended to favor higher-quality evidence ; reviews/systematic reviews, practice guidelines, and meta-analyses represented the three most cited article types in five of these six specialties. Oncology was unique in that it was the only specialty where phase III clinical trials ranked among the most commonly cited article types; we counted 411 phase III clinical trials among the 3071 references added during our sampling window from oncology journals.

What is the time-to-citation by specialty and article type?

Time-to-citation did not vary meaningfully between specialties; 50% of articles were cited within a year of publication (IQR 0–4 years). There were significant differences, however, between article types. Phase III clinical trials had the shortest time-to-citation, with 75% cited within the year of publication (IQR 0–1 year). Meta-analyses, practice guidelines, and systematic reviews followed a similar, albeit slightly slower, trend. Case reports had the longest time-to-citation (median 3 years; IQR 1–9 years). Across all specialties, higher quality of evidence correlated with a shorter time-to-citation.
Is journal impact factor predictive of either the proportion of articles cited or time-to-citation in point-of-care resources?

For 12 of the 18 medical specialties, journal impact factor was significantly correlated with the proportion of articles cited . In descending order of correlation strength, impact factor was significantly correlated with citation rate in: rheumatology (Spearman's rho = 0.86, p=1.4 × 10 –6 ), infectious diseases (rho = 0.79, p=7.7 × 10 –5 ), hematology (rho = 0.69, p=8.1 × 10 –5 ), pediatrics (rho = 0.66, p=0.0001), cardiac and cardiovascular systems (rho = 0.55, p=1.2 × 10 –6 ), gastroenterology and hepatology (rho = 0.53, p=3.6 × 10 –5 ), internal medicine (rho = 0.49, p=2.3 × 10 –6 ), neurology (rho = 0.43, p=0.0086), dermatology (rho = 0.39, p=0.02), urology and nephrology (rho = 0.37, p=0.007), endocrinology and metabolism (rho = 0.32, p=0.02), and oncology (rho = 0.29, p=0.01). In other words, in these 12 specialties, journals with higher impact factors tended to have a larger fraction of their published articles cited at the point-of-care; the corresponding scatterplots, labeled by journal, visualize these relationships. For the remaining six specialties, the relationship between impact factor and the portion of cited articles was not significant (p>0.05). Analogously, journal impact factor was significantly and negatively correlated with time-to-citation for 10 of 18 specialties: infectious diseases (Spearman's rho = –0.51, p=0.03), internal medicine (rho = –0.408, p=0.0001), hematology (rho = –0.407, p=0.03), pediatrics (rho = –0.40, p=0.03), dermatology (rho = –0.406, p=0.02), pathology (rho = –0.45, p=0.04), neurology (rho = –0.37, p=0.03), urology and nephrology (rho = –0.34, p=0.02), cardiac and cardiovascular systems (rho = –0.36, p=0.002), and oncology (rho = –0.31, p=0.006). In other words, articles from higher-impact specialty journals tended to have a quicker time-to-citation. While the impact factor appears able to (partially) prioritize journals with greater (or quicker) than expected contributions to clinical practice, we sought to better quantify the impact of a journal on clinical practice using two previously introduced indices (see Materials and methods): the clinical relevancy index (CRI) and the clinical immediacy index (CII). We calculated these indices for all journals in the 18 medical specialties discussed here.

What topics are over-(or under-)represented in abstracts cited in point-of-care resources, compared to uncited literature? Do these topics explain variation in time-to-citation?

Abstract contents (as assessed using Unified Medical Language System [UMLS] concepts or terms) can significantly explain citation (versus not) in point-of-care resources, as well as variation in time-to-citation among cited abstracts. While the results for all 18 specialties are informative, Appendix 1 focuses on two specialties (cardiac and cardiovascular systems, and endocrinology and metabolism).
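The journal-level correlations reported above are rank-based (Spearman). A minimal sketch of how one such correlation could be computed per specialty, using made-up journal-level values rather than the study's data:

```python
from scipy.stats import spearmanr

# Hypothetical journals in one specialty: impact factor and the
# fraction of their articles eventually cited in UpToDate
impact_factor  = [2.4, 3.2, 5.9, 7.5, 12.1, 19.8]
cited_fraction = [0.006, 0.011, 0.018, 0.024, 0.031, 0.052]

rho, p_value = spearmanr(impact_factor, cited_fraction)
print(f"Spearman's rho = {rho:.2f}, p = {p_value:.3f}")
```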
What is the influence of department-specific NIH funding on the absolute number of citations and time-to-citation? What is the impact of cumulative NIH funding?

As we previously noted in the curation of our dataset, it is often difficult to disentangle hospitals, medical schools, and affiliated research institutions. As such, to explore the role and impact of NIH funding, we use city (rather than individual institutions or hospitals) as the unit of analysis. Our analysis primarily focused on the United States for two reasons. Firstly, 35% of references from the 18 medical specialties cited in UpToDate were from the United States; by way of comparison, in 2019, 39% of all publications in PubMed were from the United States. Thus, our data were well powered for the funding analyses. Secondly, the NIH publicly releases its funding information with sufficient granularity and standardized specialty labels to enable the analysis. Our department-specific analysis combined eight specialties (cardiac and cardiovascular systems, critical care medicine, endocrinology and metabolism, gastroenterology and hepatology, geriatrics and gerontology, hematology, rheumatology, and oncology) under the 'general and internal medicine' specialty label, because a large portion of the funding for these specialties occurs through the NIH department name of 'internal medicine/medicine' (i.e. there were no specific department labels for this subset of specialties). Average annual department-specific NIH funding correlated strongly with the absolute number of total citations over the past decade across all specialties: pathology (Spearman's rho = 0.73, p=1.2 × 10 –10 ), neurology (rho = 0.70, p<2.2 × 10 –16 ), pediatrics (rho = 0.67, p<2.2 × 10 –16 ), radiology (rho = 0.64, p=4.8 × 10 –12 ), internal medicine (rho = 0.60, p<2.2 × 10 –16 ), dermatology (rho = 0.57, p=1.2 × 10 –9 ), urology and nephrology (rho = 0.57, p=3.0 × 10 –13 ), emergency medicine (rho = 0.52, p=2.1 × 10 –7 ), anesthesiology (rho = 0.48, p=9.0 × 10 –6 ), and infectious diseases (rho = 0.41, p=0.006). City-labeled scatterplots highlight both the American cities (and institutions) that were successful at translating research back to practice and the cities that were particularly efficient, and the cumulative correlation of NIH funding across all medical and surgical specialties is also illustrated. In sharp contrast to both the department-specific and cumulative funding associations with the number of citations, NIH funding did not correlate with time-to-citation in any specialty. Given the strength of the correlation of NIH funding with the absolute number of citations across all medical specialties, we also sought to quantify the cost of one new added citation at the point-of-care (i.e. the slope) using a simple linear model. More concretely, we defined the model as a linear function between the average number of UpToDate citations from each city over the past 10 years and the average annual NIH department-specific funding between 2011 and 2020. This estimate may be interpreted as the approximate (indirect) cost of bringing clinical research to the bedside in NIH funding dollars, with the intercept being proportional to the initial investment 'set-up' cost.
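A minimal sketch of that linear model, with made-up city-level values in place of the study's data: regressing average annual funding on average citations makes the slope the (indirect) dollar cost per new point-of-care citation and the intercept the 'set-up' cost.

```python
import numpy as np
from scipy.stats import linregress

# Hypothetical per-city values: average UpToDate citations over the
# decade and average annual NIH department-specific funding (dollars)
citations = np.array([4, 38, 120, 270, 610], dtype=float)
funding   = np.array([5.2e5, 1.1e6, 2.3e6, 4.0e6, 8.5e6])

fit = linregress(citations, funding)  # funding = slope * citations + intercept
print(f"cost per new citation ~ ${fit.slope:,.2f} "
      f"(set-up cost ~ ${fit.intercept:,.2f}, slope SE ${fit.stderr:,.2f})")
```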
In descending order, a new citation at the point-of-care cost: $48,086.18 per new point-of-care UpToDate citation from urology and nephrology journals (± SE of $7410.68, intercept of $470,546.67), $34,529.29 from dermatology journals (± $4043.66, intercept = $251,133.85), $13,286.72 from general and internal medicine specialty journals (± $746.58, intercept = $673,780.86), $10,655.93 from emergency medicine journals (± $2795.21, intercept = $265,336.27), $6832.46 from pediatrics journals (± $756.08, intercept = $325,662.10), $6482.30 from anesthesiology journals (± $1393.98, intercept = $206,374.57), $6227.91 from radiology journals (± $1019.13, intercept = $254,528.95), $6106.92 from neurology journals (± $607.81, intercept = $261,566.15), and $874.85 from pathology journals (± $229.67, intercept = $174,163.00). The model was not significant for infectious diseases. We subsequently generated US-focused maps to highlight, per specialty, the institutions and cities successful at translating clinical research from specialty journals to the bedside.
Despite a cumulative citation rate of <1%, case reports still represented the most common article type of those cited in 2 of the 18 specialties . Among the 1506 citations added from dermatology journals over the past decade, 501 (32.0%) were case reports. Similarly, of the 317 citations from pathology journals, 49 (15.5%) were case reports. Strikingly, case reports were also consistently among the three most commonly cited article types across all but six specialties (median of 7.1% of added citations were case reports; IQR 5.6–12.3%). By comparison, phase III clinical trials represented less than 1.0% of added citations in 9 of 18 specialties (IQR 0.2–1.9%). Of the 18 specialties, anesthesiology, cardiac and cardiovascular systems, critical care, geriatrics, internal medicine, and oncology tended to favor higher-quality evidence ; reviews/systematic reviews, practice-guidelines, and meta-analyses represented the three most cited article types among five of these six specialties. Oncology was relatively unique in that it was the only specialty where phase III clinical trials ranked among the most commonly cited article types; we counted 411 phase III clinical trials among the 3071 references added during our sampling window from oncology journals. Time-to-citation did not vary meaningfully between specialties; 50% of articles were cited within a year of publication (IQR 0–4 years). There were significant differences, however, between article types . Phase III clinical trials had the shortest time-to-citation, with 75% cited within the year of publication (IQR 0–1 year). Meta-analyses, practice guidelines, and systematic reviews followed a similar, albeit slightly slower, trend. Case reports had the longest time-to-citation (median 3 years; IQR 1–9 years). Across all specialties, higher quality of evidence correlated with a shorter time-to-citation . For 12 of the 18 medical specialties, journal impact factor was significantly correlated with the proportion of articles cited . In descending order, impact factor was significantly correlated with citation rate in: rheumatology (Spearman’s rho = 0.86, p=1.4 × 10 –6 ), infectious diseases (rho = 0.79, p=7.7 × 10 –5 ), hematology (rho = 0.69, p=8.1 × 10 –5 ), pediatrics (rho = 0.66, p=0.0001), gastroenterology and hepatology (rho = 0.53, p=3.6 × 10 –5 ), cardiac and cardiovascular systems (rho = 0.55, p=1.2 × 10 –6 ), internal medicine (rho = 0.49, p=2.3 × 10 –6 ), neurology (rho = 0.43, p=0.0086), dermatology (rho = 0.39, p=0.02), urology and nephrology (rho = 0.37, p=0.007), endocrinology and metabolism (rho = 0.32, p=0.02), and oncology (rho = 0.29, p=0.01). In other words, in these 12 specialties, journals with higher impact factors tended to have a larger fraction of their published articles cited at the point-of-care. visualizes the respective scatterplots labeled by journal. For the remaining six specialties, the relationship between impact factor and portion of cited articles was not significant (p>0.05). 
Analogously, journal impact factor was significantly and negatively correlated with time-to-citation for 10 of 18 specialties: infectious diseases (Spearman's rho = –0.51, p=0.03), internal medicine (rho = –0.408, p=0.0001), hematology (rho = –0.407, p=0.03), pediatrics (rho = –0.40, p=0.03), dermatology (rho = –0.406, p=0.02), pathology (rho = –0.45, p=0.04), neurology (rho = –0.37, p=0.03), urology and nephrology (rho = –0.34, p=0.02), cardiac and cardiovascular systems (rho = –0.36, p=0.002), and oncology (rho = –0.31, p=0.006). In other words, articles from higher-impact specialty journals tended to have a quicker time to citation. While the impact factor appears able to (partially) prioritize journals with greater (or quicker) than expected contributions to clinical practice, we sought to quantify the impact of a journal on clinical practice more directly using two previously introduced indices (see Materials and methods): the clinical relevancy index (CRI) and the clinical immediacy index (CII). We calculated these indices for all journals in the 18 medical specialties discussed here.

Do these topics explain variation in time-to-citation?

Abstract contents (as assessed using Unified Medical Language System [UMLS] concepts or terms) can significantly explain citation (versus not) in point-of-care resources, as well as variation in time-to-citation among cited abstracts. While the results for all 18 specialties are fascinating and informative, Appendix 1 focuses on two specialties (cardiac and cardiovascular systems, and endocrinology and metabolism).

What is the impact of cumulative NIH funding?

As we previously noted during curation of our dataset, it is often difficult to disentangle hospitals, medical schools, and affiliated research institutions. As such, to explore the role and impact of NIH funding, we used city (rather than individual institutions or hospitals) as the unit of analysis. Our analysis primarily focused on the United States for two reasons. Firstly, 35% of references from the 18 medical specialties cited in UpToDate were from the United States; by way of comparison, in 2019, 39% of all publications in PubMed were from the United States. Thus, our data were well powered for the funding analyses. Secondly, the NIH publicly releases its funding information with sufficient granularity and standardized specialty labels to enable the analysis. Our department-specific analysis combined eight specialties (cardiac and cardiovascular systems, critical care medicine, endocrinology and metabolism, gastroenterology and hepatology, geriatrics and gerontology, hematology, rheumatology, and oncology) under the 'general and internal medicine' specialty label, because a large portion of the funding for these specialties is awarded through the NIH combined department name of 'internal medicine/medicine' (i.e. there were no specific department labels for this subset of specialties).
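A sketch of the specialty-to-department grouping described above; the mapping function is illustrative of the approach, not the authors' exact lookup table:

```python
# Specialties whose NIH funding flows through 'internal medicine/medicine'
# department labels are collapsed into one group before correlation.
INTERNAL_MEDICINE_GROUP = {
    "cardiac and cardiovascular systems", "critical care medicine",
    "endocrinology and metabolism", "gastroenterology and hepatology",
    "geriatrics and gerontology", "hematology", "rheumatology", "oncology",
}

def nih_department_label(specialty: str) -> str:
    """Map a journal specialty to the department label used for NIH funding."""
    s = specialty.lower()
    return "general and internal medicine" if s in INTERNAL_MEDICINE_GROUP else s

assert nih_department_label("hematology") == "general and internal medicine"
assert nih_department_label("neurology") == "neurology"
```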
Average annual department-specific NIH funding correlated strongly with the absolute number of total citations, over the past decade, across all specialties: pathology (Spearman's rho = 0.73, p=1.2 × 10⁻¹⁰), neurology (rho = 0.70, p<2.2 × 10⁻¹⁶), pediatrics (rho = 0.67, p<2.2 × 10⁻¹⁶), radiology (rho = 0.64, p=4.8 × 10⁻¹²), internal medicine (rho = 0.60, p<2.2 × 10⁻¹⁶), dermatology (rho = 0.57, p=1.2 × 10⁻⁹), urology and nephrology (rho = 0.57, p=3.0 × 10⁻¹³), emergency medicine (rho = 0.52, p=2.1 × 10⁻⁷), anesthesiology (rho = 0.48, p=9.0 × 10⁻⁶), and infectious diseases (rho = 0.41, p=0.006). City-labeled scatterplots highlight both the American cities (and institutions) that were successful at translating research back to practice and the cities that were particularly efficient; a companion figure illustrates the cumulative correlation of NIH funding across all medical and surgical specialties. In sharp contrast to both the department-specific and cumulative funding associations with the number of citations, NIH funding did not correlate with time-to-citation in any specialty. Given the strength of correlation of NIH funding with the absolute number of citations across all medical specialties, we also sought to quantify the cost of one new added citation at the point-of-care (i.e. the slope) using a simple linear model. More concretely, we defined the model as a linear function between the average number of UpToDate citations from each city over the past 10 years and the average annual NIH department-specific funding between 2011 and 2020. This estimate may be interpreted as the approximate (indirect) cost of bringing clinical research to the bedside in NIH funding dollars, with the intercept being proportional to the initial 'set-up' investment. In descending order, a new citation at the point-of-care costs $48,086.18 for urology and nephrology journals (± SE $7,410.68; intercept = $470,546.67), $34,529.29 for dermatology journals (± $4,043.66; intercept = $251,133.85), $13,286.72 for general and internal medicine specialty journals (± $746.58; intercept = $673,780.86), $10,655.93 for emergency medicine (± $2,795.21; intercept = $265,336.27), $6,832.46 for pediatrics journals (± $756.08; intercept = $325,662.10), $6,482.30 for anesthesiology journals (± $1,393.98; intercept = $206,374.57), $6,227.91 for radiology journals (± $1,019.13; intercept = $254,528.95), $6,106.92 for neurology journals (± $607.81; intercept = $261,566.15), and $874.85 for pathology journals (± $229.67; intercept = $174,163.00). The model was not significant for infectious diseases. We subsequently generated US-focused maps to highlight, per specialty, the institutions and cities successful at translating clinical research from specialty journals to the bedside.
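A minimal sketch of this cost-per-citation model, using hypothetical per-city aggregates; the regression direction (funding on citations) is inferred from the dollar-valued slopes and intercepts reported above:

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical per-city aggregates for one specialty: average yearly count
# of new UpToDate citations and average annual NIH funding (2011-2020).
citations = np.array([3.0, 9.0, 41.0, 130.0, 240.0])
funding = np.array([0.6e6, 0.9e6, 2.4e6, 6.7e6, 12.1e6])

# Regressing funding on citations makes the slope the approximate (indirect)
# NIH cost, in dollars, of one new point-of-care citation, and the intercept
# an estimate of the initial 'set-up' investment.
X = sm.add_constant(citations)
fit = sm.OLS(funding, X).fit()
intercept, slope = fit.params
slope_se = fit.bse[1]

print(f"cost per new citation = ${slope:,.2f} (SE ${slope_se:,.2f})")
print(f"set-up cost (intercept) = ${intercept:,.2f}")
```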
We have demonstrated that, depending on specialty, 0.4–2.4% of published clinical research is eventually cited in UpToDate. Our analysis also revealed several alarming trends: most clinical trials are never cited at the point-of-care; in 9 of 18 medical specialties, fewer than 1 in 10 phase III clinical trials are ever cited. In the best-performing specialty (general and internal medicine), this citation rate peaked at 36%; that is, at least 64% of trials are never cited. This is in line with a recent manual review of 125 randomized interventional clinical trials published in 2009–2010 in three disease areas (ischemic heart disease, diabetes mellitus, and lung cancer), which demonstrated that only 26.4% of trials fulfilled four conditions of informativeness: importance of the clinical question, trial design, feasibility, and reporting of results. This trend was generally consistent among other higher quality-of-evidence research; 9 of 18 specialties had a citation rate of <13% for practice guidelines. Comparatively, while less than 1% of published case reports are ever cited, they represent one of the most commonly cited article types. For some specialties (e.g. dermatology), case reports represented nearly a third of newly added references. The persistence of case reports as a resource to guide practice is not necessarily problematic in itself; some are helpful in certain circumstances (e.g. to address rare conditions or phenomena that are hard to evaluate via other means). However, in some specialties, our results suggest that case reports outnumber most article types in UpToDate reference lists, including higher quality of evidence such as meta-analyses, systematic reviews, practice guidelines, and clinical trials. Further investigation of these case reports will highlight unmet clinical needs and questions that should be addressed with higher quality of evidence. Reassuringly, a subset of specialties (e.g. cardiac and cardiovascular systems, general and internal medicine, and oncology) did incorporate higher quality of evidence into point-of-care reference lists, with more clinical trials cited than all other specialties cumulatively. In-depth investigation of differences between specialties in how clinical trials are designed and funded, and how practice guidelines are formulated, will likely reveal strategies for translating clinical research that should be applied more broadly. Exploring over- and under-represented topics provided a fascinating perspective on how specialties prioritized particular topics, treatment paradigms, and clinical discoveries over the past decade. While a more thorough investigation is warranted, our preliminary study revealed that some specialties demonstrated a clear bias toward particular disease topics and treatment paradigms (e.g. cardiac and cardiovascular systems and oncology), while others were far more diverse (e.g. endocrinology and metabolism). The strong correlation of the number of citations with NIH funding (both department-specific and cumulative) suggests that funding may, in part, dictate the research focus and, thus, which references are ultimately successful in making it back to the point-of-care.

Limitations

There are many possible reasons for the low rate of citation of published research noted in our analysis (e.g.
it is possible that some of the published research does not adequately answer a particular clinical question). It is also quite likely that the problem is at least partly one of translation. Both practice guidelines and clinical trials have a low citation rate despite their design and implementation requiring uncertainty or equipoise surrounding two or more care options (i.e. they are designed to help clinicians choose one treatment or diagnostic approach over another). Thus, the low citation rate of clinical trials, practice guidelines, and other high-quality evidence (e.g. systematic reviews) itself suggests a translational block. Because various factors can contribute to the low citation rate, we explicitly investigated the citation rate of each article type (and analyzed the topic distributions of these article types) separately and cumulatively, to delineate the quality of evidence independently of the global citation rate. Although it represents the largest and most comprehensive point-of-care resource, UpToDate offers just one perspective on how clinicians synthesize and integrate clinical research. Beyond scholarly medical, nursing, and pharmacy journals (as major examples), many additional sources of information besides UpToDate are readily available to, and accessed by, these diverse stakeholders. Examples include in-person and online professional society communications and meetings, daily inter-professional interactions, local health system guideline consensus groups, access to clinicians who practice in 'centers of excellence,' and point-of-care decision support in electronic medical records. However, these are too informal and inaccessible for a systematic and comprehensive analysis of the translational highway between the clinical research enterprise and medical practice. Thus, UpToDate is a small but robust window into mapping and modeling translation. We also recognize that the relevance of UpToDate varies by specialty and training status; thus, its contents do not necessarily reflect the breadth or depth of medical care provided in a subset of medical specialties (and, by extension, the body of evidence that underpins that care). While we use citation in UpToDate as a metric of translation, citation does not necessarily indicate actual changes in practice; rather, citation represents adoption of knowledge to support current approaches, inform new changes in practice, or highlight points of controversy. Importantly, and strengthening our conclusions, UpToDate does serve as a reliable source of specialty-based, clinician-driven curation of the broader literature; its regularly updated reference lists accurately represent a clinician's perspective on the ever-expanding literature. Thus, rather than viewing our analysis as a comprehensive look at all evidence that underpins all care, we suggest that it be viewed as a standardized (cursory) survey of a fixed set of clinicians over the past decade on particular topics (defined by the scope of UpToDate articles). Our division of the literature (and medical journals) into subspecialties using Clarivate's Journal Citation Reports admittedly does not capture the overlap and nuance of the boundaries between specialties (and journals); however, we believe it made our analyses much clearer and easier to understand. Where appropriate (e.g.
citation rates of article types and cumulative NIH funding models), we analyzed all specialties together to enable us to retain a bird's-eye view of global trends across all 18 medical specialties.

Conclusions

Tracing the path of clinical research into medical practice reveals substantial variation in how specialties prioritize and adopt clinical research into practice. The success of a subset of specialties in incorporating a larger proportion of published research, as well as high(er) quality of evidence, demonstrates the existence of translational strategies that should be applied more broadly. While the findings are largely descriptive and exploratory, the dataset and method described here are designed to generate hypotheses regarding the translation of biomedical research into practice. In designing the dataset, we sought to provide a baseline for monitoring the efficiency of research investments and, ultimately, to enable the development of mechanisms for weighing the efficacy of reforms to the biomedical scientific enterprise (e.g. quantifying impact at the point-of-care rather than the number of publications or citations).
Materials and methods

We sampled all UpToDate articles (n=10,036 articles) multiple times over the past decade using the Internet Archive's WayBackMachine, capturing 169,203 unique versions over a median of 39 months per article (IQR 16–73 months). The WayBackMachine is a digital archive of the World Wide Web that preserves archived copies of defunct or revised web pages. The reference list of each UpToDate article was extracted a median of 14 times (IQR 6–25 times) over its respective sampling window. The reference lists were then filtered to exclude non-research references as defined by MEDLINE. Our final dataset consisted of 83,423 unique references from 4055 journals newly cited in the sampling window.
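As an illustration of how archived versions of a page can be enumerated, the following sketch queries the Internet Archive's public CDX API; the article URL is a placeholder, and the authors' actual sampling pipeline is not specified at this level of detail in the text:

```python
import requests

# Public CDX endpoint of the Internet Archive's Wayback Machine.
CDX = "https://web.archive.org/cdx/search/cdx"

# Placeholder article URL; one request per UpToDate article.
params = {
    "url": "www.uptodate.com/contents/example-article",  # hypothetical
    "output": "json",
    "fl": "timestamp,digest",   # capture time + content hash
    "collapse": "digest",       # keep only unique versions
}
resp = requests.get(CDX, params=params, timeout=30)
rows = resp.json() if resp.text.strip() else []  # empty body = no captures

# First row is the header; each remaining row is one unique capture.
captures = rows[1:] if rows else []
print(f"{len(captures)} unique archived versions")
# Each capture can then be fetched from
# https://web.archive.org/web/<timestamp>/<url> to extract its reference list.
```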
The first version of each UpToDate article served as a baseline to enable us to calculate the time-to-citation for all references (a brief Methods Supplement in Appendix 1 provides additional details about UpToDate). For clarity, throughout the text, we use the shorter phrase 'citation at the point-of-care' as equivalent to 'citation in an UpToDate article during our sampling window.' We subsequently filtered the references to those published in non-surgical specialties as defined by Clarivate's Journal Citation Reports (i.e. the categories specified in assessment of the impact factor): anesthesiology, cardiac and cardiovascular systems (i.e. cardiology), clinical neurology, critical care medicine, dermatology, emergency medicine, endocrinology and metabolism, gastroenterology and hepatology, geriatrics and gerontology, hematology, infectious diseases, medicine (general and internal), oncology, pathology, pediatrics, 'radiology, nuclear medicine and medical imaging' (i.e. radiology), rheumatology, and urology and nephrology. This filtered dataset included 37,050 newly added unique references from 887 journals. To enable comparisons with the uncited literature, we used PubMed to identify all articles published during our sampling window in these 887 journals; these 2.4 million articles were similarly processed (i.e. matched to appropriate metadata). We then paired all references with the corresponding entries in PubMed to extract the associated abstracts, author affiliations, and dates of publication. Thus, our final dataset for analysis represented a curated list of all references added to UpToDate over the past 10 years, alongside relevant metadata (such as journal, year of citation, and author affiliations). We extracted UMLS concepts from the paired abstracts using SciSpacy, which enabled us to map the abstract free text to UMLS concepts; this pipeline is similar to the one used by PubMed to index articles for search engines and enabled us to extract 'high-level' concepts from the abstracts of all references. The performance of these algorithms (including validity and misclassification) is described elsewhere. For all analyses, summary statistics were generated using base functions in R v4.1. Where appropriate, p-values were corrected for multiple testing using Benjamini-Hochberg. For all 887 journals, we also calculated two new indices: the CRI and the CII. Unlike the impact factor, these metrics exclusively quantify citations in point-of-care resources (i.e. UpToDate), rather than the overall number of citations in other research publications, and thus indirectly assess the presumed impact of any given journal on clinical practice. The CRI captures the long-standing impact of a journal over the past decade as the fraction of its articles cited at least once in UpToDate and is defined as:
$$\mathrm{CRI}_{\mathrm{decade}} = \frac{\text{articles cited in UpToDate in the past decade}}{\text{total articles published in the past decade}}$$

Similarly, by using the median time-to-citation, the CII captures journal-specific trends in time-to-clinical-adoption (i.e. a measure of latency for each journal) that is distinct from the overall impact of the journal, defined as:

$$\mathrm{CII}_{\mathrm{decade}} = \operatorname{median}\left(\text{date of added citation in the past decade} - \text{date of publication}\right)$$
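A compact sketch of both indices, assuming a hypothetical per-reference table (journal, publication year, and the year the citation first appeared in UpToDate) plus per-journal publication totals:

```python
import pandas as pd

# Hypothetical inputs spanning one decade.
refs = pd.DataFrame({
    "journal": ["J1", "J1", "J2"],
    "year_published": [2012, 2015, 2018],
    "year_cited": [2013, 2015, 2021],   # first appearance in UpToDate
})
published = pd.Series({"J1": 400, "J2": 150})  # articles published per journal

# CRI: fraction of a journal's published articles cited at least once.
cri = refs.groupby("journal").size() / published

# CII: median latency (in years) from publication to citation.
latency = refs["year_cited"] - refs["year_published"]
cii = latency.groupby(refs["journal"]).median()

print(pd.DataFrame({"CRI": cri, "CII": cii}))
```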
Reinventing the Clinical Audit in a Pediatric Oncology Network
591a8942-3334-4968-9a98-46f5a00dd231
10115487
Internal Medicine[mh]
The goal of the St. Jude Affiliate Program is to allow more children to receive St. Jude care close to home and to increase access to pediatric oncology clinical research trials developed at St. Jude. The 8 St. Jude affiliate clinics serve 9 states in the Southeast and Midwest. The affiliate institutions serve rural and suburban areas with a diverse demographic population. The affiliate clinics provide a substantial resource for patient recruitment for St. Jude clinical trials. During the years of this report, the affiliate clinics in aggregate saw an average of 302 new oncology patients per year, of whom an average of 38% were enrolled on primary therapeutic clinical trials. Each affiliate clinic is part of a not-for-profit health system. No affiliate clinic is part of an institution that provides pediatric stem cell transplantation, and none has a pediatric hematology/oncology fellowship program. The clinics vary in size and capacity: 4 clinics had 40 to 60 new oncology patients per year and 4 clinics had 20 to 30 new oncology patients per year during the years of this report. The number of providers in each clinic ranged from 3 to 7. Ensuring high-quality care in smaller programs can be challenging, and maintaining equitable high-quality care across a remote network is critical for patient safety and an optimal patient experience. We instituted an on-site clinical audit to assess care in the affiliate clinics. Annually, an audit team composed of 1 physician and 1 nurse observed direct patient care in the clinic over a 2-day to 3-day period. They surveyed central line care, chemotherapy administration, patient teaching, blood product administration, and provider-patient interactions. Patient safety and confidentiality were maintained. The observers looked for adherence to a central line bundle based on national standards. During chemotherapy administration, the observers looked for independent dose calculations, verification of chemotherapy orders with the treatment schema, review of laboratory criteria, patient identification, and proper handling of cytotoxic therapy. Patient and family education, including anticipatory guidance, was reviewed. Thirty-six independent items were evaluated (Supplementary Table 1, Supplemental Digital Content 1, http://links.lww.com/JPHO/A584 ). After the first 2 years, the on-site clinical audit continued to demonstrate deficiencies without improvement. The most common deficiencies noted were inconsistent communication when patients transitioned between St. Jude and the affiliate clinics, delay in the time to antibiotics in febrile immunocompromised patients, inconsistent documentation of oral chemotherapy administration, and the lack of adherence to a central line bundle in the ambulatory setting. We then developed a more comprehensive approach to the clinical audit. First, we engaged the clinical team being audited. We began surveys of the clinic staff regarding perceptions of their clinic operations to identify perceived areas of strength and weakness. This component enabled the clinic staff to indicate which aspects of their clinic were working well and which could be improved, setting the tone for continuous quality improvement. We identified provider champions for each clinic who received training in quality improvement through the American Society of Clinical Oncology Quality Training Program (ASCO QTP) ( https://practice.asco.org/quality-improvement/quality-programs/quality-training-program ).
Each clinic had a dedicated nurse educator with protected time to lead projects. Joint quality improvement projects were facilitated by the Affiliate Nurse Director (J.M.), with multiple affiliate team members participating from various disciplines, including nursing, pharmacy, and physicians. Each month, the affiliate clinic nurse educator submitted their local data on specific quality indicators to a secure dashboard. The dashboard tracked time to administration of antibiotics in immunocompromised children with fever, central line-associated bloodstream infections in ambulatory patients, patient/parent satisfaction scores, medication adverse events, and laboratory adverse events. We started sharing the individual quality metrics with the clinic team, and the data from each clinic were presented anonymously with team members of the 8 clinics. The quality data from each affiliate clinic were shared annually with the respective hospital leadership, including chief medical officers, chief executive officers, and senior leaders. The clinical audit structure started as an annual, on-site audit. After the first 2 years, 2 clinics showed minimal improvement; there was no improvement at the other 6 clinics. After instituting the additional audit components of self-reflection, data sharing, quality training, and engagement of senior leaders, the number of findings at every clinic decreased. Building a team approach from the ground up garnered more engagement with the clinical audit and the quality improvement efforts. The self-reflection surveys gave the clinic team members an opportunity to think about what was working and what needed improvement. The survey was informed by the most common deficiency noted at every clinic, which was inconsistent communication. The survey scored bidirectional communication and asked about barriers to high-quality care and suggestions for improvement. The response rate to the survey ranged from 36% to 38%. We shared the results with all the providers. Rather than a directive being imposed upon them, the clinic staff took ownership of the audit findings, which in some cases led to the development of quality improvement projects. In one case, a clinic assumed the wait times in its clinic were adequate, but audit findings showed otherwise. This clinic initiated a quality improvement project to decrease wait times in the chemotherapy area. Transparent sharing of data with the teams provided opportunities for improvement. Quality metrics shared anonymously between the teams prompted some clinics to ask what higher-performing clinics were doing to improve outcomes. For example, when the central line-associated bloodstream infection rate in implanted catheters was shared between clinics, 1 clinic recognized a potential area for improvement. They learned from another clinic that infections were occurring when lines were accessed outside the pediatric oncology clinic (eg, emergency departments and diagnostic imaging suites). They instituted a teach-back method with parents of children with implanted catheters. Quality metrics were shared with the clinical teams and, importantly, with the hospital leadership of each institution. This step in the comprehensive approach was instrumental in ensuring that the teams had resources to achieve the shared goals. For example, decreasing the time to administration of antibiotics in immunocompromised children with fever may not have been possible without senior leadership's involvement.
Because fever often develops in children at night and on weekends, when the clinics were closed, the outpatient clinic teams needed support from hospital leadership to ensure that prompt antibiotic administration occurred in emergency departments and on the inpatient hospital units. Training clinic team members as quality champions was critical to implementing change. The champions received quality improvement training through the ASCO QTP. Having peers with proficiency in quality improvement increased the commitment to work through the audit findings. One finding noted in the clinical audits was inconsistent documentation of oral chemotherapy administration. With the coaches of the ASCO QTP, a team composed of members of 3 clinics performed a quality improvement project to improve compliance with oral chemotherapy documentation from a baseline of 17.4%. The team developed an aim statement, worked through a process map of the current state, created a cause-and-effect diagram, and studied 3 interventions. Compliance improved to >85% within 6 months, and the team built a plan for sustainability (unpublished data). Integrating clinical research trials into the practice of community providers has been proposed as 1 mechanism to increase the diversity of research participants. The St. Jude Affiliate Program is 1 example of this approach. However, ensuring high-quality care in a clinical network of remote sites can be challenging. We developed a comprehensive clinical audit process that, after implementation, decreased the deficiencies noted in the clinical audits. As described in the literature, the effectiveness of audits is not uniformly positive. The nature of an external reviewer giving feedback may appear dictatorial rather than engaging and collaborative. Our initial experience was similar. We found that a simple, yearly clinical audit did not yield continuous improvement. Adding a team-based approach involving all stakeholders was more successful. Because each individual component was not independently evaluated, we do not know the benefit of each component; nonetheless, the comprehensive approach was engaging and provided more sustainable quality improvement. Starting with a bottom-up process set the tone for a more inclusive approach to quality improvement. As noted by others, self-reflection is a method to garner engagement with the clinical audit. The sharing of data was key. Rather than the team members making assumptions about how the clinic was operating, examining the data objectively showed how the clinic was functioning. Transparent data sharing can be motivating and reinforces the team approach. Sharing the data with hospital leadership is a tool to help get support for resources when necessary. The training in quality improvement equipped the teams with the knowledge, skills, and attitudes to champion culture change in their clinical practices. Successful quality improvement works when clinicians lead; however, they must know how to design, implement, and evaluate projects. The ASCO QTP provided the tools to practice continuous quality improvement ( https://practice.asco.org/quality-improvement/quality-programs/quality-training-program ). Using a comprehensive approach that involves self-reflection, transparency of data sharing, development of local champions, and engagement of senior leaders, we have been successful in quality improvement across a broad geographic pediatric oncology network.
This strategy may be used with other clinical networks working to increase access to clinical research trials in community health care systems.
Pediatric Pulmonology Training in India: Current Status and Future Directions
544ac0dc-628a-4844-b66b-b5f93562313c
10115596
Pediatrics[mh]
Pediatric respiratory illnesses are one of the common causes of morbidity and mortality in children. The proportion of children attending the outpatient department (OPD) or admitted due to respiratory diseases may be 30–50% in various healthcare facilities. Therefore, postgraduates in Pediatrics spend a significant part of their training managing respiratory disorders. The training includes skill development for identifying and appropriately treating respiratory infections, asthma, and chronic respiratory illnesses. Pediatric super specialties have evolved over the past 6–7 decades. Until 3–4 decades ago, pediatrics itself was considered one of the super specialty branches in India. However, the development of subspecialties or super specialties within Pediatrics is a felt need because of the significant increase in knowledge, which may not be possible for a single specialist to handle. Respiratory illnesses are common and have increased significantly due to better survival of preterm infants, early diagnosis and management of chronic respiratory diseases, and availability of better diagnostic tools. Diagnosing rare illnesses, providing better therapeutic interventions, and improving survival with some chronic morbidities are now possible. With ongoing research and advanced care in Pediatric Pulmonology, survival and quality of life have improved in developed countries. Therapeutic interventions and supportive care require advanced training of pediatricians to care for children with acute or chronic respiratory problems. The majority of preterm infants survive without major morbidities. However, a significant proportion may have morbidities, including bronchopulmonary dysplasia, problems secondary to interventions such as mechanical ventilation, airway problems, exposure to environmental hazards, aspirations, and increased infections. Many of these children need respiratory support at home. Management of these morbidities requires specialized training to equip a pediatrician to improve quality of life and reduce stress in caretakers. Some conditions were once considered rare or non-existent but are now being increasingly diagnosed in India. With better diagnostic tools, including molecular diagnostics, imaging, bronchoscopy, and interventional bronchoscopy, we can diagnose rare illnesses. Many children with conditions like cystic fibrosis, primary ciliary dyskinesia, interstitial lung diseases, and surgically treated congenital airway and lung malformations survive due to early diagnosis, intervention, and better supportive care. With improved communications, physicians' awareness of chronic respiratory illnesses has increased. Parents are also better informed and have increased expectations of physicians caring for children with chronic respiratory diseases. Therefore, physicians must learn the details of advances in management during their training. Therapeutic interventions are costly and challenging to implement in resource-limited settings. This results in frustration among physicians as well as parents and grown-up children. There is a need to develop country-specific treatment protocols using available resources and innovations to overcome this complex situation (knowing the intervention but not being able to implement it because of non-availability and cost). Trained persons may discuss and decide on appropriate treatment, identify research priorities, and counsel families.
Hence, there is a need to develop Pediatric Pulmonology as a sub-specialty. In the developed world, Pediatric Pulmonology is a separate specialty started by pediatricians interested in Pulmonology. In North America, the increasing need for Pediatric Pulmonology as a distinct discipline was first recognized by Edwin Kendig in 1973. In early 1978, the Section on Chest Diseases of the American Academy of Pediatrics first published guidelines for training in Pediatric Pulmonology, emphasizing research and statistics in training. A separate board of Pediatric Pulmonology was established in America in 1985. For many years, the organization of Pediatric Pulmonology in Europe, the UK, and the Commonwealth countries was much less formal than in the USA. The Pediatric Pulmonology sub-specialty began to emerge in the UK during the early 1970s, whereas it began organizing in Europe in 1980. In India, a separate society was formed under the aegis of the Indian Academy of Pediatrics (IAP), initially named the IAP Respiratory Chapter in 1987 and currently registered as the IAP National Respiratory Chapter (NRC). The IAP NRC has branches at the state level. The first conference of the IAP Respiratory Chapter was organized in 1989. The primary functions of the IAP NRC include the organization of conferences, providing updates, and conducting courses. The IAP NRC took the initiative to develop India-specific guidelines for treating common respiratory problems, including asthma, acute respiratory infections, and tuberculosis. These module-based training programs have helped to improve the care of children with common respiratory problems. The formal post-MD training program, Doctorate in Medicine (DM), started in India in 2016 at the All India Institute of Medical Sciences (AIIMS), New Delhi. Over the past six years, four more centers (Post Graduate Institute of Medical Education and Research, Chandigarh; AIIMS, Jodhpur; AIIMS, Rishikesh; and AIIMS, Bhubaneswar) have started the program. Three centers (two in Bengaluru and one in Mumbai) started the IAP NRC Fellowship in Pediatric Pulmonology last year. Given the population of India, a large trained workforce is needed for pediatric respiratory illnesses, and other institutions need to start training programs. Starting a training program in government institutions involves various steps, so it takes a long time. The IAP NRC has taken the initiative to start a training program at multiple institutions, and many centers across India have shown interest in starting Fellowship programs in Pediatric Pulmonology. This training program aims to prepare human resources trained in pediatric respiratory diseases. Though the fellowship program started by the IAP NRC will create a pool of trained workforce, the training needs to be recognized by governmental agencies, including the National Medical Council; until then, there may not be positions for specialists. As a step toward getting recognition for the super specialty, the IAP has initiated the Indian College of Pediatrics, which is responsible for uniform training and quality improvement of super specialty training. There are ongoing training programs in the Americas and Europe that may be examined when developing training programs in India. Because of the different disease spectrum, available resources, and priorities, protocols and guidelines must be modified to suit low-resource countries like India.
There is a need to develop treatment protocols for uniformity of care for common illnesses. Nurses, physiotherapists, dieticians, and other allied professionals must be trained in chronic respiratory problems in children.

Clinical Training

The training program for India can be divided into clinical care, basic investigations, advanced investigations, and therapeutic interventions. Developing special care for respiratory illnesses includes providing care for acute problems and ensuring regular follow-up for supportive care. This can be achieved by developing the team (doctor, nurse, physiotherapist, dietician, social worker, etc.) and identifying the day, time, and place for OPD and inpatient services. A proforma for recording details of children with chronic respiratory illnesses should be prepared, and there should be a program for regular follow-up. At large, trainees are required to gain experience in asthma, bronchopulmonary dysplasia, cystic fibrosis, pulmonary infections, respiratory disorders in systemic diseases and immunocompromised hosts, neuromuscular diseases, disorders of ventilatory control, interstitial lung diseases, congenital malformations of the respiratory system, etc. It is also crucial for trainees to gain experience in the use of supplemental oxygen, home mechanical ventilation, non-invasive mask ventilation, and other technologies. An essential component of Pediatric Pulmonology training is understanding respiratory physiology and the use of basic investigations to diagnose and manage common respiratory illnesses. Therefore, equipment for basic investigations should be procured, including spirometry, a Peak Expiratory Flow Rate (PEFR) meter, basic radiology (X-ray and CT scan), and microbiology (including investigations for tuberculosis). The basic diagnostic bronchoscopy service must be separate for children. However, facilities for advanced investigations may be either at the individual site or shared among different sub-specialties, including endobronchial ultrasound, interventional bronchology services, a sleep lab, sweat testing, and molecular diagnostics. Simulation-based training may help build the trainee's confidence in procedures. Besides knowledge of basic and advanced investigations, the trainee should know the skills of chest physiotherapy. Trainees are also expected to become experts in procedures essential for Pediatric Pulmonology, especially flexible bronchoscopy. The training should provide extensive hands-on exposure to bronchoscopy and to the performance and interpretation of routine pulmonary function tests, sweat tests, sleep studies, etc. Additional clinical training opportunities at overseas centers or other centers in India, such as training in interventional bronchoscopy, specialized cystic fibrosis care, or a pediatric sleep lab, should be offered to trainees during the final year of training or after its completion.

Academic Training

Academic training should aim to produce visionary figureheads in Pediatric Pulmonology. The trainees should also be involved in basic and clinical research. They must understand biostatistics, research design, and the ethics of research. Preparing a teaching schedule is one of the mainstays of the training program. Each site should develop teaching sessions at least twice a week, including case discussions, journal clubs, seminars, etc.
A combined program of some institutions using teleconferencing may add ancillary learning and enhance interaction. The program should involve periodic assessments of students enrolled for training.

Interdisciplinary Training

Pediatric Pulmonology depends heavily on radiological imaging, microbiology, pathology, nuclear medicine, etc. Therefore, specialists must interact with radiologists. Necessary radiological investigations include CT and MRI imaging of the chest, USG-guided sampling, and studies of swallowing. Interpretation of imaging findings may play an essential role in diagnosing and monitoring respiratory problems. Respiratory infections are one of the most common morbidities; therefore, close interaction with the microbiology department is essential. This will help in the identification of etiological agents by smear, culture, molecular testing, and serology. Pathology plays an essential role in the precise diagnosis of lung diseases. Bronchoalveolar lavage, considered a liquid biopsy of the lungs, needs specialized training for pathologists and appropriate interpretation by clinicians. Specialized training for interpreting findings on small tissue samples is of paramount importance. Equally important is the interaction between pathologists and clinicians, which will undoubtedly enhance diagnostic accuracy in challenging cases. Children with neuromuscular problems and developmental delay have significant respiratory morbidities. These children may need pulmonary rehabilitation with other supportive care, and pulmonologists should support these patients. Another important evolving specialty is sleep medicine; pulmonologists therefore need to interact regularly with neurologists and adult pulmonologists. Children with immunocompromised status are at risk for opportunistic infections, and Pediatric Pulmonologists play a vital role in diagnosing and managing such patients. Airway malformations, including tumors and malacias, may need surgical interventions and close interaction with Pediatric Surgeons and Otolaryngologists.
From the above description, it is clear that there is a need to develop Pediatric Pulmonology services and research in India. At present, there are few qualified specialists; most are self-styled specialists, as happened 5–6 decades ago in Europe. We need to develop trained Pediatric Pulmonologists to foster the field's growth. The institutions running the DM program have a few seats reserved for pediatricians working in government institutions. Pediatricians interested in Pediatric Pulmonology and working in government institutions should undergo long-term training and develop Pediatric Pulmonology services and training programs at their respective centers. Pediatricians working in the private sector and interested in Pediatric Pulmonology may have training opportunities through the IAP NRC fellowship program in Pediatric Pulmonology. The faculty at institutions where training is already underway should play a mentorship role to guide freshly trained Pediatric Pulmonologists in their career development. They should also help to develop Pediatric Pulmonology services at other centers. Faculty mentors must provide educational and research opportunities to budding Pediatric Pulmonologists. Initial concerns that Pediatric Pulmonology would be overshadowed by other specialties, such as allergy or infectious diseases, are no longer considered a challenge. The specialized skills acquired in Pediatric Pulmonology training cannot be acquired by other specialists. Moreover, trained Pediatric Pulmonologists will improve the care of children with infectious diseases, immunocompromise, neuromuscular disorders, etc. Another issue is the need for a significant investment in developing Pediatric Pulmonology services. This can be overcome by developing basic services first and sharing some services. The scope of Pediatric Pulmonology will increase with improved survival of preterm newborns, longer survival of children with chronic respiratory and neuromuscular problems, and an ever-increasing number of immunocompromised hosts. Pandemics such as COVID-19 and other respiratory viruses will further increase the demand for trained Pediatric Pulmonologists all over the globe, as these disorders may have long-term effects on lung health. Pediatric Pulmonology is a vibrant field since it combines acute and chronic patient care with continuity of care and long-term relationships with families. Because there is a need for trained Pediatric Pulmonologists, we need to increase the number of facilities providing good training to meet the requirement.
Multiomic neuropathology improves diagnostic accuracy in pediatric neuro-oncology
cdb257dd-32ed-4406-a34f-72fb2d8f2b0c
10115638
Pathology[mh]
Children and adolescents can be diagnosed with a broad spectrum of central nervous system (CNS) tumors with divergent clinical behavior. The recently updated World Health Organization (WHO) classification of CNS tumors , recognizes a plethora of variants that can be difficult to distinguish. Some are exceedingly rare, such that a neuropathologist would see only very few cases over the course of their career. To improve diagnostic accuracy in neuro-oncology, we developed a neuro-oncology-specific next-generation sequencing (NGS) gene panel and introduced a DNA methylation-based classification system for CNS tumors . Since 2016, the accompanying online research tool for CNS tumor classification from DNA methylation data has seen more than 90,000 sample uploads. Although the benefit of implementing this tool in specialized centers has been reported—especially for difficult-to-diagnose tumors – —its utility in a routine diagnostic setting still has to be evaluated. We launched the Molecular Neuropathology 2.0 (MNP 2.0) study as part of the German pediatric neuro-oncology ‘Treatment Network HIT’, aiming to integrate DNA methylation analysis and gene panel sequencing with blinded central neuropathological assessment for a population-based cohort of pediatric patients with CNS tumors at the time of primary diagnosis. Patient recruitment and sample processing Over a 4-year period (April 2015 to March 2019), 1,204 patients with available formalin-fixed, paraffin-embedded (FFPE) tumor tissue were enrolled, excluding 163 patients who did not fulfill the inclusion criteria (117 recurrences, 23 retrospective registrations, 12 metastases, 11 adults) (Fig. ). Patients were enrolled from 65 centers in Germany, Australia/New Zealand (starting June 2017) and Switzerland (starting July 2017) in a population-based manner (Fig. and Supplementary Figs. and ). In 59 tumors, received tissue was either insufficient (31, 2.6%) or not suitable (28, 2.4%) for DNA methylation analysis and/or NGS (4.0% and 1.4%, respectively) (Fig. ). Median time from arrival of FFPE sections at the molecular testing laboratory to first molecular report was 21 days (Supplementary Fig. ). Timelines from tumor surgery to successful patient registration were shorter in centers with higher recruitment rates (Supplementary Fig. ). CNS tumor classification WHO-based CNS tumor types by neuropathological assessment The distribution of tumor types by reference neuropathological evaluation according to the WHO classification of CNS tumors, and the corresponding clinical patient data, were considered representative of a population-based cohort of pediatric patients with CNS tumors undergoing tumor biopsy or resection (Fig. , Extended Data Figs. and , Supplementary Fig. and Supplementary Table ). Comparison with epidemiological data from the German Childhood Cancer Registry showed an annual recruitment of up to 64% of all patients newly diagnosed with CNS tumors (Supplementary Fig. ). Neurofibromatosis type 1-associated or diffuse midline gliomas may have been underrepresented, as they are not consistently biopsied. No neoplastic tissue was detected in 21 samples (1.7%). In the remaining 1,182 tumors, a confident diagnosis was assigned in 1,028 cases (87.0%), whereas 77 were compatible with and 22 suspicious of a certain tumor type (6.5% and 1.9%, respectively). A descriptive diagnosis was established for 55 tumors (4.7%), including 33 (2.7%) that could not be assigned to any tumor category. 
The most common diagnostic categories were low-grade glial/glioneuronal (LGG) tumors (37.7%), medulloblastomas (MBs, 16.0%), high-grade gliomas (HGGs, 15.6%), ependymal (EPN) tumors (10.6%) and other embryonal or pineal (EMB/PIN) tumors (6.2%) (Supplementary Fig. ). Various other less frequent tumor types made up a total of 9.5% of the cohort. Patient age and sex were distributed as expected (Extended Data Fig. ). DNA methylation-based CNS tumor classification Using, in each case, the latest applicable version at the time of diagnosis (version 9.0–version 11b4; (ref. )) of a DNA methylation-based random forest (RF) class prediction algorithm, tumors were assigned to 65 (from a possible 91) different DNA methylation classes (Fig. , Extended Data Fig. and Supplementary Table ). Besides LGG (28.5%), MB (16.3%) formed the second largest category, followed by HGG (10.1%), EPN (10.1%) and other EMB/PIN tumors (5.5%), whereas the remaining 6.2% were distributed among other less frequent classes (Fig. and Supplementary Fig. ). A substantial fraction of tumors (21.1%) could not be confidently assigned to a DNA methylation class. The DNA methylation profiles of 25 (2.2%) samples assigned to a control class of non-neoplastic tissue were indicative of low tumor cell content in the analyzed tissue. DNA methylation classes were associated with patterns of patient age, sex and tumor location (Extended Data Figs. and ) as well as DNA copy number alterations (Extended Data Fig. , Supplementary Figs. and and Supplementary Table ). As examples, the DNA methylation class ‘infantile hemispheric glioma’ exclusively comprised hemispheric tumors in infants with frequent focal amplifications on cytoband 2p23.2, indicative of fusions involving the ALK gene , ; the DNA methylation class ‘PXA’ comprised hemispheric tumors across ages consistently harboring homozygous deletions of the CDKN2A/B locus (9p21.3); and the DNA methylation class ‘ETMR’ comprised predominantly occipital or posterior fossa tumors in young children with a pathognomonic amplification at 19q13.42 (ref. ). Additional significant copy number alterations included focal deletion involving the MYB locus in ‘LGG, MYB/MYBL1’ (6q24.1), amplification of MYCN in ‘HGG, MYCN’ (2p24.3) and amplification involving EGFR in ‘HGG, RTK’ (7p11.2) (Extended Data Fig. ). Comparison of WHO-based and DNA methylation-based classification Directly juxtaposing WHO-based tumor type and DNA methylation class for individual tumors (Fig. , Extended Data Fig. , Supplementary Fig. and Supplementary Table ) as well as pairwise comparison indicated strong correlation between combinations known to correspond or overlap across categories (Extended Data Fig. , Supplementary Fig. and Supplementary Table ) but also a high fraction of tumors unclassifiable by RF-based prediction among WHO-defined HGG (33.5%), LGG (20%) and other rare tumors (37.6%) (Fig. , Extended Data Fig. and Supplementary Table ). Visualization of DNA methylation patterns by t -distributed stochastic neighbor embedding ( t -SNE) (Fig. and Supplementary Table ), and subsequent class assignment by visual inspection (Supplementary Fig. ), allowed classification of another 229 samples, with profiles of 34 tumors (3.0%) suggestive of novel molecular classes not represented in the original reference cohort , such as HGG of the posterior fossa and neuroepithelial tumors with PATZ1 fusions or PLAGL1 fusions (Fig. and Supplementary Fig. ). 
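The workflow just described (training on a reference cohort, calibrating raw random-forest scores, and withholding a class label when no calibrated score reaches the cutoff) can be sketched as follows. This is a minimal illustration on synthetic beta values: scikit-learn's CalibratedClassifierCV stands in for the study's own calibration model, and the class labels and cohort sizes are placeholders, not the actual reference cohort.

```python
# Minimal sketch of threshold-gated class prediction from methylation data.
# Illustrative only: synthetic beta values stand in for 450K/EPIC arrays, and
# sklearn's probability calibration stands in for the study's custom model.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.calibration import CalibratedClassifierCV

rng = np.random.default_rng(0)
n_ref, n_cpg = 300, 1000                      # placeholder reference cohort size
X_ref = rng.beta(2, 2, size=(n_ref, n_cpg))   # synthetic beta values in [0, 1]
y_ref = rng.choice(["LGG, PA/PF", "MB, SHH", "EPN, PFA"], size=n_ref)

# Random forest trained on the reference cohort, then probability-calibrated.
clf = CalibratedClassifierCV(
    RandomForestClassifier(n_estimators=200, random_state=0), cv=5
)
clf.fit(X_ref, y_ref)

def predict_class(sample, threshold=0.9):
    """Return (class, score); 'unclassifiable' if no calibrated score >= 0.9."""
    scores = clf.predict_proba(sample.reshape(1, -1))[0]
    best = int(np.argmax(scores))
    if scores[best] >= threshold:
        return clf.classes_[best], float(scores[best])
    return "unclassifiable", float(scores[best])

# On random input, no class reaches the cutoff, so the sample is withheld.
print(predict_class(rng.beta(2, 2, size=n_cpg)))
```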
In most tumors (67.8%), neuropathological WHO-based tumor typing and DNA methylation class prediction were considered concordant, with an additional refinement by DNA methylation class in 49.7% of all cases (Fig. and Supplementary Table ). Assignments to a discrepant tumor class (within a category, 2.0%) or to a discrepant tumor category (3.0%) were considered clinically relevant (that is, changing the recommended treatment protocol) in 5% of all cases. This included 15 of 43 samples with inconclusive histology or no detectable tumor tissue, of which most (11/15) were classified as lower-grade glial or glioneuronal tumors by DNA methylation analysis (Extended Data Fig. ). There was an enrichment of clinically relevant discrepancies in histologically classified HGG (24/173, 13.9%) compared to other WHO-defined categories ( P < 0.001). Among those, the most common combinations (21/24) included anaplastic (pilocytic) astrocytomas or glioblastomas (WHO grade 3–4) assigned to DNA methylation classes of lower-grade gliomas, including PA, GG or MYB / MYBL -altered tumors (WHO grade 1–2) (Fig. and Supplementary Fig. ). Clinically relevant discrepancies were rarer in LGG (2.2%), MB (1.1%), EPN (1.6%) and other tumor types (0.0%). Discrepant tumor types and classes currently not considered clinically relevant were assigned in 4.6% of samples, affecting mostly lower-grade glial and glioneuronal tumors (29/52) (Extended Data Fig. and Supplementary Fig. ). Samples could not be assigned to any tumor category or did not contain detectable tumor tissue by both neuropathological assessment and DNA methylation analysis in 1.4% and 0.7%, respectively (Extended Data Fig. and Fig. ). Integration of NGS Detection of relevant genetic alterations Using a customized enrichment/hybrid-capture-based NGS gene panel comprising 130 genes of interest (Supplementary Table ) , complemented by RNA sequencing in selected cases , we detected genetic alterations in 625 of 1,034 tumors (60.4%) (Fig. , Extended Data Fig. , Supplementary Fig. and Supplementary Table ). For the most commonly affected gene BRAF (272/1,034), fusion events were observed in 158 of 237 DNA methylation-defined infratentorial (124/160), midline (28/51) or cortical LGG (6/26), whereas V600E mutations were further observed in GG (7/13) and PXA (17/23). Other genes mutated in ≥2% of all tumors were TP53 (5.1%), FGFR1 (4.4%), NF1 (4.2%), H3F3A (3.7%) and CTNNB1 (2.2%). Recurrent alterations occurring in ≥75% of tumors (with ≥2 sequenced) in specific DNA methylation classes included histone 3 K27M in DMG, K27 (27/27), H3F3A G34R/V in HGG, G34 (11/11), IDH1 in gliomas, IDH-mutant (7/7), BCOR ITD in CNS, BCOR (6/6), SMARCB1 in ATRT, TYR (6/8), DICER1 in primary intracranial DICER1-mutant sarcomas (2/2), NF2 in spinal EPN (2/2) and TSC1 in SEGA (2/2). A fraction of tumors unclassifiable or assigned to a control class by RF-based DNA methylation class prediction harbored diagnostically indicative alterations affecting BRAF (V600E, 25/214; KIAA1549 : BRAF , 22/214), IDH1 (8/214) or H3F3A (K27M, 2/214) as well as less clearly pathognomonic mutations. Overall, alterations considered of diagnostic relevance were detected in 41.9% of tumors ( BRAF , 26.5%; H3F3A , 3.9%; ATRX , 2.1%; CTNNB1 , 1.8%; IDH1 , 1.6%; PTCH1 , 1.5%; ZFTA , 1.1%; SMARCB1 , 1.1%; and others, <1%). 
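Conceptually, the relevance annotations reported here and in the next paragraph reduce to a lookup from detected alterations to diagnostic and therapeutic tiers. The sketch below is purely illustrative: the gene and variant sets are abbreviated stand-ins, not the study's actual annotation rules.

```python
# Toy tiering of detected alterations; gene/variant lists are abbreviated
# illustrations drawn from the frequencies above, not the study's tables.
DIAGNOSTIC = {"BRAF", "H3F3A", "ATRX", "CTNNB1", "IDH1", "PTCH1", "ZFTA", "SMARCB1"}
TARGETABLE = {"BRAF:V600E", "FGFR1", "FGFR3", "ALK", "NTRK2", "NTRK3", "MET", "RET"}

def annotate(gene, variant=None):
    """Return the set of relevance tags for one detected alteration."""
    tags = set()
    if gene in DIAGNOSTIC:
        tags.add("diagnostically relevant")
    key = "{}:{}".format(gene, variant) if variant else gene
    if key in TARGETABLE or gene in TARGETABLE:
        tags.add("potentially targetable")
    return tags

print(annotate("BRAF", "V600E"))   # both diagnostically relevant and targetable
print(annotate("H3F3A", "K27M"))   # diagnostically relevant only
```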
Alterations were considered to have therapeutic implications in 15.2% of tumors, with directly targetable alterations in BRAF (V600E, 7.4%), FGFR1 / 3 (4.0%), ALK (0.8%), NTRK2 / 3 (0.4%), MET (0.1%) and RET (0.1%) (Fig. ). Tumors considered hypermutated (with ≥10 somatic mutations per megabase (Mb)) (11/1,034, 1.1%) were among DNA methylation classes MB, SHH (4/37), HGG, midline (2/6), IDH (1/2) and unclassifiable (4/197) tumors (Extended Data Fig. ), with constitutional pathogenic alterations in mismatch repair (MMR)-associated genes detected in three patients with hypermutated tumors (see below). A mutational burden >5 per Mb was observed in tumors from seven of 11 patients with constitutional pathogenic alterations in MMR-associated genes. Prevalence of cancer predisposition syndromes Gene panel sequencing of leukocyte-derived DNA enabled screening for constitutional variants considered (likely) pathogenic (LPV/PV) in 1,034 patients. Cancer predisposing variants were detected in 101 of 1,034 individuals (9.8%) (Fig. ) affecting 25 genes (Fig. , Extended Data Fig. , Supplementary Fig. and Supplementary Table ). The most common cancer predisposition syndromes (CPSs) were neurofibromatosis type 1 (caused by constitutional LPV/PV in NF1 ; 1.5%), Li–Fraumeni syndrome ( TP53 ; 1.2%), constitutional MMR deficiency or Lynch syndrome ( MLH1 , MSH2 and MSH6 , 1.1%; PMS2 was not included in the gene panel at the time of analysis), ataxia–telangiectasia and ATM heterozygous carriers ( ATM , 0.9%), neurofibromatosis type 2 ( NF2 , 0.8%), DICER1 syndrome ( DICER1 , 0.6%) and rhabdoid tumor predisposition syndrome 1 ( SMARCB1 , 0.4%). LPV/PV in other genes occurred at lower frequencies (<0.5%). Known associations included NF1 in LGG and SMARCB1 in atypical teratoid/rhabdoid tumor (AT/RT) (Supplementary Fig. ). Additional findings included constitutional TP53 variants enriched in MYCN-activated HGG; MLH1 , MSH2 and MSH6 in RTK-activated and midline HGG classes (Extended Data Fig. and Supplementary Fig. ); and notable findings including a previously unidentified PTPN11 variant in a patient with an H3 K27-altered DMG. We also observed a substantial proportion of patients with pathogenic constitutional alterations whose tumors were not readily classifiable by RF-based DNA methylation class prediction (31/101, 30.7%), of which most displayed high-grade (13/31, 41.9%) or low-grade (4/31, 12.9%) glioma histology, in line with t -SNE-based DNA methylation class assignment (15/31, 48.4%), including three IDH1 -mutant astrocytomas. Indications for cancer predisposition were documented at national study headquarters in only 37 of 101 (36.6%) patients in whom we detected constitutional pathogenic variants, indicating a high proportion of previously unknown CPS among affected individuals and their families. Due to the lack of routine copy number assessment in constitutional patient DNA, constitutional copy number variations of SMARCB1 were not reported in two patients with AT/RT and a known rhabdoid tumor predisposition syndrome where data were suggestive of a heterozygous deletion. Interdisciplinary tumor board discussions Cases with discrepant neuropathological WHO-based and DNA methylation-based classification were discussed in a weekly interdisciplinary tumor board (Extended Data Fig. and Supplementary Table ). Focusing on discrepancies after DNA methylation class assignment by t -SNE inspection, 70.1% of discussed discrepancies were considered clinically relevant. 
Additional gene panel sequencing data and reference neuroradiological evaluation were available in 93.5% and 76.6% of cases, respectively, and considered compatible with both WHO-based (63% and 100%) and DNA methylation-based (100% and 85%) classification in most cases. Variants detected by NGS considered inconsistent with WHO tumor type predominantly occurred as BRAF or MYBL1 alterations in HGG defined by WHO criteria (8/14). Additional investigations (such as targeted sequencing or FISH) were initiated in 15.6%. Constellations enabled a consensus in 27.3% of discussed cases, in which an integrated diagnosis was based on DNA methylation class (42.9%) or WHO tumor type (9.5%); the WHO tumor type was within the histopathological spectrum of the DNA methylation class (38.1%); or the DNA methylation class was considered as a differential diagnosis by reference neuropathological evaluation (9.5%). Discrepancies remained irresolvable in most discussed cases (71.4%). Review of WHO-defined anaplastic astrocytomas and glioblastomas displaying DNA methylation profiles of lower-grade gliomas (frequently occurring in infants and young children) indicated increased mitotic activity, in particular with aberrant (atypical) mitotic figures, as the main reason for assigning a high grade, with thrombosed vessels or palisading necrosis as criteria for anaplasia in individual cases. One sample swap (<0.1%) occurred during molecular analysis and was detected upon discussion. Risk stratification for patients with HGG Given the recurring constellation of HGG according to WHO criteria with DNA methylation profiles of lower-grade gliomas (Fig. ), we stratified patients with WHO-defined HGG into molecular risk groups. Data on survival and treatment modalities were available for 952 enrolled patients (79.1%; Supplementary Table ), including 162 patients with WHO-defined HGG. Median follow-up was 22 months (range 0–192 months) after diagnosis. Tumors from high-risk DNA methylation classes (DMG, K27M; HGG, G34; HGG, midline; HGG, MYCN; HGG, RTK) were associated with poor overall survival (OS), whereas HGG from intermediate-risk (A, IDH; HGG, IDH; aPA; PXA; IHG; CNS NB, FOXR2) and low-risk (PA, PF; PA, midline; PA/GG, hemispheric; GG; LGG, MYB/MYBL1; DLGNT) DNA methylation classes were associated with significantly longer OS ( P < 0.001, log-rank test) (Fig. ). Patients in the low-risk group included four children in complete remission (two of them 34 months and 41 months after tumor resection and following a watch-and-wait strategy) and only five patients who had received both radiotherapy and chemotherapy (Supplementary Table ). Similar results in this group were obtained when using DNA methylation class assignment by t -SNE analysis (Supplementary Fig. ) or defining the HGG cohort for analysis by DNA methylation classes (Supplementary Fig. ). There was also a significant, yet less discriminatory, difference when comparing tumors assigned WHO grade 3 with WHO grade 4 ( P = 0.0051) (Fig. ), and WHO grades 1–2 (PXA, WHO grade 2 in 9/13 cases) indicated improved OS among DNA methylation-defined HGG (Supplementary Fig. ). Additional survival analyses by WHO-based tumor type and DNA methylation class in LGG (Supplementary Fig. ), MB (Supplementary Fig. ), EPN (Supplementary Fig. ) and EMB/PIN (Supplementary Fig. ) indicated differences largely known from previous retrospective studies. 
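The group-wise survival comparison reported above corresponds to standard Kaplan–Meier estimation with a log-rank test across molecular risk groups. A minimal sketch using the lifelines package and fabricated follow-up data (the study's actual statistical code is not reproduced here):

```python
# Kaplan-Meier curves and a log-rank test across molecular risk groups.
# Follow-up times and events below are fabricated for illustration only.
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import multivariate_logrank_test

df = pd.DataFrame({
    "months": [4, 9, 12, 15, 30, 34, 41, 60, 72, 96],
    "event":  [1, 1, 1, 1, 0, 0, 0, 0, 1, 0],   # 1 = death observed, 0 = censored
    "risk":   ["high", "high", "high", "intermediate", "intermediate",
               "low", "low", "low", "intermediate", "low"],
})

kmf = KaplanMeierFitter()
for group, sub in df.groupby("risk"):
    kmf.fit(sub["months"], sub["event"], label=group)
    print(group, "median OS (months):", kmf.median_survival_time_)

# Single log-rank test across all three risk groups, as in the reported analysis.
res = multivariate_logrank_test(df["months"], df["risk"], df["event"])
print("log-rank p-value:", res.p_value)
```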
Advancement of automated DNA methylation class prediction To evaluate the advancement of RF-based DNA methylation class prediction, we applied version 11b4 (publicly released in October 2017) and version 12.5 (released in January 2022) of the algorithm to the DNA methylation dataset of 1,124 tumors (Extended Data Fig. and Supplementary Table ). By increasing the total class number and introducing a hierarchy of DNA methylation subclasses (184), classes (147), class families (81) and superfamilies (66), the total number of tumors that could not readily be assigned to any tumor category decreased from 29% in version 11b4 to 15% in version 12.5. At the same time, 32 tumors (2.9%) that were assigned to a distinct class in version 11b4 did not reach the threshold score of 0.9 for any class or family in version 12.5. Another 135 tumors (12.0%, 126 of which were deemed classifiable by t-SNE analysis) remained unclassifiable in both versions of the RF-based algorithm. In 58 of 167 samples unclassifiable by version 12.5, genetic alterations indicative of a DNA methylation class were detected by NGS in BRAF (42/167), IDH1 (5/167), histone 3 genes (4/167), CTNNB1 (3/167), ALK (2/167), SMARCB1 (1/167) and YAP1 (1/167).
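The hierarchy introduced in version 12.5 effectively lets a sample fall back from the most specific level to coarser ones until a calibrated score reaches the cutoff. A minimal sketch of that fallback logic, with invented scores and the subclass/class/class family/superfamily levels named above:

```python
# Fall back from the most specific hierarchy level to the coarsest until a
# calibrated score reaches the 0.9 cutoff; scores and labels are invented.
LEVELS = ["subclass", "class", "class family", "superfamily"]

def assign(scores_by_level, threshold=0.9):
    """scores_by_level: {level: {label: calibrated score}}; most specific wins."""
    for level in LEVELS:
        label, score = max(scores_by_level[level].items(), key=lambda kv: kv[1])
        if score >= threshold:
            return level, label, score
    best = max(s for lvl in scores_by_level.values() for s in lvl.values())
    return None, "unclassifiable", best

example = {
    "subclass":     {"MB, SHH, infant": 0.62},
    "class":        {"MB, SHH": 0.84},
    "class family": {"medulloblastoma": 0.95},
    "superfamily":  {"embryonal tumor": 0.99},
}
# Neither subclass nor class reaches 0.9, but the class family does.
print(assign(example))   # ('class family', 'medulloblastoma', 0.95)
```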
In contrast to the unbiased approach presented here, previous studies applying similar techniques were largely performed in retrospect, aiming specifically to subgroup archived cohorts defined by WHO tumor types or to characterize novel CNS tumor groups based on distinct DNA methylation patterns, and smaller-scale prospective studies focused explicitly on tumors challenging to classify by conventional neuropathology and/or did not follow up on patient outcome. Our data support the incorporation of DNA methylation-based classification, as included in the 5th edition of the WHO classification of CNS tumors, as a desirable diagnostic criterion for many tumor types and an essential criterion for some that are otherwise difficult to diagnose. Adding a DNA methylation (sub)class further refines the molecular layer of a coherent integrated diagnosis in most cases, which is becoming increasingly important in the era of molecularly informed patient stratification and subgroup-specific therapies. DNA methylation analysis has the potential to increase certainty in tumors with a suspected diagnosis and to establish a valid diagnosis in some samples where no neoplastic cells can be detected by neuropathological examination alone. On the other hand, contamination by non-neoplastic cells can be a limitation for reaching the diagnostic threshold for DNA methylation-based CNS tumor class prediction and underlines the importance of thorough neuropathological assessment. The enrichment of discrepant classifications in gliomas suggests that this group of pediatric patients may currently benefit most from integrating DNA methylation analysis in standard neuropathological practice. A substantial fraction of histologically defined HGGs present with DNA methylation profiles resembling those of lower-grade lesions. Our interdisciplinary tumor board discussions show that, especially in the absence of pathognomonic mutations or fusions, a diagnostic gold standard is usually missing, making consensus on an integrated diagnosis often difficult to reach. In the ongoing debate concerning the clinical behavior of these tumors, our follow-up data indicate improved outcome, similar to patients with histologically defined LGG.
Using prospectively assigned DNA methylation classes to stratify patients with HGG into molecular risk groups predicted prognosis more accurately than WHO grading and should be considered for clinical decision-making in such constellations. Some of these considerations are already incorporated in the current WHO classification, exemplified by the exclusion of anaplasia as an essential diagnostic criterion for MYB-altered or MYBL1-altered diffuse astrocytomas. Increased mitotic activity as the main reason for diagnosing HGGs in infants and young children whose tumors display DNA methylation patterns of lower-grade gliomas warrants future studies to better define cutoffs for tumor mitotic activity in this age group. The DNA methylation class comprising both WHO grades 2 and 3 of PXA (based on mitotic count, here provisionally categorized as HGG) was associated with an intermediate prognosis compared to both HGG and LGG within our follow-up period, rendering grading for this class difficult and making it necessary to revisit these data in the future. For tumors not readily classifiable by RF-based class prediction, subjecting DNA methylation data to advanced analyses such as t-SNE alongside suitable reference cohorts can be instrumental in determining tumor type. Tumors with class prediction scores slightly below the threshold of 0.9 are typically projected onto or in close proximity to reference tumors of a DNA methylation class and may still be reliably assigned to that class (Supplementary Fig. ). In contrast, tumors with overall low scores are often projected in between reference tumor classes. They may indicate the existence of yet unknown DNA methylation classes, especially when clustering together with other difficult-to-classify samples over time. Results from our study fed into a constantly growing database of more than 100,000 tumors that allows for identifying such clusters, exploring their associated molecular, pathological and clinical features, and iteratively introducing them as new reference DNA methylation (sub)classes into the RF-based class prediction algorithm, resulting in lower rates of unclassifiable tumors under its latest version. The requirement for careful visual inspection and (subjective) interpretation of the output generated by t-SNE analyses, however, remains a caveat when used for clinical decision-making. The associations between certain copy number alterations and DNA methylation classes in our current cohort confirm the benefit of integrating DNA copy number alterations derived from DNA methylation arrays into diagnostic considerations. At the time of primary diagnosis, DNA methylation-based CNS tumor classification and copy number profiling are ideally complemented by targeted NGS of a neuro-oncology-specific gene panel (or equivalent approaches) designed to detect diagnostically and/or therapeutically relevant alterations from tumor and constitutional DNA. The presence of a pathognomonic alteration (for example, in BRAF, histone 3 variants, IDH, ZFTA, BCOR, MN1 and others) corroborates a specific diagnosis in tumors with discrepant classification or inconclusive DNA methylation analysis. As molecularly informed treatment strategies become increasingly feasible as first-line options, identifying a tumor's mutational makeup, including directly targetable alterations, will be essential in guiding patients toward optimal treatment, as demonstrated by targeting BRAF V600E, FGFR, ALK and NTRK in (among others) pediatric gliomas.
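Returning to the projection step discussed a few paragraphs above: co-embedding a difficult-to-classify profile with a labeled reference cohort can be sketched with scikit-learn's t-SNE. Synthetic data only; real analyses typically restrict to the most variable CpGs of the combined cohort and, as noted, rely on careful visual review rather than an automated read-out.

```python
# Co-embed new samples with reference profiles via t-SNE; synthetic data only.
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(1)
X_ref = rng.beta(2, 2, size=(200, 500))   # reference beta values (synthetic)
X_new = rng.beta(2, 2, size=(5, 500))     # unclassifiable samples (synthetic)

X_all = np.vstack([X_ref, X_new])
# In practice, the embedding is computed on the top-variance CpGs only.
top = np.argsort(X_all.var(axis=0))[-100:]
emb = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(X_all[:, top])

ref_xy, new_xy = emb[:200], emb[200:]
# New samples landing on or near a reference cluster may still be assigned to
# that class on inspection; samples between clusters may hint at novel classes.
print(new_xy)
```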
In selected tumors, subsequent RNA sequencing from the same FFPE sample (as performed here) represents a feasible approach to detect fusions with immediate impact on patient care. Our results suggest that previous estimates of the prevalence of pathogenic constitutional variants underlying CNS tumor development (approximately 10% of patients) are broadly robust, and they indicate an enrichment of Li–Fraumeni syndrome, Lynch syndrome and constitutional MMR deficiency underlying H3 wild-type HGG. We therefore recommend genetic counseling and testing for pediatric patients with H3 wild-type HGG (in addition to existing guidelines). The clinical information retrieved through national study headquarters indicates that most patients were not known or suspected to carry pathogenic constitutional variants, similar to previous observations beyond patients with CNS tumors. This highlights the importance of diligent consultation with patients and their families, considering that more than 95% of study participants and parents elected to be informed about constitutional pathogenic variants detected by NGS. Detection of CPSs at primary diagnosis brings added advantages over precision oncology programs designed for relapsed or progressive malignancies by enabling appropriate adaptation of treatment approaches already in the frontline setting, for example, avoiding ionizing irradiation to reduce the risk of secondary tumors in patients with Li–Fraumeni syndrome or considering upfront immune checkpoint inhibition in children with constitutional DNA replication repair deficiency. The high fraction of tumors not readily classifiable by RF-based class prediction in patients with CPSs may be addressed by augmenting future reference cohorts with syndrome-associated tumors. Although we consider the median turnaround time of ~21 days for the centralized generation and interpretation of DNA methylation profiling and targeted NGS results acceptable, the regulatory and logistic framework of our study resulted in delays primarily affecting pre-analytical steps performed at the level of more than 60 local centers, posing a challenge especially for hospitals with lower patient recruitment. DNA methylation analysis has recently been decentralized and is now performed at more than five experienced neuropathology centers across Germany as part of their immediate reference evaluation, reducing total turnaround times between operation and reporting to less than 28 days. Although targeted tumor/blood NGS is currently performed in a similar timeframe, it cannot be initiated without informed consent from patients/parents indicating their desire to be informed (or not) about potentially relevant constitutional alterations. Together with the need to obtain and ship a patient blood sample, this may cause pre-analytical delay if not initiated early. Providing multi-omic data from as few as ten unstained sections of FFPE tissue, our study produced a high level of information at reasonable cost and with a very low dropout rate of ~5% of tumors. The benefits of our program and their impact on clinical patient management have prompted German national health insurance companies to cover the expenses for DNA methylation analysis and gene panel sequencing (from both tumor tissue and blood leukocytes) as part of the reference services of the nationwide multi-disciplinary ‘Treatment Network HIT’ for children and adolescents with newly diagnosed CNS tumors.
This sets an excellent example of direct and rapid translation of scientific innovation into routine clinical practice, substantially improves the standard of care in German pediatric neuro-oncology and may serve as a blueprint for other countries. Patient population, samples and clinical data collection Patients were recruited between April 2015 and March 2019 from childhood cancer centers cooperating within the German Society for Pediatric Oncology/Hematology (GPOH), the Swiss Paediatric Oncology Group (SPOG) and the Australian & New Zealand Children’s Haematology/Oncology Group (ANZCHOG) in accordance with ethics board approval from the ethics committee of the Medical Faculty Heidelberg as well as local institutes. Patient sex and/or gender were not considered in the design of the study. Inclusion criteria comprised age ≤21 years at primary diagnosis of a CNS neoplasm and availability of FFPE tumor tissue. FFPE tumor tissue for reference neuropathological assessment and patient blood samples were collected at the Brain Tumor Reference Center (HTRZ) of the German Society for Neuropathology and Neuroanatomy (DGNN; Department of Neuropathology, Bonn, Germany). FFPE tumor tissue and patient blood samples were forwarded to the Clinical Cooperation Unit Neuropathology at the German Cancer Research Center (DKFZ) for molecular analyses in accordance with research ethics board approval of the University of Heidelberg. Clinical patient data were collected at the DKFZ through national study headquarters of the German HIT network of the GPOH, SPOG and ANZCHOG, using standardized case report forms within the framework of clinical trials. Evidence or clinical signs of cancer predisposition were reported to national study headquarters by local participating centers as part of those case report forms but not reviewed. Additional clinical data from 84 patients with WHO-defined HGG were obtained by reviewing primary records provided by local treating centers. Patient sex was determined by physical examination by the treating physician responsible for patient registration. No disaggregated information on patient sex and gender was collected in this study. Informed consent The MNP 2.0 study complies with the principles of the Declaration of Helsinki in its current version. Informed consent from adult patients or parental consent was obtained for all patients before enrollment. As part of consenting, patients or parents decided if they wanted to be informed about constitutional variants indicative of a CPS (890/935, 95.2%) or not (45/935, 4.8%). In cases for which this decision was not forwarded upon registration (269/1,204, 22.3%) and sequencing data were available (157/1,034, 15.2%), information on constitutional variants was not reported to treating physicians, but pseudonymized data were included in further aggregated analyses presented here, as part of the approved protocol. Only constitutional variants considered pathogenic or likely pathogenic were reported (see below). CNS tumor nomenclature To conform with the 2021 WHO Classification of Tumors of the CNS, the term ‘type’ is used for specific diagnoses recognized by the WHO (termed ‘entity’ in previous editions; for example, ‘pilocytic astrocytoma’), and the term ‘subtype’ is used for subgroups thereof (termed ‘variant’ in previous editions) , . Multiple CNS tumor types are grouped into ‘categories’ (for example, ‘low-grade glioma’). 
To conform with the 2021 WHO Classification of Tumors of the CNS, WHO tumor grades are expressed in Arabic numerals even though grading was based on previous editions. For DNA methylation-based classification, the term 'class' refers to a distinct DNA methylation class (for example, 'pilocytic astrocytoma, posterior fossa'), and multiple classes are grouped into 'categories' corresponding to the category level of WHO-based tumor types. A hierarchy of 'subclasses', 'classes', 'class families' and 'superfamilies' was introduced in version 12.5 of the DNA methylation-based CNS tumor classification algorithm.

Color coding

Palettes of optimally distinct colors for CNS tumor categories and types/classes (as depicted in Extended Data Fig. ) were generated and refined using I want hue, developed by Mathieu Jacomy at the Sciences-Po Medialab ( http://medialab.github.io/iwanthue ), and the Graphical User Interface to Pick Colors in HCL Space by Claus O. Wilke, Reto Stauffer and Achim Zeileis ( http://hclwizard.org:3000/hclcolorpicker ). Corresponding DNA methylation classes and WHO-based diagnoses share the same color hue; overlapping DNA methylation classes and WHO-based diagnoses share shades of the same color hue (that is, different luminance). DNA methylation classes and WHO-based diagnoses from the same tumor category share a similar color hue spectrum.

Reference neuropathological evaluation

Central reference neuropathological evaluation was performed at the HTRZ (Department of Neuropathology, Bonn, Germany) according to the criteria defined by the applicable version of the WHO classification at the time of diagnosis, that is, the 4th (2015–2016) and revised 4th (2016–2019) editions. Diagnostic workup included conventional stains such as hematoxylin & eosin and silver impregnation, immunohistochemical analysis of differentiation, cell-lineage and proliferation markers as well as mutant proteins, and molecular pathological assays where appropriate for reaching a diagnosis conforming to the WHO classification. Tumor tissue from 21 of 707 patients (3.0%; recorded until 15 February 2018) was sufficient only for reference neuropathological assessment.

Molecular genetic analyses

Per protocol, ten unstained sections of FFPE tissue were requested for molecular genetic analyses. In 980 of 1,161 cases with detailed documentation (84%), a complete set of one HE-stained section, three sections at 4 µm and ten sections at 10 µm, or an FFPE tissue block, was available (Supplementary Table ). In 1,093 of 1,161 cases (94%), a minimum of ten sections at 10 µm was available. Testing also proceeded if fewer than ten sections at 10 µm (range: 2–9 sections; median: six sections) were available (59/1,161, 5%). In 11 of 1,161 cases (1%), DNA extracted at the stage of reference neuropathological evaluation was provided. Although we aimed to extract DNA from tissue areas with more than 70% tumor cell content, this was not a prerequisite for molecular genetic analyses. Nucleic acid extraction, DNA methylation and copy number analysis using the Infinium HumanMethylation450 (n = 187) and MethylationEPIC (n = 937) BeadChip arrays (Illumina) and tumor/constitutional DNA sequencing using a customized enrichment/hybrid-capture-based NGS gene panel were performed at the Department of Neuropathology, Heidelberg University Hospital, as previously described.
The NGS panel comprised the entire coding (all exons ±25 bp) and selected intronic and promoter regions of 130 genes (Supplementary Table ) and was designed to detect single-nucleotide variants (SNVs), small insertions/deletions (InDels), exonic re-arrangements and recurrent fusion events. For selected samples (n = 41), RNA sequencing was performed as previously described. Selection criteria for RNA sequencing included indications of fusion events inferred from targeted DNA sequencing or copy number data derived from DNA methylation arrays, assignment to DNA methylation classes known to be associated with fusion events (such as infantile hemispheric gliomas or MYB/MYBL1-altered LGGs) and unclassifiable tumors in which RNA sequencing was deemed potentially informative. NGS data were processed and analyzed as previously described. In addition to automated SNV and InDel calling, hotspots in BRAF, H3F3A, IDH1, BCOR and FGFR1 were manually screened for alterations using the Integrative Genomics Viewer (IGV). Tumor mutational burden was calculated as the total number of somatic SNVs and InDels per Mb of investigated genomic sequence (including synonymous SNVs and hotspot mutations). NGS data were not analyzed for copy number variations. Relevant constitutional alterations identified by NGS of leukocyte-derived DNA were technically validated by Sanger sequencing at the Institute of Human Genetics at Heidelberg University Hospital. Constitutional alterations in a predefined list of 47 known cancer predisposition genes included in the gene panel (Supplementary Table ) were assessed by human geneticists according to American College of Medical Genetics and Genomics (ACMG) criteria; only likely pathogenic (ACMG class 4) or pathogenic (ACMG class 5) variants were reported to the treating physician, and genetic counseling of the patient and the family was recommended. DNA methylation-based classification of tumor samples was performed using an RF classifying algorithm as published previously, using, in each case, the latest applicable CNS tumor classifier version at the time of diagnosis: version 9.0 (2015; n = 64), version 11.0 (2015–2016; n = 95), version 11b2 (2016–2017; n = 325), version 11b4 (2017–2019; n = 658) and version 12.5 (applied for aggregated re-analysis of all 1,124 tumors as depicted in Extended Data Fig. ) ( https://www.molecularneuropathology.org/mnp/ ). In version 9.0, a tumor was assigned to a DNA methylation class if its raw RF-based class prediction score was within the interquartile range of class prediction scores of the respective reference class. After the introduction of score calibration (version 11.0), a DNA methylation class was assigned to a sample when its calibrated class prediction score reached the threshold of ≥0.9 for a reference class. t-SNE analysis of DNA methylation data from the study cohort was performed alongside 89 published reference DNA methylation classes after removal of five duplicate samples from the reference cohort. DNA methylation data from 208 of 1,124 samples in this study cohort were part of the reference cohort used to train version 12.5 of the RF classifying algorithm.
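To make the class-assignment rule and the tumor mutational burden definition above concrete, the following is a minimal sketch in R (the language used for the study's computational analyses). It is not the actual classifier code; all function and variable names are hypothetical.

```r
# Assign a DNA methylation class from calibrated RF class prediction scores,
# using the >= 0.9 threshold applied from classifier version 11.0 onwards.
assign_methylation_class <- function(calibrated_scores, threshold = 0.9) {
  # calibrated_scores: named numeric vector, one score per reference class
  top <- which.max(calibrated_scores)
  if (calibrated_scores[top] >= threshold) {
    names(calibrated_scores)[top]
  } else {
    "unclassifiable"
  }
}

# Tumor mutational burden as defined in the text: somatic SNVs + InDels
# (including synonymous and hotspot mutations) per Mb of covered sequence.
tumor_mutational_burden <- function(n_snv, n_indel, covered_bp) {
  (n_snv + n_indel) / (covered_bp / 1e6)
}

assign_methylation_class(c(`PA, PF` = 0.95, GG = 0.03, Control = 0.02))
tumor_mutational_burden(n_snv = 12, n_indel = 3, covered_bp = 1.4e6)
```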
Discrepancies between WHO tumor type and DNA methylation class were considered clinically relevant if the diagnosis according to DNA methylation-based classification would have affected clinical patient management by changing the recommended treatment protocol and, therefore, (1) applying or omitting chemotherapy, (2) applying or omitting radiotherapy or (3) applying a different chemotherapy regimen. Recommendations for clinical patient management were based on phase 3 clinical trial protocols endorsed by the brain tumor 'Treatment Network HIT' of the GPOH between 2015 and 2019. Cancer cell fraction and tumor purity were predicted in silico from DNA methylation data by deconvolution of tumor composition (MethylCIBERSORT) and RF-based tumor purity prediction (RF_Purify), respectively (Supplementary Fig. ). There was a direct correlation between the two methods (Pearson correlation: 0.86), but neither of the two estimates of tumor cell content correlated with RF class prediction scores (using version 11b4 across the entire cohort). Lower tumor cell content was predominantly observed in LGG but did not necessarily seem to impair class prediction. Overlaying estimated tumor cell content on t-SNE analyses showed a clear tendency for tumors with lower tumor cell content to cluster together, in close proximity to the non-neoplastic reference DNA methylation class 'Control tissue, reactive tumor microenvironment'. Enhanced copy number variation analysis using Illumina DNA methylation arrays was performed using the R package conumee. The DNA copy number state of the genomic locus containing CDKN2A/B in BRAF V600E-positive and BRAF fusion-positive tumors was assessed by visual inspection of the resulting segmented copy number data using IGV. Summary copy number plots displaying rates of copy number gains and losses per DNA methylation class with a minimum sample size of five were generated using an in-house R script ( https://github.com/dstichel/CNsummaryplots ). GISTIC2.0 (version 2.0.23) analyses were performed to identify genes targeted by somatic copy number variations per DNA methylation class with a minimum sample size of five via the online platform GenePattern ( https://www.genepattern.org/ ) using default settings. All other computational analyses were performed using the programming language R.

Sample processing timelines

Total processing time from operation to reporting of molecular results ranged from 30 days to 290 days (median 77 days, excluding 79 patients registered >100 days after operation) (Supplementary Fig. ). Most time was consumed by patient registration (median 14 days; range 0–95 days) and data generation (median 18 days; range 5–59 days, with DNA methylation analyses completed before patient registration as part of local neuropathological diagnostics in seven cases). There was no considerable change in sample processing times throughout the recruitment period, but there was a trend toward earlier patient registration in centers with higher recruitment (Supplementary Fig. ).

Interdisciplinary tumor board discussion

Interdisciplinary tumor board discussions of cases with divergent reference neuropathological and molecular classification were held with a maximum of four cases per week. Discussions included participants from the DKFZ (Division of Pediatric Neurooncology), Heidelberg University Hospital (Department of Neuropathology), the Brain Tumor Reference Center (Bonn, Germany) and the Neuroradiology Reference Center (Würzburg/Augsburg, Germany).
Participation of local pediatric oncologists and neuropathologists and representatives of the GPOH/SPOG/ANZCHOG study centers was encouraged but optional. In cases with discrepant findings, results of DNA methylation analysis and gene panel sequencing were initially forwarded to treating physicians only after interdisciplinary tumor board discussion and included a summary of the tumor board consensus. In April 2016, the study protocol was amended, and molecular results were provided immediately with a caveat that the report was considered preliminary until tumor board discussion; a final report including the tumor board consensus was issued thereafter.

Risk stratification of patients with HGG

Patients with HGGs (WHO grade 3–4) diagnosed by reference neuropathological evaluation according to the criteria of the WHO classification of tumors of the CNS were assigned to molecular risk groups based on the following molecular criteria.

High risk: DNA methylation classes of HGG, G34; DMG, K27; HGG, MYCN; HGG, midline; HGG, RTK; in tumors unclassifiable by RF-based DNA methylation class prediction or without DNA methylation data: presence of an H3 K27M (n = 1) or H3 G34R/V (n = 1) mutation.

Intermediate risk: DNA methylation classes of A, IDH; HGG, IDH; O, IDH; aPA; PXA; IHG; CNS NB, FOXR2; in tumors unclassifiable by RF-based DNA methylation class prediction: presence of an IDH1/2 R132H mutation (n = 7); presence of a fusion involving ALK (n = 4), NTRK (n = 2), ROS1 (n = 1) or MET (n = 1); co-occurrence of BRAF V600E mutation and CDKN2A/B homozygous deletion (n = 2).

Low risk: DNA methylation classes of PA, PF; PA, midline; PA/GG, hemispheric; LGG, MYB/MYBL1; GG; DLGNT; in tumors with low tumor cell content unclassifiable by RF-based DNA methylation class prediction or without DNA methylation data: presence of a BRAF fusion (n = 16); presence of a BRAF V600E mutation in the absence of a CDKN2A/B deletion (n = 24).

Unknown risk: DNA methylation class of non-neoplastic control tissue or pattern unclassifiable in the absence of the abovementioned alterations.

Not assessed: DNA methylation analysis not performed, targeted gene panel sequencing not performed or performed without detection of the abovementioned alterations.

By t-SNE-based DNA methylation class assignment, molecular high-risk HGG additionally included HGG of the posterior fossa. Intermediate-risk HGG additionally included DGONC. Low-risk HGG additionally included LGG, not otherwise specified (NOS). Tumors with t-SNE-based assignment to novel DNA methylation classes with unknown clinical behavior, such as tumors with PATZ1 fusions or PLAGL1 fusions, were excluded.

Statistical analysis of molecular and clinical data

Correlation between classification into individual WHO-based tumor types and DNA methylation-based tumor classes was tested by calculating the phi coefficient between a sample × WHO type and a sample × DNA methylation class matrix. The distribution of discrepant constellations between WHO-based tumor type and DNA methylation class among tumor categories was tested using a Fisher's exact test. Kaplan–Meier analysis was performed to estimate the survival time of patients from different CNS tumor groups, and a log-rank test was performed to compare survival distributions between independent groups.
Surviving patients were censored at the date of last follow-up. Event-free survival was calculated from the date of diagnosis until event, defined as relapse after complete resection, clinical or radiological progression, start of non-surgical/adjuvant therapy or death of any cause. Patients without an event were censored at the date of last follow-up. Data visualization and statistical analyses were performed using the programming language R. Tumor location was visualized for DNA methylation classes with a minimum sample size of five by adapting an R package for anatomical visualization of spatiotemporal brain data.

Reporting summary

Further information on research design is available in the Reporting Summary linked to this article.
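As an illustration of the survival analysis described in the statistical analysis section above, a minimal sketch in R using the survival package is shown below. The data frame `cohort` and its columns (time, status, group) are hypothetical placeholders, not the study's actual data.

```r
library(survival)

# Kaplan-Meier estimates per tumor group
fit <- survfit(Surv(time, status) ~ group, data = cohort)

# Log-rank test across all groups
survdiff(Surv(time, status) ~ group, data = cohort)

# Pairwise log-rank tests with Benjamini-Hochberg correction
groups <- unique(cohort$group)
pairs  <- combn(groups, 2, simplify = FALSE)
p_raw  <- sapply(pairs, function(g) {
  sub <- subset(cohort, group %in% g)
  d   <- survdiff(Surv(time, status) ~ group, data = sub)
  pchisq(d$chisq, df = 1, lower.tail = FALSE)  # p-value for two groups
})
p.adjust(p_raw, method = "BH")
```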
CFTR function, pathology and pharmacology at single-molecule resolution
CFTR belongs to the ATP-binding cassette transporter family of proteins but uniquely functions as an ion channel. It consists of two transmembrane domains that form an ion permeation pathway, two cytosolic nucleotide-binding domains (NBDs) that bind and hydrolyse ATP, and a cytosolic regulatory (R) domain that includes several phosphorylation sites. Decades of electrophysiological, biochemical and structural studies (reviewed in refs. ) established that CFTR activity requires phosphorylation of the R domain by protein kinase A (PKA). Once phosphorylated, ATP binding drives pore opening. CFTR contains two functionally distinct ATP-binding sites. The 'consensus' site is catalytically competent, whereas the 'degenerate' site is not. ATP hydrolysis at the consensus site leads to pore closure. Pore opening in the absence of ATP and non-hydrolytic pore closure can occur, albeit very rarely. Cryogenic electron microscopy (cryo-EM) studies of CFTR have thus far revealed two globally distinct conformations. In the absence of phosphorylation and ATP, CFTR forms a pore-closed conformation in which the NBDs are separated by approximately 20 Å, and the R domain sterically precludes NBD dimerization (Fig. ). Phosphorylated and ATP-bound CFTR, structurally characterized using the hydrolysis-deficient E1371Q variant, exhibits a pre-hydrolytic conformation in which the NBDs form a closed dimer with two ATP molecules bound at their interface (Fig. ). Despite these advances, major gaps in our understanding of CFTR function and regulation remain. For example, although extant structures of CFTR indicate that large-scale conformational changes are required for channel opening, they fall short of defining the mechanistic relationship between NBD dimerization and channel gating. How ion permeation is coupled to ATP hydrolysis and NBD isomerization remains contested. One model proposes that in every gating cycle one round of ATP hydrolysis is coupled with one pore-opening event and one NBD-dimerization and NBD-separation event. Alternative models posit that the NBDs remain dimerized through several gating cycles, with only partial disengagement of the dimer interface at the consensus site. The CFTR pore has been suggested to be either strictly or probabilistically coupled to the nucleotide state of the consensus site. Attempts to differentiate these models have thus far been inconclusive. Moreover, the steps rate-limiting to CFTR activity in unaffected individuals and patients with cystic fibrosis, and thus most likely to be sensitive to pharmacological modulation, remain unclear. To address these open questions, we undertook an integrative approach combining ensemble measurements of ATPase activity, single-molecule fluorescence resonance energy transfer (smFRET) imaging, electrophysiology and kinetic simulations to examine the structure–function relationship in human CFTR. The information obtained reveals an allosteric gating mechanism in which ATP-dependent NBD dimerization is insufficient to enable pore opening. Although phosphorylated CFTR predominantly occupies an NBD-dimerized conformation at physiological ATP concentration, downstream conformational changes in CFTR governed by ATP turnover are required for chloride conductance. Disease-associated alterations and the pharmacological potentiators ivacaftor and GLPG1837 influence the efficiency of the coupling between NBD dimerization and ion permeation.
These findings identify an allosteric link between the catalytically competent ATP-binding site and the channel pore that functions as a critical rate-limiting conduit for physiological and pharmacological regulation of CFTR. To enable smFRET imaging of the protein's conformational state, we sought to develop a variant of human CFTR that could be labelled with maleimide-activated donor and acceptor fluorophores. After substituting 16 of the 18 native cysteines (Extended Data Fig. ), we further introduced cysteines into NBD1 (T388C) and NBD2 (S1435C). This variant, CFTR FRET, was labelled with maleimide-activated forms of self-healing donor (LD555) and acceptor (LD655) fluorophores to create an NBD-dimerization sensor (Fig. ). Labelling of the two introduced cysteines was >90% specific (Extended Data Fig. ). We next tested whether CFTR FRET retains the functional properties of wild-type CFTR. Macroscopic currents were measured in excised inside-out membrane patches using unlabelled wild-type CFTR and CFTR FRET, both fused to a carboxy-terminal GFP tag (Extended Data Fig. ). These data show that CFTR FRET conducted phosphorylation- and ATP-dependent currents and retained sensitivity to the potentiator GLPG1837 in a manner indistinguishable from that of wild-type CFTR (Extended Data Fig. ). The time courses for current activation on ATP application and current relaxation on ATP withdrawal were also indistinguishable between wild-type CFTR and CFTR FRET (Extended Data Fig. ). We further evaluated the effects of conjugating fluorophores to CFTR using purified protein. Digitonin-solubilized and fluorophore-labelled CFTR FRET (Extended Data Fig. ) hydrolysed ATP at a rate nearly identical to that of wild-type CFTR (Extended Data Fig. ). On reconstitution into synthetic planar lipid bilayers, fluorophore-labelled CFTR FRET and wild-type CFTR (without fluorophore labels) exhibited a similar current–voltage relationship, open probability and response to GLPG1837 (Extended Data Fig. ). The single-channel conductance of fluorophore-labelled CFTR FRET was slightly higher (Extended Data Fig. ), possibly owing to the C343S substitution at a residue bordering the pore. On the basis of these observations, we conclude that the conformational and gating dynamics of CFTR FRET closely recapitulate those of wild-type CFTR. To examine the relationship between ATP binding and NBD dimerization directly, we carried out smFRET imaging on digitonin-solubilized, C-terminally His-tagged CFTR FRET molecules that were surface-tethered within passivated microfluidic chambers via a streptavidin–biotin–tris-(NTA-Ni2+) bridge (Extended Data Fig. ). Imaging was carried out using a wide-field total internal reflection fluorescence (TIRF) microscope equipped with scientific complementary metal–oxide–semiconductor (sCMOS) detection and stopped-flow capabilities at 10 or 100 ms time resolution. Monomeric CFTR FRET molecules were tethered with high specificity, as demonstrated by near-quantitative release from the surface with imidazole (Extended Data Fig. ). Based on extant structures, fluorophore-labelled CFTR FRET is expected to exhibit low FRET efficiency in NBD-separated conformations and higher FRET efficiency in NBD-dimerized conformations (Fig. ). Indeed, in the absence of ATP and phosphorylation, CFTR exhibited a homogeneous low-FRET-efficiency distribution centred at 0.25 ± 0.01 (mean ± s.d. across six repeats) and exhibited few, if any, FRET fluctuations (Fig. and Extended Data Fig. ).
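For orientation, the relation between FRET efficiency and inter-dye distance can be made explicit with the Förster equation. The sketch below (in R, for consistency with the other examples) uses an illustrative Förster radius of 55 Å for this donor/acceptor pair; that value is an assumption for illustration, not one given in the text.

```r
# Foerster relation: E = 1 / (1 + (r/R0)^6), inverted to estimate distance.
# R0 = 55 Angstrom is an assumed, illustrative Foerster radius.
fret_to_distance <- function(E, R0 = 55) R0 * (1 / E - 1)^(1 / 6)

fret_to_distance(0.25)  # ~66 Angstrom, NBD-separated state
fret_to_distance(0.49)  # ~55 Angstrom, NBD-dimerized state (reported below)
```

Such estimates are only approximate, as they neglect dye linker geometry and orientation effects.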
Consistent with the current increase on phosphorylation and ATP addition (Fig. ), smFRET measurements also showed that adding ATP to phosphorylated CFTR caused a shift to higher FRET efficiency (0.49 ± 0.02), in which only brief excursions to lower-FRET states were evidenced (Fig. and Extended Data Fig. ). Substitution of the catalytic base in the consensus site (E1371Q), which prevents ATP hydrolysis, further stabilized CFTR in higher-FRET-efficiency conformations (Fig. ). On the basis of these observations, we ascribed the ≈0.25 and ≈0.49 FRET states to the NBD-separated and NBD-dimerized CFTR conformations evidenced by cryo-EM, respectively. In contrast to the case for phosphorylated CFTR, addition of ATP to the dephosphorylated channel caused only a small shift in FRET efficiency, from ≈0.25 to 0.28 ± 0.01 (Fig. and Extended Data Fig. ). The FRET distribution of phosphorylated, ATP-free CFTR was also centred at 0.28 ± 0.02 (Fig. and Extended Data Fig. ). To explore the molecular basis of this shift, we determined the cryo-EM structure of dephosphorylated wild-type CFTR in the presence of 3 mM ATP to 4.3 Å resolution (Extended Data Fig. and Extended Data Table ). Consistent with the smFRET data, the overall CFTR architecture was largely indistinguishable from that of the ATP-free CFTR structure. However, at both the NBD1 and NBD2 binding sites, density corresponding to the ATP molecule was clearly evidenced (Extended Data Fig. ). These data indicate that ATP binding to dephosphorylated CFTR does not induce any global conformational change. The small shift in FRET efficiency is probably due to local changes that affect the position and/or dynamics of the sites of labelling. Consistent with the gradual increase in open probability observed for single channels, pre-steady-state measurements of PKA-mediated CFTR phosphorylation in the presence of saturating ATP (3 mM) revealed that individual CFTR FRET molecules did not always instantaneously transition to a stably NBD-dimerized state (Fig. and Extended Data Fig. ). Instead, stable NBD dimerization was achieved through processes that involved rapid sampling of NBD-separated and NBD-dimerized states. Parallel electrophysiological recordings revealed a matching progression of current activation (Fig. ). NBD dimerization was fully reversible by phosphatase treatment (Extended Data Fig. ). By contrast, the E1371Q substitution slowed NBD separation (Extended Data Fig. ), indicating that ATP turnover facilitates NBD separation. These observations suggest that the gradual transition to steady-state channel activation probably reflects stochastic ATP binding to the individual NBDs and/or transient reinsertion of the partially phosphorylated R domain, which resolve to stable NBD dimerization only when the R domain becomes fully phosphorylated and both NBDs are simultaneously ATP bound. The ATP dose responses for NBD dimerization and current activation for fully phosphorylated CFTR were strongly correlated, both yielding half-maximum effective concentration (EC50) values of approximately 50 µM (Fig. and Extended Data Fig. ). This finding is indicative of both processes being limited by the same underlying molecular event. NBD-dimerization and channel-open probabilities nonetheless differed substantially: at saturating ATP concentration, approximately 85% of CFTR FRET molecules were in the NBD-dimerized conformation, but the channel-open probability was only 22% (Fig. ). We thus conclude that both conductive and non-conductive NBD-dimerized states must exist.
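EC50 values such as the ~50 µM figure quoted above come from dose-response fits. A minimal sketch of such a fit in R is shown below; the data points are invented solely to illustrate the procedure, and a Hill coefficient h is included as a free parameter.

```r
# Hypothetical normalized dose-response data; [ATP] in micromolar.
atp  <- c(5, 15, 50, 150, 500, 3000)
resp <- c(0.08, 0.22, 0.49, 0.73, 0.90, 0.97)

# Hill equation: response = [ATP]^h / (EC50^h + [ATP]^h)
fit <- nls(resp ~ atp^h / (EC50^h + atp^h), start = list(EC50 = 50, h = 1))
coef(fit)  # should recover an EC50 near 50 uM for these data
```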
Consistent with this notion, the observed FRET dynamics differed from the evidenced gating dynamics (Fig. ). The rate of CFTR pore opening exhibits a saturable dependence on ATP concentration, whereas the channel closing rate remains constant. By contrast, NBD-dimerization and NBD-separation rates both changed monotonically with ATP concentration (Fig. ). At saturating ATP concentration, the dwell time of the NBD-dimerized state was approximately 20 times longer than that of the channel-open state (Fig. ), suggesting that FRET-silent processes occur within the NBD-dimerized conformation that trigger channel opening and closure, and that only subtle rearrangements at the dimer interface are required for nucleotide exchange. This conclusion was supported by analogous imaging studies carried out at both 10 and 100 ms time resolutions (Extended Data Fig. ). We conclude that CFTR remains stably dimerized through multiple gating cycles or that transitions to partially separated NBD states are either FRET silent or occur on timescales beyond the temporal resolution of our measurements (100 s−1). Both models nonetheless specify that NBD dimerization is not strictly coupled to channel opening. At a cellular ATP to ADP ratio (≈10:1), fully phosphorylated CFTR FRET predominantly occupied dimerized conformations (Extended Data Fig. ), in line with CFTR predominantly binding ATP in the physiological setting. High ADP concentrations were, however, able to competitively inhibit both NBD dimerization and channel opening (Extended Data Fig. ). To validate the physiological relevance of these findings, we carried out targeted smFRET imaging studies with phosphorylated CFTR FRET reconstituted into proteoliposomes (Extended Data Fig. ). In the absence of ATP, membrane-embedded CFTR FRET molecules stably occupied the NBD-separated (0.28) FRET state (Extended Data Fig. ). On addition of 3 mM ATP, CFTR FRET molecules transitioned to the NBD-dimerized (0.49) FRET state (Extended Data Fig. ). The fraction of ATP-responsive molecules was reduced, probably owing to degradation of channel activity or mixed orientations in the bilayer. However, the molecules that responded predominantly occupied NBD-dimerized conformations at steady state, with only rare, transient excursions to states with low FRET efficiency (Extended Data Fig. ). Also consistent with expectation, CFTR FRET molecules relaxed to the NBD-separated state on ATP withdrawal (Extended Data Fig. ). These observations demonstrate that the physical properties of digitonin-solubilized CFTR FRET recapitulate those present in the lipid bilayer. To ensure the most robust signals and statistics, we carried out the remainder of our smFRET experiments using digitonin-solubilized CFTR FRET. In CFTR, the consensus ATP-binding site hydrolyses approximately 0.3 to 1 ATP molecules per second, whereas the degenerate site retains ATP for minutes. We reasoned that ATP binding at the degenerate site alone is sufficient for NBD dimerization, whereas ATP binding at the consensus site is required for channel opening. To test this hypothesis, we sought to deconvolute the individual contributions of the two ATP-binding sites by substituting the aromatic ATP-stacking residues W401 and Y1219 with alanine to reduce the affinity for ATP at the degenerate and consensus sites, respectively (Fig. ).
Whereas the degenerate-site variant W401A hydrolysed ATP at a rate comparable to that of wild-type CFTR, the ATPase activity of the consensus-site variant Y1219A only marginally exceeded the background established by analogous measurements of the E1371Q variant (Fig. ). The activity of the double variant (Y1219A/W401A) was indistinguishable from that of the Y1219A variant (Fig. ). These data show that the Y1219A substitution nearly abolished functionally relevant ATP-binding events at the consensus site. The conformational dynamics of the W401A and Y1219A variants were markedly different, both from each other and from those of wild-type CFTR FRET (compare Fig. with Fig. ). Relative to wild-type CFTR, the W401A variant, which is capable of binding and hydrolysing ATP at the consensus site, underwent comparatively rapid transitions between NBD-separated and NBD-dimerized states that more closely resembled the dynamics of pore opening measured in electrophysiological recordings (Fig. ). This was predominantly attributable to a specific reduction in the dwell time of the NBD-dimerized state (compare Fig. with Fig. ). By contrast, the Y1219A variant, which binds ATP principally at the degenerate site, transitioned slowly between NBD-dimerized and NBD-separated states (Fig. ). Whereas NBD-dimerization and channel-open probabilities became more comparable in the W401A variant (Fig. ), single-channel measurements of the Y1219A variant exhibited only sporadic opening events (Fig. ). These findings indicate that NBD dimerization is largely uncoupled from channel gating when ATP binding and hydrolysis are abrogated at the consensus site. At 3 mM ATP, the dimerization probabilities of the W401A and Y1219A variants were comparable, at about 50% of the wild-type level (Fig. ). The channel-open probabilities of the two variants were, however, very different (Fig. ). Whereas the W401A variant functioned like wild-type CFTR in this regard, the open probability of the Y1219A variant was nearly zero. These data indicate that ATP binding at either the degenerate or the consensus site is sufficient for NBD dimerization. They further support the view that transitions to NBD-dimerized states do not necessarily precipitate ATP hydrolysis or channel opening and that channel opening largely depends on ATP binding to the consensus site. These conclusions were further substantiated through assessment of the 'coupling ratio' between open probability and the probability of NBD dimerization (Fig. ), which showed that coupling efficiency was far more sensitive to occupancy of the consensus site by ATP. The coupling ratio of the W401A variant was sixfold greater than that of the Y1219A variant. The extent of coupling between NBD dimerization and channel opening was greatest for the E1371Q variant, which traps the pre-hydrolytic NBD-dimerized state with both sites occupied by ATP (Fig. ). To examine the temporal relationship between ATP-dependent NBD dimerization and channel opening, we carried out parallel experiments in which the pre-steady-state evolution of smFRET and electrophysiological CFTR reaction coordinates was monitored in response to rapid ATP addition (Fig. ). Here we separately tracked the time courses of NBD dimerization and macroscopic current increase on application of saturating ATP (3 mM) to CFTR FRET that had previously been phosphorylated by PKA treatment and then completely depleted of ATP.
In these experiments, individual CFTR FRET molecules transitioned either directly to a stably NBD-dimerized state or through a highly dynamic interval with rapid NBD-isomerization events (Fig. ), resembling the W401A variant at steady state (Fig. ). Given that no change in R domain phosphorylation occurs in this experiment, we conclude that the observed heterogeneity in NBD-dimerization kinetics reflects stochastic and sequential ATP binding to the individual NBDs, which ultimately equilibrate to both NBDs being occupied by ATP. Comparison of the activation time courses of both reaction coordinates further revealed that channel opening on ATP binding was delayed relative to NBD dimerization (Fig. ). The rate of channel opening (τ_opening = 490 ± 40 ms; Fig. and Extended Data Fig. ) was approximately threefold slower than the solvent exchange rate of the perfusion system (τ_exchange ≈ 150 ms; Extended Data Fig. ). By contrast, the fitted rates for NBD dimerization from FRET measurements (τ_dimerization ≈ 100 ms) were on the same scale as the solvent exchange rate in the fluorescence microscope (τ_exchange = 115 ms; Extended Data Fig. ). Thus, the observed delay in current activation could not be ascribed to differences in the mixing rates of the two experimental methods. We therefore conclude that the observed delay reflects conformational changes within the NBD-dimerized state that precede channel opening and that the mean first-passage time of this process is approximately 400–500 ms. To understand the molecular events surrounding pore closure, we monitored conformational changes and macroscopic current decays of fully phosphorylated CFTR on sudden ATP withdrawal (Fig. and Extended Data Fig. ). Consistent with the findings of previous studies, our observations showed that ATP removal leads to rapid current decay that is dependent on ATP hydrolysis (Extended Data Fig. ). Parallel FRET experiments showed that the time course of NBD separation is biphasic, with time constants of 1.6 s and 20 s (Fig. ). These rates correlate with the double-exponential time constants reported for CFTR current decay and ligand exchange. This apparent correlation suggests a common underlying molecular mechanism determining both transitions. Inhibiting ATP hydrolysis with the E1371Q substitution markedly slowed NBD separation, and biochemical approaches that reduce ATP hydrolysis or the conformational events immediately following hydrolysis (including magnesium withdrawal, as well as beryllium fluoride or aluminium fluoride addition) also resulted in much slower NBD separation (Fig. and Extended Data Fig. ). On ATP withdrawal, individual CFTR FRET molecules first exhibited dynamic NBD isomerization, followed by stable NBD separation (Fig. ). This dynamic period resembled the steady-state behaviour of the Y1219A variant (Fig. ). Disruption of ATP binding at the degenerate site by the W401A substitution eliminated this dynamic period, such that transitions occurred directly from the NBD-dimerized state to the NBD-separated state on ATP withdrawal (Fig. and Extended Data Fig. ). These observations suggest that the dynamic period represents a post-hydrolytic state in which the consensus site is vacated while the degenerate site retains ATP. As ATP rebinding is not possible in this experiment, subsequent ATP dissociation from the degenerate site then precipitates stable NBD separation (Extended Data Fig. ).
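A biphasic relaxation such as the one described above is typically quantified by fitting a double exponential. The following R sketch simulates such a decay, using the reported ~1.6 s and ~20 s time constants with invented amplitudes and noise, and recovers the parameters with nls:

```r
set.seed(1)
t <- seq(0, 100, by = 0.5)                       # time in seconds
y <- 0.6 * exp(-t / 1.6) + 0.4 * exp(-t / 20) +  # biphasic decay
     rnorm(length(t), sd = 0.01)                 # measurement noise

fit <- nls(y ~ A1 * exp(-t / tau1) + A2 * exp(-t / tau2),
           start = list(A1 = 0.5, tau1 = 1, A2 = 0.5, tau2 = 15))
coef(fit)  # tau1 and tau2 should come back near 1.6 s and 20 s
```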
Dissociation of ATP from both sites probably leads to the reversible rundown of CFTR currents that occurs after prolonged exposure to nucleotide-free solutions. At physiological ATP concentrations (approximately 1–10 mM), ATP rebinding to the post-hydrolytic CFTR molecule is expected to occur rapidly, thereby initiating new catalytic cycles before NBD separation. Consistent with this notion, the transition frequency between the low- and high-FRET states exhibited a bell-shaped dependence on ATP concentration (Fig. and Extended Data Fig. ). These findings suggest that the probability of ATP rebinding exceeds that of complete NBD separation when the ATP concentration is greater than 100 µM. This concept, consistent with ligand exchange experiments, suggests that repetitive cycles of ATP turnover can occur in an ostensibly NBD-dimerized conformation, with only subtle changes at the consensus site required for nucleotide exchange. In cellular settings, repetitive gating cycles are therefore expected to persist until the finite rate of NBD separation at cellular ATP concentrations allows the dephosphorylated R domain to reinsert, terminating CFTR gating. A wide range of alterations in CFTR are directly linked to cystic fibrosis (Extended Data Fig. ). The mechanisms by which such alterations affect an individual's health have been broadly categorized into those that interfere with CFTR expression, folding or localization, and those that impair the function of channels on the cell surface ( https://www.cftr2.org/mutations_history ). Here we examine two clinically evidenced variants that affect channel gating at the cell surface, G551D and L927P, to understand the molecular basis of their defects. The G551 residue forms part of the consensus ATP-binding site that coordinates the phosphate moieties for hydrolysis (Figs. and ). Substituting G551 with an aspartate nearly abolished channel opening and ATP hydrolysis (Fig. ). On addition of saturating ATP (3 mM) to the phosphorylated CFTR FRET (G551D) variant, we observed an upward shift in FRET efficiency from the NBD-separated state (≈0.25 FRET efficiency) to 0.37 ± 0.01. This intermediate FRET efficiency was clearly distinct from that of the NBD-dimerized conformation (≈0.49 FRET) observed for wild-type CFTR FRET, indicative of a conformation involving an intermediate approach of the NBDs (Fig. and Extended Data Fig. ). From this intermediate conformation, excursions to the 0.49 FRET efficiency states were evidenced, albeit rarely (Fig. ). The high-FRET, NBD-dimerized CFTR(G551D) conformation is likely to be different from that of CFTR(E1371Q) previously observed by cryo-EM, as evidenced by a lower coupling ratio (Fig. ) and a shorter lifetime (Extended Data Fig. ). In agreement with these data, a recent cryo-EM study showed that the G551D variant adopts conformations intermediate between the fully NBD-separated and NBD-dimerized conformations. The L927 residue in CFTR resides within a transmembrane hinge region that mediates local conformational changes during gating (Fig. ). It is therefore reasonable to propose that L927P causes cystic fibrosis by altering the flexibility of this transmembrane hinge in NBD-dimerized CFTR conformations. Compared with wild-type CFTR, the L927P substitution resulted in a 65% reduction in ATPase activity and a >99% reduction in channel-open probability (Fig. ). Its open dwell time was reduced approximately 15-fold (Extended Data Fig. ).
Notably, the L927P substitution, although 50 Å away from either ATP-binding site, was also detrimental to the NBD-dimerization process. In the absence of ATP, the fully phosphorylated L927P variant behaved similarly to wild-type CFTR FRET, with the NBDs constitutively separated (Extended Data Fig. ). However, on introduction of ATP (3 mM), the L927P variant adopted a conformation exhibiting an intermediate extent of NBD closure (FRET efficiency = 0.31 ± 0.01), from which relatively frequent, although transient, NBD-dimerization events occurred (Fig. and Extended Data Fig. ). For both the G551D and L927P variants, FRET transitions exhibited an ATP dependence indicative of wild-type ATP-binding affinities (Extended Data Fig. ). Their functional defects are instead caused by deficits in the ability of ATP binding to effect formation of a tight NBD dimer and in the coupling of the allosteric processes within NBD-dimerized CFTR that give rise to channel opening (Fig. ). On the basis of the positions of the G551D and L927P substitutions, we posit that an allosteric link transmits conformational information from the consensus ATP-binding site at the NBD dimer interface through transmembrane helix 8 (TM8) to the gating region on the opposing side of the membrane (Fig. ). The distant G551D and L927P substitutions both impact this pathway of allosteric communication, indicating that local disruptions at either end of this link can affect both NBD dimerization and channel opening. Ivacaftor, a drug approved by the US Food and Drug Administration, and the investigational compound GLPG1837 both bind to CFTR within the TM8 hinge region to promote channel opening (Fig. ). Although the effects of these compounds on gating kinetics have been extensively characterized, how they alter the conformational landscape of CFTR remains elusive. Consistent with the findings of previous reports, our observations showed that both potentiators induced marked increases in channel-open probabilities (Fig. and Extended Data Fig. ). By comparison, their effects on NBD dimerization were much smaller for all CFTR variants tested (Fig. and Extended Data Fig. ). For example, GLPG1837 increased the open probability of the G551D variant by more than 30-fold, whereas the change in NBD dimerization was marginal (Fig. ). This observation, together with a recent cryo-EM study of the CFTR(G551D) variant in the presence of ivacaftor, demonstrates that neither ivacaftor nor GLPG1837 promotes NBD dimerization. Similarly, for the L927P variant, the relative stimulation of open probability greatly exceeded the relative stimulation of dimerization probability (Fig. ). The potency with which GLPG1837 promoted NBD dimerization, measured at the EC50 for ATP, was approximately 60 nM (Extended Data Fig. ), similar to the affinity estimated from electrophysiology. The apparent potency of ATP in mediating dimerization also increased in a dose-dependent manner with GLPG1837 (Extended Data Fig. ). Furthermore, the rate of NBD separation on ATP withdrawal was slowed by GLPG1837 or ivacaftor (Extended Data Fig. ), analogous to their impact on the rate of current relaxation after ATP withdrawal. Potentiators both shorten the closed dwell time and extend the open dwell time of the pore. Here we show that the steady-state ATP hydrolysis rates of wild-type CFTR and CFTR(L927P), for which hydrolysis rates were measurable, were increased by 60–100% by ivacaftor or GLPG1837 (Fig. ).
Hence, both potentiators exert, by opposing effects on the open and closed dwell times, a net increased flux through the gating cycle by targeting the allosteric pathway linking the NBDs to the channel gate. These data lead to the conclusion that the main effect of ivacaftor or GLPG1837 is not to support transition from NBD-separated to NBD-dimerized conformations. Rather, these potentiators principally operate by promoting pore opening when NBDs are already dimerized. In other words, potentiators affect the coupling efficiency between NBD dimerization and channel opening, possibly by stabilizing the transmembrane domains in the pore-open configuration . This effect also manifests in variants unable to form a canonical NBD dimer, such as G551D and a variant devoid of the entire NBD2 (ref. ). It has long been debated whether NBD dimerization in CFTR is strictly coupled to pore opening , . By directly comparing the kinetics of NBD isomerization and channel gating, we show that NBD dimerization and ion permeation are not strictly coupled but instead probabilistically linked through allosteric control mechanisms. At physiological ATP concentrations, fully phosphorylated CFTR remains NBD-dimerized for many cycles of ATP turnover and pore opening. The structure of the NBD-dimerized CFTR suggests that only small changes at the consensus site, such as disrupting the hydrogen bond between R555 and T1246 (ref. ), would be sufficient for nucleotide exchange. Notably, the allosteric relationship evidenced between NBD dimerization and pore opening held true across diverse conditions and CFTR variants and was sensitive to both nucleotide state in the consensus site and potentiator binding within the membrane more than 50 Å away. These findings reveal an allosteric pathway linking the consensus ATPase site, through TM8 and the potentiator-binding site, to the gate of the pore on the opposing membrane surface. Structurally, we speculate that this pathway of long-distance information transfer minimally consists of TM8 and TM9 and the transverse alpha helix between them (Fig. ). The transmission of structural information along this allosteric pathway physically linking NBD dimerization to pore opening is rate-limiting to CFTR function. Substitutions causing cystic fibrosis (for example, G551D and L927P) attenuate the strength of this allosteric pathway whereas the potentiators ivacaftor and GLPG1837 enhance it. The observation that both G551D and L927P variants are also defective in NBD dimerization suggests that modulators that quantitatively rescue this defect should work additively with ivacaftor and GLPG1837. The investigational compound 5-nitro-2-(3-phenylpropylamino) benzoate was proposed to stimulate pore opening by such a mechanism . The data presented herein, in conjunction with the vast body of literature in the field, permit us to propose a model that describes the main events accompanying the wild-type CFTR gating cycle at physiological ATP concentrations (Fig. ). Dephosphorylated CFTR adopts an NBD-separated, auto-inhibited conformation as observed by cryo-EM . Following phosphorylation of the R domain, the NBDs can dimerize rapidly with ATP bound at both sites (step 1 in Fig. ). Rate-limiting conformational changes within CFTR that allosterically communicate information from the consensus ATP-binding site across the lipid bilayer can subsequently open the pore (step 2 in Fig. ) and enable ATP hydrolysis (step 3 in Fig. ). 
Before ATP hydrolysis, pore opening is transient, and flicker-closed states are rapidly sampled . The post-hydrolytic channel, with ADP and inorganic phosphate bound at the consensus site, remains open but eventually relaxes to a non-conductive dimerized state (step 4 in Fig. ). Dissociation of ADP (step 5 in Fig. ) results in a dynamic intermediate to which ATP can rebind (steps 6–8 in Fig. ) thereby initiating another gating cycle. Rare events are not depicted in this model as their fractional contributions are expected to be low at physiological ATP concentration. These events include release of ATP from the degenerate site , , and channel opening with ATP at only one site , , or in the complete absence of nucleotide , , , . The topology of this scheme was validated through steady-state kinetic simulations of NBD dynamics and ion conduction for fully phosphorylated CFTR at saturating ATP concentrations (Extended Data Fig. and ). Kinetic constants within this topology were estimated on the basis of the model’s capacity to recapitulate experimental observables including pre-steady-state rates of NBD dimerization, channel current and conformational relaxation as well as ATP hydrolysis rates from ensemble measurements (Extended Data Fig. ). Stochastic simulations for wild-type CFTR and CFTR(E1371Q) gating carried out with this topology and rate information (Extended Data Fig. ) closely recapitulated the key dynamical features of both wild-type and E1371Q variant gating (Extended Data Fig. and Supplementary Video ). However, establishment of this model revealed notable quantitative discrepancies that are worthy of consideration. First, our simulation predicts a modestly greater steady-state ATP hydrolysis rate than is experimentally estimated. We speculate that this may reflect either inadequacies of the simplified model or the presence of an inactive fraction in the bulk measurement that lowers the apparent turnover rate. Second, the simulated model does not recapitulate the multimodality of NBD-dimerized and NBD-separated dwell-time distributions. Such findings, viewed in light of our analyses of altered degenerate and consensus binding sites, suggest that these distinct modes probably reflect periods of NBD dynamics and gating in CFTR in which only one of the two ATP-binding sites is occupied. Such considerations imply requirements for additional complexities to the presented model topology that will need to be explored by developing techniques that simultaneously detect conformational state and functional output at single-molecule resolution. The present investigations nonetheless reveal a physical framework for understanding CFTR function, pathology and pharmacology and exploring the possibility that more potent activators of the rate-limiting allosteric events regulating CFTR gating mechanism, potentially specific to an individual’s allelic variation, can be identified and leveraged for therapeutic purposes. Protein expression, purification and labelling CFTR was expressed as previously described . Human CFTR with a C-terminal PreScission Protease-cleavable GFP tag was cloned into the BacMam vector. For single-molecule FRET, the following substitutions were introduced: C76L, C128S, C225S, C276S, C343S, T388C, C491S, C592M, C647S, C832S, C866S, C1344S, C1355S, C1395S, C1400S, C1410S, S1435C and C1458S. A deca-His tag was inserted C-terminally to CFTR and before the PreScission Protease cleavage site to allow for surface immobilization. 
Recombinant baculovirus was generated using Sf9 cells (Gibco, catalogue number 11496015, lot number 1670337) cultured in sf-900 SFM medium (Gibco), supplemented with 5% (v/v) heat-inactivated fetal bovine serum and 1% (v/v) antibiotic–antimycotic (Gibco) as described previously . HEK293S GnTI − (ATCC CRL-3022, lot number 62430067) suspension cells were cultured in FreeStyle 293 medium (Gibco) supplemented with 2% (v/v) heat-inactivated fetal bovine serum and 1% (v/v) antibiotic–antimycotic (Gibco), shaking at 37 °C with 8% CO 2 and 80% humidity. Sf9 and HEK293S GnTI − cells were authenticated by Gibco and ATCC, respectively and confirmed negative for mycoplasma contamination. At a density of 2.5 × 10 6 cells ml −1 , cells were infected with 10% (v/v) P3 baculovirus. After 12 h, the culture was supplemented with 10 mM sodium butyrate, and the temperature was reduced to 30 °C. After a further 48 h, the cells were collected and flash-frozen in liquid nitrogen. For protein purification, cells were solubilized for 75 min at 4 °C in extraction buffer containing 1.25% (w/v) lauryl maltose neopentyl glycol (LMNG), 0.25% (w/v) cholesteryl hemisuccinate (CHS), 200 mM NaCl, 20 mM HEPES (pH 7.2 with NaOH), 2 mM MgCl 2 , 10 μM dithiothreitol (DTT), 20% (v/v) glycerol, 1 mM ATP, 1 μg ml −1 pepstatin A, 1 μg ml −1 leupeptin, 1 μg ml −1 aprotinin, 100 μg ml −1 soy trypsin inhibitor, 1 mM benzamidine, 1 mM phenylmethylsulfonyl fluoride (PMSF) and 3 µg ml −1 DNase I. Lysate was clarified by centrifugation at 75,000 g for 2 × 20 min at 4 °C, and mixed with NHS-activated Sepharose 4 Fast Flow resin (GE Healthcare) conjugated with GFP nanobody, which had been pre-equilibrated in 20 column volumes of extraction buffer. After 1 h, the resin was packed into a chromatography column, washed with 20 column volumes of wash buffer containing 0.06% (w/v) digitonin, 200 mM NaCl, 20 mM HEPES (pH 6.8 with NaOH), 1 mM ATP and 2 mM MgCl 2 , and then incubated for 2 h at 4 °C with 0.35 mg ml −1 PreScission Protease to cleave off the GFP tag. The eluate was collected by dripping through Glutathione Sepharose 4B resin (Cytiva) to remove PreScission Protease, and CFTR was concentrated to 2 μM. To label with fluorophores, CFTR was mixed with 9.5 μM maleimide-conjugated LD555 and 10.5 μM maleimide-conjugated LD655 (Lumidyne Technologies) for 10 min at 4 °C. Subsequent steps were carried out protected from light. The labelling reaction was quenched by addition of 2 mM DTT, and the labelled product was purified by gel filtration chromatography at 4 °C using a Superose 6 10/300 GL column (GE Healthcare), equilibrated with 0.06% (w/v) digitonin, 200 mM NaCl, 20 mM HEPES (pH 7.2 with NaOH), 1 mM ATP and 2 mM MgCl 2 . Peak fractions were concentrated to 2 μM and mixed with 5 μM biotin–tris-NTA-Ni 2+ for 30 min at 4 °C. The CFTR–Ni-NTA complex was purified by another round of gel filtration, concentrated to 2 μM, aliquoted, snap-frozen in liquid nitrogen, and stored at –80 °C. For ATP hydrolysis measurements, the purification protocol was adjusted: extraction buffer contained 1.25% (w/v) LMNG, 0.25% (w/v) CHS, 200 mM KCl, 20 mM HEPES (pH 8.0 with KOH), 2 mM MgCl 2 , 2 mM DTT, 20% (v/v) glycerol, 1 μg ml −1 pepstatin A, 1 μg ml −1 leupeptin, 1 μg ml −1 aprotinin, 100 μg ml −1 soy trypsin inhibitor, 1 mM benzamidine, 1 mM PMSF and 3 µg ml −1 DNase I. Wash and gel filtration buffers contained 0.06% (w/v) digitonin, 20 mM HEPES (pH 8.0 with KOH), 200 mM KCl, 2 mM MgCl 2 and 2 mM DTT. 
The eluate from the GFP nanobody resin was concentrated, phosphorylated with PKA (NEB) for 1 h at 25 °C, purified by gel filtration chromatography, and immediately used for hydrolysis measurements. For proteoliposome reconstitution, the purification was also adjusted: extraction buffer contained 1.25% (w/v) LMNG, 0.25% (w/v) CHS, 200 mM NaCl, 20 mM HEPES (pH 7.2 with NaOH), 2 mM MgCl 2 , 2 mM DTT, 20% (v/v) glycerol, 1 μg ml −1 pepstatin A, 1 μg ml −1 leupeptin, 1 μg ml −1 aprotinin, 100 μg ml −1 soy trypsin inhibitor, 1 mM benzamidine, 1 mM PMSF and 3 µg ml −1 DNase I. Wash and gel filtration buffers contained 0.006% (w/v) glyco-diosgenin (GDN), 200 mM NaCl, 20 mM HEPES (pH 7.2 with NaOH) and 2 mM MgCl 2 . The eluate from the GFP nanobody resin was concentrated, phosphorylated with PKA (NEB) for 1 h at 25 °C, purified by gel filtration chromatography, and immediately reconstituted. ATP hydrolysis measurements Steady-state ATP hydrolysis activity was measured using an NADH-coupled assay . Reaction buffer contained 50 mM HEPES (pH 8.0 with KOH), 150 mM KCl, 2 mM MgCl 2 , 2 mM DTT, 0.06% (w/v) digitonin, 60 µg ml −1 pyruvate kinase (Roche), 32 µg ml −1 lactate dehydrogenase (Roche), 9 mM phosphoenolpyruvate and 150 µM NADH, and was prepared immediately before starting the assay. A 200 nM concentration of phosphorylated CFTR was diluted into reaction buffer. Aliquots of 30 µl in volume were distributed into a Corning 384-well Black/Clear Flat Bottom Polystyrene NBS Microplate. Samples were kept at 4 °C and light-protected until the reactions were initiated by addition of 3 mM ATP. The rate of fluorescence depletion was monitored at λ ex = 340 nm and λ em = 445 nm at 28 °C with an Infinite M1000 microplate reader (Tecan), and converted to ATP turnover with an NADH standard curve. Patch-clamp recording Chinese hamster ovary cells (ATCC CCL-61, lot number 70014310) were maintained in DMEM-F12 (ATCC) supplemented with 10% (v/v) heat-inactivated fetal bovine serum and 1% (v/v) GlutaMAX (Gibco) at 37 °C. Chinese hamster ovary cells were authenticated by ATCC. The cells were plated in 35-mm cell culture dishes (Falcon) 24 h before transfection. Cells were transfected with C-terminally GFP-fused CFTR cloned into the BacMam expression vector, using Lipofectamine 3000 according to the manufacturer’s protocol (Invitrogen). At 12 h following transfection, medium was replaced with DMEM-F12 supplemented with 2% (v/v) heat-inactivated fetal bovine serum and 1% (v/v) GlutaMAX, and the cells were then incubated for 24 h at 30 °C before recording. Bath solution contained 145 mM NaCl, 2 mM MgCl 2 , 5 mM KCl, 1 mM CaCl 2 , 5 mM glucose, 5 mM HEPES and 20 mM sucrose (pH 7.4 with NaOH). Pipette solution contained 140 mM NMDG, 5 mM CaCl 2 , 2 mM MgCl 2 and 10 mM HEPES (pH 7.4 with HCl). Perfusion solution contained 150 mM NMDG, 2 mM MgCl 2 , 1 mM CaCl 2 , 10 mM EGTA and 8 mM Tris (pH 7.4 with HCl). Magnesium was omitted where indicated. CFTR was activated by exposure to PKA (Sigma-Aldrich) and 3 mM ATP. The rate of buffer exchange by the perfusion system was estimated by exchanging perfusion solution with 150 mM NMDG, 2 mM MgSO 4 , 1 mM calcium gluconate, 10 mM EGTA and 8 mM Tris (pH 7.4 with H 2 SO 4 ). Pipettes were pulled from borosilicate glass (outer diameter 1.5 mm, inner diameter 0.86 mm, Sutter) to 1.5–2.5 MΩ resistance and fire polished. Recordings were carried out using the inside-out patch configuration with local perfusion at the patch. Membrane potential was clamped at –30 mV. 
Currents were recorded at 25 °C using an Axopatch 200B amplifier, a Digidata 1550 digitizer and the pClamp software suite (Molecular Devices). Recordings were low-pass-filtered at 1 kHz and digitized at 20 kHz. All displayed recordings were further low-pass filtered at 100 Hz. Data were analysed with Clampfit, GraphPad Prism and OriginPro. Proteoliposome reconstitution A lipid mixture containing 1,2-dioleoyl- sn -glycero-3-phosphoetanolamine, 1-palmitoyl-2-oleyl- sn -glycero-3-phosphocholine and 1-palmitoyl-2-oleoyl- sn -glycero-3-phospho- l -serine at a 2:1:1 (w/w/w) ratio was resuspended by sonication in buffer containing 200 mM NaCl, 20 mM HEPES (pH 7.2 with NaOH) and 2 mM MgCl 2 . Lipids were mixed with GDN to a final detergent concentration of 2% (w/v), and lipid concentration of 20 mg ml −1 for 1 h at 25 °C covered by argon gas. Purified CFTR was mixed with the lipid mixture at a protein-to-lipid ratio of 1:100 or 1:250 (w/w) and incubated at 4 °C for 2 h covered by argon gas. Methylated beta-cyclodextrin was added to the reaction at a 1.2× molar ratio to GDN. After an additional 4 h, an equivalent amount of methylated beta-cyclodextrin was added. This procedure was repeated for a total of four additions. Proteoliposomes were collected by centrifugation at 150,000 g for 45 min at 4 °C, resuspended in buffer containing 200 mM NaCl, 20 mM HEPES (pH 7.2 with NaOH) and 2 mM MgCl 2 , aliquoted, snap-frozen in liquid nitrogen and stored at −80 °C. Planar lipid bilayer recording Synthetic planar lipid bilayers were made by painting a 1,2-dioleoyl- sn -glycero-3-phosphoetanolamine, 1-palmitoyl-2-oleyl- sn -glycero-3-phosphocholine and 1-palmitoyl-2-oleoyl- sn -glycero-3-phospho- l -serine 2:1:1 (w/w/w) lipid mixture solubilized in decane across an approximately 100-µm-diameter hole on a plastic transparency. CFTR-containing proteoliposomes were phosphorylated with PKA (NEB) for 1 h at 25 °C, and then fused with the synthetic bilayers. Currents were recorded at 25 °C in symmetric buffer containing 150 mM NaCl, 2 mM MgCl 2 and 20 mM HEPES (pH 7.2 with NaOH), supplemented with ATP as indicated. Unless otherwise indicated voltage was clamped at 150 mV with an Axopatch 200B amplifier (Molecular Devices). Currents were low-pass filtered at 1 kHz, digitized at 20 kHz with a Digidata 1440A digitizer and recorded using the pCLAMP software suite (Molecular Devices). All displayed recordings were further low-pass filtered at 100 Hz. Data were analysed with Clampfit, GraphPad Prism and OriginPro. Single-molecule fluorescence imaging Imaging was carried out as outlined in ref. . PEG- and biotin–PEG-passivated microfluidic chambers were incubated for 5 min with 0.8 µM streptavidin (Invitrogen) in buffer containing 0.06% (w/v) digitonin, 150 mM NaCl, 2 mM MgCl 2 and 20 mM HEPES (pH 7.2 with NaOH). CFTR was either dephosphorylated by Lambda protein phosphatase (λ, NEB) or phosphorylated by PKA (Sigma-Aldrich) before immobilization. Fluorophore-conjugated and biotin–tris-NTA-Ni 2+ -bound CFTR at 200 pM concentration was immobilized within the microfluidic chambers for 1 min, and unbound CFTR was cleared from the channel by washing with buffer. Imaging was carried out in deoxygenated imaging buffer containing 0.06% (w/v) digitonin, 150 mM NaCl, 2 mM MgCl 2 , 20 mM HEPES (pH 7.2 with NaOH), 2 mM protocatechuic acid and 50 nM protocatechuate-3,4-dioxygenase to minimize photobleaching . MgCl 2 was omitted where indicated. 
Microfluidic chambers were reused several times in the same day by dissociating the immobilized protein with 300 mM imidazole. Experiments were carried out at 25 °C. For imaging of proteoliposome-reconstituted CFTR, vesicles containing fluorophore-labelled CFTR were extruded through 400-nm and then 100-nm polycarbonate filters (Whatman). The vesicles were then incubated with 1 µM biotin–tris-NTA-Ni 2+ . Excess biotin–tris-NTA-Ni 2+ was removed by pelleting the vesicles by ultracentrifugation at 150,000 g for 45 min, removing the supernatant and resuspending in buffer containing 150 mM NaCl, 2 mM MgCl 2 and 20 mM HEPES (pH 7.2 with NaOH). The procedure was repeated twice. Vesicles were immobilized within the microfluidic chambers for 5 min, and unbound vesicles were cleared from the channel by washing with buffer. Imaging was carried out in deoxygenated imaging buffer containing 150 mM NaCl, 2 mM MgCl 2 , 20 mM HEPES (pH 7.2 with NaOH), 2 mM protocatechuic acid and 50 nM protocatechuate-3,4-dioxygenase. Single-molecule imaging was carried out using a custom-built wide-field, prism-based total internal reflection fluorescence microscope. LD555 fluorophores were excited with an evanescent wave generated using a 532-nm laser (Opus, Laser Quantum). Emitted fluorescence from LD555 and LD655 was collected with a 1.27 NA 60× water-immersion objective (Nikon), spectrally separated using a T635lpxr dichroic (Chroma), and imaged onto two Fusion sCMOS cameras (Hamamatsu) with integration periods of 10 or 100 ms. Single-molecule FRET data analysis Single-molecule fluorescence data were analysed using SPARTAN analysis software in MATLAB . FRET trajectories were calculated from the emitted donor and acceptor fluorescence intensities ( I D and I A , respectively) as E FRET = I A /( I A + I D ). FRET trajectories were selected for further analysis on the basis of the following criteria: single-step donor photobleaching; a signal-to-noise ratio >8; fewer than 4 donor-blinking events; and FRET efficiency above baseline for at least 50 frames. Further, single-molecule traces exhibiting FRET values above 0.8 were excluded from analysis. This subpopulation was insensitive to phosphorylation and nucleotide, and probably reflected denatured molecules. For kinetic analysis, traces were also manually curated to remove obvious photophysical artefacts. FRET trajectories were idealized using the segmental k- means algorithm with a model containing two non-zero-FRET states with FRET values of 0.25 ± 0.1 and 0.48 ± 0.1. Data were further analysed with GraphPad Prism and OriginPro. Electron microscopy data acquisition and processing Dephosphorylated wild-type CFTR directly from gel filtration was concentrated to 5.5 mg ml −1 . Concentrations of 3 mM ATP and 3 mM fluorinated Fos-choline-8 were added to the sample immediately before application onto Quantifoil R1.2/1.3 400 mesh Au grids and then vitrification using a Vitrobot Mark IV (FEI). Cryo-EM images were collected with a 300-keV Titan Krios transmission electron microscope equipped with a Gatan K2 Summit detector using SerialEM . A total of 3,501 micrographs were collected in superresolution mode with a nominal defocus range of 0.8–2.5 µm. Micrographs had a physical pixel size of 1.03 Å (0.515 Å superresolution pixel size). Micrographs were recorded with 10-s exposure (0.2 s per frame) with a dose rate of 8 electrons per pixel per second. Image stacks were gain-normalized, binned by 2, and corrected for beam-induced specimen motion with MotionCor2 (ref. ). 
Contrast transfer function estimation was carried out using GCTF. Images with estimated resolutions below 4.5 Å were removed. Particles were initially picked with the Laplacian-of-Gaussian implementation in RELION. Selected two-dimensional classes from this particle set were then used for template-based particle picking. The 710,322 picked particles were cleaned by several rounds of two- and three-dimensional classification. A total of 157,629 particles were included in the final refined map. Reporting summary Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article.
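The stochastic gating simulations referenced in the main text (Extended Data Fig. 10 and Supplementary Video 1) are specified in the Supplementary Methods. Purely as an illustrative sketch of the general approach, using a made-up four-state cyclic scheme with arbitrary rate constants rather than the fitted topology of this study, a minimal Gillespie-type simulation can be written as follows.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 4-state cyclic gating scheme (all rates hypothetical, s^-1):
# 0 NBD-separated -> 1 NBD-dimerized/closed -> 2 dimerized/open
# -> 3 post-hydrolytic -> 0; plus an unproductive 1 -> 0 back-step.
rates = {(0, 1): 5.0, (1, 2): 2.0, (1, 0): 0.3, (2, 3): 1.0, (3, 0): 0.5}

def gillespie(t_end):
    """Exact stochastic simulation of the scheme above; returns the
    piecewise-constant state path as (time, state) tuples."""
    t, state, path = 0.0, 0, [(0.0, 0)]
    while t < t_end:
        moves = [(s2, k) for (s1, s2), k in rates.items() if s1 == state]
        ktot = sum(k for _, k in moves)
        t += rng.exponential(1.0 / ktot)        # exponential waiting time
        u, acc = rng.uniform(0.0, ktot), 0.0    # choose the next state
        for s2, k in moves:
            acc += k
            if u < acc:
                state = s2
                break
        path.append((t, state))
    return path

path = gillespie(100.0)
open_time = sum(t2 - t1 for (t1, s), (t2, _) in zip(path, path[1:]) if s == 2)
print("simulated open probability ~ %.2f" % (open_time / path[-1][0]))
```

Because every transition is sampled explicitly, such simulations yield synthetic smFRET and current traces whose dwell-time distributions can be compared directly with experiment.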
Any methods, additional references, Nature Portfolio reporting summaries, source data, extended data, supplementary information, acknowledgements, peer review information; details of author contributions and competing interests; and statements of data and code availability are available at 10.1038/s41586-023-05854-7. Supplementary Information This file contains Supplementary Methods, Tables 1 and 2, Fig. 1 and References. Supplementary Video 1 Stochastic simulation of CFTR gating: a stochastic simulation of a PKA-phosphorylated wild-type CFTR molecule at saturating ATP concentration. The topology and rates outlined in Extended Data Fig. 10a were used. Pore and NBD dynamics are indicated with simulated smFRET and single-channel electrophysiology traces. The red dots indicate ATP hydrolysis events. Step 2, rate-limiting for pore opening, is coloured blue.
Contrasting geochemical and fungal controls on decomposition of lignin and soil carbon at continental scale
Lignin is one of the most abundant biopolymers in the terrestrial biosphere and protects other components of plant tissue from microbial attack. Traditionally, it was assumed that lignin limits litter decomposition , and contributes substantially to soil organic carbon (SOC) , . More recently, lignin’s importance in controlling litter and SOC decomposition has become controversial. Lignin might decompose fastest during early stages of litter decomposition , and lignin-derived C could be less persistent in soil than other C components . The contradictory views related to lignin decomposition and its contributions to SOC might be related to biogeochemical differences among ecosystems. The persistence of lignin relative to SOC and other litter components may vary systematically with climatic, geochemical, and microbial characteristics across diverse soils , but the controls on lignin, litter, and SOC decomposition have rarely been investigated together or across a wide range of climatic, geochemical or microbial variation. Climate can effectively predict litter decomposition at site to continental scales , but climate may affect decomposition of different C forms in different ways. For example, although high temperature and precipitation generally increase litter decomposition , they can also increase mineral weathering and C stabilization with reactive metals , , which may specifically bind lignin-derived C – . In addition to climate, the ratio of lignin to nitrogen (N) is another conventionally important predictor of litter decomposition, but it may also have different relationships with lignin, litter or SOC decomposition. Greater litter N content may increase lignin and litter decomposition by alleviating microbial N limitation , , whereas increased N availability may also decrease decomposition of lignin or SOC by suppressing the production of oxidative enzymes . Both mechanisms may occur in the same soils depending on the stage of litter decomposition; N may stimulate early stages while inhibiting later stages of litter decay , . However, the overall importance of N relative to other soil characteristics remains poorly understood. Soil geochemical characteristics might also be important predictors of lignin and litter decomposition in ways that differ from bulk SOC. Soil minerals and metals can protect SOC from microbial decomposition through sorption, co-precipitation, and polyvalent cation bridging . In some soils, lignin-derived C may preferentially associate with iron (Fe) and aluminum (Al) relative to bulk litter or bulk SOC , , , and these metals might therefore be more important for limiting the decomposition of lignin vs. other compounds in litter or SOC. Manganese (Mn) might also protect C by physical or chemical mechanisms , yet because some forms of Mn are powerful oxidants, increased Mn availability can also stimulate lignin decomposition . Calcium (Ca) might also have a contrasting relationship between the decomposition of different substrates: Ca promotes physicochemical protection of SOC , yet Ca availability may stimulate lignin-degrading fungi and litter decomposition . However, the consistency of relationships between metals and decomposition rates of lignin, litter, and/or SOC across diverse ecosystems remains unresolved. Besides geochemistry, microbial composition, and abundance may also have different impacts on the decomposition of different C forms. 
Many microbial taxa perform similar functions, and it remains elusive whether microbial composition explains process rates . Yet, due to lignin’s complex biochemical structure, only a small subset of microbial taxa (white-, brown-, and soft-rot fungi, and certain bacteria) has been conclusively demonstrated to cleave the lignin macromolecule at the propyl sidechain, which is likely the rate-limiting step in lignin decomposition – . In contrast, microbial composition may be less important for SOC decomposition because physical restrictions on microbial access to C substrates may predominantly limit SOC mineralization . To test competing viewpoints and potential mechanisms underlying the role of lignin in organic matter decomposition, we measured decomposition of lignin, bulk litter, and SOC via a uniform and quantitative isotopic method from mineral soil samples collected across broad biophysical gradients. We used 20 sites from the US National Ecological Observatory Network (NEON) that span diverse ecosystems and climatic zones (tundra to tropics). These particular samples were not necessarily representative of North American soils as a whole, but were instead selected to span a broad range of biogeochemical properties thought to influence C decomposition and accrual; they included 9 of 12 orders in the USDA soil taxonomy (Fig. , Supplementary Fig. and Table ). Previous examinations of lignin decomposition often relied on indirect methods, such as acid-unhydrolyzable residue to approximate lignin content , oxidation of simple substrates as a measure of potential ligninolytic enzymes , or use of lignin monomers rather than polymers in incubation experiments . These methods can substantially underestimate or overestimate the lignin content of litter and soil as well as the activities of ligninolytic enzymes , . However, C isotope-labeled, high-molecular-weight synthetic lignin provides a tracer that allows unambiguous quantitative measurement of lignin decomposition . Here, we combine isotope-labeled lignin with natural abundance litter derived from a C 4 grass and add these mixtures to separate soil samples for incubation in the lab, enabling us to quantify lignin, litter, and SOC decomposition over time (Fig. ), and to assess their relationships with climatic, N-related, geochemical and microbial factors across soils. Lignin decomposition is also measured in a separate N addition lab experiment and in field-incubated samples. We hypothesize that (1) lignin decomposition predictably varies with soil geochemical characteristics (reactive minerals and metals) and fungal communities at the continental scale, in addition to climatic and N-related variables, and that (2) the predictors for lignin decomposition are similar with litter decomposition, while these predictors have different relationships with SOC decomposition due to specific interactions of lignin with metals and microbial communities. Our results support these hypotheses that partially reconcile aspects of classic and modern views of decomposition, such that lignin decomposition is a bottleneck for litter decomposition but not for SOC decomposition, thus explaining the variable contributions of lignin to SOC among soils as a function of their biogeochemical characteristics. 
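Quantitatively, the isotopic partitioning sketched in Fig. rests on end-member mixing. The two-pool case below (Python; the δ13C values are hypothetical round numbers) illustrates the principle for separating C4 litter-derived from native SOC-derived CO2; the 13C-enriched lignin in our design adds a third, strongly labeled end-member to the same mass balance.

```python
def source_fractions(d13c_resp, d13c_soil, d13c_litter):
    """Two-pool mixing model: fraction of respired CO2 derived from C4
    litter vs native (C3) SOC, from d13C values in per mil."""
    f_litter = (d13c_resp - d13c_soil) / (d13c_litter - d13c_soil)
    return f_litter, 1.0 - f_litter

# hypothetical end-members: C3 soil ~ -27 per mil, C4 grass litter ~ -12
print(source_fractions(d13c_resp=-21.0, d13c_soil=-27.0, d13c_litter=-12.0))
# -> (0.4, 0.6): 40% of respired C from litter, 60% from SOC
```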
Decomposition rates of lignin, litter, and SOC The temporal dynamics of lab lignin C decomposition were generally more variable than litter C or SOC decomposition, although instantaneous lignin decomposition rate was about 4-fold lower than litter and soil decomposition, on average, when normalized by C mass in these pools (Fig. ). Lab lignin decomposition rate generally decreased over time; however, in some soils, it transiently increased over timescales of months or was still increasing at the end of incubation (after 18 months) (Fig. and Supplementary Fig. ). Litter decomposition rate also increased transiently over time in some sites (Fig. ), and it was significantly related to instantaneous lignin decomposition rate as indicated by Pearson correlation ( r = 0.65; P < 0.01). The temporal pattern of lab lignin C decomposition differed from SOC decomposition, which generally showed a declining trend at the end of the incubation (Fig. ); instantaneous decomposition of lignin C and SOC were not statistically related ( r = −0.04; P > 0.05). At the end of the lab incubation (571 d), cumulative C decomposition relative to initial C was 1.7–31.4% for lignin, 2.0–53.0% for litter, and 6.3–99.0% for SOC (Fig. ). The soils with coolest climate (TOOL, a Gelisol) showed the lowest site-averaged decomposition of lab lignin (3.1%), litter (15.4%) and SOC (13.3%) relative to other soils. The soils with warm and dry climate (ONAQ and SRER, Aridisols) had relatively lower site-averaged lab lignin (4.1%) and litter C (19.6%) decomposition but the highest site-averaged SOC decomposition (49.4%). The cumulative decomposition of lignin and SOC was similar between 0 and 15 cm and 15–30 cm soil samples, while cumulative litter decomposition was significantly lower in the deeper soil (34% for 0–15 cm vs. 23% for 15–30 cm; P < 0.01). We conducted a separate N addition experiment to test effects of N availability on lignin and litter decomposition in the 0–15 cm soils. Lab lignin and litter decomposition rates were slightly increased by N addition in most sites throughout the lab incubation (Supplementary Fig. ), and cumulative decomposition was significantly greater after the 571-d incubation ( P < 0.01 for both; Fig. ). On average, N addition increased cumulative lab lignin decomposition by 1.6% and litter decomposition by 6.2%, and neither lignin nor litter decomposition was significantly depressed by N addition at any site after 18 months (Fig. ). Our field sites had large climate differences whereas the lab samples were incubated at the same temperature and comparable moisture, so we used an additional field lignin decomposition experiment with 0–15 cm soils to test whether similar biogeochemical predictors were important in the field and in the lab. The field experiment was conducted in mesh bags that allowed additional microbes to colonize the soil/litter/lignin mixtures over time. Cumulative field lignin C decomposition after ~1 y showed higher variation among samples and overall higher rates than observed in the lab (Supplementary Fig. ), corresponding with overall higher fungal quantity in the field than in the lab (Supplementary Fig. ). Total field lignin C loss relative to initial lignin C concentrations averaged 9–63% across the 20 sites, while the site-averaged lab lignin C loss was 3–16% (Supplementary Fig. ). There was no significant correlation ( P > 0.05) between field and lab lignin C loss. 
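For reference, the cumulative percentages reported above follow from integrating the instantaneous, source-partitioned CO2 fluxes over the incubation. A minimal sketch (Python; the flux series, initial pool size, and units are hypothetical):

```python
import numpy as np

def cumulative_loss_percent(days, flux, c0):
    """Cumulative C loss as % of the initial pool c0 (ug C per g soil),
    from decomposition fluxes (ug C per g per d), by trapezoidal rule."""
    return 100.0 * np.trapz(flux, days) / c0

days = np.array([0, 30, 90, 180, 365, 571])
lignin_flux = np.array([0.9, 0.7, 1.0, 0.5, 0.3, 0.2])  # hypothetical
print("%.1f%% of initial lignin C" %
      cumulative_loss_percent(days, lignin_flux, c0=2000.0))
```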
Variation in biogeochemical predictors among soils Along with climatic factors (mean annual temperature, MAT, and mean annual precipitation, MAP), we selected 25 biogeochemical predictors and separated them into three categories, including those related to N availability (11), geochemistry (8), and microbes (6) (Supplementary Table and Fig. ). The variation of these biogeochemical predictors was very high among sites and was generally similar between the lab and field incubation datasets (Supplementary Fig. ). For the N predictors, total soil N was 0.1–32 mg g −1 and C/N was 8–58. Both NH 4 + -N (0–104 µg N g −1 ) and NO 3 - -N (0–937 µg N g −1 ) tended to increase with incubation time in the lab (Supplementary Fig. ). For the geochemical predictors, soil pH was 4.0–9.2, and soil particle size (silt+clay, 5–91%) and metals had at least one order of magnitude difference among sites (0–30 mg g −1 Al ox ; 0–20 mg g −1 Fe HCl , 0–78 mg g −1 Fe ox ; 0–54 mg g −1 Fe cd-ox , 0.0–3.3 mg g −1 Mn cd , 0–55 mg g −1 Ca cd ; Supplementary Fig. ). Microbial community variables exhibited as much as four orders of magnitude difference among sites, with fungal quantity of 1.1 × 10 5 –2.6 × 10 9 gene copies g −1 , bacterial quantity of 5.2 × 10 9 –9.8 × 10 11 gene copies g −1 , fungal-to-bacterial ratio of 5.8 × 10 −6 –6.2 × 10 −2 , and a fungal diversity index of −39–73 (i.e., the residual of the Chao1 index for fungal communities, Supplementary Fig. ). Microbial quantity changed with time in the lab-incubated samples: fungal quantity increased after 9 months vs. the initial soil samples, and fungal quantity and bacterial quantity also increased after 14 months vs. 9 months (Supplementary Fig. ). Importance of biogeochemical predictors for C decomposition We used three statistical approaches (linear-mixed models, LMMs, generalized additive mixed models, GAMMs, and random forest models, RFMs) to identify the most important soil geochemical, microbial, N, and climatic predictors of decomposition (Fig. ). The RFM partial dependence plots showed that many relationships between predictors and response variables were approximately linear until predictors increased above the 90th or larger percentiles, where response variables became approximately constant (Supplementary Figs. – ). The GAMMs similarly demonstrated that nearly all predictors had linear relationships with response variables (Supplementary Table ). Most of our optimal statistical models included multiple predictor variables from most categories (N-related, geochemical, and microbial variables; Fig. ). The optimal RFMs included similar predictors as the LMMs, with a few exceptions (see ). Thus, for clarity of explanation we hereafter focused mainly on results from the LMMs. In addition, to test whether biogeochemical predictors changed with time throughout lab incubation, we compared models of cumulative decomposition after 6, 12, and 18 months (Supplementary Figs. and ). Predictors were generally similar over time, and thus we focused our subsequent analysis on the 18-month (571 d) dataset (the other results are presented in the ). The LMM of lab lignin decomposition showed that soil pH and fungal composition were the strongest predictors, and that MAT, fungal quantity, Mn cd , Ca cd , silt+clay, Fe ox , and soil C/N could also improve the final model (Fig. ). Soil pH and Fe ox had negative relationships with lab lignin decomposition while all other predictors had positive relationships. 
These predictors explained 43% of the observed variance in lab lignin decomposition; the overall model (including random effects for site) explained 45%. Lignin decomposition in the field and lab shared many of the same predictors in the LMMs, including MAT, Fe ox , Ca cd , and Mn cd (Fig. ). The Fe ox showed a negative relationship while the other three predictors showed positive relationships with field lignin decomposition. These four predictors, along with MAP, soil C/N, soil N, pH, and Al ox , explained 31% of the variation in field lignin decomposition; the overall model with random effects explained 53% of the variation. Predictors of lab litter decomposition were generally similar to lab lignin decomposition. The LMM showed that Fe ox was the strongest predictor of litter decomposition, followed by fungal composition, Mn cd , Fe HCl , soil N, fungal quantity, bacterial quantity, MAT, and pH (Fig. ). Fe ox and pH were negative while all other predictors were positively related to litter decomposition. These predictors explained 49% of the variation in litter decomposition, and the overall model with random effects explained 52%. Contrary to lignin and litter decomposition, microbial variables were not important predictors of SOC decomposition in the statistical models (Fig. ). The LMM showed that soil C/N, MAT, and silt+clay were the strongest predictors for SOC decomposition, and that MAP, Ca cd , Mn cd , Al ox , and pH were also important (Fig. ). MAT and pH were positively related, while all other variables were negatively related, to SOC decomposition. The predictors collectively explained 43% of the variation in SOC decomposition and the overall LMM (including random effects) explained 71%.
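As a hedged illustration of the model structure behind these results (fixed biogeochemical predictors plus a random intercept for site), an LMM of cumulative lignin decomposition could be specified as below; the file and column names are hypothetical placeholders, not the authors' code.

```python
import pandas as pd
import statsmodels.formula.api as smf

# hypothetical per-sample table: cumulative decomposition plus predictors
df = pd.read_csv("neon_decomposition.csv")

model = smf.mixedlm(
    "lignin_loss ~ pH + MAT + fungal_quantity + Mn_cd + Ca_cd + "
    "silt_clay + Fe_ox + soil_CN",
    data=df,
    groups=df["site"],  # random intercept for each NEON site
)
result = model.fit()
print(result.summary())
```

The paired variance figures quoted above (for example, 43% vs. 45%) presumably correspond to the marginal and conditional R2 conventionally reported for such models, that is, fixed effects alone vs. fixed plus random effects.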
We conducted a separate N addition experiment to test effects of N availability on lignin and litter decomposition in the 0–15 cm soils. Lab lignin and litter decomposition rates were slightly increased by N addition in most sites throughout the lab incubation (Supplementary Fig. ), and cumulative decomposition was significantly greater after the 571-d incubation ( P < 0.01 for both; Fig. ). On average, N addition increased cumulative lab lignin decomposition by 1.6% and litter decomposition by 6.2%, and neither lignin nor litter decomposition was significantly depressed by N addition at any site after 18 months (Fig. ). Our field sites had large climate differences whereas the lab samples were incubated at the same temperature and comparable moisture, so we used an additional field lignin decomposition experiment with 0–15 cm soils to test whether similar biogeochemical predictors were important in the field and in the lab. The field experiment was conducted in mesh bags that allowed additional microbes to colonize the soil/litter/lignin mixtures over time. Cumulative field lignin C decomposition after ~1 y showed higher variation among samples and overall higher rates than observed in the lab (Supplementary Fig. ), corresponding with overall higher fungal quantity in the field than in the lab (Supplementary Fig. ). Total field lignin C loss relative to initial lignin C concentrations averaged 9–63% across the 20 sites, while the site-averaged lab lignin C loss was 3–16% (Supplementary Fig. ). There was no significant correlation ( P > 0.05) between field and lab lignin C loss. Along with climatic factors (mean annual temperature, MAT, and mean annual precipitation, MAP), we selected 25 biogeochemical predictors and separated them into three categories, including those related to N availability (11), geochemistry (8), and microbes (6) (Supplementary Table and Fig. ). The variation of these biogeochemical predictors was very high among sites and was generally similar between the lab and field incubation datasets (Supplementary Fig. ). For the N predictors, total soil N was 0.1–32 mg g −1 and C/N was 8–58. Both NH 4 + -N (0–104 µg N g −1 ) and NO 3 - -N (0–937 µg N g −1 ) tended to increase with incubation time in the lab (Supplementary Fig. ). For the geochemical predictors, soil pH was 4.0–9.2, and soil particle size (silt+clay, 5–91%) and metals had at least one order of magnitude difference among sites (0–30 mg g −1 Al ox ; 0–20 mg g −1 Fe HCl , 0–78 mg g −1 Fe ox ; 0–54 mg g −1 Fe cd-ox , 0.0–3.3 mg g −1 Mn cd , 0–55 mg g −1 Ca cd ; Supplementary Fig. ). Microbial community variables exhibited as much as four orders of magnitude difference among sites, with fungal quantity of 1.1 × 10 5 –2.6 × 10 9 gene copies g −1 , bacterial quantity of 5.2 × 10 9 –9.8 × 10 11 gene copies g −1 , fungal-to-bacterial ratio of 5.8 × 10 −6 –6.2 × 10 −2 , and a fungal diversity index of −39–73 (i.e., the residual of the Chao1 index for fungal communities, Supplementary Fig. ). Microbial quantity changed with time in the lab-incubated samples: fungal quantity increased after 9 months vs. the initial soil samples, and fungal quantity and bacterial quantity also increased after 14 months vs. 9 months (Supplementary Fig. ). We used three statistical approaches (linear-mixed models, LMMs, generalized additive mixed models, GAMMs, and random forest models, RFMs) to identify the most important soil geochemical, microbial, N, and climatic predictors of decomposition (Fig. ). 
The RFM partial dependence plots showed that many relationships between predictors and response variables were approximately linear until predictors increased beyond roughly the 90th percentile, above which response variables became approximately constant (Supplementary Figs. – ). The GAMMs similarly demonstrated that nearly all predictors had linear relationships with response variables (Supplementary Table ). Most of our optimal statistical models included multiple predictor variables from most categories (N-related, geochemical, and microbial variables; Fig. ). The optimal RFMs included predictors similar to those in the LMMs, with a few exceptions (see ). Thus, for clarity we hereafter focus mainly on results from the LMMs. In addition, to test whether biogeochemical predictors changed with time throughout the lab incubation, we compared models of cumulative decomposition after 6, 12, and 18 months (Supplementary Figs. and ). Predictors were generally similar over time, and thus we focused our subsequent analysis on the 18-month (571 d) dataset (the other results are presented in the ). The LMM of lab lignin decomposition showed that soil pH and fungal composition were the strongest predictors, and that MAT, fungal quantity, Mn cd , Ca cd , silt+clay, Fe ox , and soil C/N could also improve the final model (Fig. ). Soil pH and Fe ox had negative relationships with lab lignin decomposition while all other predictors had positive relationships. These predictors explained 43% of the observed variance in lab lignin decomposition; the overall model (including random effects for site) explained 45%. Lignin decomposition in the field and lab shared many of the same predictors in the LMMs, including MAT, Fe ox , Ca cd , and Mn cd (Fig. ). Fe ox showed a negative relationship while the other three predictors showed positive relationships with field lignin decomposition. These four predictors, along with MAP, soil C/N, soil N, pH, and Al ox , explained 31% of the variation in field lignin decomposition; the overall model with random effects explained 53% of the variation. Predictors of lab litter decomposition were generally similar to those of lab lignin decomposition. The LMM showed that Fe ox was the strongest predictor of litter decomposition, followed by fungal composition, Mn cd , Fe HCl , soil N, fungal quantity, bacterial quantity, MAT, and pH (Fig. ). Fe ox and pH were negatively related, while all other predictors were positively related, to litter decomposition. These predictors explained 49% of the variation in litter decomposition, and the overall model with random effects explained 52%. Contrary to lignin and litter decomposition, microbial variables were not important predictors of SOC decomposition in the statistical models (Fig. ). The LMM showed that soil C/N, MAT, and silt+clay were the strongest predictors of SOC decomposition, and that MAP, Ca cd , Mn cd , Al ox , and pH were also important (Fig. ). MAT and pH were positively related, while all other variables were negatively related, to SOC decomposition. The predictors collectively explained 43% of the variation in SOC decomposition and the overall LMM (including random effects) explained 71%. Overall, our continental-scale data showed the particular importance of geochemical and microbial predictors for lignin and litter decomposition, and their differing relationships with SOC decomposition (Fig. ), consistent with our first and second hypotheses, respectively.
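Continuing the hypothetical sketch above, the linearity checks described here could be run on those fitted objects; `pdp` is our choice of package for partial dependence, since the text does not name one.

```r
## Partial dependence and smooth-term checks (continuing the sketch above;
## `rfm`, `gamm_fit`, and `dat` are the hypothetical objects defined earlier).
library(pdp)

# Partial dependence of the RFM response on one predictor, e.g. Fe_ox;
# flattening above ~90th percentile appears as a flat right-hand tail
pd <- partial(rfm, pred.var = "Fe_ox", train = dat)
plotPartial(pd)

# GAMM smooths: estimated degrees of freedom (edf) near 1 for a term
# indicate an approximately linear predictor-response relationship
summary(gamm_fit$gam)
```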
Our results collectively supported different aspects of classic and modern views of decomposition. The strong correlation between lignin and litter decomposition (Fig. ) and the similar biogeochemical predictors of these processes support the classic view that lignin decomposition is tightly coupled with overall litter decomposition , . However, we found that decomposition of SOC was unrelated to decomposition of lignin and litter, and that these processes often had contrasting relationships with biogeochemical predictors. Several soil geochemical factors had negative (Fe ox and Al ox ) and positive (Fe HCl , Mn cd , and Ca cd ) relationships with lignin and/or litter decomposition, while there were almost entirely negative relationships between extractable soil metals and SOC decomposition (Fig. ; Supplementary Fig. ). Intriguingly, microbial variables including fungal composition and fungal and bacterial quantity were needed to explain variation in the decomposition of lignin and litter, but not SOC (Fig. ). These findings are inconsistent with the classic idea that the slow decomposition of lignin residues limits decomposition of total SOC . Rather, the disparate rates and predictors of lignin and SOC decomposition support the modern proposal that lignin depolymerization is not necessarily a primary bottleneck for SOC decomposition , . Furthermore, our dataset provides an explanation for the decoupling of lignin and SOC decomposition by highlighting their differing relationships with geochemical and microbial variables. Our data demonstrated that lignin decomposition was not universally slow or fast when compared with decomposition of litter and SOC, but rather, it varied predictably among sites along with biogeochemical variables (Fig. ). Long-term climatic predictors (MAT and MAP) could explain variation in both field and lab decomposition, but the actual vs. legacy climate, as reflected by the field vs. lab experiments, respectively, had different relationships with lignin decomposition (Fig. ; Supplementary Figs. and ). A relatively high amount of N addition stimulated decomposition of lignin and litter to a minor degree in most sites after 18 months (Fig. ), but its overall impact was small when considering the wide range of decomposition across samples (Fig. ). Inorganic N appeared to be less important than geochemical and microbial properties (Fig. ). We found that geochemical variables often had different relationships with lignin and litter decomposition than with SOC decomposition (Fig. ). For lignin and/or litter decomposition, some geochemical variables had negative relationships (e.g., Fe ox in the LMM and RFM and Al ox in the RFM) and others had positive relationships (e.g., Fe HCl , Mn cd and Ca cd in the LMM and RFM), while for SOC decomposition, they mainly had negative relationships (e.g., Al ox , Mn cd , and Ca cd in the LMM and RFM, and Fe HCl and Fe cd-ox in the RFM). The metals extracted from these NEON soils likely represent ions (e.g. Ca cd ), metals dissolved from mineral phases of varying crystallinity (e.g., Fe HCl , Fe ox , Fe cd-ox ), or a mixture of ions and mineral phases (e.g., Mn cd and Al ox ) . Synthesis studies and lab experiments demonstrate that soil mineral and metal cations as well as fine particles (silt+clay) are important predictors of SOC concentration due to protection by sorption, precipitation, and aggregation , , . 
In our study, protective effects of soil metals and minerals mainly applied to SOC decomposition, and conversely, some of these same variables were actually associated with greater decomposition of lignin and litter. We found positive associations of some soil metals (e.g., Mn cd and Ca cd in the LMM, and Fe HCl in the RFM) with lignin and litter decomposition (Fig. and Supplementary Fig. ), consistent with catalytic or biological roles of soil metals in organic matter decomposition demonstrated in other studies , , . The finding that Fe HCl was positively related and Fe ox negatively related to lignin and litter decomposition was consistent with multiple functional roles of Fe, which might stimulate decomposition or provide protection depending on C molecular composition and/or redox environment , . Moreover, lignin and litter decomposition increased in samples with greater Mn and Ca (Fig. ), consistent with the importance of Mn-promoted degradation of organic C . Mn can promote lignin decomposition via enzymes and redox cycling , which may have increased overall litter decomposition . The strong positive relationships between Ca and decomposition of lignin and litter agreed with previous studies showing that Ca was positively related to the extent of litter mass loss and, in particular, lignin degradation , as Ca is an essential component of the fungal cell wall and can increase the growth of white rot fungi . We also found an overall positive relationship of silt+clay with lignin and litter decomposition, which might reflect multiple biological and physical factors that co-vary with particle size, as well as the potential for minerals to catalyze OM decomposition . Overall, the role of certain metals and fine particles in stimulating lignin and litter decomposition while suppressing SOC decomposition provides an explanation for the fact that these processes may be coupled or decoupled to varying degrees , depending on soil characteristics. Consistent with our hypotheses, the composition and quantity of overall fungal communities explained variation in lignin and litter decomposition (Fig. ). Intriguingly, however, only three of the fungal genera significantly correlated with lab lignin decomposition have been reported to degrade lignocellulose ( Trichocladium , soft rot; Mycena and Hypochnicium , white rot) – (Supplementary Table ). This indicates that the most commonly studied lignin-degrading fungi (i.e., the known “rot” fungi) were not necessarily the most important lignin-degrading organisms in our continental-scale dataset . Consistent with a previous study across North America , we found that fungal communities were highly heterogeneous across sites and even within plots; e.g., only 65 of 342 fungal species occurred in >10 samples. This finding further suggests that the specific fungal taxa possibly responsible for lignin decomposition varied with location and even with depth within the same plot. Bacterial quantity was also related to both lignin and litter decomposition (Fig. and Supplementary Fig. ). Although bacteria may degrade lignin directly , they might simply be responding to increased C availability as a consequence of fungal lignin decomposition, or may synergistically interact with fungi to promote lignin decomposition .
Our findings build on previous laboratory experiments that demonstrated an impact of microbial composition on decomposition of inoculated litter , , by showing that fungal community composition and abundance explained variation in lignin and litter decomposition rates even among diverse soils. This challenges the hypothesis of microbial functional redundancy , . Furthermore, the differing relationships between fungal composition and decomposition of lignin and SOC provide another explanation for the observed decoupling of these processes. Contrary to lignin and litter decomposition, microbial variables were less important predictors of SOC decomposition (Fig. ), despite high variation in fungal composition and richness and bacterial and fungal quantities across soils (Supplementary Figs. and ). Different axes of fungal community composition correlated with decomposition of lignin and litter (PC2) vs. SOC (PC1; Supplementary Fig. ). Significant relationships between SOC decomposition rate and microbial community composition, biomass, and richness have sometimes been reported , but other studies found weak relationships . Consistent with the latter findings, we found that microbial predictors including fungal community composition, fungal quantity, and bacterial quantity were not related to SOC decomposition after accounting for other variables (Fig. ). One possible explanation for the null relationship between microbial predictors and SOC decomposition is that SOC turnover is dominantly determined by decomposer access to SOC . A large proportion of the SOC in our incubated soils was probably stored in small pores inaccessible to microbes, whereas the added lignin and litter, which were gently mixed into the soil, were not protected in this way. This might explain why decomposition of the added lignin and litter was measurably related to fungal community composition and quantity, whereas decomposition of SOC was not. After accounting for other biogeochemical predictors, the MAT and MAP of the study sites were still related to organic matter decomposition (Fig. ) even under the common conditions of temperature and moisture imposed in the lab incubation. This is consistent with previous findings that climate history influenced litter and SOC decomposition, possibly by shaping the composition and functional responses of decomposer communities and/or via correlation with soil minerals through secondary mineral formation , , . Microbial communities from different soils can remain distinct over months to years even when exposed to a common temperature and moisture regime, and in spite of changes in community composition over time , (e.g., Supplementary Fig. ). Climate greatly impacts soil weathering , , and although our statistical models included geochemical variables, it is possible that the apparent relationships between decomposition and climate also reflected geochemical differences that were not accounted for by the extractable metals data (Fig. ). It was not surprising that MAP had different relationships with field and lab lignin decomposition (Fig. , Supplementary Figs. and ), given that MAP reflected either the actual differences in climate during the field experiment or the legacies of prior climate differences during the lab experiment. Nevertheless, all organic matter decomposition variables were positively related to MAT in the statistical models (Fig. ), suggesting that the legacy effect of site MAT on OM decomposition was stronger than that of MAP.
On balance, inorganic N addition led to only a small net stimulation of lab lignin and litter decomposition after 18 months (Fig. ), and the effects of N were relatively small in comparison with variation across sites (Figs. and ). The positive response of lignin and litter decomposition to N addition might imply that microbial growth was N-limited in many sites. However, lab lignin and litter decomposition were not consistently related to inorganic N in the experiment without N addition, and they had differing relationships with total N and C/N (Fig. , Supplementary Figs. , and ). When considering continental-scale variation in biogeochemical properties, variation in N availability may be a less important driver of decomposition than sometimes assumed. Comparison of our results with other recent observations from NEON soils indicates that the differing controls on decomposition of lignin and litter vs. SOC may contribute to variation in SOC concentration and organic matter composition among ecosystems. Many of the same variables that predicted C decomposition in the lab incubation also predicted differences in SOC concentration and the distribution of SOC between size fractions, defined as chemically dispersed particulate organic C (>53 µm, likely derived mostly from plants) and mineral-associated organic C (<53 µm, likely derived from a mixture of plants and microbes), which were described in previous studies of NEON soils , . Silt- and clay-sized minerals and reactive Fe phases in particular have long been thought to protect SOC from decomposition, even though the relationships among these variables can be relatively weak across large datasets , . Here, we found that the magnitude or even the sign of the pairwise correlation or model coefficient between decomposition and silt+clay or Fe in various extractions (Fe HCl and Fe ox ) often differed between lignin/litter and SOC (Fig. and Supplementary Fig. ). These differences could influence soil organic matter composition while helping to explain the context-dependency of relationships between Fe and SOC concentration in other datasets , . For example, negative relationships of Fe ox with lignin and litter decomposition (Fig. ) could help explain the positive relationship between Fe ox and the proportion of SOC present as particulate rather than mineral-associated organic C, which we observed in our previous study with the same soils . That is, Fe ox could increase particulate organic C by disproportionately decreasing rates of lignin decomposition relative to bulk SOC, consistent with the view that particulate organic C is mostly composed of decomposing plant detritus, which may aggregate with certain metals . Similarly, the positive relationship between silt+clay and lignin decomposition and its negative relationship with SOC decomposition are consistent with our previous finding that increased silt+clay was associated with a lower proportion of particulate vs. mineral-associated organic C . This might simply be due to increased capacity for mineral protection, but it might also be linked to increased catalysis of lignin decomposition by metals and/or minerals in these fine particle fractions . Together, the contrasting relationships of silt+clay and Fe ox with decomposition of lignin and litter vs. SOC provide an explanation for why these variables may be poor predictors of SOC concentration over broad scales, even though they may be related to the physical forms of SOC (particulate vs. mineral-associated organic C).
In summary, using a quantitative isotopic method, we found that decomposition of lignin varied 18-fold among soils sampled from sites across North America and incubated in a common environment. Lignin decomposition was always slower than, but strongly related to, bulk litter decomposition. Differences in lignin decomposition among sites were strongly related to biogeochemical predictors, in a manner that was similar to bulk litter decomposition but often differed from SOC decomposition. Different axes of fungal community composition were related to decomposition of lignin and litter compared to SOC, and metals often positively correlated with lignin decomposition even though they had a neutral or negative correlation with SOC decomposition. Similarities in controls on lignin vs. bulk litter decomposition reinforce the traditional view that lignin is tightly coupled with overall litter decay over timescales of months to years, although this might differ in environments subject to photodegradation . In contrast, the difference in controls on lignin and litter decomposition vs. SOC supports the modern notion that lignin depolymerization is not always a primary bottleneck for SOC decomposition. Based on the observed differences in drivers of lignin decomposition vs. SOC decomposition, we might expect lignin to be a more important component of SOC in soils with higher Fe ox and lower Mn cd , Ca cd , and silt+clay. Decomposition of all C forms increased with site MAT even though samples were incubated under a common temperature, possibly reflecting microbial or geochemical legacies related to climate. While substantial research has focused on N dynamics as controls on litter decomposition, our data showed that the influence of N availability on decomposition of lignin and litter was often smaller than that of other geochemical and microbial factors. Together, our data demonstrate the critical need for mechanistic models to account for contrasting geochemical and microbial controls on decomposition of lignin and litter vs. SOC, in addition to the traditional variables of climate, residue quality, and nutrient availability.

Methods

Experimental design

We used 20 sites from the National Ecological Observatory Network (NEON) to examine decomposition of lignin, bulk litter, and SOC and to test biogeochemical predictors of decomposition of these substrates, including geochemical, microbial, N-related, and climatic variables. Soils amended with C stable isotope (β- 13 C)-labeled and unlabeled lignins and a single natural litter source were incubated in the lab to quantify lignin, litter, and SOC decomposition over 18 months (Fig. ). An additional incubation experiment was also conducted to test the effects of N addition on lignin and litter decomposition. The results of lignin decomposition and its predictors from the lab incubation were further compared with those from a field incubation. The lab incubation enabled us to compare C decomposition among samples while standardizing temperature and moisture, whereas the field incubation allowed us to assess effects of actual site temperature and moisture on lignin decomposition, while also allowing for sample colonization by additional microbes.

Site selection and soil sampling

NEON is a U.S.-based, continental-scale ecological monitoring network that provides open data, samples, and research infrastructure to reveal how ecosystems are responding to environmental change .
NEON sites are stratified among domains defined by climate characteristics , not by soil type, and while they naturally contain a wide diversity of soil types, soils at each site are not necessarily representative of the corresponding ecoclimatic domain. For this project, we selected 20 NEON terrestrial sites, denoted by their acronyms as follows: BONA, CPER, DSNY, GRSM, HARV, KONZ, LENO, NIWO, ONAQ, OSBS, PUUM, SJER, SRER, SCBI, TALL, TOOL, UNDE, WREF, WOOD, and YELL (Fig. ). These sites span wide edaphic, climatic, and ecosystem gradients (Supplementary Fig. ), and they were chosen to span broad differences in biogeochemical characteristics, within constraints of feasibility and permitting. They encompass 9 of the 12 soil orders in the United States Department of Agriculture (USDA) soil classification system (no Histosols, Oxisols, or Vertisols; Supplementary Table ). The nine soil orders include Alfisols (CPER and SCBI), Andisols (WREF), Aridisols (ONAQ and SRER), Entisols (DSNY, OSBS and PUUM), Gelisols (TOOL), Inceptisols (BONA, GRSM, HARV, LENO and NIWO), Mollisols (KONZ, SJER, WOOD and YELL), Spodosols (UNDE), and Ultisols (TALL). The sites had mean annual temperatures (MAT) of −9 to 22 °C and received 262–2657 mm of mean annual precipitation (MAP). The sites included diverse ecosystem types, such as tundra, forest, wetland, grassland, shrubland, and desert. Soils at each site were sampled by NEON staff during the growing season of 2019 (April–August; later sampling occurred at Alaska sites where soils did not thaw until July or August). Mineral soil samples were collected at two depths (0–15 cm and 15–30 cm), after removing any surface litter or organic horizon (Supplementary Table ), using a 2- to 5-cm diameter corer, according to the standard NEON sampling procedure for that particular site. At each site, samples were collected around the perimeter of one 40 × 40-m “distributed base plot” selected to represent the dominant upland vegetation and soil type of that site whenever possible. Soil at the KONZ site was collected only at 0–15 cm due to the shallow soil depth. Each plot had 16 replicates ( n = 16), denoted sampling points 1–16 hereafter. Point 1 was located 4 m west and 4 m south from the SW corner of each plot, and the other points were located in counterclockwise sequence at 12-m intervals around the perimeter of the plot, each located 4 m outside of the plot boundary (Fig. ); a coordinate sketch of this layout is given below. Soil cores from each point were collected and shipped overnight on ice (~4 °C) to Iowa State University (ISU) for use in laboratory and field incubations. Soil from each sample was gently homogenized inside a plastic bag after any coarse roots, macrofauna, or rocks were manually removed. We did not sieve samples except for ONAQ and SRER, where rocks were abundant and were removed by passing soil through a 2-mm sieve.
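To make the layout concrete, the following sketch (our reconstruction, not NEON code) generates the 16 point coordinates: the points sit on a 48 × 48-m square 4 m outside the 40 × 40-m plot, whose 192-m perimeter accommodates exactly 16 points at 12-m spacing.

```r
## Reconstruction of the sampling layout: coordinates in meters relative to
## the plot SW corner; point 1 at (-4, -4), then counterclockwise at 12-m steps.
perimeter_point <- function(d, side = 48, origin = c(-4, -4)) {
  d <- d %% (4 * side)
  if (d < side)          c(origin[1] + d,            origin[2])                 # south edge, eastward
  else if (d < 2 * side) c(origin[1] + side,         origin[2] + d - side)      # east edge, northward
  else if (d < 3 * side) c(origin[1] + 3 * side - d, origin[2] + side)          # north edge, westward
  else                   c(origin[1],                origin[2] + 4 * side - d)  # west edge, southward
}
pts <- t(sapply(seq(0, by = 12, length.out = 16), perimeter_point))
colnames(pts) <- c("x_m", "y_m")
pts[1, ]  # point 1: x = -4, y = -4 (4 m west and 4 m south of the SW corner)
```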
Lab incubation experiments

Soils from four sampling points at two depths per site were used for lab incubation and biogeochemical analyses, totaling 156 samples. The four sampling points were generally the odd-numbered points at the middle of each side of the 40 × 40-m plot (red circles in Fig. ), although at some sites adjacent points were selected when soils were not available for both depths. Subsamples of soils used for the lab incubation experiment were brought to field moisture capacity, which was determined for each soil by saturating an additional 20–30-g subsample placed on filter paper in a funnel and measuring gravimetric water content following 48 h of drainage. Subsamples (1 g dry mass equivalent) from each sampling point and depth were incubated under each of three separate substrate treatments to partition C decomposition among three sources, using measurements of δ 13 C values of CO 2 . We quantified decomposition of C from extant soil organic matter, C from added litter (senesced leaves of Andropogon gerardi , a C 4 grass), and a specific C atom (the C β position of the propyl sidechain) in lignin that was precipitated on the added litter. The lignin was prepared as described in the . Substrate treatments were: (1) soils alone (control); (2) soils amended with A. gerardi litter precipitated with trace natural-abundance 13 C lignin (soil + litter + unlabeled lignin); (3) soils amended with A. gerardi litter precipitated with trace lignin labeled with 99 atom percent 13 C at the C β position of each lignin C 9 substructure (soil + litter + 13 C β -labeled lignin). We added uniform litter and synthetic lignin to each of the mineral soils to focus on soil biogeochemical gradients rather than substrate quality. Soils were gently mixed with the litter + lignin mixture in a 250:25:1 ratio of soil:litter:lignin (1 g dry soil mass equivalent was mixed with 100 mg litter and 4 mg lignin). To prepare the litter + lignin mixture, the unlabeled or labeled lignins were precipitated in a 1:25 mass ratio on dried and finely ground leaf litter of A. gerardi (41.9% C, 0.41% N, and δ 13 C = −12.6‰; see Supplementary Methods for more details). The 20 NEON sites comprise ecosystems ranging from C 3 -dominated forest and grassland sites to mixed C 3 –C 4 grasslands and/or plants with Crassulacean acid metabolism, such that the δ 13 C value of the added C 4 litter was always more positive than the δ 13 C value of CO 2 derived from soil organic matter at a given site. Soil samples were incubated under oxic conditions in the dark at 23 °C for 571 d. Soil was kept in an open 50 mL centrifuge tube inside a glass jar (946 mL) sealed with a gas-tight aluminum lid with butyl septa for headspace gas purging and sampling. The jars were flushed with CO 2 -free air following periodic headspace sampling as described below, and CO 2 concentrations remained below 5000 ppm during the incubation. Assuming a 1:1 ratio between CO 2 production and oxygen (O 2 ) consumption, O 2 decreased by <2.4% of the initial value (20.9%) during each sampling period. Because the volume of incubated soil was ~1000-fold smaller than the jar headspace, CO 2 produced by soil microbes would diffuse out of the soil and accumulate in the headspace with negligible storage in soil pores. Soil moisture was monitored by recording the mass of each sample, and water was added as necessary to match the original mass of each sample at field moisture capacity, every month before 179 d and every other month thereafter (matching the less frequent gas sampling), to replenish vapor lost during headspace flushing. To monitor instantaneous decomposition over time and to avoid accumulation of CO 2 > 5000 ppm in the jar, headspace gas was initially measured at 4 d and 11 d, then every other week for another 140 d, and then every month after 179 d (for a total duration of 571 d).
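The text does not spell out the conversion from a headspace mole fraction to a C mass, but under the stated jar volume and temperature it follows from the ideal gas law; a sketch under those assumptions:

```r
## Ideal-gas sketch: micrograms of CO2-C accumulated in the 946-mL jar
## headspace between flushing events, from a measured CO2 mole fraction (ppm).
co2_to_ugC <- function(ppm, vol_L = 0.946, temp_C = 23, pressure_kPa = 101.325) {
  R <- 8.314                                                               # J mol-1 K-1
  n_air <- pressure_kPa * 1000 * (vol_L / 1000) / (R * (temp_C + 273.15))  # mol gas in jar
  n_co2 <- n_air * ppm * 1e-6                                              # mol CO2
  n_co2 * 12.011 * 1e6                                                     # µg CO2-C
}
co2_to_ugC(5000)  # ~2338 µg (~2.3 mg) C at the 5000-ppm ceiling noted above
```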
The CO 2 concentrations and their δ 13 C values were measured by a tunable diode laser absorption spectrometer (TGA200A, Campbell Scientific, Logan, UT) immediately prior to flushing the headspace . Because jars remained sealed between headspace sampling events, we were able to quantify the entire cumulative production of CO 2 and its δ 13 C value from each replicate over the course of the experiment. CO 2 production from soil was measured on samples with no addition of litter and lignin, and CO 2 from litter and 13 C β -labeled lignin was calculated by two-source mixing models that used measurements from the litter + unlabeled lignin and litter + 13 C β -labeled lignin treatments, respectively , (see for more details). Decomposition of C from soil, litter, and lignin was expressed as percentages of the initial C masses (41.9 mg for litter and 264 μg for the 13 C β atom of the labeled lignin, and a variable amount for SOC; Supplementary Table ). We also conducted an N addition experiment to test the effects of N availability on lignin and litter decomposition, using additional subsamples of the 0–15 cm soils collected from the four sampling points described above. For this experiment, the subsamples amended with litter + unlabeled lignin or litter + 13 C β -labeled lignin were also amended with NH 4 NO 3 at 50 µg N g −1 . The amount of added N was relatively high but comparable to inorganic N concentrations often observed in agricultural fields after fertilization . Briefly, 51 µL of 0.0386 mol L −1 NH 4 NO 3 was added to each soil sample, and then more water was added as necessary to achieve field moisture capacity. Sample incubation and gas measurements were the same as described above.
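For the natural-abundance litter partition, such a two-source mixing model takes the standard form below. This is our sketch with illustrative numbers; the study's exact equations, including the treatment of the highly 13 C-enriched lignin label, are given in its Supplementary Methods.

```r
## Two-source delta-13C mixing: fraction of respired CO2-C derived from the
## added C4 litter, given the soil-only control and the litter endmember.
##   f_litter = (d13C_mix - d13C_soil) / (d13C_litter - d13C_soil)
frac_from_litter <- function(d13C_mix, d13C_litter, d13C_soil) {
  (d13C_mix - d13C_soil) / (d13C_litter - d13C_soil)
}

# Illustrative values: soil-respired CO2 at -26 per mil (a C3-dominated site),
# added C4 litter at -12.6 per mil, mixture measured at -20 per mil
f <- frac_from_litter(-20, d13C_litter = -12.6, d13C_soil = -26)
f        # ~0.45 of respired CO2-C attributed to litter
f * 500  # e.g., 500 µg total CO2-C -> ~224 µg litter-derived C
```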
Field incubation experiments

The 0–15 cm soils from all 16 sampling points at each site were used for the field incubation (Fig. ). Soil subsamples (4.5 g dry mass equivalent) were gently mixed with litter + unlabeled lignin or litter + 13 C β -labeled lignin according to the mass ratios and substrate treatments described above. The soil + litter mixtures were then transferred to mesh bags (8 cm × 8 cm; 55 μm nylon screen), which allowed entry of fungal hyphae, bacteria, and soil microfauna while minimizing particle loss . The mesh bags were sealed with hot glue, shipped back to the sites of origin, buried at a depth of 0–15 cm at the same locations where soils were initially sampled, and geo-referenced to facilitate retrieval. The mesh bags with litter + unlabeled lignin were buried at the even-numbered sampling points for each site, and those with litter + 13 C β -labeled lignin were buried at the odd-numbered sampling points. After ~1 y of field incubation, the mesh bags were retrieved by NEON staff, flash-frozen on dry ice, and shipped on ice to ISU. Some bags were damaged or could not be located in the field (31 out of 320 samples). The soil and litter mixture was subsampled from each mesh bag, and then air-dried and finely ground for analysis of C concentrations and δ 13 C at the UC Davis Stable Isotope Facility using an elemental analyzer (Elementar Analysensysteme GmbH, Hanau, Germany) and a continuous-flow isotope ratio mass spectrometer (Sercon Ltd., Cheshire, UK). Lignin C remaining after the 1-y field incubation was calculated by multiplying f lignin , calculated from a two-source mixing model (see details in ), by the total C concentration in samples from the soil + litter + 13 C β -labeled lignin treatment, with corrections accounting for new C inputs as necessary based on measurements of the samples with unlabeled lignin (see details in ).
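A sketch of this calculation under our assumptions (the published corrections for new C inputs are omitted, and the function names are ours). Because the lignin label is 99 atom% 13 C, mixing is safer in 13 C atom fraction than in δ units:

```r
## f_lignin from two-source mixing in 13C atom fraction F, then lignin C
## remaining as f_lignin x total C of the labeled-treatment sample.
delta_to_F <- function(delta, R_std = 0.0112372) {  # VPDB 13C/12C reference ratio
  R <- (delta / 1000 + 1) * R_std
  R / (1 + R)
}
f_lignin <- function(F_sample, F_unlabeled, F_label = 0.99) {
  (F_sample - F_unlabeled) / (F_label - F_unlabeled)
}
# Illustrative example: paired labeled/unlabeled samples, total C = 25 mg C g-1
f <- f_lignin(F_sample    = delta_to_F(150),   # labeled-treatment sample
              F_unlabeled = delta_to_F(-25))   # matching unlabeled sample
f * 25  # mg lignin-derived C per g sample (before new-C-input corrections)
```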
Soil inorganic N availability

We measured ammonium (NH 4 + ) and nitrate (NO 3 − ) in additional replicate soil + litter mixture samples (10:1 mass ratio of soil to litter) from all soils used in the lab incubation after 1, 9, and 18 months. Briefly, 10 g soil mixed with 1 g litter was placed in a 50 mL centrifuge tube, loosely covered, and then incubated at 23 °C in the dark after adjusting soil moisture to field capacity. Water was periodically added to soil samples to replace vapor loss, measured gravimetrically. Soil (~2 g) was subsampled from each centrifuge tube and extracted with 2 M potassium chloride at each timepoint. The soil solution was analyzed by microplate colorimetry for NH 4 + -N . The NO 3 − -N was analyzed by microplate colorimetry or, for the 9-month samples, second-derivative spectroscopy ; these methods agreed almost perfectly on a subset of samples (slope = 0.95, R 2 = 0.97). Net N mineralization (sometimes known as potential N mineralization) was calculated as the difference in inorganic N between timepoints (9 months vs. 1 month; 18 months vs. 9 months; 18 months vs. 1 month).

Soil geochemical analysis

Most physical and geochemical measurements were made on soils from all of the sampling points used for the field and lab incubations, except for particle size and 0.5 M HCl extractions, which were done for the four sampling points per site used for laboratory incubation. Physical and geochemical measurements included soil pH, particle size fractions, 0.5 M HCl-extractable Fe(II) and Fe(III), ammonium oxalate-extracted metals (Al, Fe, Mn), and citrate dithionite-extracted metals (Al, Fe, Mn, and Ca). Some of these data were presented previously in a manuscript describing relationships between soil properties and particulate and mineral-associated organic matter fractions of these soils . Field-moist soil subsamples were measured for pH in 1:1 slurries of soil and deionized water. Air-dried subsamples were used to measure particle size (sand, silt, and clay) by sieving and sedimentation following aggregate dispersion with sodium hexametaphosphate . Field-moist subsamples were extracted with 0.5 M hydrochloric acid (HCl) to measure ionic Fe and highly reactive fractions of Fe(II) and Fe(III) minerals . Concentrations of Fe(II) and Fe(III) were measured colorimetrically and summed as Fe HCl . Additional air-dried subsamples were extracted with acid ammonium oxalate in the dark at pH = 3 to measure organo-metal complexes and short-range-ordered (SRO) phases of Al, Fe, and Mn (denoted Al ox , Fe ox , and Mn ox ), and with sodium citrate dithionite to measure the crystalline and SRO phases of Fe (Fe cd ) as well as co-occurring Al, Mn, and Ca (Al cd , Mn cd , and Ca cd ) . Metals were analyzed via inductively coupled plasma optical emission spectrometry (PerkinElmer Optima 5300 DV, Waltham, MA). Extractions of Al and Mn by oxalate and citrate dithionite were very similar ( r = 0.88 and P < 0.001 for Al; r = 0.98 and P < 0.001 for Mn), so we only report Al ox and Mn cd . The difference between Fe cd and Fe ox represents crystalline phases (Fe cd-ox ). We interpret Mn cd as including exchangeable Mn, organo-metal complexes, and poorly crystalline phases. We interpret Ca cd as a measure of exchangeable Ca and Ca in organo-Fe associations.

Microbial analysis

DNA was extracted from soils for internal transcribed spacer (ITS) rRNA gene amplicon sequencing and quantitative PCR of 16S and ITS rRNA regions. Each of the four soils per site used for lab incubation was subsampled for DNA extraction at the beginning of the incubation, and additional replicates were extracted after 9 and 14 months. The incubated replicates used for DNA extraction were prepared similarly to the replicates used for CO 2 analyses, and were amended with A. gerardi litter in a 1:10 mass ratio of litter to soil. The field-incubated soils corresponding to the same four sampling points for each site used in the lab incubation were also extracted for DNA, totaling 548 samples overall (156 soils × 3 timepoints for lab incubation and 80 soils for field incubation). Soils were stored at −80 °C before DNA extraction from 250 mg subsamples using the MagAttract PowerSoil DNA EP Kit (Qiagen, USA) on an Eppendorf epMotion 5075 liquid handling robot (Eppendorf North America, USA). Concentrations of DNA were measured using a Quant-iT™ dsDNA high-sensitivity Assay Kit (Invitrogen, USA) to standardize DNA masses for sequencing. Samples were diluted to 10 ng DNA μL −1 prior to sequencing; samples with concentrations <10 ng DNA μL −1 were submitted directly. The ITS1 region of the ITS rRNA gene was amplified using the primers ITS1f (CTTGGTCATTTAGAGGAAGTAA) and ITS2 (GCTGCGTTCTTCATCGATGC), with PCR conditions as follows: 1 min at 94 °C; 35 cycles of 30 s at 94 °C, 30 s at 52 °C, and 30 s at 68 °C; and 10 min at 68 °C. Fungal ITS rRNA gene amplicon sequencing was performed on the Illumina MiSeq platform at Argonne National Laboratory with library preparation using the MiSeq Reagent Kit v2 (Illumina, USA), producing 2 × 250-bp reads. Quantitative real-time PCR was performed on a CFX96™ real-time system coupled to a C1000™ thermal cycler (Bio-Rad, USA) to assess the quantity of 16S and ITS rRNA genes. Each sample was prepared using 10 μL of SsoFast EvaGreen Supermix, 0.6 μL of each primer, 2 μL of diluted DNA sample, and nuclease-free water to a final volume of 20 μL. Bacterial 16S rRNA genes were amplified using the primers 1055YF (ATGGYTGTCGTCAGCT) and 1392R (ACGGGCGGTGTGTAC) and the following PCR conditions: 2 min at 50 °C and 10 min at 95 °C, followed by 40 cycles of 15 s at 95 °C and 1 min at 58 °C. Fungal ITS rRNA genes were amplified using the primers ITS1F_KYO1 (CTHGGTCATTTAGAGGAASTAA) and ITS2_KYO2 (TTYRCTRCGTTCTTCATC) and the following PCR conditions: 2 min at 50 °C and 2 min at 95 °C, followed by 40 cycles of 30 s at 95 °C, 30 s at 55 °C, and 60 s at 72 °C, and then 10 min at 72 °C. Standard curves for 16S and ITS rRNA genes were constructed using serial 10-fold dilutions from 10 −1 to 10 −8 of known concentrations of synthesized oligonucleotides (Integrated DNA Technologies, USA).

Bioinformatics

We used the Divisive Amplicon Denoising Algorithm 2 (DADA2) pipeline to process the ITS rRNA gene sequencing data in R statistical software version 3.6.1 . We excluded samples with ≤900 sequences, including 27, 78, and 1 sample collected after 0, 9, and 14 months of the lab incubation, respectively. All functions were run using default parameters suggested by the DADA2 pipeline tutorial; a condensed sketch of such a workflow is shown below.
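This sketch uses default-style parameters; the file paths and the UNITE reference file name are hypothetical, and the depth and abundance cutoffs match those reported in the next paragraph.

```r
## Condensed DADA2 ITS workflow, followed by phyloseq trimming.
library(dada2)
fnFs   <- sort(list.files("fastq", pattern = "_R1.fastq.gz", full.names = TRUE))
filtFs <- file.path("filtered", basename(fnFs))
filterAndTrim(fnFs, filtFs, maxN = 0, maxEE = 2, truncQ = 2, multithread = TRUE)
errF   <- learnErrors(filtFs, multithread = TRUE)
dadaFs <- dada(filtFs, err = errF, multithread = TRUE)
seqtab <- makeSequenceTable(dadaFs)
seqtab <- removeBimeraDenovo(seqtab, method = "consensus", multithread = TRUE)
taxa   <- assignTaxonomy(seqtab, "UNITE_10.05.2021.fasta.gz", multithread = TRUE)

library(phyloseq)
ps <- phyloseq(otu_table(seqtab, taxa_are_rows = FALSE), tax_table(taxa))
ps <- prune_samples(sample_sums(ps) > 900, ps)  # drop samples with <= 900 sequences
ps <- prune_taxa(taxa_sums(ps) >= 10, ps)       # drop ASVs with < 10 sequences overall
```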
The end product included an amplicon sequence variant (ASV) table recording the number of times each exact ASV was observed in each sample, along with a taxa table recording the taxonomy assigned to the ASVs from kingdom to species level, using the naive Bayesian classifier algorithm and the UNITE database version 10.05.2021. Most ASVs had 251–336 bp, falling within the commonly amplified ITS1 region length of 200–600 bp. Next, we trimmed the ASV tables using the “phyloseq” package in R. ASVs with <10 sequences across all samples (i.e., rare ASVs) were removed. Before trimming, there were 22,154 total ASVs and 3,118,076 total sequences across 442 samples; afterwards, there were 15,583 total ASVs and 3,085,446 total sequences. After removing rare ASVs, there were 4 to 126 ASVs (mean = 55) and 441 to 17,234 sequences per sample (mean = 6,981).

Statistical analysis

For the lab incubation, we explored temporal trends in instantaneous C decomposition rate from each C source at each site, and in lignin C decomposition rate for each individual sampling point (Supplementary Fig. ), using GAMMs including an autoregressive error term to account for temporal autocorrelation, implemented with the “mgcv” package version 1.8.28 in R 3.6.1. Pairwise correlations between cumulative C decomposition over 6, 12, and 18 months (lignin, litter, soil, and field lignin decomposition) and biogeochemical predictors were tested by Pearson correlation. The biogeochemical predictors included several categories, which we define as follows: (1) climatic: MAT and MAP; (2) N-related: bulk N, bulk C/N, and NH 4 + -N and NO 3 − -N after 1-, 9-, and 18-month incubations; (3) geochemical: soil pH, silt+clay, Al ox , Fe ox , Fe cd-ox , Fe HCl , Mn cd , and Ca cd ; (4) microbial: fungal composition, fungal Chao1 richness, fungal quantity, bacterial quantity, and fungal-to-bacterial ratio (Supplementary Table ). Among the microbial predictors, fungal composition was represented by the first (PC1) or second (PC2) axis of a principal coordinate analysis of ITS rRNA gene sequencing data on soils subsampled from the lab incubation at 14 months, conducted in the “vegan” package. The species-level abundance table (rather than the ASV table) was used to calculate Hellinger distances among samples before the analysis, to alleviate the issue of a sparse matrix with many zero values . The PC2 of fungal species composition was significantly ( P < 0.01) correlated with cumulative lignin ( r = 0.37) and litter ( r = 0.47) decomposition in the lab incubation and was thus used as a fungal composition predictor. Similarly, the PC1 of fungal species composition was significantly ( P < 0.01) correlated with cumulative SOC decomposition ( r = 0.35). Overall fungal composition changed little with time during the lab incubation (Supplementary Fig. ). Therefore, for subsequent statistical analyses we used the ITS data from samples collected after 14 months of incubation, since only one sample from this timepoint was excluded for low read counts. Fungal richness was represented by the residual of the ASV Chao1 index regressed on the square root of the total number of sequences within a sample, a method that accounts for differences in sequencing depth among samples . We used copy numbers of ITS and 16S rRNA genes in the initial soil samples (1 g dry mass equivalent) as indices of fungal and bacterial quantity in our statistical models. Although fungal and bacterial quantities changed throughout the incubation (Supplementary Fig.
), including data from 9 and 14 months did not improve model performance. The fungal-to-bacterial ratio was calculated as fungal quantity divided by bacterial quantity. We used LMMs and RFMs to identify important predictors of the cumulative C decomposition variables (lab lignin, litter, and SOC, and field lignin). We included the above-mentioned climatic, N-related, geochemical, and microbial predictors in models of the laboratory incubation decomposition data. Inorganic N predictors from the three timepoints explained some variation in lab litter decomposition in the RFM, but including these predictors did not improve model performance or change the variable importance of other key predictors. Thus, inorganic N predictors were not retained in the final models, and we conducted the above-mentioned N addition experiment to specifically test the effects of inorganic N on lignin and litter decomposition. For statistical models of field lignin decomposition, we first fit the models including all categories of predictors and found that microbial predictors, silt+clay, and Fe HCl were not important predictors of field lignin decomposition. Therefore, we re-fit the models excluding these candidate predictors, because these data were collected only for the field samples from the locations corresponding to the lab incubation. Inorganic N variables in soil + litter mixtures were not measured for field lignin decomposition. In the LMMs, homoscedasticity and normality assumptions were met by the raw data, except for lab lignin decomposition, which was log10-transformed. To estimate predictor importance, all variables were standardized to a mean of zero and a standard deviation of one to account for differences in magnitude among predictors. All predictor variables were used as fixed effects, and site was included as a random intercept to account for possible intra-site dependence in the LMMs. Adding sampling location as an additional random effect to account for correlations between 0–15 and 15–30 cm samples did not improve model performance. Some candidate predictors were excluded from initial models because of weak pairwise correlations with response variables (usually r < 0.10) and/or moderate-to-strong collinearity with other predictors (usually r > 0.50; Supplementary Table ). We acknowledge that this approach might potentially exclude some important predictors that were correlated with other variables, but we found that reducing the list of candidate predictor variables was important for achieving stable parameter estimates in cases of collinearity. Some predictors were further removed from final models through comparison of Akaike Information Criterion (AIC) values of nested models using stepwise backward selection. All predictors in the final models exhibited variance inflation factor (VIF) values <3 and pairwise correlation coefficients between −0.70 and 0.70, implying that collinearity was acceptable. The relative contributions of fixed effects were determined by standardized regression coefficient estimates, and their significance was tested by the Wald chi-square test. LMM performance was evaluated by marginal and conditional R 2 , representing the variance explained by the fixed effects alone and by the full model (fixed plus random effects), respectively. The LMM analyses were conducted with the “lme4” package . We further used generalized additive mixed models (GAMMs) to verify the linearity of relationships for the important biogeochemical predictors identified in the LMMs. Details of the RFM and GAMM analyses are described in the . All statistical analyses and plotting were performed in R statistical software version 3.6.1 .
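A minimal sketch of this LMM workflow (variable names hypothetical; `MuMIn` and `performance` are our choices for the R 2 and VIF computations, which the text does not name):

```r
## Standardize predictors, fit the site-random-intercept LMM, compare nested
## models by AIC, check collinearity, and report marginal/conditional R2.
library(lme4)
library(MuMIn)        # r.squaredGLMM()
library(performance)  # check_collinearity()

vars <- c("MAT", "pH", "Fe_ox", "Mn_cd", "Ca_cd", "fungal_PC2")
dat[vars] <- lapply(dat[vars], function(x) as.numeric(scale(x)))  # mean 0, SD 1

full <- lmer(log10(lignin_dec) ~ MAT + pH + Fe_ox + Mn_cd + Ca_cd + fungal_PC2 +
               (1 | site), data = dat, REML = FALSE)

reduced <- update(full, . ~ . - Ca_cd)  # one backward-selection step
AIC(full, reduced)                      # retain the lower-AIC model and iterate

check_collinearity(full)                # keep predictors with VIF < 3
r.squaredGLMM(full)                     # R2m (fixed only) and R2c (fixed + random)
```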
We used 20 sites from National Ecological Observatory Network (NEON) to examine decomposition of lignin, bulk litter, and SOC and to test biogeochemical predictors of decomposition of these substrates, including geochemical, microbial, N-related, and climatic variables. Soils amended with C stable isotope (β- 13 C)-labeled and unlabeled lignins and a single natural litter source were incubated in the lab to quantify lignin, litter, and SOC decomposition over 18 months (Fig. ). An additional incubation experiment was also conducted to test the effects of N addition on lignin and litter decomposition. The results of lignin decomposition and its predictors from the lab incubation were further compared with those from a field incubation. The lab incubation enabled us to compare C decomposition among samples while standardizing temperature and moisture, whereas the field incubation allowed us to assess effects of actual site temperature and moisture on lignin decomposition, while also allowing for sample colonization by additional microbes. NEON is a U.S.-based, continental-scale ecological monitoring network that provides open data, samples, and research infrastructure to reveal how ecosystems are responding to environmental change . NEON sites are stratified among domains defined by climate characteristics , not by soil type, and while they naturally contain a wide diversity of soil types, soils at each site are not necessarily representative of the corresponding ecoclimatic domain. For this project, we selected 20 NEON terrestrial sites, denoted by their acronyms as follows: BONA, CPER, DSNY, GRSM, HARV, KONZ, LENO, NIWO, ONAQ, OSBS, PUUM, SJER, SRER, SCBI, TALL, TOOL, UNDE, WREF, WOOD, YELL (Fig. ). These sites span wide edaphic, climatic and ecosystem gradients (Supplementary Fig. ), and they were chosen to span broad differences in biogeochemical characteristics, within constraints of feasibility and permitting. They encompass 9 out of the 12 soil orders in the United States Department of Agriculture (USDA) soil classification system (no Histosols, Oxisols, or Vertisols; Supplementary Table ). The nine soil orders include Alfisols (CPER and SCBI), Andisols (WREF), Aridisols (ONAQ and SRER), Entisols (DSNY, OSBS and PUUM), Gelisols (TOOL), Inceptisols (BONA, GRSM, HARV, LENO and NIWO), Mollisols (KONZ, SJER, WOOD and YELL), Spodosols (UNDE), and Ultisols (TALL). The sites had mean annual temperature (MAT) of −9–22 °C and received 262–2657 mm of mean annual precipitation (MAP). The sites included diverse ecosystem types, such as tundra, forest, wetland, grassland, shrubland, and desert. Soils at each site were sampled by NEON staff during the growing season of 2019 (April–August; later sampling occurred at Alaska sites where soils did not thaw until July or August). Mineral soil samples were collected at two depths (0–15 cm and 15–30 cm), after removing any surface litter or organic horizon (Supplementary Table ), using a 2- to 5-cm diameter corer, according to the standard NEON sampling procedure for that particular site. At each site, samples were collected around the perimeter of one 40 × 40-m “distributed base plot” selected to represent the dominant upland vegetation and soil type of that site whenever possible. Soil at the KONZ site was collected only at 0–15 cm due to the shallow soil depth. Each plot had 16 replicates ( n = 16), denoted sampling points 1–16 hereafter. 
Point 1 was located 4 m west and 4 m south from the SW corner of each plot, and the other points were located in counterclockwise sequence at 12-m intervals around the perimeter of the plot, each located 4 m outside of the plot boundary (Fig. ). Soil cores from each point were collected and shipped overnight on ice (~4 °C) to Iowa State University (ISU) for use in laboratory and field incubations. Soil from each sample was gently homogenized inside a plastic bag after any coarse roots, macrofauna, or rocks were manually removed. We did not sieve samples except for ONAQ and SRER, where rocks were abundant and were removed by passing soil through a 2-mm sieve. Soils from four sampling points at two depths per site were used for lab incubation and biogeochemical analyses, totaling 156 samples. The four sampling points were mainly selected at the odd number in the middle of each side of the 40 × 40-m plot (red circles in Fig. ), although sampling points from some sites were selected at the numbers next to the middle numbers if soils were not available for both layers. Subsamples of soils used for the lab incubation experiment were brought to field moisture capacity, which was determined for each soil by saturating an additional 20–30-g subsample placed on filter paper in a funnel, and measuring gravimetric water content following 48 h of drainage. Subsamples (1 g dry mass equivalent) from each sampling point and depth were incubated under each of three separate substrate treatments to partition C decomposition among three sources, using measurements of δ 13 C values of CO 2 . We quantified decomposition of C from extant soil organic matter, C from added litter (senesced leaves of Andropogon gerardi , a C 4 grass), and a specific C atom (the C β position of the propyl sidechain) in lignin that was precipitated on the added litter. The lignin was prepared as described in the . Substrate treatments were: (1) soils alone (control); (2) soils amended with A. gerardi litter precipitated with trace natural abundance 13 C lignin (soil + litter + unlabeled lignin); (3) soils amended with A. gerardi litter precipitated with trace lignin labeled with 99 atom percent 13 C at the C β position of each lignin C 9 substructure (soil + litter + 13 C β -labeled lignin). We added uniform litter and synthetic lignin to each of the mineral soils to focus on soil biogeochemical gradients rather than substrate quality. Soils were gently mixed with the litter + lignin mixture in a 250:25:1 ratio of soil:litter:lignin (1 g dry soil mass equivalent was mixed with 100 mg litter and 4 mg lignin). To prepare the litter + lignin mixture, the unlabeled or labeled lignins were precipitated in a 1:25 mass ratio on dried and finely ground leaf litter of A. gerardi (41.9% C, 0.41% N, and δ 13 C = −12.6‰; see Supplementary Methods for more details). The 20 NEON sites comprise ecosystems ranging from C 3 -dominated forest and grassland sites to mixed C 3 –C 4 grasslands and/or plants with Crassulacean acid metabolism, such that the δ 13 C value of the added C 4 litter was always more positive than δ 13 C value of CO 2 derived from soil organic matter at a given site. Soil samples were incubated under oxic conditions in the dark at 23 °C for 571 d. Soil was kept in an open 50 mL centrifuge tube inside a glass jar (946 mL) sealed with a gas-tight aluminum lid with butyl septa for headspace gas purging and sampling. 
The jars were flushed with CO 2 -free air following periodic headspace sampling as described below, and CO 2 concentrations remained below 5000 ppm during the incubation. Assuming a 1:1 ratio between CO 2 production and oxygen (O 2 ) consumption, O 2 decreased by <2.4% of the initial value (20.9%) during each sampling period. Because the volume of incubated soil was ~1000-fold smaller than the jar headspace, CO 2 produced by soil microbes would diffuse out of the soil and accumulate in the headspace with negligible storage in soil pores. Soil moisture was monitored by recording the mass of each sample, and water was added as necessary to match the original mass of each sample under field moisture capacity every month before 179 d and every other month thereafter (due to the less frequent gas sampling) to replenish vapor lost during headspace flushing. To monitor instantaneous decomposition over time and to avoid accumulation of CO 2 > 5000 ppm in the jar, headspace gas was initially measured at 4 d and 11 d, every other week for another 140 d, and then every month after 179 d (for a total duration of 571 d). The CO 2 concentrations and their δ 13 C values were measured by a tunable diode laser absorption spectrometer (TGA200A, Campbell Scientific, Logan, UT) immediately prior to flushing the headspace . Because jars remained sealed between headspace sampling events, we were able to quantify the entire cumulative production of CO 2 and its δ 13 C value from each replicate over the course of the experiment. The CO 2 production from soil was measured on samples with no addition of litter and lignin, and CO 2 from litter and 13 C β -labeled lignin was calculated by two-source mixing models that used measurements from the litter + unlabeled lignin and litter + 13 C β -labeled lignin treatments, respectively , (see for more details). The C decomposition from soil, litter and lignin were expressed as percentages of their initial C masses (41.9 mg for litter and 264 μg for the 13 C β atom of the labeled lignin, and a variable amount for SOC; Supplementary Table ). We also conducted an N addition experiment to test the effects of N availability on lignin and litter decomposition, using additional subsamples of the 0–15 cm soils collected from the four sampling points described above. For this experiment, the subsamples amended with litter + unlabeled lignin or litter + 13 C β -labeled lignin were also amended with NH 4 NO 3 at 50 mg N g −1 . The amount of added N was relatively high but comparable to inorganic N concentrations often observed in agricultural fields after fertilization . Briefly, 51 mL of 0.0386 mol L −1 NH 4 NO 3 was added to soil samples, and then more water was added as necessary to achieve field moisture capacity. Sample incubation and gas measurements were the same as described above. The 0–15 cm soils from all 16 sampling points at each site were used for field incubation (Fig. ). Soil subsamples (4.5 g dry mass equivalent) were gently mixed with litter + unlabeled lignin or litter + 13 C β -labeled lignin according to the mass ratios and substrate treatments described above. The soil + litter mixtures were then transferred to mesh bags (8 cm × 8 cm in size; 55 μm nylon screen), which allowed entry of fungal hyphae, bacteria, and soil microfauna while minimizing particle loss . 
The mesh bags were sealed with hot glue and shipped back to the sites of origin and buried at a depth of 0–15 cm at the same locations where soils were initially sampled, and geo-referenced to facilitate retrieval. The mesh bags with litter + unlabeled lignin were buried at even-numbered sampling points for each site, and those with litter + 13 C β -labeled lignin were buried at the odd-numbered sampling points. After ~1 y of field incubation, the mesh bags were retrieved by NEON staff, flash-frozen on dry ice, and shipped on ice to ISU. Some bags were damaged or could not be located in the field (31 out of 320 samples). The soil and litter mixture was subsampled from each mesh bag, and then air-dried and finely ground for analysis of C concentrations and δ 13 C at the UC Davis Stable Isotope Facility using an elemental analyzer (Elementar Analysensysteme GmbH, Hanau, Germany) and continuous flow isotope ratio mass spectrometer (Sercon Ltd., Cheshire, UK). Lignin C remaining after the 1-y field incubation was calculated by multiplying f lignin calculated based on a two-source mixing model (see details in ) by the total C concentration in samples from the soil + litter + 13 C β -labeled lignin treatment, with corrections accounting for new C inputs as necessary based on measurements of the samples with unlabeled lignin (see details in ). We measured ammonium (NH 4 + ) and nitrate (NO 3 - ) in additional replicate soil + litter mixture samples (10:1 mass ratio of soil to litter) from all soils used in the lab incubation after 1, 9, and 18 months. Briefly, 10 g soil mixed with 1 g litter was placed in a 50 mL centrifuge tube, loosely covered, and then incubated at 23 °C in the dark after adjusting soil moisture to field capacity. Water was periodically added to soil samples to replace vapor loss, measured gravimetrically. Soil (~2 g) was subsampled from each centrifuge tube and extracted with 2 M potassium chloride at each timepoint. The soil solution was analyzed by microplate colorimetry for NH 4 + -N . The NO 3 − -N was analyzed by microplate colorimetry or for the 9-month samples, second-derivative spectroscopy ; these methods agreed almost perfectly on a subset of samples (slope = 0.95, R 2 = 0.97). Net N mineralization (sometimes known as potential N mineralization) was calculated as the difference in inorganic N between sets of sampling points (9-month vs. 1-month; 18-month vs. 9-month; 18-month vs. 1-month). Most physical and geochemical measurements were made on soils from all of the sampling points used for the field and lab incubations, except for particle size and 0.5 M HCl extractions, which were done for the four sampling points per site used for laboratory incubation. Physical and geochemical measurements included soil pH, particle size fractions, 0.5 M HCl-extractable Fe(II) and Fe(III), ammonium oxalate-extracted metals (Al, Fe, Mn), and citrate dithionite-extracted metals (Al, Fe, Mn, and Ca). Some of these data were presented previously in a manuscript describing relationships between soil properties and particulate and mineral-associated organic matter fractions of these soils . Field-moist soil subsamples were measured for pH in 1:1 slurries of soil and deionized water. Air-dried subsamples were used to measure particle size (sand, silt, and clay) by sieving and sedimentation following aggregate dispersion with sodium hexametaphosphate . 
Most physical and geochemical measurements were made on soils from all of the sampling points used for the field and lab incubations, except for particle size and the 0.5 M HCl extractions, which were done only for the four sampling points per site used for the laboratory incubation. Physical and geochemical measurements included soil pH, particle size fractions, 0.5 M HCl-extractable Fe(II) and Fe(III), ammonium oxalate-extractable metals (Al, Fe, Mn), and citrate dithionite-extractable metals (Al, Fe, Mn, and Ca). Some of these data were presented previously in a manuscript describing relationships between soil properties and particulate and mineral-associated organic matter fractions of these soils. Field-moist soil subsamples were measured for pH in 1:1 slurries of soil and deionized water. Air-dried subsamples were used to measure particle size (sand, silt, and clay) by sieving and sedimentation following aggregate dispersion with sodium hexametaphosphate. Field-moist subsamples were extracted with 0.5 M hydrochloric acid (HCl) to measure ionic Fe and highly reactive fractions of Fe(II) and Fe(III) minerals. Concentrations of Fe(II) and Fe(III) were measured colorimetrically and summed as Fe_HCl. Additional air-dried subsamples were extracted with acid ammonium oxalate in the dark at pH 3 to measure organo-metal complexes and short-range-ordered (SRO) phases of Al, Fe, and Mn (denoted Al_ox, Fe_ox, and Mn_ox), and with sodium citrate dithionite to measure crystalline and SRO phases of Fe (Fe_cd) as well as co-occurring Al, Mn, and Ca (Al_cd, Mn_cd, and Ca_cd). Metals were analyzed via inductively coupled plasma optical emission spectrometry (PerkinElmer Optima 5300 DV, Waltham, MA). Extractions of Al and Mn by oxalate and citrate dithionite were very similar (r = 0.88, P < 0.001 for Al; r = 0.98, P < 0.001 for Mn), so we report only Al_ox and Mn_cd. The difference between Fe_cd and Fe_ox represents crystalline Fe phases (Fe_cd-ox). We interpret Mn_cd as including exchangeable Mn, organo-metal complexes, and poorly crystalline phases, and Ca_cd as a measure of exchangeable Ca and Ca in organo-Fe associations.
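For orientation, the derived geochemical predictors above amount to simple arithmetic on the extraction data, as in this R sketch built on a hypothetical data frame of made-up concentrations.

# Hypothetical extraction results (mg per g soil), one row per sample
geo <- data.frame(Fe_ox = c(2.1, 3.4, 1.8, 2.9), Fe_cd = c(5.0, 7.2, 4.1, 6.3),
                  Al_ox = c(1.2, 1.9, 0.9, 1.5), Al_cd = c(1.1, 2.0, 1.0, 1.4),
                  FeII_HCl = c(0.10, 0.22, 0.08, 0.15),
                  FeIII_HCl = c(0.35, 0.41, 0.29, 0.38))
geo$Fe_cd_minus_ox <- geo$Fe_cd - geo$Fe_ox  # crystalline Fe phases (Fe_cd-ox)
geo$Fe_HCl <- geo$FeII_HCl + geo$FeIII_HCl   # total 0.5 M HCl-extractable Fe
cor.test(geo$Al_ox, geo$Al_cd)               # redundancy check (r = 0.88 for Al above)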
DNA was extracted from the soils for internal transcribed spacer (ITS) rRNA gene amplicon sequencing and for quantitative PCR of the 16S and ITS rRNA regions. Each of the four soils per site used in the lab incubation was subsampled for DNA extraction at the beginning of the incubation, and additional replicates were extracted after 9 and 14 months. The incubated replicates used for DNA extraction were prepared in the same way as the replicates used for CO₂ analyses and were amended with A. gerardi litter at a 1:10 mass ratio of litter to soil. The field-incubated soils corresponding to the same four sampling points per site used in the lab incubation were also extracted for DNA, totaling 548 samples overall (156 soils × 3 timepoints for the lab incubation and 80 soils for the field incubation). Soils were stored at −80 °C before DNA extraction from 250 mg subsamples using the MagAttract PowerSoil DNA EP Kit (Qiagen, USA) on an Eppendorf epMotion 5075 liquid-handling robot (Eppendorf North America, USA). DNA concentrations were measured using a Quant-iT dsDNA High-Sensitivity Assay Kit (Invitrogen, USA) to standardize DNA masses for sequencing. Samples were diluted to 10 ng DNA μL⁻¹ prior to sequencing; samples with concentrations <10 ng DNA μL⁻¹ were submitted directly. The ITS1 region of the ITS rRNA gene was amplified using the primers ITS1f (CTTGGTCATTTAGAGGAAGTAA) and ITS2 (GCTGCGTTCTTCATCGATGC), with the following PCR conditions: 1 min at 94 °C; 35 cycles of 30 s at 94 °C, 30 s at 52 °C, and 30 s at 68 °C; and a final 10 min at 68 °C. Fungal ITS rRNA gene amplicon sequencing was performed on the Illumina MiSeq platform at Argonne National Laboratory, with library preparation using the MiSeq Reagent Kit v2 (Illumina, USA), producing 2 × 250-bp reads.

Quantitative real-time PCR was performed on a CFX96 real-time system coupled to a C1000 thermal cycler (Bio-Rad, USA) to quantify the 16S and ITS rRNA genes. Each reaction contained 10 μL of SsoFast EvaGreen Supermix, 0.6 μL of each primer, 2 μL of diluted DNA template, and nuclease-free water to a final volume of 20 μL. Bacterial 16S rRNA genes were amplified using the primers 1055YF (ATGGYTGTCGTCAGCT) and 1392R (ACGGGCGGTGTGTAC) and the following conditions: 2 min at 50 °C and 10 min at 95 °C, followed by 40 cycles of 15 s at 95 °C and 1 min at 58 °C. Fungal ITS rRNA genes were amplified using the primers ITS1F_KYO1 (CTHGGTCATTTAGAGGAASTAA) and ITS2_KYO2 (TTYRCTRCGTTCTTCATC) and the following conditions: 2 min at 50 °C and 2 min at 95 °C, followed by 40 cycles of 30 s at 95 °C, 30 s at 55 °C, and 60 s at 72 °C, with a final 10 min at 72 °C. Standard curves for the 16S and ITS rRNA genes were constructed using serial 10-fold dilutions (10⁻¹ to 10⁻⁸) of known concentrations of synthesized oligonucleotides (Integrated DNA Technologies, USA).

We used the Divisive Amplicon Denoising Algorithm 2 (DADA2) pipeline to process the ITS rRNA gene sequencing data in R statistical software version 3.6.1. We excluded samples with ≤900 sequences: 27, 78, and 1 sample collected after 0, 9, and 14 months of the lab incubation, respectively. All functions were run with the default parameters suggested by the DADA2 pipeline tutorial. The end products were an amplicon sequence variant (ASV) table recording the number of times each exact ASV was observed in each sample, and a taxonomy table assigning each ASV from kingdom to species level using the naive Bayesian classifier algorithm and the UNITE database version 10.05.2021. Most ASVs were 251–336 bp long, within the commonly amplified ITS1 region length of 200–600 bp. Next, we trimmed the ASV tables using the "phyloseq" package in R, removing rare ASVs (<10 sequences across all samples). Before trimming, there were 22,154 ASVs and 3,118,076 sequences across 442 samples; afterwards, there were 15,583 ASVs and 3,085,446 sequences. After removing rare ASVs, samples contained 4 to 126 ASVs (mean = 55) and 441 to 17,234 sequences (mean = 6,981).

For the lab incubation, we explored temporal trends in the instantaneous C decomposition rate from each C source at each site, and in the lignin C decomposition rate at each individual sampling point (Supplementary Fig. ), using generalized additive mixed models (GAMMs) that included an autoregressive error term to account for temporal autocorrelation, implemented in the "mgcv" package version 1.8.28 in R 3.6.1. Pairwise correlations between cumulative C decomposition over 6, 12, and 18 months (lignin, litter, soil, and field lignin decomposition) and biogeochemical predictors were tested by Pearson correlation. The biogeochemical predictors fell into several categories, defined as follows: (1) climatic: MAT and MAP; (2) N-related: bulk N, bulk C/N, and NH₄⁺-N and NO₃⁻-N after the 1-, 9-, and 18-month incubations; (3) geochemical: soil pH, silt+clay, Al_ox, Fe_ox, Fe_cd-ox, Fe_HCl, Mn_cd, and Ca_cd; and (4) microbial: fungal composition, fungal Chao1 richness, fungal quantity, bacterial quantity, and fungal-to-bacterial ratio (Supplementary Table ). Among the microbial predictors, fungal composition was represented by the first (PC1) or second (PC2) axis of a principal coordinate analysis of the ITS rRNA gene sequencing data from soils subsampled from the lab incubation at 14 months, conducted with the "vegan" package. The species-level abundance table (rather than the ASV table) was used to calculate Hellinger distances among samples before the analysis, to alleviate the sparsity of a matrix containing many zero values.
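The composition predictors can be reproduced conceptually with the R sketch below, which applies a Hellinger transformation followed by a principal coordinate analysis in vegan; the abundance table here is a random placeholder (samples in rows, species in columns), not real data.

library(vegan)

set.seed(1)
species_tab <- matrix(rpois(200, lambda = 5), nrow = 20)  # placeholder: 20 samples x 10 species

hel <- decostand(species_tab, method = "hellinger")  # square roots of relative abundances
d_hel <- vegdist(hel, method = "euclidean")          # Euclidean distance on Hellinger data
pcoa <- cmdscale(d_hel, k = 2, eig = TRUE)           # classical PCoA
comp <- as.data.frame(pcoa$points)                   # axes used as composition predictors
colnames(comp) <- c("PC1", "PC2")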
The PC2 of fungal species composition was significantly (P < 0.01) correlated with cumulative lignin (r = 0.37) and litter (r = 0.47) decomposition in the lab incubation and was therefore used as a fungal composition predictor. Similarly, PC1 was significantly (P < 0.01) correlated with cumulative SOC decomposition (r = 0.35). Overall fungal composition changed little over the course of the lab incubation (Supplementary Fig. ); we therefore used the ITS data from samples collected after 14 months of incubation for subsequent statistical analyses, because only one sample from this timepoint had been excluded for low read counts. Fungal richness was represented by the residual of the ASV Chao1 index regressed on the square root of the total number of sequences in a sample, a method that accounts for differences in sequencing depth among samples. We used copy numbers of the ITS and 16S rRNA genes in the initial soil samples (1 g dry mass equivalent) as indices of fungal and bacterial quantity in our statistical models. Although fungal and bacterial quantities changed throughout the incubation (Supplementary Fig. ), including data from 9 and 14 months did not improve model performance. The fungal-to-bacterial ratio was calculated as fungal quantity divided by bacterial quantity.

We used linear mixed models (LMMs) and random forest models (RFMs) to identify important predictors of the cumulative C decomposition variables (lignin, litter, soil, and field lignin). We included the above-mentioned climatic, N-related, geochemical, and microbial predictors in models of the laboratory incubation decomposition data. The inorganic N predictors from the three timepoints explained some variation in lab litter decomposition in the RFM, but including them did not improve model performance or change the variable importance of other key predictors. Thus, inorganic N predictors were not retained in the final models, and we conducted the above-mentioned N addition experiment to specifically test the effects of inorganic N on lignin and litter decomposition. For statistical models of field lignin decomposition, we first fit models including all categories of predictors and found that the microbial predictors, silt+clay, and Fe_HCl were not important; we therefore re-fit the models excluding these candidate predictors, which had been measured only for the field samples from locations corresponding to the lab incubation. Inorganic N in the soil + litter mixtures was not measured for field lignin decomposition. In the LMMs, homoscedasticity and normality assumptions were met by the raw data, except for lab lignin decomposition, which was log10-transformed. To estimate predictor importance, all variables were standardized to a mean of zero and a standard deviation of one to account for differences in magnitude. All predictor variables were entered as fixed effects, and site was included as a random intercept to account for possible intra-site dependence. Adding sampling location as an additional random effect, to account for correlations between the 0–15 and 15–30 cm samples, did not improve model performance. Some candidate predictors were excluded from the initial models because of weak pairwise correlations with the response variables (usually r < 0.10) and/or moderate-to-strong collinearity with other predictors (usually r > 0.50; Supplementary Table ). We acknowledge that this approach might exclude some important predictors that were correlated with other variables, but reducing the list of candidate predictors was important for achieving stable parameter estimates in the presence of collinearity.
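A minimal lme4 sketch of the LMM structure described above is shown below. The data frame is simulated and the predictor set is truncated for brevity; the real models used the full climatic, N-related, geochemical, and microbial predictor lists.

library(lme4)

# simulated placeholder data: 10 sites x 4 sampling points
set.seed(1)
dat <- data.frame(site = rep(paste0("S", 1:10), each = 4),
                  lignin_dec = rnorm(40), MAT = rnorm(40), MAP = rnorm(40),
                  pH = rnorm(40), Al_ox = rnorm(40), Mn_cd = rnorm(40))

# standardize response and predictors so coefficients are comparable
num_cols <- c("lignin_dec", "MAT", "MAP", "pH", "Al_ox", "Mn_cd")
dat_s <- as.data.frame(scale(dat[, num_cols]))
dat_s$site <- dat$site

# fixed effects plus a random intercept for site
m <- lmer(lignin_dec ~ MAT + MAP + pH + Al_ox + Mn_cd + (1 | site), data = dat_s)
summary(m)               # standardized coefficient estimates
car::Anova(m, type = 2)  # Wald chi-square tests of the fixed effects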
Some predictors were further removed from the final models by comparing Akaike information criterion (AIC) values of nested models using stepwise backward selection. All predictors in the final models exhibited variance inflation factors <3 and pairwise correlation coefficients between −0.70 and 0.70, indicating acceptable collinearity. The relative contributions of the fixed effects were determined from their standardized regression coefficients, and their significance was tested with Wald chi-square tests. LMM performance was evaluated by marginal and conditional R², representing the variance explained by the fixed effects alone and by the full model, respectively. The LMM analyses were conducted with the "lme4" package. We further used GAMMs to verify the linearity of the important biogeochemical predictors identified in the LMMs. Details of the RFM and GAMM analyses are described in the . All statistical analyses and plotting were performed in R statistical software version 3.6.1.
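Continuing the hypothetical model above, the collinearity screening, AIC-based backward selection, and R² evaluation could look like the following sketch; vif() from car, step() from base R, and r.squaredGLMM() from MuMIn are standard tools, although the original selection procedure may have been implemented differently.

# variance inflation factors on the fixed-effects structure (require VIF < 3)
m_fixed <- lm(lignin_dec ~ MAT + MAP + pH + Al_ox + Mn_cd, data = dat_s)
car::vif(m_fixed)

# stepwise backward selection by AIC
m_sel <- step(m_fixed, direction = "backward", trace = FALSE)
AIC(m_fixed, m_sel)

# marginal and conditional R2 for the mixed model
MuMIn::r.squaredGLMM(m)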
The neuropsychology of healthy aging: the positive context of the University of the Third Age during the COVID-19 pandemic
1e1e832e-9363-4305-a85c-76f25283575b
10115807
Physiology[mh]
The COVID-19 pandemic had a significant impact on older adults, who were more vulnerable both to the adverse effects of SARS-CoV-2 infection and to the measures taken by governments to contain its spread. While the infection curve was effectively contained, restrictive measures increased social isolation and loneliness, which in turn affect well-being and increase physical frailty, cognitive impairment, and mood disturbances. Nursing home residents and patients with Alzheimer's disease or other dementias were those most likely to suffer the effects of the COVID-19 pandemic, placing them at the highest risk of a fatal outcome or of long COVID. Much of the data collected on older adults during the pandemic came from online cross-sectional surveys. For example, self-reported data from an online survey were used to examine the impact of the COVID-19 period on well-being, activity levels, sleep quality, and cognitive function. Such studies allowed large samples to be analyzed, but with notable limitations to the generalizability of their results. First, the fact that not all older adults could be reached via online technologies suggests possible biases in sample recruitment. In addition, the use of self-report questionnaires instead of in-depth neuropsychological assessments made it difficult to clearly classify participants as healthy or cognitively impaired. Finally, the lack of longitudinal studies with neuropsychological data collected before the pandemic made it impossible to detect potential changes in cognitive functioning and/or mood, limiting the resulting findings to short-term effects and neglecting the crucial role of "baseline" performance. Indeed, in assessing the multifaceted consequences of the COVID-19 pandemic, it is important to consider that cognitive decline may be part of normal aging, alongside a gradual decline in physical abilities that may limit functional abilities of daily living and quality of life. However, little is known about the impact of the COVID-19 pandemic on healthy aging in individuals who reported not being infected with SARS-CoV-2. Taking advantage of available pre-pandemic measurements, we have previously reported longitudinal evidence of early neuropsychological changes during the pandemic in healthy older individuals. A first longitudinal study examined the role of neuropsychogeriatric factors in lockdown fatigue by comparing data collected in cognitively healthy aging individuals before and during the pandemic. Participants were assessed at three timepoints during the pandemic, i.e., during the first lockdown period (T1), immediately afterward (T2), and during the second lockdown period (T3). The results highlighted interacting changes in physical functioning, executive attention, and mood deflections during the COVID-19 pandemic. Subjects with moderate fatigue reported more depressive and anxious symptoms than subjects with mild fatigue. Cognitive performance in terms of psychomotor speed also appears to play an important role in the perception of fatigue associated with the COVID-19 restrictive measures. Specifically, the results of principal component and multiple regression analyses demonstrated the contribution of "cognitive" and "psychological" factors (i.e., attentional and executive performance, as well as mood deflections) in explaining handgrip strength and gait speed, two of the five determinants of the Fried frailty model.
At T3, lockdown fatigue was explained by higher scores on the Beck Depression Inventory and lower performance on the Trail Making Test part A. The results of a moderated mediation model showed that the effect of psychomotor speed on lockdown fatigue was mediated by depression, with gait speed moderating this relationship. Furthermore, fear of infection could be an additional source of concern for this at-risk population, exacerbating anxiety and thus affecting quality of life. Consistent with this hypothesis, we have previously reported longitudinal data demonstrating that the perceived threat associated with the consequences of SARS-CoV-2 infection was predicted by a combination of baseline physical, cognitive, and mood measures, i.e., anxiety and frailty, in addition to lower information-processing speed and language comprehension performance. To date, however, no longitudinal study has examined whether healthy older people may also have responded to the COVID-19 pandemic with improvements in neuropsychological functioning. This important but under-researched topic could shed light on how the cognitive and functional abilities that underpin the well-being of older subjects are maintained during a very difficult time such as a pandemic. Identifying the variables that promote or hinder such maintenance will contribute to the development of programs aimed at preventing cognitive decline in healthy older people, as recommended by the American Geriatrics Society. We aimed to fill this gap by examining neuropsychological measures before and during the pandemic in a group of cognitively healthy older people. Prior to the COVID-19 outbreak, participants had joined the initiatives of an innovative laboratory for active and healthy aging dedicated to training, education, and research at the University of the Third Age (UNITRE) in Turin, Italy. This was an ideal context in which to identify possible positive responses to the pandemic, in addition to the negative outcomes that have been widely reported.

A healthy lifestyle that includes cognitive, social, and physical activities is positively correlated with global cognition, and such a lifestyle can be fostered through informal learning programs and activities such as the University of the Third Age (U3A). The combination of positive life experiences, such as education and participation in cognitively and socially stimulating leisure activities, is thought to increase the effectiveness of cognitive processing in aging individuals, a capacity referred to as cognitive reserve (CR). CR supports cognitive function in healthy aging subjects; in particular, it has been associated with global cognition and has been reported to enhance executive function and attention. CR has also been shown to protect against brain damage and dementia, slowing the cognitive aging process and reducing the risk of psychiatric disorders. Indeed, CR can be altered or improved by cognitive, mental, and physical stimulation activities. Based on these assumptions, participants in our study underwent a thorough neuropsychological assessment before and during the pandemic to identify possible changes in (a) global cognition, memory, language, and executive and attentional functions; (b) physical status; and (c) mood.
At the final timepoint (i.e., 21 months after the outbreak of the pandemic), subjects' heart rate variability (HRV), an indicator of physiological mechanisms of (dys)regulation, was recorded during the presentation of images depicting the initial and most stressful phase of the pandemic. We predicted that CR would not only help participants maintain good neuropsychological and physical function to cope with the negative experiences of the pandemic, but would also play a protective role against pandemic-related emotional dysregulation. However, given the potential negative impact of the COVID-19 pandemic, we also predicted a possible decline in motivation in the form of apathy, as well as increased anxiety due in part to fear of SARS-CoV-2 infection. Moreover, the extent of pandemic-related apathy might reflect the combination of anxiety and emotional (dys)regulation.

Tables and provide a detailed overview of the characteristics of the 39 participants and their cognitive, affective, and physical status before and during the pandemic. All neuropsychological variables were normally distributed, with no missing data. Regarding sociodemographic characteristics, 79% of the participants were women and 21% were men; their mean age was 70 years (range: 62–82) and their mean educational level was 13 years (range: 8–17). They had attended middle school (N = 2), high school (N = 28), or college or university (N = 9). Forty-one percent of the sample lived alone, and the remaining 59% lived with at least one person (in most cases, their partner). According to the Hollingshead Index (HI), most subjects belonged to medium-high socioeconomic status (SES). Specifically, 8% fell into the highest stratum (HI range: 66–55), 51% into the second (54–40), 36% into the third (39–30), 5% into the fourth (29–20), and none into the lowest (19–18). During the pandemic, only one of the participants had been diagnosed with COVID-19 infection, while the others stated that they had not contracted the virus. Participants reported high compliance with most of the Italian Ministry of Health recommendations to contain infection, i.e., maintaining safe distances and frequent hand washing (100% of participants), using face masks (95%), avoiding crowded places (90%), and wearing latex gloves when outdoors (87%). Regarding immunization, 95% of the participants had been vaccinated against SARS-CoV-2, and 77% had completed two vaccination cycles according to the recommendations of the Italian Ministry of Health.

Longitudinal analyses of cognitive functioning

Prior to the pandemic, subjects' performance on both the Addenbrooke's Cognitive Examination-Revised (ACE-R) and the Mini-Mental State Examination (MMSE) did not indicate the presence of mild cognitive impairment (MCI), as all subjects scored above the cut-offs and did not report any concerns of subjective cognitive decline (SCD). Their performance was above the reference cut-off on the following tests: the Montreal Cognitive Assessment (MoCA; 100%), Rey Memory Test (RMT)-15 immediate words (97%), RMT-15 delayed words (95%), Trail Making Test (TMT) part A (100%), TMT part B (100%), TMT B-A (97%), and Token Test (TT; 100%). Notably, the percentages of results below the cut-off were consistent with the margin of error of the neuropsychological tests in the normative population (see Table ). During the pandemic, subjects' performance on both the ACE-R and the MMSE again did not indicate the presence of MCI.
All subjects scored above the normative cut-offs and did not report SCD, which at this timepoint was also assessed with the Cognitive Function Instrument (CFI). Their performance was above the reference cut-off on the following tests: MoCA (100%), RMT-15 immediate words (97%), RMT-15 delayed words (97%), TMT part A (100%), TMT part B (100%), TMT B-A (100%), and TT (97%). Again, although some deficits were found on the neuropsychological tests (see Table ), the percentages of scores below the cut-off were consistent with the margin of error of the tests in the normative population.

To test the stability of cognitive functions, Bayes factors (BFs) were calculated by comparing individual performance before and during the COVID-19 pandemic. In particular, we used the BF with paired-samples t-tests to evaluate the ratio between the probability of the data under the null hypothesis and under the alternative hypothesis. For comparisons with BF01 > 3 (i.e., evidence that scores were equivalent), the results showed no longitudinal changes in memory (RMT-15 immediate words, BF01 = 4.14; RMT-15 delayed words, BF01 = 4.33), attention (TMT time scores: part A, BF01 = 5.46; part B, BF01 = 3.53; B-A, BF01 = 3.15), or global cognition as measured by the MMSE (BF01 = 4.22). A BF01 above 3 indicates that the data are at least three times more likely under the hypothesis that the repeated measures are the same than under the hypothesis that they differ. By contrast, we found evidence of differential performance, with significant increases in raw test scores across timepoints, for global cognition as measured by the ACE-R (t = −2.952, p = 0.006, α = 0.05), executive functions (MoCA, t = −2.609, p = 0.013, α = 0.05), and comprehension of linguistic utterances (TT, t = −4.70, p = 0.001, α = 0.05).

Longitudinal analyses of mood

The subjects showed longitudinal changes in mood in the form of increased apathy and anxiety. Prior to the pandemic, none of the subjects exceeded the clinical cut-off on the Apathy Evaluation Scale (AES) or the Hamilton Rating Scale for Anxiety (HARS). During the pandemic, by contrast, a significant increase in scale scores was observed: 36% of the sample scored beyond the cut-off on the AES and 13% on the HARS. Notably, paired t-tests showed a significant increase in the mean scores of both the AES (t = −7.43, p < 0.001, α = 0.05) and the HARS (t = −3.71, p < 0.001, α = 0.05), as shown in Fig. , which also shows the improvement in ACE-R performance. On the other hand, considering comparisons with BF01 > 3 (evidence for equivalent scores), the results showed no longitudinal changes in depression (Beck Depression Inventory, BDI, BF01 = 5.33), hypomania (Mania Scale, MAS, BF01 = 5.51), or disinhibition (Disinhibition Scale, DIS, BF01 = 3.07).

Longitudinal analyses of physical functioning

As shown in Table , most participants were classified as "robust" (72% and 64% before and during the pandemic, respectively); the rest of the sample had prefrail status (28% and 36%, respectively). None of the participants met Fried et al.'s criteria for frailty status, either before or during the pandemic. Based on the McNemar test, no change in physical frailty status was observed across timepoints (χ² = 2.29, p = 0.515, α = 0.05), as shown in the contingency table (Table ).
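As an illustration of the comparisons reported above, the R sketch below computes a paired-samples Bayes factor with the BayesFactor package, the corresponding paired t-test, and a McNemar test on frailty status. All data are simulated placeholders; the original analyses were run in Jamovi.

library(BayesFactor)

set.seed(2)
score_before <- rnorm(39, mean = 50, sd = 8)               # e.g., RMT recall at baseline
score_during <- score_before + rnorm(39, mean = 0, sd = 4) # follow-up scores

bf10 <- ttestBF(x = score_before, y = score_during, paired = TRUE)
bf01 <- 1 / extractBF(bf10)$bf   # BF01 > 3 taken as evidence for stability
t.test(score_before, score_during, paired = TRUE)

# McNemar test on robust vs. prefrail classification across timepoints
lev <- c("robust", "prefrail")
status_before <- factor(sample(lev, 39, replace = TRUE), levels = lev)
status_during <- factor(sample(lev, 39, replace = TRUE), levels = lev)
mcnemar.test(table(status_before, status_during))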
Follow-up: correlation matrix and multiple regression analysis

To investigate the relationship between neuropsychological performance at follow-up and emotional (dys)regulation, as assessed by the low-frequency/high-frequency (LF/HF) ratio of HRV, we first generated a correlation matrix of the variables that showed a significant difference before versus during the pandemic according to the t-tests and BFs. As shown in Table , significant correlations were found between the LF/HF ratio and both the AES (p < 0.05) and the ACE-R (p < 0.05), but not the HARS, MoCA, or TT. The AES was selected as the dependent variable of a multiple regression model because it was the measure showing the greatest change across timepoints (t = −7.43, p < 0.001, α = 0.05; Fig. ). In addition to the LF/HF ratio, which reflects emotional (dys)regulation at follow-up, we modeled as predictors the variables that showed significant longitudinal change, i.e., the ACE-R and the HARS, which capture global cognitive function and anxiety. As shown in Table , a highly significant model (R = 0.711, R² = 0.506) indicated that AES scores were significantly predicted by lower ACE-R scores (t = −2.391, p = 0.024), higher HARS scores (t = 3.704, p = 0.001), and a higher LF/HF ratio (t = 3.011, p = 0.006), whereas gender (t = 0.621, p = 0.540) and age (t = −0.630, p = 0.534) were not significant predictors.
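A minimal sketch of this regression, using a simulated placeholder data frame with one row per participant:

set.seed(3)
followup <- data.frame(AES = rnorm(39, 30, 6), ACE_R = rnorm(39, 90, 5),
                       HARS = rnorm(39, 10, 4), LF_HF = rexp(39, rate = 1),
                       age = sample(62:82, 39, replace = TRUE),
                       gender = factor(sample(c("F", "M"), 39, replace = TRUE)))

# AES predicted by cognition, anxiety, and HRV, adjusted for age and gender
m_aes <- lm(AES ~ ACE_R + HARS + LF_HF + gender + age, data = followup)
summary(m_aes)  # multiple R-squared and per-predictor t-tests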
The present study took advantage of the availability of pre-pandemic data on the cognitive, physical, and mental status of a cohort of older people who had participated in the UNITRE Healthy and Active Aging Lab initiatives.
The opportunity to directly compare neuropsychological data collected before and during the COVID-19 pandemic, and to investigate possible changes (positive or negative), is unique. Although some surveys have shown that older people responded better to the COVID-19 pandemic than younger people in terms of emotional well-being, lower reactivity to stressors, and better mental health, our study showed for the first time a positive response of healthy older people to the pandemic in terms of neuropsychological performance across timepoints. Specifically, the data showed (1) significant increases in raw scores for global cognitive ability (ACE-R), executive function (MoCA), and comprehension of linguistic utterances (TT); (2) stable performance on immediate and delayed memory (RMT), executive attention (TMT parts A and B, as well as TMT B-A), and global cognitive abilities as measured by the MMSE; and (3) stable physical status as measured by the phenotypic frailty determinants. Together with the absence of participants classified as "frail," this pattern of findings underscores the maintenance of good physical performance despite the inactivity imposed by restrictive lockdown measures. Evidence of stable or even improved measures across timepoints provides new perspective on widely held claims about the negative effects of the pandemic on cognitively preserved older people, which are based on self-reported observations of deterioration in sleep quality, mental and physical health and functioning, symptoms of depression and apathy, and limitations in social relationships. Although improved performance on attention and working memory tasks has also been reported when comparing pre-pandemic and pandemic measurements, those results were obtained by administering questionnaires adapted for remote administration. Interestingly, there is evidence that cognitive decline may be mitigated by so-called CR, i.e., the mind/brain's resilience to age- or disease-related changes. Specifically, CR has been shown to mitigate the cognitive and psychological effects of physiological aging and, most importantly, of its accompanying circumstances such as social isolation and loneliness, which are often cited as major contributors to physical and mental changes during the pandemic. The American Geriatrics Society has suggested strategies that could promote healthy aging in the COVID-19 era, based on domains that encompass health and the promotion of cognitive, physical, and socio-relational functioning. As reported by the WHO, the U3A is recognized worldwide as an informal learning program, and before the pandemic it showed a positive impact on the psychological well-being and quality of life of the older population. Indeed, U3A programs have been associated with improvements in physical health status, emotional balance (e.g., decreases in depressive symptoms and negative affect), social support, and coping activities and strategies, thereby increasing positive self-perception and sense of control, key aims of the decade of healthy aging. Importantly, our results highlight how cognitive functioning can be stimulated and even improved by appropriate interaction with a positive context during the pandemic, as experienced by the UNITRE participants, who continued to attend their distance-learning courses and shared common experiences via social media. Our study shows that older people can maintain their cognitive levels if they continue to engage in social and educational activities.
Moreover, CR appears to play a protective role against the emotional dysregulation associated with the pandemic. We also observed stable mood scores across timepoints on the BDI, MAS, and DIS scales, which measure depression, hypomania, and disinhibition, respectively. In contrast, pandemic-related effects appeared to be more closely associated with mood deflections in the form of increased apathy (AES) and anxiety (HARS) during the pandemic. The correlation matrix relating the neuropsychological follow-up data to emotional (dys)regulation during HRV recording showed significant associations between the LF/HF ratio and both the AES and the ACE-R, but not the HARS, MoCA, or TT. In particular, apathy was associated with greater emotional (dys)regulation, and ACE-R scores were related to an increase in the LF/HF ratio while evocative pandemic images were presented. Because apathy showed the greatest change across timepoints, it was selected as the dependent variable of a multiple regression model. On this basis, higher apathy was predicted by poorer global cognitive performance, greater anxiety, and emotional (dys)regulation as measured by a higher ratio of low to high frequencies of HRV. Thus, preserved global cognitive performance appears to play a protective role against the effects of pandemic-related anxiety and emotional dysregulation on apathy. These findings highlight the importance of CR in attenuating the negative associations between lowered mood and cognition and their joint contribution to well-being, even in challenging situations such as a pandemic.

The growing concern triggered by a major health emergency such as the COVID-19 pandemic primarily affects the most vulnerable, for example, older people with cognitive impairment. In contrast, our study shows for the first time in the literature how some healthy older subjects responded positively to the pandemic emergency despite some mood deflections, revealing 'the bright side of the moon.' The UNITRE model is therefore a good way to promote active and healthy aging, especially in the face of complex and negative events such as the COVID-19 pandemic, and also in light of demographic change: by 2050, one in five people will be over 60 years old, a total of two billion people worldwide. There is thus an urgent need to plan and implement evidence-based intervention strategies, consistent with our findings, to promote healthy aging that can withstand the adversity of potential future health emergencies; UNITRE-like programs have the potential to accelerate that impact. Despite the small sample size, our findings are based on objective measures collected before and during the pandemic through in-depth neuropsychological assessments. This rare opportunity provided new insights into the maintenance of cognitive function and its protective role in relation to changes in mood and emotional (dys)regulation in socially active individuals of moderate-to-high socioeconomic status.

Participants

From April to October 2019, about 100 socially active older people from the University of the Third Age (UNITRE) in Turin (Italy) joined the initiatives of an innovative laboratory for active and healthy aging, dedicated to training, education, and research and development.
Before the outbreak of the pandemic, they participated in teaching modules on cognitive functions, physical activity, nutrition, and social integration to promote active and healthy aging. Specifically, they were invited to undergo an in-depth neuropsychological assessment to investigate possible associations between cognitive functioning, mood changes, and physical health status over a 3-year period. A group of 81 volunteers (64 women; 60–80 years) agreed to participate in the study, but pandemic restrictions necessitated a switch to online assessment. After the lockdown measures were relaxed, 39 subjects agreed to undergo the final neuropsychological assessment between June and October 2021. Thirty-five subjects (27 women; 62–82 years) also agreed to participate in a psychophysiological study from October to December 2021 (i.e., 21 months after the pandemic outbreak) to assess emotional reactivity to images associated with the COVID-19 pandemic. All 39 subjects who completed the full longitudinal study were >60 years old and can thus be classified as "older adults." They were not taking any psychotropic medications that could have affected their cognitive abilities or mood, and none of them complained of cognitive decline. The study was conducted in accordance with the Declaration of Helsinki and was approved by the Ethics Committee of the University of Turin before (Prot. No. 10038) and during (Prot. No. 151786) the pandemic. All participants gave written informed consent before participating in the study.

Sociodemographic assessment

In addition to the usual sociodemographic characteristics such as gender, age, and education, we assessed participants' socioeconomic status (SES) using the Four Factor Index of Social Status, which takes into account educational level, gender, occupation, and marital status. The Hollingshead Index (HI) combines these four factors to assess the social status of individuals or nuclear families. HI scores can be aggregated into a set of strata ranging from 66 (the highest social stratum: 66–55) to 18 (the lowest: 19–18); the higher the score, the higher the SES. For example, social stratum 30–39, which reflects an average socioeconomic level, includes skilled craftsmen and clerical and sales workers (a classification sketch is given at the end of this section). Sociodemographic characteristics of the participants are shown in Table , along with pandemic-related information. Participants were asked whether they had been (a) previously diagnosed with COVID-19 infection and (b) vaccinated against SARS-CoV-2 (as well as the number of doses administered). In addition, subjects were asked whether they had followed the recommendations of the Italian Ministry of Health to contain infection, i.e., wearing face masks, wearing latex gloves when outdoors, washing hands frequently, keeping safe distances, and avoiding crowded places.
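For illustration, the mapping from HI scores to the five social strata can be expressed with cut() in R; the scores below are made up.

hi_score <- c(58, 45, 33, 25, 19, 41)  # hypothetical HI scores
hi_stratum <- cut(hi_score,
                  breaks = c(17, 19, 29, 39, 54, 66),
                  labels = c("V (19-18)", "IV (29-20)", "III (39-30)",
                             "II (54-40)", "I (66-55)"))
table(hi_stratum)  # in this study, 51% of participants fell into stratum II (54-40)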
Neuropsychological assessment

The study included two timepoints: before the pandemic, from April to October 2019 (baseline), and during the pandemic, from June to December 2021 (follow-up). To avoid fatigue, the detailed neuropsychological assessment was divided into two sections, each lasting approximately 45 min and administered on the same day. The neuropsychological battery was designed to assess three domains, namely cognitive performance, mood, and physical and health status. We assessed global cognitive performance with the ACE-R, which includes the MMSE score. In addition, SCD was assessed before the pandemic using Jessen's criteria and at follow-up using the CFI, which includes both self-report and partner-report to provide a more accurate measure. Executive function was assessed using the MoCA and the TMT, parts A and B. TMT part A was used to assess speed of information processing (i.e., psychomotor speed). We assessed language comprehension with the TT, while immediate and delayed recall were analyzed with the RMT. Different facets of mood were assessed with the following scales: AES, BDI, DIS, HARS, and MAS. We used the Phenotypic Frailty Model and the Cumulative Illness Rating Scale (CIRS) to assess physical condition or frailty status and medical history. Five criteria were considered to examine the determinants of a possible physical frailty state: (a) weight loss, (b) grip strength, (c) self-reported fatigue, (d) decreased walking speed, and (e) reduced physical activity. Depending on how many of the five criteria were present, participants were classified as robust (none), prefrail (1 or 2), or frail (3 or more).

COVID-19-related picture stimuli

To assess individual differences in physiological arousal associated with the pandemic, we first searched the "Google Images" website (https://images.google.com/) for images depicting the most critical period between the virus outbreak and the lockdown in Italy. Using the keyword "COVID", we obtained an initial selection of 124 images, which were then evaluated separately by all authors to select those meeting specific inclusion criteria: presence of people, regardless of their age, sex, and sociodemographic status, in the pandemic context (e.g., healthy individuals, patients, deceased individuals, medical personnel); and presence of people in the most typical COVID-19 scenarios (e.g., everyday contexts and hospitals). Based on these criteria, a subset of images was selected for further evaluation according to the following criteria: presence of at least two people (i.e., "social" images); absence of redundant images, i.e., the same content in different formats and resolutions ("unique" images); presence of distinctive signals of the pandemic, such as masks worn by at least some of the people depicted, slogans (e.g., "Everything will be fine"), and/or banners hung from balconies; and presence of individuals with exclusively Caucasian facial features (i.e., the same ethnicity as the study participants) to convey greater familiarity and emotional relevance ("ethnicity"). These criteria resulted in a final data set of 75 images, presented in random order to each participant, with self-paced viewing, during psychophysiological recording of heart rate variability; the entire presentation lasted between 45 and 60 min.

Psychophysiological evaluation

Thirty-five of the previously studied subjects (27 women; 62–82 years) underwent an additional psychophysiological recording of HRV while viewing the images reminiscent of the initial and most stressful pandemic phase. HRV reflects cardiac autonomic activity (i.e., sympathovagal balance), which indicates the ability of the autonomic nervous system to respond flexibly to external stimuli and psychophysiological stressors. The aim of this assessment was to identify possible associations between pandemic-related changes in mood and/or cognitive performance and a proxy for emotional (dys)regulation, represented by the ratio of low-frequency to high-frequency HRV, which in turn reflects cardiac sympathovagal balance.
These data were acquired with the Nexus-4 blood volume pulse (BVP) and heartbeat measurement device and subsequently processed with custom-designed software in MATLAB 7.10.0 (R2010a) (The MathWorks, Inc., Natick, MA, USA). Each channel was recorded synchronously at 2048 Hz and downsampled to 256 Hz to calculate the indices.

Signal processing

Cardiovascular and respiratory activities were monitored to assess both voluntary and autonomic effects of breathing on heart rate. The inter-beat interval (IBI; extracted from the BVP sensor and recognized as a measure equivalent to the R-R interval of the electrocardiogram) was analyzed. Following the guidelines of the Task Force of the European Society of Cardiology and the North American Society of Pacing and Electrophysiology, typical spectral HRV indices were used to assess the autonomic nervous system response. Spectral analysis was performed using Fourier methods and dedicated software. The rhythms were classified as very-low-frequency (VLF, <0.04 Hz), low-frequency (LF, 0.04–0.15 Hz), and high-frequency (HF, 0.15–0.5 Hz) oscillations. This procedure allowed us to calculate the LF/HF ratio, also known as the index of sympathovagal balance.

Statistical analysis

Analyses were performed using Jamovi statistical software (version 2.2.5.0). Two normality tests (Kolmogorov-Smirnov and Shapiro-Wilk) were performed to determine whether the variables were normally distributed. We computed BFs and paired-samples t-tests to compare the data collected before and during the pandemic. The BF analyses were designed to determine whether variables related to cognition, mood, and physical status remained stable between timepoints. We tested this hypothesis using the BF, the ratio between the probability of the data under the null hypothesis and under the alternative hypothesis; here, evidence for similarity of repeated measures was considered substantial when 2.5 < BF01 < 10, i.e., the measures were regarded as statistically similar, as opposed to different. The BF is often considered more informative than the p-value of a t-test alone; notably, the American Psychological Association has acknowledged the recommendations of the American Statistical Association and emphasized the value of BFs, among other methods. In this study, we used BFs to compare repeated measures and to assess similarities or differences between them: the BF provides the likelihood ratio indicating whether a model predicting similarity is better supported by the data than a model assuming differences. To examine the relationship between neuropsychological performance at follow-up and emotional (dys)regulation as indexed by the HRV recording, we computed a correlation matrix representing the relationship between (a) the variables associated with a statistically significant difference in the paired t-tests (before vs. during the pandemic) and (b) the LF/HF ratio. The variables that showed a significant difference in the t-tests, together with the LF/HF ratio, were then entered as predictors in a multiple regression analysis (adjusted for age and gender), with the neuropsychological scale showing the greatest change between timepoints as the dependent variable.
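To make the LF/HF computation described under Signal processing concrete, here is a rough R sketch that resamples a simulated inter-beat-interval series (in seconds) to 4 Hz and integrates the periodogram over the LF and HF bands. The original indices were computed with custom MATLAB software, so this is only an approximation of the procedure.

set.seed(4)
ibi <- 0.8 + 0.05 * sin(2 * pi * 0.1 * (1:600)) + rnorm(600, 0, 0.02)  # fake IBI series

t_beat <- cumsum(ibi)  # beat times from successive inter-beat intervals
rs <- approx(t_beat, ibi, xout = seq(min(t_beat), max(t_beat), by = 0.25))  # 4 Hz resampling
sp <- spec.pgram(ts(rs$y, frequency = 4), detrend = TRUE, taper = 0.1, plot = FALSE)

lf <- sum(sp$spec[sp$freq >= 0.04 & sp$freq < 0.15])   # low-frequency power
hf <- sum(sp$spec[sp$freq >= 0.15 & sp$freq <= 0.50])  # high-frequency power
lf_hf <- lf / hf                                       # sympathovagal balance index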
From April to October 2019, about 100 socially active older people from the University of the Third Age (UNITRE) in Turin (Italy) joined the initiatives of an innovative laboratory for active and healthy aging, dedicated to training, education and research and development. Before the outbreak of the pandemic, they participated in teaching modules on cognitive functions, physical activity, nutrition and social integration to promote active and healthy aging. Specifically, they were invited to undergo an in-depth neuropsychological assessment to investigate possible associations between cognitive functioning, mood changes, and physical health status over a 3-year period. A group of 81 volunteers (64 women; 60–80 years) agreed to participate in the study, but pandemic restrictions necessitated the establishment of an online assessment. After the lockdown measures were relaxed, 39 subjects agreed to undergo the final date of neuropsychological assessment between June and October 2021. Thirty-five subjects (27 women; 62–82 years) also agreed to participate in a psychophysiological study from October to December 2021 (i.e., 21 months after the pandemic outbreak) to assess emotional reactivity to images associated with the COVID-19 pandemic. All 39 subjects who participated in the full longitudinal study were > 60 years old and thus can be classified as "older adults" . They were not taking any psychotropic medications that could have affected their cognitive abilities or mood, and none of them complained of cognitive decline . The study was conducted in accordance with the Declaration of Helsinki, having been approved by the Ethics Committee of the University of Turin before (Prot. No. 10038) and during (Prot. No. 151786) the pandemic. All participants gave written informed consent before participating in the study. In addition to the usual sociodemographic characteristics such as gender, age, and education, we collected participants' socioeconomic status (SES) using the Four Factor Index of Social Status , , which takes into account educational level, gender, occupation, and marital status. The Hollingshead Index (HI) combines these four factors to assess the social status of individuals or nuclear families. The HI scores can be aggregated into a set of values covering the social strata, ranging from 66 (the highest social stratum: 66–55) to 18 (the lowest social stratum: 19–18). The higher the score, the higher the SES. For example, social stratum 30–39, which reflects the average socioeconomic level, includes skilled craftsmen, clerical, and sales workers. Sociodemographic characteristics of participants are shown in Table , along with information on the pandemic. The participants were asked whether they had been (a) previously diagnosed with infection by COVID-19 and (b) vaccinated against SARS-CoV-2 (as well as the number of doses administered). In addition, the subjects were asked whether they had followed the recommendations of the Italian Ministry of Health to control infection, i.e., wearing face masks, wearing latex gloves when outdoors, washing hands frequently, keeping safe distances, and avoiding crowded places. The study included two time points: before the pandemic, from April to October 2019 (baseline), and during the pandemic, from June to December 2021 (follow-up). To avoid fatigue, a detailed neuropsychological assessment was divided into two sections, lasting approximately 45 min and administered on the same day. 
The neuropsychological battery was designed to assess three domains, namely cognitive performance, mood, as well as physical and health status. We assessed global cognitive performance with ACE-R , which includes the MMSE score. In addition, SCD was assessed before the pandemic using Jessen's criteria and at follow-up using CFI , which includes both self-report and partner-report to provide a more accurate measure. Executive function was assessed using the MoCA and the TMT—part A and B . TMT—Part A was used to assess speed of information processing (i.e., psychomotor speed). We assessed language comprehension with TT , while instant and delayed recall were analyzed with RMT . Different facets of mood were assessed with the following scales: AES , BDI , DIS , HARS , and MAS . We used the Phenotypic Frailty Model and the Cumulative Illness Rating Scale (CIRS ) to assess physical condition or frailty status and medical history. Five criteria were considered to examine the determinants of a possible physical frailty state: (a) weight loss, (b) grip strength, (c) self-reported fatigue, (d) decreased walking speed, and (e) reduced physical activity. Depending on the presence of the five criteria, participants could be classified as robust (none), prefrail (1 or 2), or frail, (3 or more) . To assess individual differences in physiological arousal associated with the pandemic, we first searched the "Google images" website ( https://images.google.com/ ) for images showing the most critical period between the virus outbreak and the lockdown in Italy. Using the keyword "COVID”, we obtained an initial selection of 124 images, which were then evaluated separately by all authors to select those that met specific inclusion criteria: presence of people regardless of their age, sex, and sociodemographic status in the pandemic context (e.g., healthy individuals, patients, deceased individuals, medical personnel); presence of people in the most typical COVID-19 scenarios (e.g., everyday contexts and hospitals). Based on these criteria, a subset of images was selected for further evaluation based on the following criteria: presence of at least 2 people (i.e., "social" images); absence of redundant images, i.e., same content but different formats and resolutions (i.e., "unique" images); presence of unique signals of the pandemic, such as: masks that had to be worn by at least some of the people depicted; slogans (e.g., "Everything will be fine") and/or banners from balconies; the presence of individuals with exclusively Caucasian facial features (i.e., with the same ethnicity as the study participants) to convey greater familiarity and emotional relevance ("ethnicity"). These criteria resulted in a final data set of 75 images randomly presented to each participant during a psychophysiological recording of heart rate variability with self-determined duration. The entire data set had a duration between 45 and 60 min. Thirty-five previously studied subjects (27 women; 62–82 years) underwent further psychophysiological recording of HRV while viewing images reminiscent of the initial and most stressful pandemic phase. HRV reflects cardiac autonomic activity (i.e., sympathovagal balance), which indicates the ability of the autonomic nervous system to respond flexibly to external stimuli and to respond to psychophysiological stressors . 
The aim of this assessment was to identify possible associations between pandemic-related changes in mood and/or cognitive performance and a proxy for emotional (dys)regulation represented by the ratio of low-frequency to high-frequency HRV, which in turn reflects cardiac sympathovagal balance. These data were acquired with the Nexus-4 blood volume pulse (BVP) and heartbeat measurement device and subsequently processed with custom-designed software in MATLAB 7.10.0 (R2010a) (The Mathworks, Inc; Natick, MA, USA). Each channel was synchronously recorded at 2048 Hz and extracted at 256 Hz to calculate the indices. Cardiovascular and respiratory activities were monitored to assess both voluntary and autonomic effects of breathing on heart rate. The IBI (Inter-Beat-Interval extracted from the BVP sensor, recognized as a measure equivalent to the R-R interval from the electrocardiogram) was analyzed. Following the guidelines of the Task Force of the European Society of Cardiology and the North American Society for Pacing and Electrophysiology, typical indices of spectral HRV were used to assess the autonomic nervous system response – . Spectral analysis was performed using Fourier spectral methods and dedicated software. The rhythms were classified as very low frequency (VLF, < 0.04 Hz), low frequency (LF, from 0.04 to 0.15 Hz), and high frequency (HF, from 0.15 to 0.5 Hz) oscillations. This procedure allowed us to calculate the LF/HF ratio, also known as the index of sympathovagal balance. Analyses were performed using Jamovi Statistics software (version 2.2.5.0). Two normality tests (i.e., Kolmogorov–Smirnov and Shapiro–Wilk) were performed to determine whether the variables were normally distributed. We computed BFs and paired-sample t-tests to compare the data collected before and during the pandemic. The BFs were designed to determine whether variables related to cognition, mood, and physical status remained stable between time points. We tested this hypothesis using the BF, which is a ratio between the probability of the data in the null hypothesis and the alternative hypothesis – . The evidence for similarity of the measures is considered substantial when 2.5 < BF < 10 – . It is noteworthy that in this case the measures considered are statistically similar, as opposed to the hypothesis that they are different. BF is generally considered a more reliable statistical test compared to p-value of a t-test. In particular, the American Psychological Association acknowledged the recommendations of the American Statistical Association and emphasized the importance of using BF, among other methods . In this study, we used BF to compare repeated measures and assume similarities or differences between them. Using BF, we were able to determine whether a model that predicted similarities was significantly better than a model that assumed differences. The BF provides the likelihood ratio for this comparison. To examine the relationship between neuropsychological performance at follow-up and emotional (dys)regulation as indexed by HRV recording, we computed a correlation matrix representing the relationship between (a) the variables associated with a statistically significant difference in the paired t-tests (comparison between before and during the pandemic) and (b) the LF/HF ratio index. 
The variables that differed significantly in the t-tests, together with the LF/HF ratio, were then entered as predictors in a multiple regression analysis (adjusted for age and gender), with the neuropsychological scale showing the greatest change between time points as the dependent variable.
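The corresponding model is a single linear regression. A hedged R sketch follows; all variable and data names are hypothetical placeholders, not the study's data (the original analyses were run in Jamovi).

# Illustrative multiple regression matching the description above
set.seed(42)
followup_data <- data.frame(
  outcome_scale = rnorm(35, 80, 10),   # scale with the greatest change
  lf_hf_ratio   = rlnorm(35),          # HRV sympathovagal balance index
  delta_mood    = rnorm(35),           # hypothetical mood-change predictor
  age           = sample(62:82, 35, replace = TRUE),
  gender        = factor(sample(c("F", "M"), 35, replace = TRUE))
)
fit <- lm(outcome_scale ~ lf_hf_ratio + delta_mood + age + gender,
          data = followup_data)
summary(fit)   # coefficients and p-values, adjusted for age and gender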
Intracellular carbon storage by microorganisms is an overlooked pathway of biomass growth
63abe669-b254-451d-8134-10998d5b9985
10115882
Microbiology[mh]
Microbial assimilation of organic resources is crucial to the flow of carbon (C) and other nutrients through ecosystems. Soil heterotrophs perform key steps in terrestrial C and nutrient cycles, yet how microorganisms use the available organic resources and regulate their allocation to competing metabolic demands remains a subject of research and debate. Assimilation of organic C into an organism is conceptualized as biomass growth, which is frequently understood as synonymous with an increase in individuals, in other words, the replicative growth of microbial populations. However, many microorganisms are capable of storage, defined as the accumulation of chemical resources in particular forms or compartments to secure their availability for future use by the storing organism.
Various microbial storage compounds are known, among them polyhydroxybutyrate (PHB) and triacylglycerides (TAGs). These two C-rich storage compounds are of particular interest because they are accumulated by diverse microbial taxa and methods are available for their measurement in soil. Both are hydrophobic lipids that are stored as inclusion bodies in the cytosol (i.e., intracellular lipid droplets). PHB is a high-molecular-weight polyester of β-hydroxybutyrate, while TAGs consist of three fatty acids (of diverse structures) esterified to a glycerol backbone. PHB storage is known only among bacteria, while TAGs are accumulated by both bacteria and fungi. Biosynthesis of PHB has been demonstrated by compound-specific measurement in soil, and TAGs in marine and soil systems show responsiveness to resource supply consistent with a C-storage function.
Microbial storage could substantially influence microbial fluxes of C and other nutrients, changing our understanding of soil biogeochemical fluxes and their response to environmental changes. Biomass growth is a cornerstone concept at scales from the ecological stoichiometry of individual cells to microbially-explicit models of the C cycle, and for defining the nutrient demands of organisms and their productivity. Accumulation of storage compounds corresponds to an increase in microbial biomass without replication, and therefore represents an alternative pathway for growth that is not usually considered in the C cycle. There is therefore a need to assess how severely the omission of storage may bias our understanding of C assimilation and utilisation.
Conventional measurement of soil microbial biomass uses fumigation with chloroform to lyse cells, followed by extraction of the released biomass into an aqueous solution for measurement (chloroform fumigation-extraction, CFE). This method assumes proportionality between extractable and non-extractable biomass. Other measures in widespread use are proxies such as cell membrane lipids or substrate-induced respiration. Only CFE provides biomass in units of C, however, and the other methods are typically calibrated against it. Yet hydrophobic storage compounds like PHB and TAGs are not extractable in aqueous solution and are therefore overlooked by CFE. Furthermore, there is no biological reason to expect proportionality between these storage compounds and any of the conventional biomass proxies. DNA-based measures of microbial abundance and replication also do not capture storage, since storage is not expected to form a constant proportion of each cell's biomass.
Interpretation of microbial storage patterns is facilitated by distinguishing two storage modes, which represent the end-members on a gradient of storage strategies. Surplus storage is the accumulation of resources that are available in excess of immediate needs, at little to no opportunity cost, while reserve storage accumulates limited resources at the cost of other metabolic functions. Surplus storage of C would be predicted under conditions of C oversupply, when replicative growth is constrained by other factors such as nutrient limitation. Reserve storage, on the other hand, indicates that storage may also occur under C-limited conditions. The evidence assembled from pure culture studies confirms the operation of both storage modes among microorganisms.
Here we experimentally investigate the importance of microbial storage in soil, and show how storage responses to resource supply and stoichiometry can advance our understanding of resource allocation and microbial biomass growth. We hypothesized that: (1) microbial storage compounds are a quantitatively important pool of soil microbial biomass under C-replete, nutrient-limited conditions; (2) microbial biomass growth is substantially underestimated when intracellular storage synthesis is neglected; and (3) due to low opportunity costs, surplus storage is quantitatively more important than reserve storage when measured across an entire soil community, so that nutrient supplementation (N, P, K, and S) will suppress storage compound accumulation in favour of replicative growth.
Soil microcosms were incubated under controlled conditions, with C availability manipulated through additions of isotopically labelled (¹³C and ¹⁴C) glucose, which is common in soil, both as a component of plant root exudates and as the most abundant product of plant-derived organic matter decomposition. A combined nutrient treatment (N, P, K, and S) provided inorganic fertilizers common in agriculture. A fully crossed design included three levels of C addition (zero-C, low-C and high-C; 0, 90 and 400 µg C/g soil) and two levels of nutrient supply (no-nutrient and nutrient-supplemented), with nutrients added at a level predicted to enable full C assimilation under the high-C treatment, based on microbial biomass C:N:P ratios typical of agricultural soil and an assumed C-use efficiency of 50%. CO₂ efflux and its isotopic composition were monitored at regular intervals. Microcosms were harvested after 24 and 96 h, incubation times selected to balance the synthesis of storage (previously observed over a timeframe of days) against the risk of artefacts induced by recycling of labelled biomass. Harvested soil was analysed for microbial biomass (by CFE), dissolved organic carbon (DOC), dissolved nitrogen (DN), and the storage compounds PHB and TAGs. In parallel, a set of smaller microcosms (0.5 g soil) was incubated under otherwise identical conditions to measure microbial growth as the incorporation of ¹⁸O from H₂¹⁸O into DNA; this method captures replicative growth better than tracing specific C substrates. Together these provide integrated observations of heterotrophic microbial biomass, growth and storage in a natural microbiome, examining the importance of storage as a resource-use strategy in response to environmental resource supply and changes in element stoichiometry.
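The nutrient-dosing prediction just described can be made concrete with a back-of-envelope calculation. The following R sketch assumes the mass-based biomass C:N:P ratio of 38:5:1 given in the Methods and a C-use efficiency of 50%; it illustrates the reasoning only and is not claimed to reproduce the exact amounts applied.

# Back-of-envelope nutrient requirement under the stated assumptions
c_added   <- 400                  # high-C treatment, µg C per g soil
cue       <- 0.5                  # assumed carbon-use efficiency
c_biomass <- c_added * cue        # C expected to enter biomass, µg C/g

n_needed <- c_biomass * (5 / 38)  # N required to match biomass C:N
p_needed <- c_biomass * (1 / 38)  # P required to match biomass C:P
round(c(N = n_needed, P = p_needed), 1)  # ~26.3 µg N/g, ~5.3 µg P/g soil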
Microbial nutrient limitations and CO₂ efflux
We first describe the observed patterns of soil respiration and dissolved nutrients, which aid interpretation of the prevailing resource limitations during storage compound synthesis and degradation. Glucose addition stimulated large increases in CO₂ efflux (Fig. ), primarily derived from glucose mineralization (Fig. ). Nutrient supplementation barely affected CO₂ efflux rates from the zero- or low-C additions, and for none of the zero- or low-C treatments was N availability (measured as DN) significantly reduced relative to the control at 24 h (Supplementary Fig. ). Thus, C limitation dominated in the zero- and low-C treatments throughout the experimental period, irrespective of nutrient additions.
With high-C addition, CO₂ efflux rates under the two nutrient levels diverged strongly after 12 h, with the no-nutrient treatment declining steadily from 12 h until the end of the experiment. This early decline in mineralization was consistent with the onset of nutrient limitation, after microbial growth on the added glucose had depleted easily available soil nitrogen and driven up the C:N ratio of dissolved resources (Supplementary Fig. ). This depletion in the high-C, no-nutrient treatment was reflected in suppressed DN after 24 h, at only 35.8–62.5% of the zero-C, no-nutrient control (family-wise 95% confidence interval; Supplementary Fig. ). Nutrient limitation was accompanied by an accumulation of highly labelled DOC in the soil solution at 24 h, reflecting unused glucose or soluble by-products amounting to 19.6 ± 2.1% (mean ± standard deviation) of the original C addition (Supplementary Fig. ). Therefore, high C addition without supplementary nutrients resulted in rapid mineralization at first, but nutrient limitation set in within 12 h and persisted for the remaining experimental period.
Nutrient addition had a strong effect in combination with high-C supply: it accelerated glucose mineralization until 24 h after addition (Fig. ), after which CO₂ efflux dropped precipitously to below that of the high-C, no-nutrient treatment. For this high-C, nutrient-supplemented treatment, dissolved N decreased only moderately over 24 h (56.2–97.9% relative to the zero-C, no-nutrient treatment). With high-C addition after 24 h, DOC was far lower with nutrient supplementation than without (Cohen's d >> 1, family-wise p < 0.001), and the DOC level for this treatment did not change further to 96 h, despite this treatment having higher N availability at 24 h than the no-nutrient treatment (Cohen's d >> 1, family-wise p < 0.001). This indicates that the microbial community had depleted the added C and re-entered C-limited conditions. Therefore, high C addition with supplementary nutrients maintained rapid C mineralization through the first 24 h, but glucose depletion then reasserted C limitation for the rest of the experimental period.
Presence and synthesis of microbial storage compounds
PHB and TAGs were both found in the control soil (zero-C, no nutrients, after 24 h; Fig. ), together representing a C pool 0.25 ± 0.03 (mean ± standard deviation) times as large as the extractable microbial biomass C (MBC, by CFE; Fig. ). This ratio of stored C (PHB + TAG) to extractable MBC ranged from 0.19 ± 0.02 to 0.46 ± 0.08 over all treatments, indicating that storage is a significant pool of biomass not only under C-replete conditions, as hypothesized, but even when C availability is limited.
Furthermore, the common measures of soil microbial biomass rely on extraction of water-soluble C after chloroform fumigation, which does not capture these highly hydrophobic storage compounds. This suggests that microbial biomass C may be widely underestimated in soil, and calls for methodological advancements to more systematically capture these (and possibly other) storage compounds in assessments of microbial growth.
The two storage compounds were both responsive to the supply of C and complementary nutrients (p < 0.01), but with very different behaviours. At both timepoints, the low input of C stimulated only a moderate increase in total PHB, irrespective of nutrient supply. In contrast, high C input stimulated a large increase in PHB, particularly when not supplemented with nutrients (a 308% increase over the zero-glucose, no-nutrient treatment at 96 h, with a Hodges-Lehmann median difference of 36.0–42.9 µg C g⁻¹). In comparison, extractable biomass reflected a non-significant mean difference of only 33% between these treatments. Nutrient supply significantly suppressed PHB storage, even in the absence of added C (nutrient main effect, robust ANOVA of medians, 24 h: F(1,∞) = 35, p < 0.001; 96 h: F(1,∞) = 275, p < 0.001). Isotopic composition (¹³C) indicated that assimilation of glucose C into new PHB continued between 24 and 96 h under the nutrient-limited conditions of the high-C, no-nutrient treatment (Hodges-Lehmann median difference of 10.2–13.3 µg C g⁻¹, 95% confidence interval), while extractable microbial biomass C showed no significant change. For the high-C, nutrient-supplemented treatment, the increased C limitation after 24 h induced degradation of PHB during this later incubation period (median reduction of 2.7–8.6 µg C g⁻¹). The PHB storage pool therefore responded dynamically to shifts in resource stoichiometry on a timescale of hours to days, with changes as expected from a surplus storage strategy. These observations are consistent with PHB biosynthesis in pure culture, which is stimulated by excess C availability in diverse bacterial taxa; this study demonstrates such microbial storage dynamics in a terrestrial ecosystem.
At the end of the incubation, stored C across the various treatments was sufficient to support 109–347 h of microbial respiration at the CO₂ efflux rate of the zero-C, no-nutrient treatment (i.e., basal respiration). Much longer periods would be envisaged if accompanied by strong downregulation of energy use in response to stress. Storage could thus be a crucial resource for withstanding starvation or other stress. A surplus storage strategy is particularly effective at buffering microbial activity by levelling out fluctuations in resource availability and stoichiometry. Furthermore, storage representing a substantial proportion of biomass offers a resource for regrowth following disturbance. In these ways, the resources stored in PHB could support the resistance and resilience of this soil microbial community against environmental disturbance.
Storage of TAGs was enhanced by C input (Fig. ), but its response to resource stoichiometry differed greatly from PHB.
Over 24 h, nutrient supplementation stimulated more TAG accumulation, rather than suppressing it (nutrient main effect F(1,∞) = 10.8, p = 0.001, and nutrient:glucose interactions between zero-C and the two C-supplemented treatments, both p < 0.01), while over 96 h, nutrient supply had little effect when C was added and increased TAGs when C was not added (95% confidence interval for the Hodges-Lehmann median difference 0.5–4.7 µg C g⁻¹). The TAG response to C and nutrient supply over 96 h resembled changes in extractable microbial biomass (Fig. ), which was increased by C supply but not significantly enhanced by nutrients (ANOVA main effect of C supply at 96 h: F(2,17) = 7.1, p = 0.006). Therefore, unlike PHB, TAG synthesis was not stimulated by a stoichiometric surplus of available C, suggesting a reserve storage function for this compound. Notably, the relative allocation of glucose C between PHB and TAG remained relatively constant (the PHB:TAG ratio of glucose-derived C ranged between 7.0 and 11.5 across all treatments) because the C source used for TAG biosynthesis varied more strongly than total TAG levels in response to C supply. This corroborates a reserve storage function of TAG, with total storage synthesis regulated independently of C supply and drawing on whichever C resources are available, whether glucose- or soil-derived.
One advantage of a reserve storage strategy is that strategic stores are assembled even under conditions of chronic resource shortage. This allows for bursts of activity to support, for example, reproduction or the transition to a resilient starvation state. Therefore, while reserve storage may be quantitatively smaller than surplus storage (reflected here in the lower amounts and changes in TAG relative to PHB; Fig. ), it can help communities to persist under conditions of sustained stress, and even exhibit resilience against additional disturbances.
A reserve storage function for TAG contrasts with most observations of TAG accumulation in pure culture in response to excess C. Our observations also contrast with an earlier report that fungal TAG accumulation in a forest soil was largely eliminated by complementary nutrient supply, although much larger amounts of C were provided in that experiment (16 mg glucose-C g⁻¹). The observed patterns of TAG storage are, however, consistent with abundant evidence of reserve storage among microorganisms, in particular that C storage occurs despite, or in response to, declining or limiting C availability. For example, Rhodococcus opacus accumulated 21% of cell dry weight as TAG in the presence of excess N. In our experiment, C was traced into both bacterial (16:1ω7) and fungal (18:2ω6) TAGs (Supplementary Figs. and ). The fungal biomarker 18:2ω6 was only a minor contributor to TAG incorporation in the current experiment, yet even this fungal TAG was not suppressed by nutrient addition. Our results suggest that both fungi and bacteria employed TAGs as a reserve storage form, with overall levels of TAG storage more closely linked to replicative growth than to resource stoichiometry.
In summary, the response of PHB storage to different C and nutrient conditions was largely consistent with the hypothesized surplus storage mode, whereas patterns of TAG storage were better characterized by the reserve storage mode. There is no a priori reason to expect distinct storage strategies to correspond to different compounds, since both PHB and TAG can in principle provide C storage and mobilization under comparable conditions.
Since some bacterial taxa can synthesize both PHB and TAGs, the question arises whether these compounds fulfil different storage functions within individual organisms, or whether the different responses emerge at the community scale, with each compound used by a different set of microbial taxa following divergent storage strategies. The first possibility would suggest as-yet unidentified differences in the metabolism of these compounds that distinguish them for different storage purposes. On the other hand, if storage strategy and preferred storage forms are correlated across taxa, then storage traits could prove useful as proxies of resource allocation strategy in microbial trait-based frameworks.
Microbial storage as a component of biomass growth
The incorporation of C into soil microbial biomass is an essential step in the terrestrial C cycle, and appropriate estimates of these flows are required for understanding and managing ecosystem C balances. We performed a parallel experiment using identical treatments and temperature and moisture conditions to measure microbial growth as ¹⁸O incorporation into DNA. This method is calibrated to units of C based on extractable biomass from the CFE method, and therefore does not capture hydrophobic PHB or TAG storage. We compared the ¹⁸O-DNA-based measure of growth with the incorporation of isotopically labelled glucose C into storage compounds (Fig. ). This comparison uses a lower bound for storage synthesis, since it neglects the biosynthesis of storage from other C sources and any degradation of labelled storage during the incubation; furthermore, only two storage forms were measured here, whereas other microbial storage compounds are also known.
Storage comprised up to 279 ± 72% more biomass growth than observed by the DNA-based method (for the high-C, no-nutrient treatment at 24 h; Fig. ). Even under conditions of C limitation (zero- and low-C treatments), biomass growth through allocation to storage represented an additional 16–96% incorporation of C into biomass. Intracellular storage evidently plays a quantitatively significant role in microbial assimilation of C under a broad range of stoichiometric conditions, and biomass growth would be substantially underestimated by neglecting storage. Microbial growth is a central variable in microbially-explicit models of the C cycle, so the substantial scale of storage also encourages a reassessment of model inputs and interpretation of results wherever short-term measurements or dynamic changes are involved. The important model parameter of carbon-use efficiency is typically measured over 24-h periods, but over this timeframe we observed storage changes that constituted a substantial component of the microbial C balance. This suggests that more nuanced representations of microbial metabolism and C allocation may be required to accurately account for microbial C use.
Microbial biomass growth is frequently understood as synonymous with the replicative growth of microbial populations. However, the incorporation of C into storage compounds represents an alternative growth pathway (Fig. ), which differs from replicative growth in crucial ways. Models of microbial growth typically assume that increases in biomass match the elemental stoichiometry of the total biomass (the assumption of stoichiometric homoeostasis), and therefore implement overflow respiration of excess C under conditions of C surplus.
However, substantial incorporation of C into otherwise nutrient-free PHB and TAG clearly does not follow whole-organism stoichiometry. Growth in storage therefore increases total biomass in a stoichiometrically unbalanced manner. The short experimental timeframe here is representative of environmental resource pulse and depletion processes, such as the arrival of a root tip in a particular soil volume or the death and decay of a nearby organism. Storage provides stoichiometric buffering during such transient resource pulses, which is predicted to increase C and N retention over the longer term. By enhancing the efficiency with which microbes incorporate transient resource pulses and supporting metabolic activity through periods of resource scarcity, storage can contribute to the survival of microbes facing stressful habitat changes. Resource availability in natural and agroecosystems changes over various time-scales, and we hypothesize that microbial storage may also be responsive to, for example, seasonal changes in belowground C inputs, supporting microbial activity through resource-poor winter periods or dry summers. Moreover, storage enables a diversification of resource-use strategies, reflected here in the contrasting responses of PHB and TAG. Ecosystem stability is promoted by diverse strategies within the community, suggesting that storage can contribute to resistance and resilience of microbial communities facing environmental disturbances. These findings encourage greater recognition of storage synthesis and degradation as pathways of microbial biomass change, in addition to cellular replication. Accounting for microbial storage as a key ecophysiological strategy can enrich our understanding of microbial resource use and its contributions to biogeochemical cycles and ecosystem responses under global change.
Experimental design
Topsoil (0–25 cm) was collected in November 2017 from the Reinshof experimental farm near Göttingen, Germany (51°29′51.0″ N, 9°55′59.0″ E) following an oat crop. Five samples along a 50 m field transect were mixed to provide a single homogenized soil sample.
The soil was a Haplic Luvisol, pH 5.4 (CaCl₂), with 1.4% organic C. Soil was stored at 4 °C for one week prior to sieving (2 mm) and then distributed into airtight 100 mL microcosms in laboratory bottles, with the equivalent of 25 g dry soil at 48% of water-holding capacity (WHC). Four replicates were prepared for each treatment and sampling timepoint. Microcosms were placed in a climate-controlled room at 15 °C and preincubated for one week before adding treatment solutions.
Treatment solutions provided glucose as a C source (0, 90 or 400 µg C/g soil) in a fully crossed design with added nutrients or a no-nutrient control (combined (NH₄)₂SO₄ and KH₂PO₄, respectively 0.613 and 0.106 µmol g⁻¹ soil). Glucose levels were selected to probe the effects of C supply on storage, with additions above and below the magnitude of MBC having potentially contrasting effects on microbial growth. Glucose treatments contained uniformly isotopically labelled glucose (3 at% ¹³C and 0.19 kBq ¹⁴C per microcosm, respectively from Sigma-Aldrich, Munich, Germany, and American Radiolabelled Chemicals, Saint Louis, USA). The ¹⁴C label in the added glucose enabled rapid and accurate measurement of glucose-derived C in liquid extracts by scintillation counting, while ¹³C was traced in all other pools. The same amount of nutrients was used in all nutrient-addition treatments, set to be sufficient for the complete utilisation of all C added in the high-glucose treatment, assuming a C:N:P ratio of 38:5:1 for an agricultural microbial community and a C-use efficiency of 50%.
Addition of the treatment solutions raised the soil moisture to 70% of WHC, after which the microcosms were sealed with air-tight butyl rubber septa and their headspace flushed with CO₂-free synthetic air. Headspace gas was sampled with a 30 mL gas syringe at regular intervals and collected in evacuated exetainers (Labco, Ceredigion, U.K.) for measurement by gas chromatography–isotope ratio mass spectrometry (GC-Box coupled via a Conflo III interface to a Delta plus XP mass spectrometer, all Thermo Fisher, Bremen, Germany). After gas sampling, the headspace in each microcosm was again flushed with CO₂-free air. Microcosms were harvested 24 and 96 h after application of the treatment solutions. The soil in each microcosm was thoroughly mixed by hand for 30 s and subsampled for chemical analysis.
Chemical analysis
Extractable microbial biomass was measured by CFE. Two 5 g subsamples of moist soil were taken from each microcosm. One was immediately extracted by shaking in 20 mL of 0.05 M K₂SO₄ for 1 h at room temperature, then centrifuged and the supernatant filtered. The other was exposed to a chloroform-saturated atmosphere for 24 h, after which residual chloroform was removed by repeated evacuation and the fumigated soil was extracted in the same manner as the non-fumigated subsample. Extractable MBC was calculated as the difference in DOC between the fumigated and non-fumigated samples, measured on a Multi N/C 2100 S analyser (Analytik Jena, Jena, Germany). CFE biomass is reported here as extractable biomass, without conversion by uncertain extraction efficiencies. Glucose-derived MBC was similarly calculated from the difference in radioactivity (¹⁴C) of the extracts, measured on a Hidex 300 SL scintillation counter (TDCR efficiency correction; Hidex, Turku, Finland) using Rotiszint Eco Plus scintillation cocktail (Carl Roth, Karlsruhe, Germany).
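The biomass calculations just described reduce to simple differences. The following R sketch uses hypothetical extract values to show the arithmetic; the specific-activity conversion is our own illustrative construction from the stated label addition, not a formula quoted from the paper.

# Illustrative CFE arithmetic with hypothetical values (µg C per g soil)
doc_fumigated    <- 310
doc_nonfumigated <- 120
mbc_extractable  <- doc_fumigated - doc_nonfumigated  # 190 µg C/g soil,
                                                      # no efficiency factor

# Glucose-derived MBC from the 14C difference (Bq per g soil), converted
# via the specific activity of the added glucose: 0.19 kBq per microcosm
# spread over 400 µg C/g x 25 g soil in the high-C treatment
spec_act <- 190 / (400 * 25)        # Bq per µg glucose-C (= 0.019)
act_fum <- 1.1; act_nonfum <- 0.6   # hypothetical 14C activities, Bq/g
mbc_glucose <- (act_fum - act_nonfum) / spec_act  # ~26 µg C per g soil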
DOC and DN were determined respectively as organic carbon and total nitrogen in the extracts of the unfumigated soil.
PHB was determined by Soxhlet extraction of 4 g freeze-dried soil into chloroform, followed by acid-catalysed transesterification in ethanol and GC-MS quantification of the resulting ethyl hydroxybutyrate on a 7890A gas chromatograph (DB1-MS column, 100% dimethyl polysiloxane, 15 m long, inner diameter 0.25 mm, film thickness 0.25 µm), with helium (5.0) as the mobile phase at a flow rate of 1 mL min⁻¹, coupled to a 7000A triple quadrupole mass spectrometer (all Agilent, Waldbronn, Germany). Injection volume was 1 µL at an inlet temperature of 270 °C and a split ratio of 25:1. The GC temperature programme was: 42 °C isothermal for 7 min; ramped to 77 °C at 5 °C min⁻¹; then to 155 °C at 15 °C min⁻¹; held for 15 min; and then ramped to 200 °C at 10 °C min⁻¹. The transfer line temperature was 280 °C, with electron ionization at 70 eV. Quantification was based on ions at m/z 43, 60, and 87 for the ethyl 3-hydroxybutyrate analyte, and at m/z 57, 71, and 85 for the undecane internal standard. Identity and purity of peaks were confirmed by scan measurement across the range m/z 40 to 300. The same chromatographic conditions were used for determination of the PHB isotopic composition on a Thermo GC Isolink coupled via a Conflo IV interface to a MAT 253 isotope ratio mass spectrometer (all Thermo Fisher, Bremen, Germany), but with splitless injection. The measured isotopic compositions were corrected for C added in derivatization.
TAGs were quantified as neutral lipid fatty acids as follows: lipids were first extracted from 5 g frozen soil into a single-phase chloroform–methanol–water solution, purified by solvent extraction, and the neutral lipids separated from more polar lipids on a silica solid-phase extraction column. Following removal of the solvent by evaporation, the purified TAGs were hydrolysed (0.5 M NaOH in MeOH, 10 min at 100 °C) and methylated (12.5 M BF₃ in MeOH, 15 min at 85 °C), followed by extraction into hexane, drying and redissolution in toluene. The resulting fatty acid methyl esters were quantified by GC-MS on a 7890A gas chromatograph (DB-5 MS column, 5%-phenyl methylpolysiloxane, 30 m, coupled to a 15 m DB1-MS column, both with an inner diameter of 0.25 mm and film thickness of 0.25 µm), with an injection volume of 1 µL into the splitless inlet heated to 270 °C and a constant helium (4.6) flow of 1.2 mL min⁻¹, coupled to a 5977B series mass spectrometer (Agilent, Waldbronn, Germany) set to 70 eV electron impact energy. The GC oven programme was as follows: initial temperature 80 °C isothermal for 1 min, ramped at 10 °C min⁻¹ to 171 °C, ramped at 0.7 °C min⁻¹ to 196 °C, isothermal for 4 min, ramped at 0.5 °C min⁻¹ to 206 °C, and ramped at 10 °C min⁻¹ to the final temperature of 300 °C, isothermal for 10 min for column reconditioning. Isotopic composition was determined in triplicate using a Trace GC 2000 (CE Instruments ThermoQuest Italia, S.p.A.), coupled via a Combustion Interface III to a DeltaPlus isotope-ratio mass spectrometer (Thermo Finnigan, Bremen, Germany) using the same GC parameters.
Growth was estimated by ¹⁸O incorporation into DNA. Parallel microcosms were prepared with 0.50 g soil in 2 mL Eppendorf tubes (Eppendorf, Hamburg, Germany) and incubated alongside the larger microcosms. This smaller scale was necessitated by the cost of ¹⁸O-water.
This is nevertheless larger than the soil amounts typically used for DNA extraction, which achieve consistent measures of bacterial and fungal community composition, and orders of magnitude larger than the scale of microbial interactions. These considerations, alongside the care taken to ensure identical conditions of temperature, moisture and handling, give confidence that this incubation was representative of the same processes occurring in the larger microcosms. Treatment solutions were prepared at the same concentrations as for the larger microcosms, but enriched with 97 at% H₂¹⁸O so that addition provided a final soil solution of 4.2 at% ¹⁸O. Tubes were withdrawn from incubation 24 h and 96 h after addition and immediately frozen at −80 °C. DNA was subsequently extracted using the MP Bio FastDNA Spin Kit for Soil (MP Biomedicals, Solon, OH, USA). DNA concentration in the extract was measured on an Implen MP80 nanophotometer (Implen, Munich, Germany) at 260 nm, with A260/A280 and A260/A230 ratios to confirm quality, and 50 µL was pipetted into silver capsules, freeze-dried, and measured by TC/EA coupled via a Conflo III interface to a Delta V Plus isotope ratio mass spectrometer (all Thermo Finnigan, Bremen, Germany). The total measured O content of the sample, the O content of DNA (31% by mass), and the ¹⁸O natural abundance of unlabelled control samples were used to calculate the background ¹⁸O from the kit. This background ¹⁸O was deducted to obtain the ¹⁸O abundance of the DNA, which was applied in a 2-pool mixing model with 70% of O in new DNA derived from water (model detailed in the Supplementary information). This provided the fraction of extracted DNA that had been newly synthesized during the incubation period. This fraction was multiplied by extractable microbial biomass to arrive at gross biomass growth in units of µg C g⁻¹ soil.
Statistical analysis
Statistical analysis was performed in R, with preliminary calculations in Microsoft Excel (version 16.67). Results for CO₂, MBC, DOC, DN, TAG, PHB, and isotopic compositions were calculated for each independent sample and reported as mean ± standard deviation for each treatment group, unless otherwise noted. Comparisons between these pools were similarly calculated at the sample level before being expressed as mean ± standard deviation. DN and DOC data were log-transformed to satisfy the assumptions for ANOVA (Shapiro-Wilk's test of normality and Levene's test for homogeneity of variance), followed by Tukey's HSD test for pairwise comparisons of treatment effects. The same analyses were performed on untransformed extractable microbial biomass data. Ranges for treatment effects on DN, DOC and MBC reported in the text reflect 95% family-wise confidence intervals from pairwise Tukey's HSD tests. Where relevant, effect sizes were computed as Cohen's d, using the effsize package. Levels of labelled storage compounds showed considerable heteroskedasticity that could not be consistently corrected by transformation, particularly due to very high levels of unsaturated fatty acids in one of the 24 h samples, conceivably reflecting a hotspot of fungal activity in the soil. This datapoint was conservatively retained, since it would comprise relevant variability in the soil. Analysis of storage compounds (PHB and TAG) proceeded by robust ANOVA of medians for each timepoint separately, using the R package WRS2.
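Two computations from this passage can be sketched compactly in R. The fragment below is illustrative only: the atom% values are hypothetical, the full mixing model is in the paper's Supplementary information, and we assume that WRS2's med2way function provides the two-way median-based test referred to here.

# Part 1: 2-pool 18O mixing model (illustrative values)
nat_abund <- 0.2005  # at% 18O of unlabelled control DNA (natural abundance)
at_water  <- 4.2     # at% 18O of the labelled soil water
at_dna    <- 0.45    # hypothetical measured at% 18O of extracted DNA,
                     # after subtracting the kit background

# Fraction of extracted DNA newly synthesized, assuming 70% of O in new
# DNA derives from water
f_new <- (at_dna - nat_abund) / (0.70 * (at_water - nat_abund))

mbc      <- 190          # extractable MBC, µg C per g soil (hypothetical)
growth_c <- f_new * mbc  # gross biomass growth, ~17 µg C per g soil

# Part 2: robust ANOVA of medians for one timepoint; 'dat' is a
# hypothetical PHB data set, and med2way (WRS2) is assumed to supply the
# two-way test on medians
library(WRS2)
dat <- expand.grid(glucose  = c("zero", "low", "high"),
                   nutrient = c("none", "added"), rep = 1:4)
set.seed(1)
dat$phb <- 20 + 30 * (dat$glucose == "high") * (dat$nutrient == "none") +
           rnorm(nrow(dat), sd = 4)
med2way(phb ~ glucose * nutrient, data = dat)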
Consistent with the median-based robust ANOVA, storage differences between treatments reported in the text are median differences, with uncertainty given as 95% confidence intervals calculated by the Hodges-Lehmann estimator (R package DescTools). Comprehensive pairwise post-hoc comparisons of medians were performed using medpb (R package WRS) to provide the significance indicators in figures (Fig. ), with Benjamini-Hochberg adjustment of p values for multiple comparisons. Growth estimation by ¹⁸O incorporation used the DNA concentration and its ¹⁸O enrichment to determine mean gross microbial growth for each treatment in relative terms, with the associated standard deviation. The corresponding mean extractable microbial biomass values were applied to convert to absolute units of µg C, using standard rules of error propagation, to provide the DNA-based measure of mean microbial biomass growth for each treatment. These DNA-based growth estimates were combined with the mean production of labelled storage compounds (the sum of C in glucose-derived PHB and TAG), again using rules of error propagation, to obtain estimates of total (DNA-based plus storage) mean biomass growth and associated standard deviations. These were subjected to 2-way ANOVA and Tukey's HSD to test the significance and size of treatment effects (Fig. ). Arithmetic comparisons between MBC, growth and storage pools (for example, the relative scales of DNA-based growth and storage growth) were calculated using mean values with error propagation.
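As a minimal sketch of the interval and multiplicity steps named here, using the DescTools package cited in the text and base R's p.adjust; the measurements and p-values are hypothetical.

# Hodges-Lehmann median difference with 95% confidence interval
library(DescTools)
x <- c(48, 52, 55, 61)   # hypothetical PHB, treatment A (µg C/g)
y <- c(12, 15, 18, 22)   # hypothetical PHB, treatment B (µg C/g)
HodgesLehmann(x, y, conf.level = 0.95)

# Benjamini-Hochberg adjustment of a set of pairwise p-values
p_raw <- c(0.001, 0.012, 0.030, 0.210)
p.adjust(p_raw, method = "BH")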
This background 18 O was deducted to obtain the 18 O abundance of the DNA, which was applied in a 2-pool mixing model with 70% of O in new DNA derived from water (model detailed in the Supplementary ). This provided the fraction of extracted DNA that had been newly synthesized during the incubation period. This fraction was multiplied by extractable microbial biomass to arrive at gross biomass growth in units of µg C g −1 soil. Statistical analysis was performed in R with preliminary calculations in Microsoft Excel (version 16.67). Results for CO 2 , MBC, DOC, DN, TAG, PHB, and isotopic compositions were calculated for each independent sample and reported as mean ± standard deviation for each treatment group, unless otherwise noted. Comparisons between these pools were similarly calculated at the sample level before being expressed as mean ± standard deviation. DN and DOC data were log-transformed to satisfy the assumptions for ANOVA (Shapiro-Wilk’s test of normality and Levene’s test for homogeneity of variance), followed by Tukey’s HSD test for pairwise comparisons of treatment effects. The same analyses were performed on untransformed extractable microbial biomass data. Ranges for treatment effects on DN, DOC and MBC reported in the text reflect 95% family-wise confidence intervals from pairwise Tukey’s HSD tests. Where relevant, effect sizes were computed as Cohen’s d, using the effsize package . Levels of labelled storage compounds showed considerable heteroskedasticity that could not be consistently corrected by transformation, particularly due to very high levels of unsaturated fatty acids in one of the 24 h samples, which conceivably reflected a hotspot of fungal activity in the soil. This datapoint was conservatively retained, since it plausibly represents relevant variability in the soil. Analysis of storage compounds (PHB and TAG) therefore proceeded by robust ANOVA of medians for each timepoint separately, using the R package WRS2 . Consistent with the median-based robust ANOVA, storage differences between treatments reported in the text are median differences, with uncertainty given as 95% confidence intervals calculated by the Hodges-Lehmann estimator (R package DescTools ). Comprehensive pairwise post-hoc comparisons of medians were performed using medpb to provide significance indicators in figures (Fig. ) (R package WRS ), with Benjamini-Hochberg adjustment of p values for multiple comparisons. Growth estimation by 18 O incorporation used DNA concentration and its 18 O enrichment to determine mean gross microbial growth for each treatment in relative terms, with the associated standard deviation. The corresponding mean extractable microbial biomass values were applied to convert to absolute units of µg C, using standard rules of error propagation , to provide the DNA-based measure of mean microbial biomass growth for each treatment. These DNA-based growth estimates were combined with the mean production of labelled storage compounds (sum of C in glucose-derived PHB and TAG), again using rules of error propagation, to obtain estimates of total (DNA-based and storage) mean biomass growth and associated standard deviations. These were subjected to 2-way ANOVA and Tukey HSD to test the significance and size of treatment effects (Fig. ). Arithmetic comparisons between MBC, growth and storage pools (for example, the relative scales of DNA-based growth and storage growth) were calculated using mean values with error propagation.
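As an illustration of how the growth calculation fits together, the following R sketch implements one plausible reading of the two-pool mixing model and the product-rule error propagation described above; the study's exact formulation is given in its Supplementary material, and all numeric inputs here are hypothetical.

# Two-pool 18O mixing model: new DNA draws ~70% of its O from soil water
# labelled to 4.2 at% 18O; pre-existing DNA remains at natural abundance.
at_natural <- 0.2005   # at% 18O, approximate natural abundance
at_water   <- 4.2      # at% 18O of soil water after label addition
f_water    <- 0.70     # fraction of O in new DNA derived from water
at_new_dna <- at_natural + f_water * (at_water - at_natural)

# Fraction of extracted DNA newly synthesized during the incubation
frac_new <- function(at_measured) (at_measured - at_natural) / (at_new_dna - at_natural)

# Gross growth in absolute units: fraction of new DNA times extractable MBC,
# with first-order error propagation for a product of independent terms
growth_abs <- function(f, f_sd, mbc, mbc_sd) {
  g <- f * mbc
  c(mean = g, sd = g * sqrt((f_sd / f)^2 + (mbc_sd / mbc)^2))
}
growth_abs(f = 0.08, f_sd = 0.01, mbc = 250, mbc_sd = 30)  # hypothetical inputs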
Further information on research design is available in the Reporting Summary linked to this article. Supplementary information: Peer Review File; Reporting Summary.
A senescence-based prognostic gene signature for colorectal cancer and identification of the role of SPP1-positive macrophages in tumor senescence
7e236eab-e137-45bd-92bb-788f217f1a36
10115976
Internal Medicine[mh]
Colorectal cancer (CRC) is one of the most prevalent malignancies and the third leading cause of cancer-related mortality worldwide. Early-stage CRC can be treated with surgical resection, adjuvant radiation, or chemotherapy. The standard treatment strategy for metastatic CRC is combined chemotherapy and targeted agents, including immune checkpoint inhibitors (ICIs); however, the 5-year overall survival (OS) rate remains relatively low (10–14%) in these cases due to drug resistance. Thus, a further understanding of the mechanisms of treatment failure in CRC remains crucial for improving the survival outcomes of CRC patients. In the past few years, the role of senescence in cancer has been widely investigated. Cellular senescence is characterized by aberrant changes in cell morphology, gene expression, chromatin, and metabolism induced by continuous microenvironmental stimulation. Generally, cellular senescence serves as a complement to programmed cell death and helps maintain tissue homeostasis. However, the effects of senescence on cancer cells are complex. Despite their protective role in certain contexts, senescent cells may promote tumorigenesis, development, and relapse . The senescence-associated secretory phenotype (SASP), characterized by the secretion of a series of proinflammatory chemokines and cytokines, can mediate the function of neighboring cells, such as immune cells, stromal cells, and adjacent non-tumor epithelial cells in the surrounding tumor microenvironment (TME) . Recent evidence suggests a role for cellular senescence in tumor immune escape. Pereira et al. revealed that senescent cells can evade immune clearance by secreting SASP factors, such as IL-6, to upregulate HLA-E, which suppresses clearance by natural killer (NK) cells and T cells in premalignant lesions . During the development of hepatocellular carcinoma (HCC), the chemokine CCL2, another SASP factor, recruits immature suppressive myeloid cells that inhibit NK cell function, promoting the progression of HCC . Cellular senescence can also induce drug resistance during cancer treatment. In addition, the SASP-related factor amphiregulin contributes to chemoresistance by upregulating programmed cell death ligand 1 (PD-L1) expression in recipient cancer cells and creating an immunosuppressive TME . In summary, cell senescence can cause therapeutic resistance and lead to poor survival outcomes in cancer patients. Given the importance of cell senescence in tumors, many studies have investigated the expression of senescence-associated genes in cancer and have constructed survival prediction models. However, little is known regarding the prognostic role of senescence and its immune-mediated functions in CRC. Bulk transcriptomics allows scientists to comprehensively understand tumor features. Thus, several analyses regarding cell senescence have been performed based on bulk transcriptome data, and attempts have been made to decompose the bulk data into lineage-specific constituents using deconvolution algorithms. However, single-cell RNA sequencing (scRNA-seq) enables the accurate identification of different cell types and recognizes their distinct characteristics in various biological states and conditions . In the field of cell senescence, scRNA-seq has been used to understand aging of the nervous, hematopoietic, and immune systems. Thus, combined bulk transcriptome and scRNA-seq analyses provide unique insights into the SASP features of CRC and help identify potential therapeutic markers.
In this study, we constructed a senescence-related prognostic model for CRC patients based on the SenMayo gene list . We discovered that high-risk patients not only had poor prognosis and strong senescent features but also presented an immunosuppressive TME and resistance to immunotherapy. ScRNA-seq analysis revealed that one of the model genes, secreted phosphoprotein 1 (SPP1), was highly expressed in a subset of macrophages. This subset secretes relatively high levels of SASP factors and may contribute to the senescence of tumor cells. Data collection of bulk transcriptome and senescence gene sets For the construction cohort, clinical features, RNA-seq expression data, and somatic mutation data were downloaded from the Cancer Genome Atlas-Colon Adenocarcinoma (TCGA-COAD) database ( https://cancergenome.nih.gov/ ). For validation, clinical features and RNA-seq expression data of GSE17536, GSE17537, and GSE38832 were obtained from the Gene Expression Omnibus (GEO) database ( https://www.ncbi.nlm.nih.gov/geo/ ). GSE17536 and GSE17537 were merged into a single cohort because they were derived from the same study. RNA-seq expression data of patients from the GSE213331 cohort and their pathological responses to neoadjuvant ICI were collected to validate the model’s ability to predict immunotherapy response. The SenMayo gene list was downloaded from the supplementary material of the study by Saul . Patients with unrecorded expression of genes in the SenMayo gene set were excluded. Construction and validation of the prognostic senescence-related gene model Univariate Cox regression analysis was performed to identify genes predictive of disease-specific survival (DSS). DSS was defined as the interval from diagnosis to CRC-associated death. Senescence-related genes with |hazard ratio (HR)| > 1.0 and p-value < 0.05 were included in the model construction. To minimize the risk of overfitting, the least absolute shrinkage and selection operator (LASSO) algorithm was applied with tenfold cross-validation and run for 1,000 cycles with 1,000 random simulations. Risk scores were calculated using the R package ‘glmnet.’ The senescence risk score for each patient was calculated as follows: risk score = Σ (regression coefficient of gene i × expression level of gene i), summed over the n genes in the model. The 1-, 2-, and 3-year ROC curves of the risk model were constructed to evaluate its prognostic performance. Patients were stratified into low- and high-risk subgroups using the median score as the cutoff value. Kaplan–Meier survival curves were plotted to compare the DSS between the two groups in the construction and validation cohorts. Functional enrichment analysis We used the R package ‘limma’ to identify differentially expressed gene (DEG) sets between the high- and low-risk groups. The thresholds were set at |log2FC| > 1.0, along with a p-value < 0.05. The R package ‘clusterProfiler’ was used to explore the biological attributes of the DEGs. Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway analysis, Gene Ontology (GO) pathway enrichment analysis, and Gene Set Enrichment Analysis (GSEA) were conducted. Heatmaps constructed with the R package ‘ggplot2’ were used for result visualization. Mutation analysis The mutation annotation format (MAF) file downloaded from the TCGA database was processed with the ‘maftools’ package. Mutations in SenMayo genes were compared between the high- and low-risk subgroups.
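For illustration, a minimal R sketch of the model-construction steps described above (univariate Cox screening followed by LASSO Cox regression) might look as follows; it is a simplified version (a single cross-validation run rather than 1,000 cycles), and the object names (expr, surv_df) are hypothetical.

library(survival)
library(glmnet)

# expr: SenMayo genes (rows) x patients (columns); surv_df: DSS time and event
uni_p <- apply(expr, 1, function(g) {
  fit <- coxph(Surv(dss_time, dss_event) ~ g, data = surv_df)
  summary(fit)$coefficients[, "Pr(>|z|)"]
})
candidates <- rownames(expr)[uni_p < 0.05]

# LASSO Cox with tenfold cross-validation to limit overfitting
x <- t(expr[candidates, ])
y <- Surv(surv_df$dss_time, surv_df$dss_event)
cvfit <- cv.glmnet(x, y, family = "cox", alpha = 1, nfolds = 10)
coefs <- coef(cvfit, s = "lambda.min")
selected <- rownames(coefs)[as.vector(coefs) != 0]

# Risk score: sum of coefficient x expression over the selected genes
risk_score <- as.vector(x[, selected, drop = FALSE] %*% coefs[selected, ])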
Exploration of immune-related signatures Upregulated or downregulated immune-related pathways in the high-risk groups were analyzed using the GSEA software. We used CIBERSORT ( https://cibersort.stanford.edu ) to analyze the relative levels of 22 tumor-infiltrating immune cell types in high- and low-risk patients. The relationship between gene expression levels and immune cell infiltration was evaluated using the TIMER2.0 database ( http://timer.comp-genomics.org/ ). The Immuno-Oncology Biological Research (IOBR) R package was used to assess the immune features and immune cell infiltration in the high- and low-risk groups. Single-cell sequencing data collection and processing The raw unique molecular identifier (UMI) count matrix of the single-cell dataset GSE132465 was downloaded . For quality control, the raw gene expression matrix was filtered and normalized using the Seurat R package, and cells were selected according to the following criteria: > 1,000 UMI counts, > 200 and < 6,000 detected genes, and < 20% mitochondrial gene expression in UMI counts. Gene expression matrices from filtered cells were normalized and scaled. The uniform manifold approximation and projection (UMAP) method was used to lower the dimensions of the data, and t-distributed stochastic neighbor embedding (t-SNE) projection was applied to cluster and visualize the results. The cells were annotated using canonical cell surface markers. Differentially expressed genes (DEGs) were detected using the FindMarkers function. Gene expression levels across various cell subtypes were determined using the DoHeatmap function. Immunofluorescence staining For immunofluorescence (IF), surgical specimens of benign colon tissues (colonic diverticula), low-grade colon tumors without venous or nervous system invasion, and high-grade colon tumors with venous or nervous system invasion were collected. All procedures involving human tissue experiments were approved by the Ethics Committee of Shanghai Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China. IF staining was performed on formalin-fixed, paraffin-embedded (FFPE) tissues, which were cut into 5-µm-thick sections for each panel test. The slides were dewaxed, rehydrated, and subjected to epitope retrieval by boiling in citrate antigen retrieval solution (pH = 6; Servicebio #G1206) for 3 min. After cooling, the slides were washed three times in phosphate-buffered saline (PBS) for 5 min. Proteins were blocked with bovine serum albumin (BSA) for 30 min. One antigen was labelled in each round, which comprised the BSA block, primary and secondary antibody incubation, and antigen retrieval. These steps were repeated until all three biomarkers, CD68 (Servicebio #GB113150, 1:3000), p21 (Servicebio #GB11153, 1:4000), and SPP1 (abcam # ABS135915, 1:200), had been stained. The secondary antibodies were horseradish peroxidase (HRP)-labeled goat anti-rabbit antibody (Servicebio #GB23303, 1:500) for CD68 and p21, and Cy3-labeled goat anti-rabbit antibody (Servicebio #GB21303, 1:500) for SPP1. An AutoFluo quencher was applied, and the nuclei were stained with 4′,6-diamidino-2-phenylindole (DAPI, Servicebio #GB1012) before the slides were mounted using an antifade mounting medium (Servicebio #GB1401). Images were captured using an inverted fluorescence microscope. Statistical analysis Statistical analyses were conducted using R software (version 4.2.2, https://www.r-project.org/ ) and its appropriate packages.
Kaplan–Meier analysis was used to assess and compare survival between the different subgroups. Log-rank two-tailed p < 0.05 was considered statistically significant.
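As a sketch of the risk stratification and Kaplan–Meier comparison described above, the following R code computes patient-level scores (using the seven model coefficients reported below in the Results) and compares DSS between groups; expr_df is a hypothetical data frame with one column per model gene plus DSS follow-up.

library(survival)
library(survminer)

coefs <- c(PTGER2 = -0.744, FGF2 = 0.295, IGFBP3 = 0.155, ANGPTL4 = 0.520,
           DKK1 = 0.106, WNT16 = 0.337, SPP1 = 0.012)
expr_df$risk_score <- as.vector(as.matrix(expr_df[names(coefs)]) %*% coefs)
expr_df$risk_group <- ifelse(expr_df$risk_score > median(expr_df$risk_score),
                             "high", "low")

# Kaplan-Meier curves for DSS by risk group, with log-rank p value
fit <- survfit(Surv(dss_time, dss_event) ~ risk_group, data = expr_df)
ggsurvplot(fit, data = expr_df, pval = TRUE)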
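Similarly, the single-cell quality-control and clustering workflow described above might look as follows in Seurat; the thresholds mirror the stated criteria, while the input matrix name, number of principal components, and clustering resolution are assumptions for this sketch.

library(Seurat)

sc <- CreateSeuratObject(counts = umi_counts)   # umi_counts: hypothetical raw UMI matrix
sc[["percent.mt"]] <- PercentageFeatureSet(sc, pattern = "^MT-")
sc <- subset(sc, subset = nCount_RNA > 1000 &
                  nFeature_RNA > 200 & nFeature_RNA < 6000 &
                  percent.mt < 20)
sc <- NormalizeData(sc)
sc <- FindVariableFeatures(sc)
sc <- ScaleData(sc)
sc <- RunPCA(sc)
sc <- FindNeighbors(sc, dims = 1:30)            # dims chosen for illustration
sc <- FindClusters(sc, resolution = 0.5)
sc <- RunUMAP(sc, dims = 1:30)                  # or RunTSNE(sc, dims = 1:30) for t-SNE
markers <- FindMarkers(sc, ident.1 = 0)         # DEGs for one cluster vs the rest
DoHeatmap(sc, features = rownames(head(markers, 20)))  # expression across subtypes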
Construction of senescence-related model The whole workflow is summarized in . Among the 125 SenMayo genes, 18 were prognostic according to univariate Cox regression analysis. LASSO Cox regression analysis was conducted to build a multigene prognostic model with the least chance of overfitting, based on these 18 genes . Seven genes were finally selected for the model according to the optimal value of λ, and the multivariate hazard ratios are shown in .
The genes included prostaglandin E receptor 2 (PTGER2), fibroblast growth factor 2 (FGF2), insulin-like growth factor binding protein 3 (IGFBP3), angiopoietin-like 4 (ANGPTL4), dickkopf WNT signaling pathway inhibitor 1 (DKK1), wingless-type MMTV integration site family member 16 (WNT16), and secreted phosphoprotein 1 (SPP1). The risk score was calculated using the following formula: senescence risk score = (-0.744 × the expression level of PTGER2) + (0.295 × the expression level of FGF2) + (0.155 × the expression level of IGFBP3) + (0.520 × the expression level of ANGPTL4) + (0.106 × the expression level of DKK1) + (0.337 × the expression level of WNT16) + (0.012 × the expression level of SPP1). The correlation matrix revealed that the expression of each model gene was independent of the other genes. Validation of the model in the training and validation cohorts Next, we evaluated the prognostic value of our model in the TCGA and validation cohorts (GSE17536/7 and GSE38832). In each cohort, we categorized patients into low- and high-risk groups using the median value as the threshold. Survival analysis was performed for low- and high-risk patients. In the training cohort, the 5-year DSS rates were 84.7% (95% confidence interval (CI): 75.6–94.9%) for low-risk patients and 71.7% (95% CI: 59.0–87.1%) for high-risk patients (p = 0.016, ). In the two validation cohorts, high-risk patients had worse survival outcomes than low-risk patients (p < 0.001 in GSE17536/7, ; p = 0.027 in GSE38832, ). The AUCs at 1, 2, and 3 years were 0.731, 0.651, and 0.643, respectively, in the TCGA cohort ; 0.658, 0.669, and 0.669, respectively, in the GSE17536/7 cohort ; and 0.666, 0.693, and 0.670, respectively, in the GSE38832 cohort . Taken together, our senescence-related 7-gene risk model accurately distinguished high-risk patients from low-risk patients, and its prognostic ability was stable. Clinicopathological features of senescent high- and low-risk patients In addition to prognosis, we investigated the basic clinicopathological features of the high- and low-risk groups . High-risk patients were more likely to have tumors with venous invasion (p = 0.001) and higher American Joint Committee on Cancer (AJCC) T (p = 0.006) and N stages (p = 0.001). The proportion of metastases was also higher in the high-risk patients (p = 0.012). In summary, the clinicopathological features were more aggressive in the senescent high-risk patients than in the low-risk patients. The landscape of DEGs and mutations in high-risk and low-risk subgroups To explore the genomic characteristics of low- and high-risk patients, we identified DEGs between the two subgroups using a fold change cutoff value of 1.5 and a p-value < 0.05 . Compared with the low-risk group, there were 375 upregulated and 4 downregulated genes in the high-risk group. The expression of the top 50 upregulated genes in high-risk patients is shown in a heatmap . We then examined the mutation status of the SenMayo genes in the two groups and listed the top 10 differentially mutated genes. More mutated senescence-related genes, including SLIT-ROBO Rho GTPase activating protein 3 (SRGAP3), vacuolar protein sorting 13 homolog B (VPS13B), titin (TTN), nuclear GTPase, germinal center associated (NUGGC), integrin subunit beta 4 (ITGB4), nestin (NES), nuclear receptor corepressor 1 (NCOR1), and polycystic kidney disease protein 1-like 1 (PKD1L1), were found in the high-risk group . GO pathway analysis revealed enrichment mainly in extracellular matrix (ECM)- and structure-related pathways .
The top KEGG enrichment pathway was the phagosome pathway, which participates in the elimination of senescent cells. Others included ECM-receptor interactions, focal adhesions, and protein digestion and absorption . We calculated the SenMayo signature scores of the two subgroups and confirmed that the scores were significantly higher in the high-risk patients , indicating that this group was burdened with strong SASP features. Comparison of TME features in the high-risk and low-risk groups We then compared the TME features between the high- and low-risk subgroups. Using the CIBERSORT algorithm, we analyzed the abundance of 22 different immune cell types in the two groups . The proportions of plasma cells, CD4 + memory T cells, monocytes, and dendritic cells were markedly lower in the high-risk subgroup than in the low-risk subgroup . In contrast, the infiltration levels of macrophages (including the M1 and M2 subtypes), mast cells, and neutrophils were significantly higher in high-risk patients. Further analysis of TME signatures using the IOBR package revealed that the TME of high-risk patients was immunosuppressive, exclusive, and exhausted . Moreover, low-risk patients may be more sensitive to immunotherapy according to their higher mismatch repair (MMR) and homologous recombination scores . High-risk patients exhibited stronger epithelial-mesenchymal transition (EMT) signatures . Collectively, these results indicate an immunosuppressive TME in high-risk patients. Therefore, we examined the association between the senescence risk score and the efficacy of immunotherapy in a rectal cancer cohort. Patients without a pathological complete response (non-pCR) had significantly higher senescence risk scores than pCR patients (p = 0.024; ). We also compared the expression of other immuno-oncology (IO) biomarkers in the two subgroups. While no difference in tumor mutation burden (TMB) was found, we observed significantly enhanced PD-L1 expression in the high-risk group. Identification of SPP1 + macrophages as a key component in cell senescence based on scRNA transcriptomic analysis Based on the close relationship between the immune microenvironment and cellular senescence, we focused on the expression of our model genes in immune cells using scRNA transcriptomic analysis. After scaling and normalizing the expression matrix, we used CD45 as a marker to broadly categorize the cells into immune and non-immune cell populations . Immune cells were selected and dimensionality reduction was performed. Using specific canonical markers defined in the literature, cells were divided into the following clusters : B cells (‘MZB1’), CD4-positive cells (‘CD4,’ ‘IL2RA,’ ‘CXCR3,’ ‘CCR4’), CD8-positive cells (‘CD8A,’ ‘CD8B’), regulatory T cells (‘IL2RA’), and myeloid cells (‘LYZ,’ ‘MARCO,’ ‘CD68,’ ‘FCGR3A’). The expression levels of the model genes were examined in the five populations . Strong expression of SPP1 was observed, particularly in myeloid cells . Given the significantly enhanced infiltration of macrophages in high-risk tumors, we selected the myeloid cell clusters for further analysis. The myeloid cell population was divided into dendritic cells (DCs) (‘BIRC3,’ ‘HLA-DPB1’), macrophages (‘CD163,’ ‘CD68,’ ‘CD14’), and monocytes (‘IL1RN’) as shown in . After clustering the macrophages into three subgroups by dimensionality reduction, we found that SPP1 was highly expressed, particularly in cluster 0 . We compared the expression of the SenMayo genes between clusters 0 and 1.
In addition to SPP1, nine other senescence-related genes were upregulated in cluster 0 macrophages, indicating the SASP features of this sub-cluster . We performed immunofluorescence assays to assess the association between SPP1 + macrophages and tumor senescence. Normal colon tissues, low-grade colon tumor tissues, and high-grade colon tumor tissues were stained. A larger number of SPP1 (red)-positive macrophages (green) and surrounding senescent tumor cells (p21 positive, pink) were observed in high-grade tumors than in low-grade tumors. In normal tissues, the proportions of SPP1 + macrophages and p21 + tumor cells were low .
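A compact Seurat sketch of the macrophage sub-analysis described above may help make the cluster 0 versus cluster 1 comparison concrete; the object and gene-list names are hypothetical, and the clustering resolution is an assumption chosen to yield a small number of subclusters.

# Subset annotated macrophages from the immune-cell object and recluster
mac <- subset(sc_immune, idents = "Macrophage")
mac <- FindVariableFeatures(mac)
mac <- ScaleData(mac)
mac <- RunPCA(mac)
mac <- FindNeighbors(mac, dims = 1:20)
mac <- FindClusters(mac, resolution = 0.3)

# SenMayo genes differentially expressed in cluster 0 (SPP1-high) vs cluster 1
deg <- FindMarkers(mac, ident.1 = "0", ident.2 = "1",
                   features = senmayo_genes, logfc.threshold = 0.25)
head(deg[order(deg$p_val_adj), ])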
CRC is one of the most prevalent malignancies with high mortality rates worldwide. In this study, we constructed a senescence prognostic model based on the SenMayo gene panel using public bulk transcriptome data and discovered a relationship between SASP and the immunosuppressive microenvironment. We investigated the expression of senescence prognostic genes in scRNA-identified immune cell populations and identified SPP1 + macrophages as an important TME component that leads to tumor senescence. Cellular senescence is elicited by various intrinsic and extrinsic stresses, including replicative exhaustion and cancer therapies such as chemotherapy and radiation. SenMayo is a novel gene set designed by Saul et al. to identify cells expressing high levels of SASP genes and to evaluate the clinical senescence burden. Based on SenMayo, we constructed a prognostic model that could distinguish CRC patients with strong SASP features and poor survival outcomes. The model consisted of the following seven genes: PTGER2, FGF2, IGFBP3, ANGPTL4, DKK1, WNT16, and SPP1. Among these risk factors, FGF2 (also known as bFGF, basic fibroblast growth factor) is a well-known survival factor, and a higher level of FGF2 is secreted by senescent cells than by pre-senescent cells. It has also been reported that FGF2 can shift macrophages towards an M2-like phenotype and alter tumor immunity, and it can therefore be a therapeutic target in cancer treatment . IGFBP3 is known for its pleiotropic ability to regulate cell proliferation, apoptosis, and differentiation. It has recently been shown that IGFBP3 is an upregulated secretory factor of senescent cells and is associated with SASP . ANGPTL4 encodes a secreted glycoprotein that promotes angiogenesis and inhibits ferroptosis . ANGPTL4 participates in tumorigenesis and therapeutic resistance through autocrine and paracrine activity . DKK1 is a WNT signaling pathway inhibitor that can trigger the early onset of cellular senescence . WNT16 is a secreted signaling protein that is overexpressed during stress- and oncogene-induced senescence . A previous study has reported that paracrine WNT16B attenuates the effects of cytotoxic therapy .
SPP1 is a secreted cytokine closely associated with tumorigenesis, invasion, and metastasis. SPP1 can upregulate the expression of interferon (IFN)-γ and interleukin (IL)-12 and modulate the function of various TME components. Importantly, previous studies have reported that a special subtype of tumor-associated macrophages (TAMs) with strong SPP1 expression presents immunosuppressive features and is positively correlated with EMT markers . As cell senescence can modulate the immune environment, we further investigated the immune features of senescent high-risk patients. Immune-related gene signature sets indicated an immunosuppressive phenotype in senescent high-risk tumors. According to the results of CIBERSORT, this population exhibited distinctly high macrophage infiltration. Thus, we hypothesized that macrophages contribute to tumor cell senescence and SASP features in high-risk patients. To evaluate the expression of senescence-related genes in immune cells, we identified immune cell populations using scRNA-seq data and further divided them into various subtypes. We found particularly high expression of SPP1, one of our model genes, in myeloid cells. Based on previous evidence and our CIBERSORT results, we next focused on the expression of SPP1 in macrophages. It has been discovered that there are two distinct subsets of TAMs in CRC, the SPP1 + subset and the C1QC + subset. While C1QC + TAMs preferentially express phagocytosis- and antigen presentation-related genes, SPP1 + TAMs have a proangiogenic signature and are more likely to engage in crosstalk with cancer-associated fibroblasts (CAFs) and endothelial cells . Patients with strong SPP1 + TAM infiltration show resistance to immunotherapy and poor prognosis. Our study is the first to report that SPP1 + TAMs exhibit stronger SASP features than C1QC + TAMs. This subpopulation of TAMs highly expresses SASP factors such as CCL20, CXCL1, MMP12, CXCL10, IL6, and CCL5. Using human benign colon tissues and colon tumor tissues, we found that SPP1 + macrophages were particularly enriched in high-grade tumors. We observed a large number of senescent tumor cells around the SPP1 + macrophages, whereas there were fewer SPP1 + macrophages and senescent cells in low-grade tumors and benign colon tissues. This result further indicates the role of SPP1 + macrophages in the development of SASP features in CRC. Therefore, targeting SPP1 + macrophages may alter the senescent state of tumor cells and reverse immunotherapeutic resistance. Our study has several limitations. First, our model was based only on gene expression in CRC patient samples, and the incorporation of clinical factors might improve the performance of the risk score. Second, the model’s ability to predict immunotherapy response needs further validation in larger cohorts. Third, the intrinsic association between macrophages and the senescent tumor environment revealed by our model should be further investigated in vivo and in vitro . Despite these limitations, our study provides novel insights into senescence-immune interactions in CRC and an effective prognostic model to guide ICI treatment. Our study presents a novel model based on senescence-related genes that can identify CRC patients with a poor prognosis and an immunosuppressive TME. SPP1 + macrophages may correlate with cell senescence, leading to a poor prognosis. The datasets presented in this study can be found in online repositories.
The names of the repository/repositories and accession number(s) can be found within the article/supplementary material. The studies involving human participants were reviewed and approved by the Ethics Committee of Shanghai Ruijin Hospital, Shanghai Jiao Tong University School of Medicine. The patients/participants provided their written informed consent to participate in this study. EM and SS contributed to conceptualization, supervision, and writing – review and editing. SY performed investigation, data curation, and writing – original draft. MC performed data curation and verification. LX worked on investigation and software. All authors contributed to the article and approved the submitted version.
The impact of artificial intelligence on the person-centred, doctor-patient relationship: some problems and solutions
1a5148df-7a66-4e42-abe3-7e1d0feb10bf
10116477
Patient-Centered Care[mh]
In the Western world, the demand for healthcare professionals is increasing and the population is ageing . As a result, the workload is high, and healthcare systems in developed countries suffer from ever-increasing cost pressures and backlogs . The British Medical Association (BMA) warned that COVID-19 further disrupted care pathways in the United Kingdom and that it will take the NHS years to clear backlogs . Against this background, new technologies that can contribute to improved efficiency and, ultimately, improved care are welcome. Artificial intelligence (AI) tools are increasingly being developed and deployed in the healthcare sector. Technologies that perform as well as, or better than, humans already exist . High hopes are placed on AI technology to improve all aspects of healthcare, including saving time [ – ]. It is hoped that the time that AI can save can be used to improve doctor-patient relationships and make them more person-centred . Person-centred care is about ensuring that "people's preferences, needs and values guide clinical decisions" . Person-centred care is considered to be the gold standard for doctor-patient relationships . Besides improving satisfaction, decreasing malpractice, and improving employee retention rates, such an approach is said to improve health outcomes . Bauchat et al. argue that empathy forms a cornerstone of person-centred care . They argue that this value is necessary for forming the partnerships and the effective communication that are instrumental to person-centred care . Furthermore, reaching a consensus via shared decision-making without understanding the world from someone else’s standpoint (i.e. empathising with the other) is a difficult task. For these reasons, empathy is foundational to person-centred care and critical for its practice. Empathy is also important insofar as it triggers compassion, which can be characterised as “feelings of warmth, concern and care for the other, as well as a strong motivation to improve the other’s wellbeing” (p.875) . As argued by Jeffrey, empathy is a skilled response, whereas compassion is a reactive response . Therefore, empathy is helpful insofar as it acts as a precursor for compassion, which allows doctors to act in their patients’ best interest. Yet, nowadays, time is often insufficient for doctors to develop the type of empathetic and compassionate relationship with their patients that is necessary for person-centred care [ – ]. AI is commonly cited as a potential solution to the problems faced by healthcare today, including addressing the aforementioned dissatisfaction surrounding the nature of the doctor-patient relationship. AI, it is argued, has the potential to “give the gift of time” and could, therefore, allow the doctor and the patient to enter more meaningful discussions with respect to care. For example, tools that enable doctors to outsource certain tasks allow the doctor to spend this saved time on something else, and the hope is that this “will bring a new emphasis on the nurturing of the precious inter-human bond, based on trust, clinical presence, empathy and communication” (p.6). The idea that the advent of AI in healthcare may help resolve longstanding issues inhibiting the practice of patient-centred care, such as the lack of time, is appealing in theory. However, the impact of the large-scale deployment of AI on the doctor-patient relationship is unclear and difficult to predict .
Sceptics have argued that AI may further dehumanise the practice of medicine . AI tools that lack value plurality may encourage a way back to paternalism, only this time imposed by the AI rather than the human practitioner . For example, IBM Watson’s role is to rank treatment options based on outcome statistics presented in terms of ‘disease-free survival’ and to show a synthesis of the published evidence relevant to the clinical situation . However, McDougall argues that this ranking should be driven by individual patient preferences . Others have raised the possibility that the quest for economic efficiency in healthcare will dictate that time saved by the use of AI will be used to push more patients through the system as opposed to enhancing person-centred care . This paper is guided by the following question: how can AI impact the empathetic and compassionate doctor-patient relationship? We identify and critically discuss the main topics in the literature relating to key values relevant to person-centred doctor-patient relationships, with a particular focus on empathetic and compassionate care. Finally, we identify and discuss concrete ways forward proposed in the literature that could support the beneficial deployment of AI in healthcare, and the doctor-patient relationship in particular. Search strategy and selection criteria We conducted a review of the literature to identify arguments for both the positive and negative impacts AI might have on the doctor-patient relationship. Our initial search was conducted using methods commonly associated with systematic reviews in order to ensure comprehensive coverage of the literature and identify the main topics discussed relating to the impact of AI on the doctor-patient relationship. Searches were conducted between 1 and 30 April 2021. We included broad search terms to include as many relevant papers as possible (“artificial intelligence”, “machine learning”, “doctor-patient relationship”, “physician–patient relationship”, “therapeutic alliance”, etc.), but included “empathy” and “compassion” as more specific search terms in order to reflect the aim of the research question. We searched five different databases (PubMed, SCOPUS, Web of Science, PhilPapers, Google Scholar). Search results included papers published from database inception to the date of the search. We found 4848 papers. After deleting duplicates, 997 papers remained. Iterative sessions took place between AS, NH, AK, and FL in order to screen the titles and abstracts and identify the relevant papers. After this initial screening, 146 papers were identified as potentially relevant. The next step was full-paper screening, following which 45 papers were retained. We used an iterative process to synthesise and interpret the data, during fortnightly sessions between all authors. Throughout this process, papers were selected based on the selection criteria agreed between AS, NH, AK, and FL. We selected papers written in English that engaged actively with the question of the impact of AI on doctor-patient relationships, and excluded papers that only briefly addressed the question. We deliberately kept the selection criteria broad in order to identify the values emerging from the literature. This enabled us to identify the main issues covered by the literature and identify concrete ways forward to ensure that the use of AI tools benefits the doctor-patient relationship (see Table , Fig. , and Table for the full search strategy).
The aim of this paper was to be “evidence-informed” rather than “evidence-based”, meaning that evidence is understood as “contextually bound but also individually interpreted and particularised within that context” . Therefore, we took a critical approach to reviewing the literature . We chose this approach based on the premise that the question we are asking requires “clarification and insight” as opposed to “data”, in which case a systematic review would have been more appropriate . To this end, as explained above, the approach we adopted was an interpretive and discursive synthesis of existing literature based upon purposive selection of the evidence . How are decisions made in doctor-patient relationships?
Patient involvement in decision-making is a central aspect of person-centred care . Increasing the patient’s autonomy by encouraging their involvement in decision-making processes is a powerful pushback against the outdated paternalistic model of care . Elwyn et al. argue that shared decision-making rests on the acceptance that individual self-determination is a good and, therefore, a desirable goal . Thus, supporting patient autonomy is important within this framework . Some AI tools may have the potential to increase patient autonomy, and thereby support the practice of shared decision-making . Zaliauskaite discusses patient autonomy within the context of technological advances and argues that an effective way to ensure patients’ autonomy is the implementation of legal instruments such as informed consent, advance directives, and Ulysses contracts (a contract to bind oneself in the future) . She suggests that technologies such as mobile apps that are used by patients for self-monitoring (collecting any form of health data) may increase autonomy and, in the best-case scenario, shift the doctor-patient relationship towards a customer-service type format, where both sides have a balanced distribution of rights and responsibilities, and thereby an equal input/share in the decision-making process . However, it is questionable whether a balanced distribution of rights and responsibilities is feasible in a doctor-patient relationship, which is commonly characterised by the vulnerability of the patient towards the doctor and by epistemic imbalances. Additionally, there seems to be a risk that such a relationship becomes purely transactional and subject to market pressures. In contrast, De Lara et al. present bioethical perspectives within the context of big data and data processing in rheumatology and argue that relationships must preserve fiduciary duties, which implies a power imbalance. According to them, this is necessary in order to protect the promise of an ethical relationship of trust between doctors and patients . A more fundamental problem arises when considering the type of patient autonomy an AI tool can support within a framework of shared decision-making. It is unclear how an algorithm could take the preferences of different people (e.g. regarding treatment goals) into account . This could give rise to a new form of paternalism in which the AI makes decisions on behalf of patients and doctors. The difference from the old form of paternalism is that, this time, the paternalistic relationship would be vis-à-vis the AI, not the doctor. In other words, “doctor knows best – but the computer knows more and makes fewer mistakes” . This new form of paternalism would be fundamentally at odds with the principle of shared decision-making. Jotterand et al. (2020) as well as Rainey and Erden (2020) share similar concerns, explaining that, in the context of neurotechnology in psychiatry, AI tools are potentially dangerously reductive. This is because they are unable to comprehend social, psychological, biological, and spiritual dimensions. Therefore, they, too, argue that AI tools should be designed to allow for value plurality . McDougall uses IBM Watson as an example to argue that AI machines should be designed and built in a way that allows for value plurality, namely the ability to take into account different patients’ preferences and priorities.
Black box AI tools are arguably particularly threatening to shared decision-making, as the absence of explainability might hurt patient autonomy by preventing the patient from making informed decisions. Doctors may have more time to spend talking to patients, but if they are unable to provide the necessary explanations about treatment decisions, prognoses, and/or diagnoses suggested by the AI, the benefits of the extra time may be limited. In summary, the emerging literature is divided on whether AI will enhance the doctor-patient relationship by encouraging shared decision-making through increased patient autonomy, or create a new form of paternalism by hindering value plurality. The next section focuses on the impact of AI on another important aspect of person-centred care: the practice of empathetic care and how it relates to efficiency.

The tension between empathetic and efficient doctor-patient relationships

Bauchat et al. argue that empathy forms the cornerstone of person-centred care, and multiple studies support this claim. Empathy can be described as “…the ability to understand a person's standpoint, their experience of illness and, through this cognitive resonance, feel motivated to help them…” (p.1). Empathy facilitates doctors' understanding of the disease from the standpoint of the patient, as well as of individual patients' values and goals. However, doctors and patients must be able to enter meaningful discussions in order for doctors to appreciate and comprehend the patient's standpoint. The practice of empathy therefore requires time.

The medical literature is rich in accounts promoting AI as a great time saver, creating space for more meaningful and empathetic relationships to be developed with patients. There is already some evidence to suggest that AI can save doctors' time. Printz explains that the AI tool Watson for Oncology needs 40 seconds to capture and analyse data and then generate treatment recommendations based on the available data; in comparison, manually collecting and analysing the data takes 20 minutes on average, decreasing to 12 minutes as oncologists become more familiar with cases. It is unclear, however, whether this saved time will be used to enhance the doctor-patient relationship. In his book “Deep Medicine”, Topol argues that AI tools have the potential to help doctors with a wide array of tasks and could therefore free up time that could be used to build a positive relationship with the patient. Aminololama-Shakeri and Lopez argue that AI is the next step towards a more patient-centred system of care in breast imaging. They observe that because radiologists will have more time to spend with their patients, they will be able to prioritise the relational aspects of their work. This newfound time, they argue, will also enable radiologists to focus on treatment in addition to diagnosis.
They explain that this could be achieved by creating a form of hybrid training that would incorporate imaging into medical and surgical oncology training, as has already been suggested for cardiovascular surgeons. This account seems somewhat paradoxical: if time saved using AI tools results in radiologists taking on other tasks such as treatment, it is unclear how this, in itself, improves the empathetic doctor-patient relationship. Sparrow and Hatherley, in contrast, suggest that the economics of healthcare, especially in for-profit environments but also in the public sector, will dictate that more patients pass through the system and that more tasks be taken on by individuals. They argue that there is no reason to believe that the time saved by the use of AI will result in more empathetic doctor-patient relationships; rather, it will allow higher patient throughput. Topol is certainly not oblivious to market forces and has suggested that doctors must organise a movement demanding that time saved is not used to squeeze more patients through the system. Sparrow and Hatherley have a pessimistic outlook on the ability of doctors to initiate such change, at least in the US context. Using several historical examples (such as universal basic healthcare), they argue that doctors have been unable to motivate such changes under any administration in the US. Whether time saved will be used to promote empathetic relationships or to increase the throughput of patients will largely depend on how much value healthcare systems place on empathy versus efficiency. This is to a large extent an empirical question, and more research is needed to determine it. Of course, achieving patient-centred empathetic care also depends upon patients being able to trust their doctors and their recommendations. The next section addresses the impact of AI tools on the doctor-patient trust relationship.

The role of explainability and its impact on the doctor-patient relationship

AI tools can be seen as a new, third actor in the two-way doctor-patient relationship. Just as the doctor-patient relationship is founded on trust, patients and doctors alike must be able to develop a trust relationship with the AI tool they are using. In order to warrant trust, one needs to demonstrate trustworthiness, for example by indicating one's reliability. In the case of AI, this might require features such as explainability, validity, and freedom from algorithmic bias, as well as clear pathways of accountability. AI tools do not always conform to these values. For example, AI tools are not necessarily built to be transparent. The continuous search for increased accuracy often compromises explainability; the best-performing AI tools are therefore not necessarily the most transparent. Triberti et al. argue that this lack of explainability could generate trust issues for users of an AI tool and lead to a phenomenon of “decision paralysis”. The issue of AI explainability raises a number of ethical questions, including whether it would be justifiable to dismiss the use of highly efficient AI on explainability grounds. Ho argues that uncritical deference to doctors over (unexplainable) AI tools that have outperformed humans may lead to preventable morbidity and is ethically irresponsible.
According to this view, the deployment of an AI tool might end up becoming compulsory as a matter of due diligence, and its use might effectively become an epistemic obligation. Others argue that explainable AI might give rise to a more productive doctor-patient relationship by increasing the transparency of decision-making. Mabillard et al. propose a framework of “reasoned transparency”, which entails elements such as abundant communication about AI tools and services and reassurance about data confidentiality. In a reasoned-transparency framework, explainable AI is seen as a powerful tool because its increased transparency can generate trust relationships between the AI, the doctor, and the patient: the doctor can give much more precise information and explain, for example, which specific parameter played a role in an AI tool's prediction (a toy illustration of this kind of parameter-level explanation is sketched below). Even in cases where an AI tool's output is not directly explainable, its probabilities are, and doctors may be able to justify diagnoses and explain procedures in a manner understandable to patients, even if the latter are unfamiliar with statistical jargon. Similarly, patients might be happy to develop a trust relationship with the AI tools they use as part of their self-management and retain a trust relationship with their doctor, on the grounds of the explanations of probabilities and statistics the doctor provides. This will depend on medical education changing accordingly, as discussed below.

Kerasidou suggests that in an AI-assisted healthcare system there might be a shift away from human-specific skills if patients and healthcare systems start to value the increased accuracy and efficacy of AI tools over relational values such as interpersonal trust. In this context, one could argue that AI tools do not necessarily need to be explainable (or transparent) to improve the doctor-patient relationship, especially if they systematically outperform human doctors. Patients and doctors alike might start perceiving trustworthiness as based on the level of certainty or accuracy offered by AI tools, as opposed to a high level of transparency. According to Banja, if our main interest is the accuracy of clinical decision-making, then “just like Watson on Jeopardy!, AI is going to win the machine-versus-human contest every time” (p.34). He further suggests that AI technologies are held to an unfairly high standard, as excessive attention is paid to their errors as opposed to human errors. De Lara et al. explain that medicine is already full of black boxes: for example, not all doctors and patients need to understand how electromagnetic radiation works when dealing with an MRI machine. Bjerring and Busch, however, argue that AI is a different type of black box. They explain that, currently, there is always a human in the loop who is able to explain how a technology works (for example, an engineer can explain how an MRI machine works), but this cannot be said of some AI systems.
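To make the idea of parameter-level explanation concrete, consider a linear risk score, one of the few model classes whose predictions decompose exactly into per-feature contributions. This is a generic sketch under assumed features and coefficients, not a description of any tool discussed above.

```python
import math

# Assumed coefficients of a toy logistic risk model (illustrative only).
COEFFICIENTS = {"age": 0.04, "systolic_bp": 0.02, "smoker": 0.8}
INTERCEPT = -6.0

def predict_with_explanation(features: dict[str, float]) -> tuple[float, dict[str, float]]:
    """Return the predicted risk and each feature's additive contribution to
    the underlying score, which is what a doctor could relay to a patient."""
    contributions = {name: COEFFICIENTS[name] * value for name, value in features.items()}
    score = INTERCEPT + sum(contributions.values())
    risk = 1.0 / (1.0 + math.exp(-score))  # logistic link maps score to probability
    return risk, contributions

risk, parts = predict_with_explanation({"age": 68, "systolic_bp": 150, "smoker": 1})
print(f"predicted risk: {risk:.2f}")
for name, contribution in sorted(parts.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {name}: {contribution:+.2f} on the score scale")
```

For deep models no such exact decomposition exists, which is precisely the black-box worry raised by Bjerring and Busch.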
Beyond issues relating to accuracy and efficiency, explainability is also linked to the problem of accountability. Carter et al., discussing AI-assisted breast cancer diagnostic tools, suggest that a lack of explainability is problematic if the doctor is expected to take responsibility, i.e., be accountable, for decisions involving AI systems. Furthermore, it is unclear to whom responsibility for AI-mediated decisions should be delegated, and how the interactions between AI tools and doctors will develop given this uncertainty. A shift in the attribution of responsibility from the doctor to other stakeholders (e.g., AI developers or vendors) may have a negative impact on the doctor-patient relationship as traditional systems of accountability become compromised. Generally, therefore, the argument is that, due to their lack of transparency and the difficulties they raise for systems of accountability, unexplainable black box AI tools could have a negative impact on the doctor-patient relationship. On the other hand, the use of highly efficient, albeit unexplainable, AI tools could be morally justified, and indeed encouraged, given the potential health benefits resulting from their accuracy. Further research is necessary to determine how different types of AI tools should be used in different clinical situations. So far, we have outlined the main debates in the literature regarding the likely impact of AI tools on the practice of person-centred doctor-patient relationships. The following sections present suggestions found in the literature that aim to ensure that the implementation of AI tools benefits the doctor-patient relationship.

Solutions

The literature suggests that (1) ensuring that AI systems retain an assistive role in clinical encounters and (2) adapting medical education so that future doctors are prepared for an AI-assisted work environment may improve doctor-patient relationships.

What is the role of AI tools in healthcare?

Many have observed that the impact of AI on person-centred care is likely to depend on the role it occupies in clinical contexts: assisting versus replacing human practitioners. The ideal role for AI in healthcare is currently unclear. Yun, Lee, et al. shed some light on current dynamics between AI machines and people. Using a combination of a behavioural and an MRI-based neural investigation, they found that, in general, participants demonstrated an intention to follow the advice of a human doctor rather than of an AI machine. In the behavioural experiment, they found that participants' self-reported willingness to follow AI recommendations increased if the AI was able to conduct personalised conversations, but participants were still more likely to state that they preferred human doctors' recommendations. In a second experiment using neuroimaging, they examined the neurocognitive mechanisms underlying responses to personalised conversation conducted by AI tools versus human doctors. They found inconsistencies with the first experiment: participants' brain responses showed apathy towards medical AI tools, even when these used personalised conversational styles. Human doctors, in contrast, elicited a pro-social response. This experiment suggests a future in which AI may be better accepted by patients if it acts as an assistant to human doctors rather than replacing them.
Furthermore, a review investigating patients' and the public's attitudes towards AI found that while AI was viewed positively overall, participants strongly preferred AI tools to be assistive, with only a minority believing that the technology should either fully replace the doctor or not be used at all. Several studies in the field of mental health support the view that AI can only have a positive impact on the doctor-patient relationship in an assistive role, by improving openness and communication and by avoiding potential complications in interpersonal relationships. For example, Szalai argues that AI-based addendum therapy for patients with borderline personality disorder can be beneficial, using algorithms capable of identifying the emotional tone of a narrative and fine-grained emotions. Patients may be more willing to disclose information to the AI than to the human doctor, even when they know that the human doctor can access the information. On the other hand, Luxton warns of the risk of AI tools replacing human doctors, arguing that the imperfection of the psychotherapist is an essential part of the healing process. He argues that patients must be warned about, and stakeholders must be mindful of, the ethical implications of using these types of AI tools in mental healthcare.

There is some evidence that clinicians also believe that assistive AI may have a positive role to play in doctor-patient relationships. An exploratory survey of general practitioners in the UK showed that they too believe in a restricted role for AI within general practice. Opinions varied widely as to how AI tools might be incorporated into practice, and the overwhelming majority of respondents were sceptical about the ability of AI tools to help with diagnoses, save time, and so on. Interestingly, however, the study shows that the views of GPs are often far removed from those of AI experts. The latter forecast that primary care will be radically transformed, as evidence suggests that mHealth tools enable patients to monitor key variables without the need for traditional check-ups. Mihai warns, though, that this may backfire, as patients might worry obsessively about continuous monitoring, which is likely to be counterproductive, presumably because it might unnecessarily increase the demand on healthcare services. To mitigate this, strategies could be put in place whereby readings are automatically sent to the doctor but are only visible to the patient if they so wish, and alerts are only sent in cases of emergency.
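Such a policy can be stated precisely. The sketch below is a minimal illustration of the routing rule that Mihai's concern motivates: the clinician always receives the reading, the patient sees it only on request, and the patient is alerted only above an emergency threshold. The threshold value, reading type, and function names are all assumptions.

```python
from dataclasses import dataclass

EMERGENCY_THRESHOLDS = {"systolic_bp": 180}  # assumed cut-off, illustrative only

@dataclass
class Reading:
    patient_id: str
    kind: str      # e.g. "systolic_bp"
    value: float

def route_reading(reading: Reading, patient_opted_in: bool) -> dict:
    """Route a self-monitoring reading: the clinician always receives it,
    the patient sees it only if they opted in, and the patient is alerted
    only when the value crosses an emergency threshold."""
    threshold = EMERGENCY_THRESHOLDS.get(reading.kind)
    emergency = threshold is not None and reading.value >= threshold
    return {
        "send_to_clinician": True,               # always
        "visible_to_patient": patient_opted_in,  # opt-in only
        "alert_patient": emergency,              # emergencies only
    }

print(route_reading(Reading("p1", "systolic_bp", 142), patient_opted_in=False))
print(route_reading(Reading("p1", "systolic_bp", 195), patient_opted_in=False))
```

The design choice is simply to decouple the data flow (continuous, to the clinician) from patient-facing feedback (sparse, threshold-gated), which is what is meant to prevent obsessive self-monitoring.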
These views suggest that AI tools can only have a positive impact on the doctor-patient relationship if they are used in an assistive manner, that is, in a way that preserves human-to-human empathetic relationships. Karches argues that AI should not replace the human doctor, particularly in caring for people with chronic and terminal illnesses, as human doctors are able to “offer wisdom and compassion from his or her own experience of being human” (p.108). Therefore, preferences for the use of AI may be influenced by illness type and the level of empathy required. In summary, if the public acceptability of AI tools is a concern, current evidence suggests that introducing them into healthcare in an assistive capacity is less likely to have a negative impact on the doctor-patient relationship.

Assistive tools, especially explainable ones, may even support empathetic and trust-based doctor-patient relationships by giving the doctor sufficient space to perform their role. They can also promote shared decision-making by allowing doctors and patients to take their own preferences into account. The use of AI tools in healthcare is likely to spread as patients and doctors adapt to them; indeed, Banja observes that humans are robust anthropomorphisers, and thus the acceptance of AI tools is very likely to increase with time.

What are medical professionals' educational needs in an AI-based system?

Whether AI impacts the doctor-patient relationship positively or negatively depends on the structural aspects of the healthcare system within which it is deployed. For example, in order for AI to help promote empathetic doctor-patient relationships, it needs to be deployed within a system that already supports empathy as a core healthcare value. This arguably starts with defining appropriate medical curricula. Tripti and Lalitbhushan suggest that it is important that doctors learn how to interact with AI systems and large data sets while at the same time providing humane and compassionate care. These relational skills will define doctors' role in future healthcare, given that AI systems are likely to take over some of the knowledge aspects of their job. In other words, they argue that AI tools are likely to cognitively surpass humans, making it necessary for human providers to adapt to working together with AI tools. Kolanska et al. go further, arguing that the doctor's role should evolve to become closer to that of an engineer, that is, with an understanding of big data and computer science. In the context of psychiatry, Kim et al. explain that most medical schools have lagged behind the shifts brought about by the increasing use of technology. Given that AI is likely to assist psychiatrists, Kim et al. argue that medical education ought to reflect this newly defined role for both doctors and AI in the provision of healthcare.

Another approach to preserving the doctor-patient relationship in the age of AI is to increase the focus on soft skills in the medical curriculum. Besides the importance of AI literacy, Wartman et al. suggest that empathy and compassion are skills that should be cultivated throughout the curriculum and actively kept at the centre of medical practice. Lagrew and Jenkins explain that, besides the importance of studying new technologies, the best doctors will be those who understand how it “feels” to be a patient. Chen suggests a related approach: she observes that technical knowledge and skills are no longer the exclusive domain of the medical profession, as knowledge is now easily accessible to the public and AI is developing diagnostic skills. Thus, she argues, other relevant competencies should be further developed, such as the ability to know when and how to apply knowledge in order to best help the patient in a compassionate manner. Alrassi et al. similarly underline the importance of selecting medical students with high empathy, communication skills, and emotional intelligence in order to ensure appropriate care in a future relying increasingly on emotionless AI tools. In summary, adapting medical education appropriately is seen as crucial to ensuring that empathetic care, trust relationships, and shared decision-making are preserved in AI-assisted healthcare systems.
It is argued that this can be achieved through an increased focus on data science in the curriculum whilst preserving a strong emphasis on relational skills.
Discussion

The literature shows that AI has the potential to disrupt person-centred doctor-patient relationships. AI tools could support the practice of shared decision-making by increasing patient autonomy; alternatively, they could harm shared decision-making by creating a new form of paternalism through their lack of value plurality. Similarly, AI tools have the potential to improve the practice of empathetic care by saving time, but it is unclear whether the saved time will be used to practise empathetic care or for other activities, including pushing more patients through the system. Trust relationships could also be affected by the use of AI tools: explainable AI tools are generally considered valuable for supporting trust relationships given their transparent nature, whereas black box AI tools could negatively impact trust relationships due to their inherent complexity.

The literature proposes several ways forward to ensure that AI tools support, rather than hinder, person-centred doctor-patient relationships. A handful of studies suggest that when AI is used as an assistive tool, this may have a positive impact on the doctor-patient relationship (e.g., Eysenbach et al.; Szalai). However, it is argued that patients and doctors may be unlikely to accept a shift to AI-led medical care, and such a shift could harm the doctor-patient relationship, as AI tools are incapable of reproducing the inherently human qualities of empathy and compassion. In the longer term, the debate remains open as to how human preferences for AI-led healthcare will evolve. Patients and doctors alike might come to favour the increased accuracy of AI-led care; however, current evidence on human preferences indicates that this is not yet the case.
There is broad agreement in the literature that the impact of AI on the doctor-patient relationship will influence, and be influenced by, the education of medical professionals. Most authors suggest that medical education should focus on AI literacy and emotional intelligence, with some emphasising the importance of one over the other. This combination underlines the importance of upholding empathetic care while ensuring that patients understand the tools used by the doctor, thereby contributing to the development of trust relationships.

Prior to concluding, we should note that this paper has some limitations. First, it focuses on academic literature published in English; therefore, although it aimed to be comprehensive, it is possible that some issues have been overlooked. Second, we have not focused on a specific type of AI tool, and the relevant issues may vary with the specific usage and role of the tool. Furthermore, most of the debates surrounding the use of AI in healthcare are speculative given the current limited adoption of AI tools. While some implementation studies are available, few focus specifically on the doctor-patient relationship, and only those would have been selected for this literature review, given our search terms. Finally, there was no patient and public involvement (PPI) in this project; we encourage researchers undertaking future studies on this topic to involve patients.

It is clear that AI could act as a disruptor to healthcare systems; it is therefore necessary to think about its exact place and role within wider healthcare systems to ensure that its deployment is beneficial for the doctor-patient relationship. On this basis, we argue that healthcare systems and related stakeholders, including citizens and policy makers, need to consider the type of values they wish to promote in an AI-augmented healthcare system, and workflows should be adapted accordingly.
Interactions between rootstocks and compost influence the active rhizosphere bacterial communities in citrus
The rhizosphere is the region around the root characterized by high concentrations of plant-derived organic exudates that serve as signal molecules and nutrient sources for microbial recruitment . The microbial communities of the rhizosphere, which constitute the “rhizobiome,” are essential for plant health as they can increase plant nutrient uptake and resistance to several biotic and abiotic stresses through mechanisms including induced systemic resistance, suppression of plant pathogens, and solubilization of soil minerals [ – ].

Most fruit tree crops are composed of two parts: the aboveground fruit-bearing part, the scion, and the belowground part, the rootstock, which provides anchorage and is responsible for water and nutrient uptake. The scion and rootstock, which are often genetically different, are joined through the process of grafting . New rootstocks are developed to adapt to soilborne stresses and diseases and to modulate the horticultural characteristics of the scion. The history of rootstock use and breeding in modern citrus production has been shaped by diseases such as Phytophthora root rot, Citrus tristeza virus , and more recently huanglongbing (HLB, a.k.a. citrus greening) . The rootstock genotype can not only modulate horticultural traits such as tree size and productivity but can also influence the composition of the rhizosphere microbial communities . The genotype influence on the rhizobiome can even extend to within-species differences, as demonstrated in grapes [ – ], apples [ – ], tomatoes , and Populus sp. .

Root health is a critical factor for tree growth as it directly influences a tree’s ability to cope with adverse biotic and abiotic stressors. Despite the importance of the rhizobiome for plant nutrient availability , few studies have examined the direct link between the rootstock genotype-based recruitment of rhizosphere bacterial communities and the availability of root nutrients for plant uptake. The potential impacts of plant genotype on the rhizobiome composition and nutrient availability are particularly relevant because they suggest the potential for agricultural production systems to maximize benefits from rhizobiomes indirectly through the choice of rootstocks. Just as rootstocks are bred to resist specific soilborne diseases, plant genotypes with desired phenotypes can be used as a microbiome engineering tool to select candidate taxa (e.g., to serve as biofertilizers or biocontrol agents) for agricultural microbiome engineering [ – ]. In addition, the study of the host genes associated with the selection of microbial communities can be used to support microbiome-focused crop breeding .

Citrus is a globally important perennial fruit crop, but its production faces challenges, particularly from the devastating disease HLB [ – ]. Several strategies, including the use of selected rootstock genotypes , ground application of specific nutrients , and soil amendments (e.g., compost and plant biostimulants such as humic substances, seaweed extracts, and microbial inoculants) , have been proposed to improve root health and crop production in citrus. In addition, there is increased interest in understanding the composition and function of the citrus microbiome to help optimize and maximize future agricultural microbiome engineering solutions [ – ]. In citrus, rootstock selection is essential for the success or failure of a citrus operation , and the benefits of using specially selected rootstocks have been documented in numerous publications [ – ].
Recent studies have also shown that the root metabolic composition may differ among citrus rootstocks [ – ]. This raises the question of whether different citrus rootstocks may recruit distinct rhizosphere bacterial communities that could impact root nutrient cycling.

Florida is one of the largest citrus producers in the USA with more than 60 million trees on 143,000 harvested ha . Most citrus in Florida is grown on naturally infertile soils that have little organic matter and are unable to retain more than a minimal amount of soluble nutrients , directly affecting the establishment of trees during the early phase when rapid development of the tree canopy is critical. This situation is exacerbated when trees become infected with HLB and fibrous roots start to decline . Increasing soil carbon availability through the application of compost can provide a wide range of benefits for root health and production, including improving nutrient and water retention and nutrient availability . Application of compost can also impact the soil microbiome and increase microbial diversity , which has been linked to reduced disease incidence . A recent study showed that compost application increased the bacterial diversity in the apple rhizosphere of two rootstocks, and that interactions between compost and rootstocks controlled variations in the rhizobiome composition that may determine increases in tree biomass . However, the interaction between compost and rootstocks in the citrus rhizobiome has not been explored, nor has the relationship between rhizobiome taxa and root nutrient concentrations.

Recent work showed that predicted bacterial functions in the rhizobiome of grapes were similar among different rootstocks . This suggests that the potential functions of bacterial rhizobiomes recruited by different rootstocks of the same crop may be redundant and evenly spread. Whether this is the case for other crops such as citrus remains to be determined, as does how the application of compost may impact microbial functions in the citrus rhizobiome and root nutrient availability.

To date, the study of rootstock effects on the rhizobiome of crops has been predominantly performed using a DNA-based amplicon sequencing approach. However, RNA-based estimates can be more accurate for soil microbiome studies [ – ] since relic DNA is abundant in soil and obscures estimates of soil microbial diversity. In addition, highly active microbial taxa may be rare or even absent from DNA-based approaches for the study of soil microbial communities [ – ]. Therefore, we used 16S rRNA extracted from the citrus rhizosphere to: (1) examine the effect of different citrus rootstocks and/or compost on the abundance, diversity, composition, and predicted functionality of active rhizosphere bacterial communities, and (2) determine the relationships between active rhizosphere bacterial communities and root nutrient concentrations and identify potential bacterial taxa correlated with changes in root nutrients. We hypothesized that the rootstock genotype determines variations in diversity and composition of the rhizobiome, and that the rhizobiome bacterial community is richer and more diverse in soils treated with compost compared to the control, resulting in greater root nutrient concentrations.

Study site, experimental design, and management
The field study was carried out in a commercial citrus orchard in Southwest Florida (Hendry County, FL, USA) under HLB-endemic conditions .
The soil at the study site is a sandy spodosol according to the soil taxonomy of USDA , consisting of a surface layer, which is low in organic matter (< 1.5%) and soil N content [< 10 mg/kg of ammonium (NH 4 + ) + nitrate (NO 3 − )], and a subsurface layer with poor drainage . Trees were planted in August 2019 in double rows on raised beds separated by furrows at a spacing of 3.7 m within rows and 7.6 m between rows (358 trees/ha). General management of the orchard followed practices determined by the orchard operator and included seepage irrigation, insecticide, herbicide and fertilizer applications, and other standard management practices. Trees consisted of ‘Valencia’ sweet orange scion ( Citrus sinensis ) on four different rootstocks: (i) X-639 ( C. reticulata ‘Cleopatra’ × Poncirus trifoliata ‘Rubidoux’); (ii) US-802 ( C. maxima ‘Siamese’ × P. trifoliata ‘Gotha Road’); (iii) US-812 ( C. reticulata ‘Sunki’ × P. trifoliata ‘Benecke’); and (iv) US-897 ( C. reticulata ‘Cleopatra’ × P. trifoliata ‘Flying Dragon’).

Two treatments were assayed: compost and no compost (control). The field experiment was a randomized split-plot design with treatment (compost or control) as the main plot and rootstock (X-639, US-802, US-812, or US-897) as the subplot (Supplementary Fig. S ). Plots were arranged in eight blocks (16 beds) across a 9-ha experimental site with each block containing two beds either treated with compost or untreated (control). Each bed contained 200 experimental trees, 100 per row, arranged in sets of 50 trees on each of the four rootstocks (Supplementary Fig. S ). Subplots consisted of one bed containing compost and one bed without compost. There were 64 experimental units in total (8 blocks × 2 treatments × 4 rootstocks).

Two months after planting (November), compost was applied at a rate of 12.4 tons/ha and incorporated in beds by a shallow till; the other half of the beds did not receive any compost. Following this initial application, compost was applied every 6 months at the same rate (12.4 tons/ha) by broadcast spreading. The locally sourced compost (Kastco Agriculture Service, Naples, FL, USA) was made from yard waste. The physicochemical characteristics of the compost were as follows: C:N ratio, 24.9; organic matter, 23.6%; pH in water, 7.7; total solids, 51.14%; conductivity, 3.1 mS/cm; phosphorus (P), 0.08%; potassium (K), 0.26%; sulfur (S), 0.09%; calcium (Ca), 3.28%; magnesium (Mg), 0.31%; iron (Fe), 2500 ppm; manganese (Mn), 67.5 ppm; and boron (B), 100 ppm.

Rhizosphere sample collection
Fibrous roots (≤ 1 mm in diameter) with soil attached were collected in August 2021, two years after planting and after 4 consecutive compost applications, from eight trees from each experimental unit under the canopy, and pooled. Roots were separated in the field and used for the following: (1) root nutrient analysis (about 50 g of roots) and (2) isolation of rhizosphere soil and subsequent RNA extraction (about 10 g of roots). Fibrous roots for microbial analyses were placed in 50-mL sterile centrifuge tubes, immediately flash frozen in liquid nitrogen, and stored at −80 °C until analysis. Rhizosphere soil for RNA extraction was isolated using sterile phosphate-buffered saline (PBS) solution as described previously .
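To make the layout concrete, here is a minimal R sketch (illustrative only; not taken from the authors' scripts) that enumerates the 64 experimental units implied by the split-plot design described above:

```r
# Illustrative sketch: enumerate the 64 experimental units
# (8 blocks x 2 main-plot treatments x 4 rootstock subplots).
design <- expand.grid(
  block     = factor(1:8),
  treatment = factor(c("control", "compost")),
  rootstock = factor(c("X-639", "US-802", "US-812", "US-897"))
)
nrow(design)                               # 64 experimental units
table(design$treatment, design$rootstock)  # 8 blocks per combination
```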
Root nutrient analysis
Root samples for quantification of macro (N, P, K, Mg, Ca, and S) and micronutrients (B, Zn, Mn, Fe, and Cu) were sent to a commercial laboratory (Waters Agricultural Laboratories Inc., Camilla, GA, USA) and analyzed using inductively coupled plasma (ICP) emission spectroscopy .

RNA extraction and reverse transcription of RNA to cDNA
RNA from 1 g of rhizosphere soil was extracted using the RNA PowerSoil® Total RNA Isolation kit (Qiagen, USA) according to the manufacturer’s instructions. The RNA obtained was quantified using the Qubit™ RNA High Sensitivity assay kit (Thermo Scientific, USA), treated with DNase I (RNase free) (Qiagen, USA) to remove co-extracted DNA following the manufacturer’s directions, and kept at −80 °C until analysis. The High-Capacity cDNA Reverse Transcription Kit was used for reverse transcription reactions with RNase inhibitor (Thermo Scientific, USA), following the manufacturer’s instructions, and using 150–200 ng RNA in a final volume of 20 μL. Synthesis of cDNA was achieved with the use of random primers. The concentration of cDNA was measured using the Qubit™ DNA High Sensitivity assay kit (Thermo Scientific, USA) and kept at −80 °C until analysis.

qPCR assays
The total abundance of active bacterial communities was determined by quantitative PCR (qPCR) using the 16S rRNA gene as a molecular marker and cDNA as a template. Quantitative amplifications were performed following the procedures, primers, and thermal conditions previously described by Castellano-Hinojosa et al. and using a QuantStudio 3 Real-Time PCR system (ThermoFisher, USA). Calibration curves had a correlation coefficient r² > 0.99 in all assays. The efficiency of PCR amplification was between 90 and 100%.

Library preparation and sequencing analysis
The extracted cDNA was sent for sequencing at the DNA Services Facility at the University of Illinois, Chicago, IL, USA. The V4 region of the bacterial 16S rRNA gene was amplified using the 515Fa and 926R primers following the Earth Microbiome Project protocol . Raw reads were analyzed using QIIME2 v2018.4 following the procedures described in full detail in Castellano-Hinojosa and Strauss . Briefly, bacterial rRNA gene sequence reads were assembled and dereplicated using DADA2 with the paired-end setting into representative amplicon sequence variants (ASVs). ASVs were assigned to the SILVA 132 database using the naïve Bayes classifier in QIIME2 . After quality filtering, denoising, and chimera removal, 4,743,365 16S rRNA sequences (mean of 74,115 per sample) were obtained from the total of 64 samples. Rarefaction curves reached saturation for all samples, indicating sequencing depth was sufficient (data not shown). Raw sequence data were deposited in NCBI’s Sequence Read Archive under BioProject PRJNA837574.

Analysis of the diversity and composition of active rhizosphere bacterial communities
Alpha (Shannon and Inverse Simpson) and beta-diversity analyses were performed on log-normalized data to avoid rarefaction errors using the R package “phyloseq” v1.24.0 . Beta-diversity analysis included nonmetric multidimensional scaling (NMDS) on Bray-Curtis distance. Differences in community composition between rootstocks, treatments, and their interaction were tested by permutational analysis of variance (PERMANOVA). The nonparametric analysis ANOSIM, based on the relative abundance of the bacterial ASVs, was used to examine similarities between rootstocks for each treatment. R values close to 1 indicate dissimilarity between treatments.
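As a rough illustration of this ordination and testing workflow, the sketch below shows how such analyses are commonly run in R with phyloseq and vegan. The phyloseq object `ps` (ASV table plus sample metadata with `rootstock` and `treatment` columns) is an assumption, and the authors' exact normalization and settings may differ:

```r
library(phyloseq)
library(vegan)

# 'ps' is an assumed phyloseq object (ASV counts + sample metadata).
ps_log <- transform_sample_counts(ps, log1p)   # log-normalize counts

# Alpha diversity (Shannon and Inverse Simpson) from the count table.
alpha <- estimate_richness(ps, measures = c("Shannon", "InvSimpson"))

# Beta diversity: NMDS ordination on Bray-Curtis distance.
ord <- ordinate(ps_log, method = "NMDS", distance = "bray")
plot_ordination(ps_log, ord, color = "rootstock", shape = "treatment")

# PERMANOVA for rootstock, treatment, and their interaction.
bray <- phyloseq::distance(ps_log, method = "bray")
meta <- data.frame(sample_data(ps_log))
adonis2(bray ~ rootstock * treatment, data = meta, permutations = 999)

# ANOSIM comparing rootstocks within one treatment level.
keep <- meta$treatment == "compost"
anosim(as.matrix(bray)[keep, keep], meta$rootstock[keep])
```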
Differentially abundant bacterial taxa between treatments at the phylum and genus taxonomic levels were detected using the DESeq2 package . p -values ≤ 0.05 were considered significant.

Functional characteristics of active rhizosphere bacterial communities
PICRUSt2 was used to predict the functional capabilities at the category and pathway levels of active rhizosphere bacterial communities based on 16S rRNA gene amplicon data as described by Douglas et al. . Significant differences in functional characteristics between groups of samples were studied using Welch’s t -test, followed by Benjamini–Hochberg-FDR as a multiple test correction .

Quantification of a root multinutrient cycling index
Belowground soil biodiversity has a key role in determining ecosystem functioning . Because bacterial communities perform multiple simultaneous functions (multifunctionality), rather than a single measurable process, we constructed a root multinutrient cycling index (MNC) analogous to the widely used multifunctionality index [ – ] using the root nutrients N, P, K, Mg, Ca, S, B, Zn, Mn, Fe, and Cu. These nutrients deliver some of the fundamental supporting and regulating ecosystem services [ – ] and are essential for crop growth, particularly for citrus trees in HLB-endemic conditions . For example, two of the most limiting nutrients for primary production in terrestrial ecosystems are N and P . Potassium, the third essential macronutrient for plants, is involved in numerous biological processes that contribute to crop growth, including protein synthesis, enzyme activation, and photosynthesis . Calcium (Ca) plays a role in cell division and elongation . Magnesium is essential for chlorophyll and is an important cofactor of several enzymes . Sulfur acts as a signaling molecule in stress management as well as normal metabolic processes . Micronutrients such as B, Zn, Mn, Fe, and Cu are essential to achieve high plant productivity .

Each of the eleven root nutrients was normalized (log-transformed) and standardized using the Z-score transformation. To derive a quantitative MNC value for each treatment and rootstock, we averaged the standardized scores of all individual nutrient variables . The MNC index provides a straightforward and interpretable measure of the ability of bacterial communities to sustain multiple functions simultaneously [ – ]. It measures all functions on a common scale of standard deviation units, has good statistical properties, and correlates well with previously established indices that quantify multifunctionality . Pearson’s correlation analysis was used to estimate the relationship between bacterial abundance, alpha- and beta-diversity, and MNC using the cor.test function in R.

Identification of the active taxonomic and predicted functional core rhizobiome
We studied whether the application of compost impacts the active taxonomic and predicted functional core rhizobiome of citrus. ASVs (at the genus level) and Kyoto Encyclopedia of Genes and Genomes (KEGG) pathways present in at least 75% of the samples were identified as the taxonomic and predicted functional core rhizobiome, respectively, in the control and treated soils .
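A minimal sketch of the index construction and the 75% core-membership rule just described follows; `nutrients` (a samples × 11 matrix of root nutrient concentrations) and `asv` (a samples × taxa abundance table) are assumed inputs rather than the authors' actual objects:

```r
# Root multinutrient cycling (MNC) index: log-transform each nutrient,
# Z-score standardize, and average the standardized scores per sample.
# 'nutrients' is an assumed samples x 11 matrix (N, P, K, Mg, Ca, S,
# B, Zn, Mn, Fe, Cu); values are assumed strictly positive.
z   <- scale(log(nutrients))   # Z-scores of log-transformed values
mnc <- rowMeans(z)             # one MNC value per sample

# Pearson correlation between MNC and, e.g., Shannon diversity
# ('alpha' as computed in the earlier sketch).
cor.test(mnc, alpha$Shannon, method = "pearson")

# Core rhizobiome: taxa (or KEGG pathways) present in at least 75% of
# samples; 'asv' is an assumed samples x taxa abundance table.
prevalence <- colMeans(asv > 0)
core_taxa  <- colnames(asv)[prevalence >= 0.75]
```

For the DESeq2 differential-abundance step mentioned at the start of this section, a typical call (again assuming the phyloseq object `ps`) would look like the following; the contrast and cutoff mirror the description above, not the authors' exact script:

```r
library(DESeq2)

# Convert the phyloseq object and test the compost-vs-control contrast.
dds <- phyloseq::phyloseq_to_deseq2(ps, ~ treatment)
dds <- DESeq(dds)
res <- results(dds, contrast = c("treatment", "compost", "control"))
sig <- subset(as.data.frame(res), pvalue <= 0.05)  # enriched/depleted ASVs
```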
Statistical analyses
All statistical analyses were conducted in the R environment (v3.5.1; http://www.r-project.org/ ). Means of bacterial abundance, alpha diversity, and root nutrients were compared via linear mixed-effects (LME) models, with rootstock (X-639, US-802, US-812, or US-897) and treatment (compost or control) considered random factors and these measurements as dependent variables, using the function “lme” in the “nlme” package. Significant effects were determined by analysis of variance (ANOVA) ( p ≤ 0.05). A Tukey’s post hoc test was calculated using the function “lsmeans.” We used a multiple regression model with variance decomposition analysis to evaluate the relative importance of the differentially abundant taxa between treatments for explaining variations in root nutrients using the R package “relaimpo” .

Structural equation modelling (SEM) was used to evaluate the relationships between rootstock, compost, MNC, bacterial abundance, alpha- and beta-diversity, and predicted functionality. The a priori model is shown in Supplementary Fig. S . Path coefficients of the model and their associated p -values were calculated . We used bootstrapping to test the probability that a path coefficient differs from zero, since some of the variables introduced were not normally distributed . Once these data manipulations were completed, we parameterized our model using our data set and tested its overall goodness of fit. We used the χ² test (the model has a good fit when χ² ≤ 2 and P ≥ 0.05) and the root mean square error of approximation (RMSEA; the model has a good fit when RMSEA is approximately ≤ 0.05 and P is approximately ≥ 0.05) . All SEM analyses were conducted using AMOS 20.0 (AMOS IBM, USA).

Significant differences in the relative abundance of ASVs and pathways between taxonomic and predicted functional core rhizobiomes in the control vs. treated soils were calculated using Welch’s t -test and the Benjamini–Hochberg False Discovery Rate (FDR) multiple-test correction using the R package “sgof.”
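The following sketch shows how models of this kind are typically fit in R. The data frame `d` and its columns are hypothetical, and the grouping structure shown (blocks as the random term) is one plausible reading of the split-plot design rather than the authors' exact specification:

```r
library(nlme)
library(lsmeans)  # provides lsmeans(); its successor is 'emmeans'

# 'd' is an assumed data frame with columns: shannon (response),
# rootstock, treatment, and block.
fit <- lme(shannon ~ rootstock * treatment,
           random = ~ 1 | block, data = d)
anova(fit)                                       # significance of effects
lsmeans(fit, pairwise ~ rootstock * treatment, adjust = "tukey")

# Welch's t-test with Benjamini-Hochberg correction, as used for the
# core-rhizobiome and predicted-function comparisons; 'pathways' is an
# assumed samples x pathways abundance matrix.
pvals <- apply(pathways, 2,
               function(x) t.test(x ~ d$treatment)$p.value)  # Welch by default
p.adjust(pvals, method = "BH")
```

The SEM itself was fit in AMOS, which is proprietary; purely as an illustration, a comparable path model could be specified in R with the lavaan package (all variable names are hypothetical, and categorical predictors would first need numeric or dummy coding):

```r
library(lavaan)

# Rough analogue of the described a priori model: rootstock and compost
# act on abundance and diversity, which in turn act on the MNC index.
model <- "
  beta_div  ~ rootstock + compost
  alpha_div ~ rootstock + compost
  abundance ~ rootstock + compost
  mnc       ~ rootstock + compost + alpha_div + beta_div + abundance
"
fit_sem <- sem(model, data = d, se = "bootstrap", bootstrap = 1000)
fitMeasures(fit_sem, c("chisq", "pvalue", "rmsea"))
```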
Root nutrient analysis
Treatment with compost had a significant effect on root K, Mg, and Mn concentrations (Supplementary Fig. S ). The K concentration was significantly lower in roots from US-897 in treated soils compared to the control. Significantly greater Mg concentrations were detected in roots from US-802 and US-812 in treated soils compared to the controls.
For US-802, US-812, and US-897, the Mn concentrations were also significantly greater in roots from the treated soils compared to the controls. Rootstock had a significant effect on Ca, S, and Mn concentrations (Supplementary Fig. S ). In control soils, roots from US-802 had significantly higher Ca concentrations compared to US-812. In soils treated with compost, significantly higher Ca concentrations were detected in roots from US-802 compared to X-639. The S concentrations were also significantly higher in roots from US-802 and US-812 compared to X-639. Significantly higher Mn concentrations were detected in roots from US-812 and US-897 compared to US-802. There were no significant differences in N, P, B, Zn, Fe, and Cu concentrations among rootstocks and treatments (Supplementary Fig. S ).

Abundance and alpha- and beta-diversity of active rhizosphere bacterial communities
Rootstock, treatment, and the rootstock and treatment interaction had a significant effect on the abundance of rhizosphere bacteria (Supplementary Fig. S ). A significantly greater number of bacteria were detected in the rhizobiome of US-812 and US-897 compared to US-802 and X-639 in treated soils, whereas there were no differences in bacterial abundance between rootstocks in the control soils (Supplementary Fig. S ). Alpha-diversity was significantly affected by rootstock, treatment, and the rootstock and treatment interaction (Fig. ). Compost application significantly increased the number of observed ASVs and the values of the Shannon and Simpson indices for US-812 and US-897 compared to the control soils (Fig. A). In the control soils, alpha-diversity was significantly greater in the rhizobiome of X-639 compared to US-812 and US-897 (Fig. A). In the treated soils, US-802 had significantly lower alpha-diversity compared to US-812 and US-897 (Fig. A).

NMDS analysis on Bray-Curtis distance together with a PERMANOVA analysis showed significant differences in the composition of bacterial communities between treatments and the rootstock and treatment interaction ( p < 0.001) and no significant differences between rootstocks ( p = 0.055) (Fig. B). A subsequent ANOSIM analysis showed there were no significant differences in beta-diversity between rootstocks for the control soils, but that the composition of the bacterial community significantly differed between rootstocks in the treated soil except for US-812 vs. X-639 and US-897 vs. X-639 (Supplementary Table S ).

Bacterial community composition and differentially abundant taxa between rootstocks and treatments
On average, Proteobacteria (48.25%), Acidobacteria (12.9%), Chloroflexi (8.5%), Cyanobacteria (6.4%), Bacteroidetes (6.1%), Actinobacteria (5.9%), and Planctomycetes (5.8%) were the most abundant bacterial phyla across all rootstocks and treatments (Supplementary Fig. S ). Active bacterial ASVs significantly enriched and depleted between treatments for each of the rootstocks were identified at the phylum (Supplementary Fig. S ) and genus (Fig. ) taxonomic levels. Regardless of the rootstock, compost application significantly increased the relative abundance of ASVs belonging to the phyla Firmicutes, Latescibacteria, Tectomicrobia, and candidate phyla GAL15 and FCPU426 compared to control soils (Supplementary Fig. S ).
However, more abundant phyla such as Proteobacteria, Nitrospirae, Cyanobacteria, Chloroflexi, Bacteroidetes , Actinobacteria, and Acidobacteria had both enriched and depleted taxa within the same phyla in soils treated with compost compared to the controls, suggesting treatment effects on bacterial taxa assigned to these phyla were not phylum-specific (Supplementary Fig. S ). Significantly enriched (e.g., Acidothermus , Anaeromyxobacter , Aridibacter , Azohydromonas , Crinalium , Lysobacter , Pseudomonas , Nitrospira , Sphingobium , Sphingomonas , Planctomyces , Pedomicrobium , and Woodsholea ) and depleted genera (e.g., Caldithrix , Cupriavidus , and Nevskia ) in the treated soils were identified across all rootstocks compared to the control soils (Fig. ). Other genera, such as Acidibacter , Bauldia , Bryobacter , Burkholderia , Devosia , Hyphomicrobium , Mesorhizobium , Microvirga , Varibacter , and Rhizomicrobium , had both significantly enriched and depleted ASVs. Overall, US-812 and US-897 showed a greater proportion of enriched rather than depleted ASVs (78% vs. 22% and 75% vs. 25%, respectively) compared to US-802 (60% vs. 40%) and X-639 (62% vs. 38%).

Potential contributions of differentially abundant active taxa to root nutrient concentrations
All differentially abundant active bacterial genera contributed to the variations in root nutrient concentrations (Fig. ). For example, genera belonging to Acidobacteria such as Aridibacter , Bryobacter , Candidatus Koribacter , and Candidatus Solibacter were found to be important and positively correlated with root Mg and Fe concentrations, whereas others such as Streptomyces were important for predicting changes in root N, Mg, Ca, S, Zn, and Fe concentrations. Genera assigned to Bacteroidetes , such as Chitinophaga , Flavisolibacter , Niastella , and Terrimonas , were important and positively correlated with root P concentrations. Caldithrix (Calditrichaeota phylum) and Thermosporothrix (Chloroflexi) were positively correlated with root K and P, respectively (Fig. ). Genera belonging to Cyanobacteria, such as Leptolyngbya , Nostoc , Oscillatoria , and Microcoleus , were important for predicting changes in root N, P, and K concentrations and were positively correlated with these root nutrients. Bacillus and Fictibacillus (Firmicutes phylum) were positively correlated with root P, S, and Mn, whereas Nitrospira (Nitrospirae phylum) was important for predicting changes in root N and Fe (Fig. ). Genera belonging to Planctomycetes, such as Gemmata and Planctomyces , were important and positively correlated with Fe and Cu, whereas those assigned to the Verrucomicrobia phylum (e.g., Candidatus Xiphinematobacter and Chthoniobacter ) were positively correlated with Zn. Within Proteobacteria, there were 41 genera that were important and positively or negatively correlated with all root nutrients (Fig. ). These included Burkholderia , Dongia , and Methylobacterium , which were positively correlated with root Ca, and Hyphomicrobium and Pedomicrobium , which were negatively correlated with this root nutrient (Fig. ).

Relationships between microbial diversity and the MNC
The MNC index increased in soils treated with compost compared to the controls for US-812 and US-897 (Fig. ). There were significant positive relationships between bacterial alpha- and beta-diversity and MNC for the US-812 and US-897 rootstocks (Fig. ).
Concerning each component of the multinutrient cycling index, alpha- and beta-diversity significantly and positively correlated with root Mg and Mn concentrations for all rootstocks and with root Zn for US-812 and US-897 (Supplementary Fig. S A, B). Root K was significantly and negatively correlated with alpha- and beta-diversity for US-812 and US-897 (Supplementary Fig. S A, B). Root N, P, and Ca concentrations were significantly and positively correlated with alpha- and beta-diversity for US-812. Root Cu was positively correlated with beta-diversity for all rootstocks (Supplementary Fig. S B).

Predicted functional traits of active rhizosphere bacterial communities
NMDS analysis on Bray-Curtis distance together with a PERMANOVA analysis showed significant differences in the predicted functionality of bacterial communities among rootstocks ( p = 0.002) and no significant differences between treatments ( p > 0.01) or the rootstock and treatment interaction ( p > 0.01) (Fig. ). There were no significant differences in the mean proportion of predicted KEGG categories between rootstocks and treatments, and the categories of energy metabolism and biosynthesis of other secondary metabolites accounted for more than 60% of the predicted functions (Supplementary Fig. S A). There were only 5 predicted pathways with significant differences between rootstocks and treatments (Supplementary Fig. S B). Both in the control and treated soils, the pathways of biosynthesis of secondary metabolites and various plant secondary metabolites were significantly more abundant in the rhizobiome of US-802 and X-639 compared to US-812 and US-897. Carbon and nitrogen metabolism pathways were significantly more abundant in the treated soils compared to controls for US-812 and US-897. The pathway involved in tryptophan metabolism had a significantly greater relative abundance in soils treated with compost compared to control for US-802 and X-639 (Supplementary Fig. S B).

Relationships between rootstock, compost, MNC, bacterial abundance, alpha- and beta-diversity, and predicted functionality
Our SEM model explained 78%, 63%, 58%, 47%, and 43% of the variance found in MNC, beta-diversity, bacterial abundance, predicted functionality, and alpha-diversity, respectively (Fig. ). Rootstock and compost had significant positive effects on MNC and beta-diversity, with compost showing stronger impacts (Fig. ). Rootstock and compost had a significant positive effect on predicted functionality and bacterial abundance, respectively. Compost showed a significant positive effect on bacterial abundance and alpha-diversity (Fig. ).

Identification of the active taxonomic and predicted functional core rhizobiome
The taxonomic core rhizobiome was formed by bacterial taxa belonging to the same eleven genera in the control and treated soils, and their relative abundances did not significantly differ between treatments (Supplementary Table S ). The predicted functional core rhizobiome comprised the same thirteen pathways in the control and treated soils (Supplementary Table S ). However, eight of these pathways (tryptophan metabolism, nitrogen metabolism, carbohydrate metabolism, lipid metabolism, metabolism of other amino acids, metabolism of cofactors and vitamins, and xenobiotics and biodegradation metabolism) were significantly more abundant in the treated soils compared to the control (Supplementary Table S ).
We found that the rootstock genotype determined differences in the diversity of active rhizosphere bacterial communities. The rootstock genotype also impacted how compost altered the abundance, diversity, and composition and predicted functions of these active communities. Variations in the active bacterial rhizobiome were strongly linked to root nutrient cycling, and these interactions were root-nutrient- and rootstock-specific. Together, these findings have important agronomic implications as they indicate the potential for agricultural production systems to maximize benefits from rhizobiomes through the choice of selected rootstocks and the application of compost.
Direct positive relationships between enriched taxa in treated soils and specific root nutrients were detected, which will help identify potentially important taxa for the development of agricultural microbiome engineering solutions to improve root nutrient uptake. We also found significant differences in specific predicted functions related to soil nutrient cycling (C, N, and tryptophan metabolisms) in the active bacterial rhizobiome among rootstocks, particularly in soils treated with compost. These results suggest that potential functions of active bacterial rhizobiomes are rootstock-specific rather than redundant among citrus rootstocks.

The rootstock genotype determined differences in bacterial diversity but not community composition of the active bacterial rhizobiome in untreated soils. Previous studies have shown that the root metabolic composition can differ among citrus rootstocks [ – ], which may explain the differences in bacterial diversity among rootstocks in this study. The finding that composition (beta-diversity) remained unchanged among rootstocks in untreated soil suggests no or only a minor influence of rootstocks on the recruitment of bacterial communities in the rhizosphere, which agrees with previously published studies . In addition to influencing the bacterial diversity, the rootstocks used in this study were found to directly influence nutrient cycling through alterations of the active rhizobiome. For example, root N, P, and Ca concentrations were significantly and positively correlated with alpha- and beta-diversity for US-812, but not the other rootstocks. Other root nutrients such as Mg and Cu were positively correlated with alpha- and beta-diversity for all rootstocks, suggesting no rootstock effect but a key role of the active bacterial rhizobiome in driving root nutrient cycling. Magnesium is essential for root system development and fruit quality as it promotes the reduction of reactive oxygen species (ROS) and the distribution of sugars in the plant . Copper is required for plant growth and development as it is involved in different physiological processes such as photosynthesis, respiration, and ethylene perception .

Regardless of the rootstock genotype, compost application altered the composition of the active bacterial rhizobiome. These compost-driven differences in beta-diversity could be due to the bacterial community shifting from oligotrophic to more copiotrophic bacterial taxa in treated soils, as previously observed in the rhizosphere of apple rootstocks treated with compost . For example, a proliferation of known fast-growing copiotrophic consumers of labile C (e.g., Actinobacteria, Bacteroidetes , Chloroflexi, Gemmatimonadetes, and Firmicutes) was observed for all rootstocks in soils treated with compost. These variations in beta-diversity of the active bacterial rhizobiome did not affect core rhizobiome taxa, which suggests that other microbes within the rhizobiome were more responsive to compost application. However, compost application increased the bacterial abundance and alpha-diversity of the rhizobiome only for US-812 and US-897, not for US-802 and X-639. Significant positive correlations between increased bacterial diversity and root multinutrient cycling were detected for US-812 and US-897 rootstocks, suggesting a rootstock-specific impact of compost on the rhizobiome community composition that in turn influences root nutrient cycling.
Previous studies have also linked increased soil microbial abundance and diversity to nutrient availability after compost application . Interestingly, US-897 and US-812 are known for their positive influence on fruit quality, whereas US-802 and X-639 are known to produce lower-quality fruit . Whether this effect will be enhanced with compost amendments will need to be investigated as the trees become more mature.

The interaction between citrus rootstocks and compost was a stronger determinant of changes in bacterial abundance, diversity, and community composition of the active bacterial rhizobiome than compost or rootstocks alone. While recent studies have shown that rootstocks and compost can alter microbial diversity and community composition in the rhizobiome of different crops, our results provide strong evidence of compost and rootstock interactions driving changes in the active rhizobiome (alpha- and beta-diversity) with direct impacts on root nutrient availability. Although recent studies have shown that soil microbial diversity promotes multifunctionality in natural ecosystems [ – ], these observations were mainly restricted to nutrient cycling in bulk soils. Here, we expand on those findings by showing that these interactions also occur in the rhizosphere, where they can be controlled not only by the rootstock genotype but also by the application of compost. For instance, we observed a strong positive correlation between Zn and Mn root concentrations and alpha- and beta-diversity in the rhizobiome of US-812 and US-897 rootstocks. Zn is a micronutrient with a key role in plant defense against pathogens , whereas Mn is essential for photosynthesis and a limiting factor for plant growth . Although our results suggest that the rhizobiome composition improves root nutrient cycling, it is uncertain whether this ultimately translates into increased plant growth, crop production, and stress and disease tolerance in the longer term. At the time of the study, no differences in tree growth and health were observed with the compost amendment, but US-897 produced the most fruit in the first year of production, while US-802 was the most vigorous rootstock (data not shown). These results are expected at this early stage of growth, and it may take several years of treatment, and trees reaching full maturity, before increases in productivity due to any microbe-induced effect are observed.

Specific active genera in the rhizobiome of composted soils were strongly correlated with root nutrient concentrations. Some of these genera include known plant growth-promoting (PGP) bacteria such as Bacillus , Streptomyces , Pseudomonas , Mesorhizobium , Sphingomonas , and Rhizobium that can solubilize nutrients such as P, S, and Ca and produce diverse phytohormones and siderophores. Although correlation does not imply causation, we found significant associations between several other genera and specific root nutrients in the citrus rhizobiome. For example, members of Acidobacteria were correlated with root Fe, which agrees with several studies reporting that Acidobacteria are avid rhizosphere colonizers and can produce siderophores . We also found strong correlations between members of Bacteroidetes and root P concentration, which is in line with previous observations of genera assigned to Bacteroidetes playing a critical role in solubilization of P in the plant rhizosphere .
Cyanobacteria genera such as Leptolyngbya, Nostoc, Oscillatoria, and Microcoleus were important for predicting changes in root N, P, and K concentrations, which is not surprising as Cyanobacteria are known to improve the availability of N, P, and K through N-fixation and solubilization. We also identified genera assigned to Planctomycetes that were correlated with Fe, in accordance with previous studies showing their ability to produce siderophores in soils. Together, this knowledge provides valuable information for selecting candidate taxa for future agricultural microbiome engineering solutions. For example, members of the differentially abundant genera in this study may represent candidate taxa for designing microbial consortia with the potential to serve as biofertilizers.

Most predicted functions in the rhizobiome were shared among citrus rootstocks, thus supporting the concept of functional redundancy between plant genotypes of the same crop. However, there were significant differences in functional pathways related to the biosynthesis of secondary metabolites and C, N, and tryptophan metabolisms among rootstocks in untreated soils, and compost application increased the abundance of these potential functions, although the magnitude of the responses was rootstock-specific. Overall, these results differ from those of a recent study examining predicted functions in the rhizobiome of different grapevine rootstocks. That study found no differences in predicted functions between grapevine rootstocks using the Tax4fun tool to predict functional potential. As recently demonstrated, Tax4fun and PICRUSt2 can lead to differences in predicted bacterial functions, which could explain the different results between rhizobiome studies. While Marasco et al. used a DNA-based approach to characterize rhizobiome communities for grapevines, we used RNA-based estimates to predict bacterial functions, which can be a more accurate and reliable approach for functional predictions in rhizobiomes. In addition, it cannot be ruled out that genotype-specific root exudates determine rhizobiome functions. Interestingly, we detected eight pathways within the predicted functional core citrus rhizobiome that were more abundant in treated soils compared to the control for all rootstocks. These pathways were related to key functions for plant growth such as N, carbohydrate, and lipid metabolisms, and the metabolism of cofactors and vitamins. While compost had little impact on the taxonomic core rhizobiome shared by all rootstocks, it did appear to influence the predicted functional core rhizobiome. Although PICRUSt2 is frequently used to predict functions of microbial communities and its effectiveness has been established in multiple environmental studies that utilized both amplicon sequencing and metagenome sequencing, we acknowledge it has some limitations, and other approaches such as shotgun metagenome sequencing can provide more accurate functional profiles of microbiomes. However, our results provide a good starting place for future studies of functional differences between rhizobiomes under the influence of different rootstock genotypes.

This study showed that the interaction between citrus rootstocks and compost can influence active rhizosphere bacterial communities, with impacts on root nutrient concentrations. In particular, the response of the rhizobiome bacterial abundance, diversity, and community composition to compost was rootstock-specific.
Specific bacterial taxa therefore appear to be driving changes in root nutrient concentrations in the active rhizobiome of different citrus rootstocks. Whether rootstock genotype-specific impacts on rhizosphere microbes also determine variations in nutrient concentration in rhizosphere soil and other parts of the tree (e.g., leaves and trunk) should be explored in future studies. In addition, several potential functions of active bacterial rhizobiomes recruited by different citrus rootstocks did not appear to be redundant but rather rootstock-specific. Longer-term studies will determine to what extent rhizobiome alterations impact aboveground traits, especially tree growth and productivity but also resilience to HLB (Huanglongbing). The study of root exudate composition could also help identify associations of individual taxa with specific root exudate compounds and provide an understanding of how rootstocks and compost control these relationships.

Additional file 1:
Fig. S1. Schematic diagram of the field study illustrating the experimental design (A); an untreated control plot (left) and a compost-treated plot (right) – trees are arranged in two rows on raised beds separated by furrows for drainage (B); a grafted citrus tree composed of scion and rootstock that are united at the graft union (C).
Fig. S2. A priori generic structural equation model (SEM) used in this study. The numbers in the arrows denote example references used to support our predictions (see References section).
Fig. S3. Root nutrient content of citrus trees on four different rootstocks. Soils were untreated (control) or treated with compost. Different letters above the bars indicate significant differences between rootstocks and treatments (linear mixed-effect model and Tukey's HSD; n = 8; *, p ≤ 0.05; **, p ≤ 0.01; ***, p ≤ 0.001). Values are expressed as means with standard errors.
Fig. S4. Total abundance of active bacterial communities in the rhizosphere of citrus trees on four different rootstocks. Soils were untreated (control) or treated with compost. Different letters above the bars indicate significant differences between rootstocks and treatments (linear mixed-effect model and Tukey's HSD; n = 8; *, p ≤ 0.05; **, p ≤ 0.01; ***, p ≤ 0.001). Values are expressed as means with standard errors.
Fig. S5. Relative abundance of bacterial ASVs at the phylum taxonomic level in the rhizosphere of four different rootstocks. Soils were untreated (control) or treated with compost.
Fig. S6. Differentially abundant ASVs at the genus taxonomic level between compost and control treatments for each rootstock. The fold change is shown on the X axis and genera are listed on the Y axis. Each colored dot represents an ASV identified by DESeq2 analysis as significantly differentially abundant (p ≤ 0.05).
Fig. S7. Heatmaps of Spearman correlation coefficients between bacterial alpha- (A) and beta- (B) diversity and root nutrients for each rootstock. The shading from blue to red represents low-to-high positive correlation. *, p ≤ 0.05; **, p ≤ 0.01; ***, p ≤ 0.001.
Fig. S8. Mean proportion of predicted KEGG categories (A) and pathways (B) in the rhizosphere of citrus trees on four different rootstocks. Soils were untreated (control) or treated with compost. For each row, different letters indicate significant differences between treatments and rootstocks (Tukey's HSD, p < 0.05; n = 8).
Table S1. Rootstocks used in this study and their parentage.
Table S2. Significance and similarity using the non-parametric multivariate ANOSIM statistical method. Numbers in bold indicate a significant effect at p < 0.05. R values close to 1 indicate dissimilarity between treatments.
Table S3. ASVs (at the genus level) present in at least 75% of the samples in the control and treated soils, identified as the active taxonomic core rhizobiome, and their relative abundances. For each row, different letters between treatments indicate significant differences according to Welch's t-test with Benjamini–Hochberg FDR multiple-test correction (p < 0.05).
Table S4. KEGG pathways present in at least 75% of the samples in the control and treated soils, identified as the active functional core rhizobiome, and their relative abundances. For each row, different letters between treatments indicate significant differences according to Welch's t-test with Benjamini–Hochberg FDR multiple-test correction (p < 0.05).
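To illustrate the core-rhizobiome workflow summarized in Tables S3–S4, here is a self-contained Python sketch — a simplification under assumed inputs (the file and column names are hypothetical), not the study's actual pipeline — that retains genera present in at least 75% of samples in both treatment groups and compares their relative abundances with Welch's t-test followed by a Benjamini–Hochberg FDR correction:

```python
import pandas as pd
from scipy.stats import ttest_ind
from statsmodels.stats.multitest import multipletests

# Hypothetical inputs: a samples x genera table of relative abundances and
# a matching metadata column with the treatment label for each sample.
abundance = pd.read_csv("genus_relative_abundance.csv", index_col=0)
treatment = pd.read_csv("sample_metadata.csv", index_col=0)["treatment"]

def is_core(col, labels, threshold=0.75):
    """A genus is 'core' if detected in >= 75% of samples of both groups."""
    return all((col[labels == g] > 0).mean() >= threshold
               for g in ("control", "compost"))

core = [g for g in abundance.columns if is_core(abundance[g], treatment)]

# Welch's t-test (unequal variances) for each core genus
pvals = [
    ttest_ind(abundance.loc[treatment == "control", g],
              abundance.loc[treatment == "compost", g],
              equal_var=False).pvalue
    for g in core
]

# Benjamini-Hochberg FDR correction across all tested genera
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
for genus, p, sig in zip(core, p_adj, reject):
    if sig:
        print(f"{genus}: adjusted p = {p:.3g}")
```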
Patient-centered care and geriatric knowledge translation among healthcare providers in Vietnam: translation and validation of the patient-centered care measure
7e2b06e3-4880-467e-8eea-ee9187f3fe85
10116792
Patient-Centered Care[mh]
People are living longer than previous generations, which reflects the collective advancements in economic development, public health, and medicine. Longer lives provide opportunities for individuals to continue productive contributions to their families and communities. However, these opportunities depend on ensuring the optimal health of aging people. The challenge of maintaining functional ability at older ages requires realigning the design of healthcare delivery systems to the social and health needs of aging people, who face increased risk of illnesses and declining functional ability due to their environment and the accumulation of molecular and cellular damage. Care coordination, an element of patient-centered care, is associated with a reduced risk of "social hospitalization" of aging adults, which refers to the utilization of hospital beds as a substitute for long-term care. By facilitating access to community services and specialized care within and beyond the health system for aging adults, patient-centered care promotes successful transitions into the community, reduces readmissions, and improves patient satisfaction and health outcomes, especially among aging people with multimorbidity. Failure to provide appropriate care to meet the complex needs of aging people may lead to poor health, care dependency, and social isolation. Consequently, disparities exist in the distribution of good health between populations of aging people, especially in developing countries where most aging people are located.

This study aims to provide a validated instrument to assess the provision of patient-centered geriatric care in health facilities across Vietnam, which has a rapidly growing aging population. Vietnam's national aging survey suggests a gap in the provision of patient-centered geriatric care because 1 in 3 aging adults reported an unmet healthcare need for chronic diseases or sensory impairment, despite receiving clinical treatment for an acute illness or injury in the past year. Vietnamese aging adults who received clinical services in the past year had the same level of unmet need for assistance as those who had not sought healthcare in the past year. Similarly, aging adults with multimorbidity had a 185% or 224% higher risk of unmet health needs after receiving healthcare from public or private healthcare providers, respectively, compared to those with one chronic need.

In Vietnam, recent policies, such as Decision 2151/QD-BYT of 2015, outline national plans to prioritize patient preferences in healthcare delivery and promote quality improvement interventions to improve patient-centered care. In 2023, Vietnam's National Assembly approved amendments to the Law on Medical Examination and Treatment that incorporated new provisions for patient-centered care. Although healthcare providers may know about person-centered geriatric care, they may be limited in their capacity to translate geriatric knowledge into the practice of quality care. An assessment of the perceptions of providers, patients, and caregivers is a widely accepted method to evaluate the delivery of quality care, including patient-centered care. To our knowledge, no study has developed a tool to assess patient-centered care in Vietnamese. This study provides an instrument to assess the practice of patient-centered geriatric care, which is an essential component of quality care that translates geriatric knowledge into action for aging people.
While diverse instruments are available for assessing patient-centered care, we selected the Patient-Centered Care (PCC) measure because its development was guided by a comprehensive conceptualization of patient-centered care derived from an integrative review of conceptual, empirical, and clinical evidence. The content of the PCC measure covers specific activities that constitute patient-centered care across clinical programs within the context of acute care. We translated the PCC measure, validated its cross-cultural relevance, and piloted it among healthcare providers in Hanoi, Vietnam.

The patient-centered care measure
Surveys and rating scales are types of assessment tools used to examine practice processes in a systematic way. The PCC measure is a validated instrument with 20 statements and a response scale that ranges from not at all (0) to very much so (5). The items describe activities that operationalize patient-centered care in the context of acute care. The PCC measure groups activities into three components of patient-centered care: holistic, collaborative, and responsive care. Holistic care reflects comprehensive care and health promotion for patients. It contains one sub-domain with four items on attending to patients' needs and a second sub-domain with five items on the provision of information to help patients manage their needs and health conditions. Collaborative care contains seven items that describe activities to facilitate shared decision-making with the patient. The four items for responsive care operationalize the individualization of care within the hospital and after discharge. The PCC measure may be used to assess the fidelity of patient-centered care practice and interventions by healthcare providers. Healthcare providers rate how their daily practices compare with the list of activities that operationalize patient-centered care in clinical settings. The Content Validity Indexing (CVI) score for the original PCC measure was greater than 0.90 for the three subscales, which indicates that nurse practitioners deemed all the questions to be highly relevant to PCC. Only the KR-20 coefficient values for the collaborative subscale reached the 0.70 criterion for ascertaining the reliability of newly developed measures.

Cross-cultural translation of the patient-centered care measure
The PCC measure was translated and piloted as part of a larger study, which validated tools to assess the capacity of healthcare providers to provide quality geriatric care in Vietnam. The PCC measure was translated with the forward-backward method, a longstanding adaptation method for cross-cultural research. A bilingual native Vietnamese speaker translated the PCC measure from English to Vietnamese. A panel of five bilingual researchers reviewed and revised the translation. A different bilingual translator, who did not see the English version of the PCC measure, translated the revised Vietnamese PCC measure back to English. The English back-translation was compared to the original version of the instrument to detect alterations in meaning.

Cross-cultural validation of the patient-centered care measure
An expert panel of seven bilingual geriatric experts rated each question of the Vietnamese Patient-Centered Care measure (VPCC) on a scale from 1 to 4 (not relevant, somewhat relevant, very relevant, or highly relevant) using an online data collection tool.
The expert panel rated the equivalence of the translation to the original English text as either yes or no. The expert panel included nurses, physicians, and researchers with post-graduate training who had at least a decade of experience in either geriatric research or clinical practice. We assessed the relevance of the VPCC measure to geriatric care in Vietnam by using the ratings from the bilingual expert panel to calculate content validity index (CVI) scores at both the item (I-CVI) and scale (S-CVI) levels. The CVI process has been documented to predict potentially problematic survey items. While the S-CVI measures the proportion of the survey judged relevant, the I-CVI measures the proportion of agreement on the relevance of each item. The S-CVI was calculated as the average of all the item-level CVIs (S-CVI/Ave). The modified kappa (Km) statistic, which accounts for the probability of some chance agreement among experts, was derived from the I-CVI score. An instrument with a Km statistic above 0.74, an I-CVI score of at least 0.78, or an S-CVI/Ave score of at least 0.90 has excellent content validity. Squire and colleagues adapted the CVI process to measure translation equivalence at the item (TI-CVI) and scale (TS-CVI/Ave) levels. We used the expert panel ratings to calculate TI-CVI and TS-CVI/Ave scores to evaluate the translation equivalence of the VPCC measure. We used Microsoft Excel 2016 for the CVI calculations.
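The CVI arithmetic described in this subsection is straightforward to reproduce; the short Python sketch below (the ratings matrix is invented for illustration, and a spreadsheet would serve equally well) computes the I-CVI for each item, the S-CVI/Ave, and the modified kappa, which adjusts each I-CVI for chance agreement:

```python
from math import comb

import numpy as np

# Hypothetical ratings: rows = items, columns = 7 experts, values 1-4
ratings = np.array([
    [4, 4, 3, 4, 4, 3, 4],
    [3, 4, 4, 4, 2, 4, 4],
    # ... one row per item of the 20-item measure
])

n_experts = ratings.shape[1]
relevant = ratings >= 3                # rated "very" or "highly" relevant

i_cvi = relevant.mean(axis=1)          # proportion of experts agreeing, per item
s_cvi_ave = i_cvi.mean()               # scale-level CVI, averaging method

# Modified kappa: adjust each I-CVI for the probability of chance agreement
agree = relevant.sum(axis=1)
p_chance = np.array([comb(n_experts, int(a)) * 0.5 ** n_experts for a in agree])
kappa_m = (i_cvi - p_chance) / (1 - p_chance)

print("I-CVI:", np.round(i_cvi, 2))
print("S-CVI/Ave:", round(s_cvi_ave, 2))
print("Km:", np.round(kappa_m, 2))
```

With seven experts, the chance-agreement term is small whenever agreement is high (about 0.008 when all seven agree), so Km values can effectively coincide with the I-CVIs, consistent with the Results reported below.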
Piloting the patient-centered care measure
The study was approved by the Institutional Review Boards at Johns Hopkins Bloomberg School of Public Health and Hanoi University of Public Health. Approvals were obtained from the administrative leaders of each health facility prior to data collection. Interviewers participated in a two-day training, including a pretest at a health facility. The finalized VPCC measure incorporated feedback from the expert panel and the pretest. Interviewers administered the VPCC measure to healthcare providers between March and April 2019. In addition to the VPCC measure, the survey included sections on geriatric knowledge assessment using the Vietnamese version of the Knowledge about Older Patients-Quiz (VKOP-Q) and the demographic characteristics of the respondents. The Knowledge about Older Patients-Quiz is an instrument to assess gaps in geriatric knowledge among healthcare providers. It contains 30 dichotomous (true or false) statements to measure the knowledge of healthcare providers about appropriate care for hospitalized older adults, as well as the healthcare providers' certainty in their responses. The study sample size was calculated with a statistical power analysis using the effect size, the probability of not having a type II error (power), and the probability of committing a type I error (alpha). Effect size is the difference in means among comparison groups of healthcare providers. Power and alpha were set at 0.80 and 0.05, respectively. A minimum sample size of 79 was required to avoid a type II error with a medium effect size. We used a convenience sampling strategy to select the health facilities from two urban districts and three suburban districts, across the three levels (commune, district/provincial, and central) of healthcare facilities in Vietnam. Communes provide basic health services, while patients who require specialized care are referred to district/provincial or central health facilities. The average number of eligible respondents per health facility at the commune, district/provincial, and central levels was 5, 76, and 135 healthcare providers, respectively. A quota sampling method was used so that the sample was proportional to health facility size. Commune, district/provincial, and central levels were assigned maximum values of 2, 10, and 20 participants per health facility, respectively.

Data Analysis
Data were entered into a form on Kobo Toolbox, a secure web-based application for data collection and management. Data entry was verified by two researchers. Data were exported to Stata 15 software for analysis. Data were coded based on the instructions provided by Sidani et al. The items were grouped into 3 subscales: holistic care (9 items), collaborative care (7 items), and responsive care (4 items). Summed indexes were calculated for each subscale. Average index scores were computed by dividing the summed indexes by the total number of questions for each subscale. Possible values for the averaged index scores ranged from 0 to 5. Higher scores indicated a more favorable assessment of the implementation of patient-centered care. Only one missing data point was observed, and it was handled by listwise deletion for that subscale. Measures of central tendency and dispersion were computed for each of the index scores. The Shapiro-Wilk test was used to evaluate deviance from a normal distribution for the averaged index scores. The observed index scores were left-skewed and did not pass the normality tests. Hence, we used nonparametric tests, which are appropriate when there is a violation of parametric assumptions. Mann-Whitney U and Kruskal-Wallis tests were used to assess intergroup differences for each item and the average index scores. Comparison groups were based on the work experiences and demographics of healthcare providers, and we tested the null hypothesis of no difference between groups. We examined differences in means by occupation, post-graduate education status, health facility level, and prior geriatric training. We described the PCC themes of the higher- and lower-rated activities in the VPCC measure. Examination of the distributions revealed that the majority of the average index scores ranged from 3 to 5, further supporting the need for binary rather than continuous dependent-variable analyses. Therefore, the average index scores were collapsed to create binary variables: scores of at least 0 and below 4 were coded 0 (low implementation), and scores from 4 to 5 were coded 1 (high implementation). The binary index variables were used in the multiple logistic regression analyses. Multiple logistic regression models were specified to test the a priori null hypothesis that geriatric knowledge does not differ between healthcare providers who perceive high versus low implementation of PCC measures. Odds ratios (OR) were used to measure the association between the geriatric knowledge score and the dependent variables, adjusting for the characteristics of healthcare providers. The variables in the regression models were defined in Table . A p-value equal to or lower than 0.05 was regarded as statistically significant.
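As a minimal sketch of the scoring and modeling steps just described — illustrative only, with invented variable names, since the study itself used Stata 15 — the collaborative-care subscale is averaged, dichotomized at 4, and regressed on the geriatric knowledge score with logistic regression, exponentiating coefficients to obtain odds ratios:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: one row per provider, seven collaborative-care items
# scored 0-5, a geriatric knowledge score, and provider characteristics.
df = pd.read_csv("vpcc_pilot.csv")

collab_items = [f"collab_{i}" for i in range(1, 8)]
df["collab_avg"] = df[collab_items].mean(axis=1)     # average index score, 0-5

# Collapse to binary: >= 4 = high implementation (1), < 4 = low (0)
df["collab_high"] = (df["collab_avg"] >= 4).astype(int)

# Logistic regression of high implementation on geriatric knowledge,
# adjusting for provider characteristics (covariate names are illustrative)
model = smf.logit(
    "collab_high ~ knowledge + C(occupation) + postgrad + C(facility_level)",
    data=df,
).fit()

odds_ratios = np.exp(model.params)
print(odds_ratios)  # an OR near 1.21 for knowledge would correspond to the
                    # reported 21% increase per one-point gain in knowledge
```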
Content and translation validation
Figure shows the final VPCC measure. Table presents the scale-level results of the CVI indices. Km values were identical to the CVI indices. On the item level, all 20 questions had excellent validity ratings. None of the I-CVIs was below 0.50, which is the criterion for rejection. The S-CVI/Ave was 0.96, which translates to excellent overall content validity. Similarly, all of the items had TI-CVI scores ≥ 0.78, which means the translation equivalence was rated excellent. The TS-CVI/Ave was 0.94, which means that the overall translation equivalence of the instrument was rated excellent.

Patient-centered geriatric care among healthcare providers
The VPCC was administered to 112 nurses and physicians in 30 facilities across Hanoi. The demographics of the participants in the pilot were summarized in Table . The measures of central tendency indicated that healthcare providers perceived they provided a moderately high level of patient-centered geriatric care. The means for the complete scale and subscales were presented in Table . The highest-rated subscale was collaborative care, while the lowest-rated subscale was responsive care. The differences between the highest- and lowest-rated subscales were confirmed with statistically significant Friedman and Wilcoxon signed rank sum tests. Results of the comparison of median scores for the PCC and its subscales were presented in Fig. . The self-assessment of patient-centered care was largely homogeneous across groups, and the Kruskal-Wallis tests were not significant.
The five higher- and lower-rated activities were listed in Table . Similar to the subscale findings, the highest-rated items were activities grouped in either the provision of information or collaborative care subscales. The lowest-rated items were spread across the subscales, except for collaborative care. The lowest-rated activities were related to the provision of psychosocial care and coordination within and beyond the health system. Results from the multivariate logistic regression models for the four binary index scores were presented in Table . After controlling for healthcare provider characteristics, the odds of the perception of high implementation of collaborative care increased by 21% for each one-point increase in geriatric knowledge score. We failed to reject the null hypotheses for holistic care, responsive care, and PCC.

This study is the first to validate a culturally relevant instrument to measure the practice of patient-centered geriatric care in Vietnam.
The PCC measure was selected for translation because it specifies activities that operationalize the process of providing comprehensive patient-centered care, instead of the general perspectives of healthcare providers or single domains of patient-centered care. [ , , ] Process measures are sensitive to differences in the quality of care and have the advantage of reproducibility. The Vietnamese translation of the PCC measure followed international standards for cross-cultural adaptation of surveys in health services research to reduce threats to data validity and improve instrument reliability. [ , , , ] The S-CVI/Ave and TS-CVI/Ave scores for the VPCC measure demonstrated excellent content validity and translation equivalence. As Vietnam's population continues to age rapidly, the validated VPCC measure may be used to assess the process and implementation of patient-centered care from the perspectives of healthcare providers, patients, and their families.

Healthcare providers were the focus of the pilot study because they are active agents in the delivery of quality care. Overall, healthcare providers had a moderately high assessment of their implementation of patient-centered care. The high ratings for PCC were congruent with studies among similar professional groups in Canada. However, the high PCC scores may reflect the documented tendencies of healthcare professionals to overrate their performance or provide the expected answer on self-report instruments. The highest-rated subscales were the practice of collaborative care and the provision of information. We previously reported that these healthcare providers scored highly on VKOP-Q items related to knowledge of appropriate family interventions for geriatric care. A higher geriatric knowledge score was associated with increased odds of high implementation of collaborative care, which suggests some knowledge translation among the healthcare providers in this study. The provision of health information and shared decision-making, which is an outcome of collaborative care, are particularly important for promoting treatment adherence and improved health outcomes among aging patients with multimorbidity. Healthcare providers' perceived excellence and confidence in providing information to patients are necessary for improving health literacy, building trust in provider-patient relationships, enabling aging patients to participate in shared decision-making with the healthcare provider, and increasing overall satisfaction with care. [ – ]

The lowest-rated subscales in the pilot study were the implementation of holistic attendance to patients' needs and responsive care. Both subscales reflect the individualization of treatment to meet the patient's needs, resources, and preferences, during and after discharge from the hospital. Specifically, the lowest-rated activities within these subscales pertained to meeting the social and emotional needs of patients, which traditionally fell beyond the health system. The provision of holistic attendance to patients' needs and responsive care is crucial for maintaining the dignity of aging patients and ensuring their continuity of care. These findings on the perceived low practice of holistic and responsive care in the pilot study were corroborated by an analysis of Vietnam's national aging survey, which showed that aging adults with multimorbidity had higher odds of unmet health needs, even among those who received medical care in the past year.
In the same analysis, it was reported that healthcare did not reduce the risk of unmet needs for assistance among aging adults, which suggests fragmented coordination between social and healthcare systems. These perceived gaps in the implementation of holistic care were documented in another study, which reported that healthcare providers prioritized physical needs, whereas patients wanted to discuss their feelings and how to manage psychosocial concerns. Furthermore, an aspect of holistic care is addressing psychosocial needs, such as teaching patients how to manage emotional and social problems like anxiety or social isolation. Patients with met psychosocial needs are more likely to feel prepared for discharge and recovery outside the hospital. Emotional and social needs encompass elder mistreatment, which is under-recognized and associated with somatic symptoms, such as pain. The provision of holistic care may increase the probability of connecting vulnerable aging patients with appropriate information and care. Without appropriate training, healthcare providers may not feel confident about negotiating the balance between patient-centered care and cultural competency, especially in relation to psychosocial needs that are considered culturally sensitive. The lower-rated activities for responsive care were related to service coordination for aging people across different levels within and beyond the health sector. Poor service coordination is interconnected with the low scores on addressing the social and emotional needs of aging patients because healthcare providers need to lean on existing networks of multisector services to facilitate timely and appropriate referrals that holistically meet the needs of aging patients.

The convenience sampling of healthcare providers from Hanoi, which is mostly urban and suburban, is susceptible to selection bias. In addition, healthcare providers were not recruited from the private sector or from health facilities managed by other ministries, including the military health system. Consequently, the results may not be generalizable to other healthcare providers. The inclusion of at least two health facilities for each facility level is likely to have broadened healthcare provider selection and potentially avoided some of the selection bias. Relying on clinical supervisors to facilitate recruitment may have compounded the bias and threatened the internal validity of the study. However, the demographics, clinical roles, and experience levels of the respondents were varied. Self-rated assessments of healthcare providers may not be congruent with external observations of their performance. Studies have documented both discordances and congruencies in the perceptions of patient-centered care by healthcare providers and patients, which highlights the need for future studies on the perception of patient-centered care among aging people in Vietnam. Furthermore, the hypothesis test results for knowledge translation in this pilot sample depended on the assumption that PCC implementation was best measured as a binary variable; in a larger sample, the robustness of the results to other ways of coding the average index scores could be assessed. We did not collect information on the established processes for coordinated care within and across sectors at the participating health facilities, and a lack of such processes could undermine the ability of healthcare providers to excel in this aspect of patient-centered care.
Studies have documented teamwork and established care pathways as critical ingredients of efficient care coordination for older patients. The use of interviewers may have increased the risk of social desirability bias in the ratings. However, the confidentiality of responses was communicated to respondents, and the multivariate logistic regression models adjusted for interviewer effects. Future research should include additional investigation of the psychometric properties of the VPCC measure, as well as its relevance to different population groups and healthcare contexts in Vietnam.

This study successfully adapted and validated the cross-cultural relevance of the PCC measure for geriatric care in Vietnam. In our pilot study, the highest-rated subscales were the provision of information and collaborative care, while the lowest-rated subscales were holistic attendance to patients' needs and responsive care. Attention to the psychosocial needs of aging patients and poor coordination of care within and beyond the health system were the lowest-rated PCC activities by healthcare providers in this pilot study. Despite the limitations of this study, it revealed the need for further assessment of the practice of patient-centered geriatric care across health facilities in Vietnam.
Toward patient-centered treatment goals for duchenne muscular dystrophy: insights from the “Your Voice” study
2b35cd0c-53de-4732-b006-4f5d7036767c
10116803
Patient-Centered Care[mh]
Over the past three decades, patient-centered research has emerged as critically important for understanding the impact of treatments on key stakeholders. Patients have become not only central to outcomes measurement for new treatments, but are also increasingly integrated into research teams from their inception, through implementation, analysis, and dissemination of results [ – ]. With this increased focus on the patient's perspective, concepts deemed relevant have grown in depth and breadth, expanding well beyond objective measurement. The subjective experience of quality of life (QOL) is increasingly recognized as fundamental to treatment outcomes. That QOL means different things to different people along a disability trajectory has led to a substantial body of research on adaptation effects, resilience, and mediators of treatment burden. Early work in the field of QOL relied on qualitative methods to identify and develop concepts that could then be measured using closed-ended questions that were eminently quantitative [ – ]. As the field of QOL research evolved, researchers increasingly used "mixed methods", which combined qualitative and quantitative methods to yield novel insights. Such approaches involved content analysis of qualitative data collected via open-ended questions, coding this content with numbers representing different themes, and then using statistical methods to compare groups on these coded themes. Mixed-method research has led to important developments in theory, measurement development, program development and evaluation, and evaluation research.

The present study utilizes a mixed-method approach to investigate important domains related to burden of illness, the underlying reasons for the impact on patients' lives, and treatment goals for Duchenne Muscular Dystrophy (DMD). DMD is a genetic disorder characterized by progressive muscle degeneration and weakness caused by an absence of dystrophin, a protein that helps keep muscle cells intact. This progressive, rare, and irreversible neuromuscular disorder occurs primarily in males—1 in 5050 live births [ – ]. Usually diagnosed by age 5, the disorder presents as delayed development that includes motor difficulties and may include cognitive impairment and attention deficit disorders. On average, by age 10–12, progressive muscle weakness leads to loss of ambulation, upper-limb function problems, and comorbid conditions such as scoliosis and muscular contractures. By age 15, patients experience increased difficulty breathing and life-threatening heart and lung conditions. DMD patients face profound uncertainty regarding lifespan, typically dying in their 20s to early 30s, although medical advances have led to longer life expectancies.

The present study sought to understand how patient or caregiver goals for DMD treatment vary as a function of the severity of disease progression. Disease progression was characterized in terms of ambulation status to facilitate recruitment across phases of ambulation disability. In addition, the underlying reasons (themes) for this variation will be described, and the relative importance of specific domains will be illustrated by comparing Best Days and Worst Days. The impact of this fluidity in goals, reasons, and priorities will be discussed in terms of meaningful treatment goals from the perspective of DMD patients and caregivers.
Study planning and commencement
In 2015, the Jett Foundation provided a patient-reported outcome report to the Food and Drug Administration (FDA) on patients living with Duchenne who were being treated with eteplirsen, to help inform regulatory decision making. Since that time, much has happened in the Duchenne space and, as of 2018, there were 29 ongoing clinical trials studying treatments for Duchenne. In 2017, the Jett Foundation's Duchenne Biotechnology Council, a group of industry partners and key opinion leaders working in the Duchenne space, identified needs in Duchenne trials, including the need to identify aspects of daily living that are important to patients at every stage of the disease. In late 2017, the research group began planning for a survey that would study the patient experience and identify outcomes important to patients, and identified the necessary logistical support and funding mechanisms. In early 2018, the group submitted a meeting request through the Office of Patient Affairs and obtained FDA feedback. The FDA inputs were incorporated into the protocol and, after IRB approval, the study commenced.

Sample
The "Your Voice" study sample was recruited from the Jett Foundation, other DMD-related patient advocacy organizations, and patients who had opted in to be contacted for research participation through Engage Health's EnCompass® database. Participants were recruited using email communication and posts to social media sites. Eligible participants were 18 years of age or older, or a parent of a patient younger than 17 years of age, or a parent of a patient older than 18 years if the patient was unable to answer for themselves; willing and able to sign consent/assent; a United States resident; and willing to participate in a one-hour interview. Participants had to provide documentation of DMD diagnosis of themselves or their child, for patients and caregivers, respectively. Such documentation included a genetic diagnosis from a relevant testing laboratory, physician-consult notes, school notes describing Individual Education Program accommodations and disease name, or a medical record of diagnosis.

Procedure
The "Your Voice" study design and interview questions were developed in collaboration with key stakeholders, including DMD patients (AL), family caregivers (CM, JM), pharmaceutical researchers (DS, SJ, PE), and clinicians (NM, LL). Participant recruitment was stratified by level of ambulation disability to provide representation for people with, or caregivers of people in, the ambulatory, transitional, and non-ambulatory stages of disability progression. Participants were recruited within stage in cohorts of five (5), and recruitment continued until saturation was deemed met (i.e., no new or important information was gleaned from the final-cohort interviews). Following informed consent from an adult patient or caregiver and, when applicable, assent from a minor child, confirmation of DMD diagnosis, and group assignment based on ambulation stage, study participants were interviewed by telephone by trained interviewers. The interviews were conducted in English and took approximately 45 min. Participants were allowed to abstain from answering any question and were allowed to stop at any time. Data were fully de-identified after collection to ensure confidentiality. An honorarium of $100 was paid for each completed interview, and one interview was allowed, representing each person with DMD. Caregivers were asked to represent their child's experience.
The interview proceeded in two parts and followed a qualitative method developed to better understand the patient/caregiver experience. Participants were first asked open-ended questions ("un-aided questions") about burdens associated with DMD, including important functions that DMD prevented the individual from doing, and why these were important to them. The interview utilized a skip logic such that questions were only asked of participants to whom they pertained. For example, if a participant stated that they had no issues with personal-care matters, they were not asked subsequent questions about it. Prompts then specified burden and life-impact categories impacted by DMD ("aided questions") and queried further description of the impact. These questions were developed with domains specifying aspects of DMD impact (i.e., burden) and life-impact categories summarizing life domains known to be affected by DMD, based on the medical research literature and on input from clinical experts (LL, NM), DMD patients and caregivers, representatives from DMD patient organizations, and Engage Health.

Measures
In addition to the qualitative measures described above as part of the interview, this study also used the following quantitative measures to assess ambulatory status and demographic/clinical characteristics. Ambulatory status was self-assessed using the Lowes Lab Ambulatory Status Algorithm (LLASA), an unvalidated clinician-derived algorithm that is used in clinical practice. This categorization utilizes a branching logic to identify the questions appropriate to the person with DMD's level of disability. Respondents are asked three to five questions in order to categorize the person with DMD as either ambulatory, transitional, or non-ambulatory. Demographic/clinical characteristics included the age of the person with DMD, gender, race/ethnicity, state of residence, and the education level of the patient and their mother. Family socioeconomic status was captured by whether the family had a computer at home, a car or van at home, the option of a free lunch at school, and whether they owned or rented their home. Clinical characteristics included the use of steroids for DMD and participation in DMD clinical trials.

Statistical analysis
Two independent raters (SJ, DS) from different organizations coded the qualitative data according to a coding guide, which also included instructions for resolving differences. While the coding guide included functional-activity categories that reflected the research literature and consultations with DMD experts, the coders were explicitly tasked with also identifying new categories that reflected participant responses. Responses were analyzed separately by ambulatory status category: Ambulatory (capable of walking); Transitional (when ambulation becomes a problem, and the child requires assistance); and Non-Ambulatory (incapable of walking, wheelchair dependent). Descriptive analyses summarized participant responses to the aided and un-aided questions as a function of ambulation category. The Kruskal-Wallis non-parametric rank test was used to compare ambulation group responses. This statistic is used for comparing two or more independent samples of equal or different sample sizes, and it is the non-parametric equivalent of the one-way analysis of variance (ANOVA). Non-parametric tests are useful with relatively small sample sizes, which may not have normal distributions and thus may violate assumptions of parametric tests.
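For the group-comparison step, a minimal Python sketch using scipy's implementation (the file and column names are invented; the numeric codes stand in for the study's coded themes) applies the Kruskal-Wallis test to each theme across the three ambulation groups:

```python
import pandas as pd
from scipy.stats import kruskal

# Hypothetical coded interview data: one row per person with DMD, an
# ambulation category, and numeric codes for each treatment-goal theme.
df = pd.read_csv("your_voice_coded.csv")

groups = ["ambulatory", "transitional", "non_ambulatory"]
themes = ["health", "relationships", "daily_functioning", "sports_recreation"]

# Non-parametric one-way ANOVA analogue: does the distribution of each
# theme's codes differ across the three ambulation groups?
for theme in themes:
    samples = [df.loc[df["ambulation"] == g, theme] for g in groups]
    h, p = kruskal(*samples)
    print(f"{theme}: H = {h:.2f}, p = {p:.3f}")
```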
The interview proceeded in two parts and followed a qualitative method developed to better understand patient/caregiver experience. Participants were first asked open-ended questions ("un-aided questions") about burdens associated with DMD, including important functions which DMD prevented the individual from doing, and why they were important to them. The interview utilized a skip logic such that questions were only asked of participants to whom they pertained. For example, if a participant stated that they had no issues with personal-care matters, they were not asked subsequent questions about it. Prompts then specified burden and life-impact categories impacted by DMD ("aided questions"), and queried further description of the impact. These questions were constructed with domains specifying aspects of DMD impact (i.e., burden) and life-impact categories summarizing life domains known to be affected by DMD, drawing on the medical research literature and on input from clinical experts (LL, NM), DMD patients and caregivers, representatives from DMD patient organizations, and Engage Health. Measures In addition to the qualitative measures described above as part of the interview, this study also used the following quantitative measures to assess ambulatory status and demographic/clinical characteristics. Ambulatory status was self-assessed using the Lowes Lab Ambulatory Status Algorithm (LLASA), an unvalidated clinician-derived algorithm that is used in clinical practice. This categorization utilizes a branching logic to identify the questions appropriate to the person with DMD's level of disability. Respondents are asked three to five questions in order to categorize the person with DMD as either ambulatory, transitional, or non-ambulatory. Demographic/clinical characteristics included age of the person with DMD, gender, race/ethnicity, state of residence, and education level of the patient and their mother. Family socioeconomic status was captured by whether the family had a computer at home, a car or van at home, the option of a free lunch at school, and whether they owned or rented their home. Clinical characteristics included use of steroids for DMD and participation in DMD clinical trials. Statistical analysis Two independent raters (SJ, DS) from different organizations coded the qualitative data according to a coding guide, which also included instructions for resolving differences. While the coding guide included functional-activity categories that reflected the research literature and consultations with DMD experts, the coders were explicitly tasked with also identifying new categories that reflected participant responses. Responses were analyzed separately by ambulatory status category: Ambulatory (capable of walking); Transitional (when ambulation becomes a problem and the child requires assistance); and Non-Ambulatory (incapable of walking, wheelchair dependent). Descriptive analyses summarized participant responses to the aided and un-aided questions as a function of ambulation category. The Kruskal-Wallis non-parametric rank test compared ambulation group responses. This statistic is used for comparing two or more independent samples of equal or different sample sizes, and it is the non-parametric equivalent of the one-way analysis of variance (ANOVA). Non-parametric tests are useful with relatively small sample sizes, which may not have normal distributions and thus may violate assumptions of parametric tests.
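As a concrete illustration of this analytic step (not the study's actual code or data), a minimal sketch in Python using SciPy's Kruskal-Wallis implementation is shown below; the per-participant mention counts are hypothetical placeholders standing in for the coded counts described above.

```python
# Minimal sketch of the Kruskal-Wallis comparison described above.
# The mention counts are illustrative placeholders, not study data.
from scipy.stats import kruskal

# Number of mentions of a burden category per participant, by ambulation group
ambulatory = [3, 1, 2, 4, 2]
transitional = [1, 2, 1, 0, 2]
non_ambulatory = [0, 1, 0, 1, 1]

# Kruskal-Wallis: non-parametric analogue of one-way ANOVA; it ranks all
# observations jointly and compares mean ranks across the groups
h_stat, p_value = kruskal(ambulatory, transitional, non_ambulatory)
print(f"H = {h_stat:.2f}, p = {p_value:.3f}")
```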
Sample Table displays the demographic and clinical characteristics of the study sample. The study sample reflected the perspective of minor and adult patients and their caregivers. Table provides a breakdown of patient/caregiver groupings within each ambulation category. Since the focus of investigation was the DMD patient regardless of the source (i.e., patient or caregiver), results are described across sources in terms of the impact on the DMD patient. Patient demographics DMD patients in the sample had a mean age of 12.3 years (SD = 6.1). Study participants lived throughout the United States, with greater representation on the East, West, and Southern coasts. The sample was predominantly white (87%) and non-Hispanic (97%), and the median current education level was elementary school. Caregiver demographics Caregivers were predominantly female, and the median level of education was some college. Almost all participants reported owning a computer in the home (99%) and having a car or van (93%). Most participants owned their own home (70%), and about one third reported having the option of a free lunch at school. Clinical characteristics One third of the DMD patients described in the study sample was ambulatory, one third was transitional, and one third non-ambulatory (n = 20 per group). The age ranges for each Lowes-Algorithm stage were 3–14 years for ambulatory, 6–17 years for transitional, and 10–33 years for non-ambulatory. Seventy-five percent of the people with DMD currently used long-term steroids, while 10% had used them in the past but not currently, and 15% had never used them. While over half (53%) of the people with DMD had never participated in a clinical trial, about a quarter of the sample was currently in a trial and a quarter had participated in the past. The current trial participants were primarily transitional patients (n = 7 of 14). Treatment goals by ambulation group Treatment goals were derived from participant answers to the following question about burden of disease: "What is the most important thing that you wish you/your child could do but cannot because of Duchenne?" The following categories of patient-centered treatment goals were built on the research literature and input from clinical experts: daily functioning, sports/recreation, personal care, travel/transportation, communication, relationships, employment, healthcare needs, and education. Additionally, coders identified a new category, "Health," reflecting stamina, muscle aches, and concerns about longevity. As patients progressed in disability, there were differences in the functional areas deemed most important. While daily functioning and sports/recreation remained the most important priority areas across ambulation groups, there were notable differences in the stated importance of health and relationships (Kruskal-Wallis H = 12.24 and 5.28, p = 0.002 and 0.02, respectively). Specifically, health became less prominent as the disability progressed from the ambulatory to the transitional to the non-ambulatory phase, whereas relationships became more prominent as one progressed to the non-ambulatory phase from the ambulatory or transitional phases (Fig. ). Further differences across groups were revealed by some categories being mentioned only for transitional and non-ambulatory patients (i.e., travel, education) and another being mentioned only by ambulatory patients (i.e., communication).
Reasons for treatment goals by ambulation group The reasons underlying the importance of the abovementioned functional categories were addressed with the question "Why is this thing important to you/your child?" Potential categories for content coding were built on the research literature and input from clinical experts. These reasons included: self-esteem/self-confidence, connection with others, financial situation, time commitment, and independence. Additionally, coders identified two new categories, referred to as "Accessibility" and "Enjoyment." The former reflected being able to get into places and from one place to the next. The latter reflected enjoying life and relishing the experience. As patients progressed in disability, the reasons underlying the importance of a particular functional domain differed across ambulation groups (Fig. ). There were notable differences in the prominence of self-esteem/confidence and independence (Kruskal-Wallis H = 9.46 and 7.35, p = 0.009 and 0.025, respectively). Specifically, self-esteem/confidence was most important for ambulatory patients and became less prominent for patients in the transitional and non-ambulatory phases of disability. In contrast, independence was less important for ambulatory patients and became increasingly prominent for patients in the transitional and non-ambulatory phases of disability. There were, however, similarities in the importance of connection with others and enjoyment across ambulation groups. For the domains where there were ambulation-group differences in importance, as noted above, the content of the reasons was somewhat distinct. Table provides examples of interview content by ambulation group. For the self-esteem/confidence domain, ambulatory and transitional patients were more focused on fitting in and not feeling different, whereas non-ambulatory patients were more concerned about feeling restricted. For the independence domain, the content for ambulatory patients exemplified feeling different from others and being motivated by independence. Among people in the transitional category, independence was more related to the challenges of functioning in a school environment and worry for the child. Once non-ambulatory, the focus of independence was related to dependence on ventilators, waiting for others to do something necessary, and the decreasing motivation to even try to do something independently because it was so difficult. In contrast, for domains with similar importance across ambulation groups, the content was relatively similar. The domain of connection with others was related to making and maintaining friendships in the greater community. The domain of enjoyment was related to the importance of being able to be happy, having goals that engendered a sense of accomplishment, and having something to look forward to. The domain of time commitment reflected how much DMD treatments and doctor's appointments impacted the patient's participation in school and other normal activities. The domain of accessibility, which was not mentioned among ambulatory patients, reflected similar concerns for transitional and non-ambulatory patients: physical access to different parts of their environment. As they became non-ambulatory, this content reflected a frustration with being stuck in the same place and missing out on desirable activities. Worst days vs. best days by ambulation group Figures display results of queries about life domains that impact the person with DMD's worst and best days, respectively.
It is notable that emotional functioning (e.g., sadness, anger, low self-esteem, etc.) is most prominent for all ambulation groups for both best and worst days. Functional aspects impact best days across groups as well. Behavioral issues (e.g., aggressive, prone to meltdowns, uncooperative, etc.) are most prominent for ambulatory patients' worst days and only somewhat notable for transitional patients. For non-ambulatory patients, behavioral issues were not at all pertinent. More domains overall were noted as having an impact on worst days compared to best days. Goals for new DMD treatments Figure displays the results related to desired goals for a new DMD therapy. In addition to displaying the overall sum of outcomes mentioned by ambulation category, this figure shows the number of mentions of specific functional goals, general QOL goals, and concerns about safety, ease of use, and effectiveness. Functional goals were multidimensional, focusing on improving or maintaining muscle function and strength, organ function, independence, communication and/or cognition, stability, and energy. Concerns about safety related to tolerability, such as avoiding issues related to long-term steroid use (e.g., immunosuppression, bone loss, emotional volatility, weight gain). Tolerability also referred to concerns about sudden death, bone loss, cataracts, and pain. Ease of use related to convenience and schedule of dosing, so that the treatment had minimal interference with daily life. Access reflected affordability, how soon the treatment would be available, having a broad label, and the frequency and distance of travel required to utilize a potential therapy. Effectiveness referred to the direct biological effects of the drug, such as desiring that it produced the dystrophin protein, led to metabolic change, increased bone density, reduced pain, increased growth rate, impacted lifespan, related to finding a cure, and that the preclinical results led to real clinical impact. When asked about desired outcomes of DMD drug therapies, participants in the three ambulation groups noted similar numbers of endpoints related to functional and general QOL concerns (Kruskal-Wallis H = 2.77, 5.07, and 4.83, respectively; p = 0.25, 0.8, and 0.09, respectively). There were, however, group differences in the number of mentions of concerns about safety/ease/effectiveness, with ambulatory patients ranking this concern higher than non-ambulatory and transitional patients (Kruskal-Wallis H = 13.44, p = 0.001).
The present study provides useful information about treatment goals for DMD from the perspective of key stakeholders: patients and their caregivers. It highlights some consistent values across the ambulation disability trajectory, as well as an evolution of priorities as the person with DMD becomes more disabled in ambulation. The breakdown of results by ambulation disability was an explicit choice to help elucidate how treatment goals change over ambulation disability progression; it does not invalidate other aspects of disease progression. For example, daily functioning and recreation remain important for all patients, while relationships become a more prominent focus as disability progresses. This finding may reflect both adaptation and changing priorities. Non-ambulatory patients/parents have had more time to cope with, and thus to adapt to, realities such as not being able to play sports. At the same time, they may be increasingly aware of disability-related decline in peer relationships at a time when peers without DMD are more social. This increased awareness may render the maintenance of any relationships particularly important as the disease progresses. The underlying drivers of the DMD burden domains and their meaning also evolved over the disability trajectory. For example, while self-esteem and confidence were drivers of goals for all patients, the foci were distinct at different stages of disability. For patients earlier in the disability trajectory, the concern was more about fitting in and not feeling different, whereas later it related more to not feeling restricted. This difference may also reflect the increased isolation and loss of independence that patients experience as their disability progresses. Early on, they may be able to participate in a mainstream school environment, whereas as ambulation and other aspects of disability progress, such participation becomes increasingly challenging due to problems with building accessibility or access to independent educational programs.
As a result, younger patients may be more aware of how they are different from their peers, whereas older patients may be habituated to this difference and more aware of frequently feeling restricted by DMD. These changes in values and in the underlying meaning of the same concept over the disability trajectory are important insights gleaned from this study. There is a substantial evidence base suggesting that when people experience changes in health, they may change their internal standards, values, and/or conceptualization of a target concept. While much research has documented that these "response shifts" can influence the interpretation of treatment outcomes over time, the present study highlights how treatment goals, and even the underlying meaning of a broadly stated goal, may shift over time. This insight has important implications for designing treatments at different stages of the disability trajectory. It suggests, for example, that treatments that enable patients to feel more like their peers and fit in are particularly important when patients remain ambulatory. School-based interventions aimed at teaching tolerance and inclusion may also be implicated. Later in the trajectory, desirable treatments are deemed those that are accessible and not time-consuming to take, so that patients can maintain some degree of independence and maintain social relationships. The acknowledged importance of relationships with family and friends among non-ambulatory patients may reflect social isolation from peers and an appreciation for all that these people are doing to keep them healthy. Of all of the functional domains addressed in the present study, emotional functioning was found to be central in participants' descriptions of best and worst days. This insight may have implications for the development of behavioral interventions to help patients and caregivers cope with the emotional challenges of DMD. Coping interventions that might be worth considering in DMD include teaching coping flexibility for patients and their caregivers, and mindfulness. This direct information about DMD burden domains leads to insights related to goals for new DMD treatments. Participants underscored the importance of maintaining and improving function, tolerability, and biological effectiveness. The domains directly noted by study participants could be useful for guiding outcome measurement for DMD clinical trials. In particular, such outcome measurement should be tailored to the patient's disability stage, with different domains reflected for ambulatory, transitional, and non-ambulatory patients. While the present work has the important advantage of addressing key concepts through content analysis of qualitative data, its limitations must be acknowledged. First, the sample sizes are relatively small, which is not uncommon in qualitative research. This prevents most statistical analyses due to low power; we addressed it by focusing primarily on raw counts of mentions and by using non-parametric tests sparingly. Future research might create close-ended questions to address these same key concepts and implement a larger-scale study of patients and caregivers. A second limitation relates to the use of an unvalidated algorithm for categorizing patients' stage of ambulation disability. Future research might validate this classification scheme. Alternatively, future work might utilize other validated methods for classifying ambulation status.
For example, ACTIVLIM is a measure of activity limitations for patients with upper and/or lower limb impairments. The scale measures a patient's ability to perform daily activities requiring the use of the upper and/or lower limbs, whatever the strategies involved. ACTIVLIM has been validated in children (age 6–15) and in adults (age 16–80) with a neuromuscular disorder. A third limitation is that we did not measure or adjust for caregiver fatigue as caregivers answered the interview questions. If the caregivers were DMD carriers, their answers might have reflected their own feelings of fatigue and muscle weakness in addition to their perceptions of their child's experience of these symptoms. Future research should not only track whether the maternal caregiver is a carrier, but also track and statistically adjust for the caregiver's personal experience of fatigue and muscle weakness when rating their perceptions of their child's experience, assuming adequate statistical power to do so. In summary, the present study utilized content analysis of qualitative data to highlight important domains of DMD burden, the underlying reasons for this importance, and goals for new treatment. It highlights variability in these concerns across the disability trajectory and provides a roadmap for patient-centered DMD drug and intervention development. Based on our findings, this roadmap would entail a continued biomedical treatment focus on maintaining daily functioning and recreation, and a tailored behavioral-intervention approach to managing social and emotional functioning over the course of the disease. Earlier in the disability trajectory, the interventions might focus on dealing with concerns about not fitting in and feeling different; later, they would focus on reframing the restrictions caused by the disability. Helping people with DMD to master their emotional functioning would benefit both the patients themselves and their caregivers, as emotional functioning was found to be central in participants' descriptions of best and worst days.
Frequency of Amblyopia in strabismus patients presenting to tertiary care hospital
ee96e9a0-9d6b-4c20-a621-a67c2f2c6e57
10117198
Ophthalmology[mh]
Reduced vision in one eye (amblyopia) is caused by faulty visual development early in infancy. The weaker (or lazy) eye is prone to wandering inward or outward. Amblyopia is a condition that develops from infancy up to the age of seven. It is the most common cause of eyesight loss in children. Lazy eye rarely affects both eyes. A squint, also known as strabismus, is a condition of misalignment of the eyes, and if it is not properly treated, it affects binocularity and depth perception. It is more prevalent in young children, although it can happen to anyone at any age. One eye may turn in, out, up, or down, while the other gazes forward. Heredity, eye muscle weakness or an issue with the nerves in the eye muscles, cataracts, glaucoma, corneal scars, optic nerve illness, refractive errors, eye tumors, injuries, and retinal disease, among other things, can substantially impair one's eyesight, causing squint. Amblyopia develops when there is a significant disparity in the capacity to focus between the two eyes. Other vision issues are the most common cause of amblyopia. It is critical to address these other issues, or the brain will begin to rely on the eye with the better vision, resulting in amblyopia. Commonly, amblyopia is of the refractive, strabismic, sensory-deprivation, or meridional type. Early aberrant visual experience in children can disrupt interocular alignment, causing strabismus; interfere with sensory development, causing amblyopia; and change the path of emmetropization, causing ametropias in one or both eyes. Given that each of these disorders has the ability to modify visual perception, the existence of any one of these conditions in early life could cause one or both of the other two. In this regard, amblyopia is significantly linked to the occurrence of anisometropia and/or strabismus during early childhood. The rationale of the study is to find the magnitude of amblyopia with reference to the type of squint among strabismus patients visiting the Ophthalmology Department of Hayatabad Medical Complex, Peshawar, Pakistan, so that strategies can be developed for early diagnosis and prompt treatment. As no reliable data are available regarding the frequency of amblyopia in strabismus patients in Khyber Pakhtunkhwa, this study will contribute to future planning regarding the management of strabismic amblyopia. A cross-sectional study was carried out in the Department of Ophthalmology, Hayatabad Medical Complex, Peshawar, Pakistan, from April 2022 to October 2022, after the approval of the Ethical Committee. Patients who presented to the Outpatient Department with squint were screened for amblyopia. The sample size of 237 was calculated using the WHO sample size formula with a 95% confidence interval, a 5% margin of error, and an expected frequency of amblyopia of 19% in patients with strabismus (squint) (see the worked formula after this paragraph). Non-probability consecutive sampling was used to enroll the patients. Patients of all ages, genders, and types of squint were included, while patients with a previous history of ocular trauma, surgery, or any neurological disease were excluded. Baseline demographic information of patients (age, gender, duration of complaint) was recorded. Informed consent was obtained from parents/caregivers or patients themselves, ensuring confidentiality and explaining the risks and benefits involved to the patients who took part in this study.
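For reference, the single-proportion calculation behind this sample size can be reconstructed as follows (our worked check, not part of the original report). With expected frequency p = 0.19, Z = 1.96 for 95% confidence, and absolute precision d = 0.05, the WHO formula reproduces the reported sample of 237; note that a 3% precision would instead give roughly 657.

\[
n = \frac{Z^{2}\, p\,(1-p)}{d^{2}}
  = \frac{1.96^{2}\times 0.19 \times 0.81}{0.05^{2}}
  \approx 236.5 \approx 237
\]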
All the patients with squint who met the inclusion criteria were assessed using a standard approach that included a medical history, visual acuity (Snellen chart), orthoptic assessment, slit lamp biomicroscopy, and indirect fundoscopy. The frequency of amblyopia was noted on a specially designed proforma. Amblyopia was further graded into mild, moderate, and dense categories. All the recorded data were analyzed using IBM-SPSS version 23. Quantitative variables like age and duration of complaint were measured as mean and standard deviation. Qualitative variables like gender, side of eye, and single or bilateral eye involvement were measured in terms of frequency percentages. Factors were stratified by age, gender, side of eye, type of squint, and grade of amblyopia. A post-stratification chi-square test was applied, with p ≤ 0.05 considered statistically significant (a worked sketch of this comparison appears after this section). Two hundred and thirty-seven (237) patients in total participated in the study. The patients' ages ranged from 1 year to 63 years. Males were found to be less prevalent than females (46% vs. 54%). 160 (67.5%) cases were registered with uniocular squint (right eye affected in 93 and left eye in 67 cases), while 77 (32.5%) cases were registered with alternating squint. In 76 cases, the best corrected visual acuity was 6/6 in both eyes. Exotropia was seen in 91 (38.3%) cases, while esotropia was observed in 146 (61.6%). Amblyopia was observed in 113 out of 160 (70.6%) cases of uniocular squint, while in alternating squint it was observed in 11 out of 77 (14.2%). A total of 161 patients (67%) had some grade of amblyopia with strabismus. In esotropia, amblyopia was observed in 73.2% of cases (107 out of 146), while 59.3% of exotropia cases (54 out of 91) had associated amblyopia, as shown in . Amblyopia was further graded as mild, moderate, and dense. Moderate amblyopia was found in 78 (48.4%) cases, while mild (54; 33.5%) and dense (29; 18%) amblyopia were less common. The overall prevalence was found to be 84.6% in both forms of squint. Hashemi et al. assessed strabismus, exotropia, and esotropia to have a pooled prevalence of 1.93%, 1.23%, and 0.77%, respectively; our study differed from these prevalence figures owing to our preset, duly approved inclusion criteria. Another meta-analysis, performed by Budan Hu et al., analyzed a total of 97 trials involving 4,645,274 kids and 7,706 amblyopic patients. Amblyopia was present in 1.36% of people worldwide (95% CI: 1.27–1.46%). Males were more likely to experience amblyopia (OR = 0.885, 95% CI: 0.795–0.985, P = 0.025) than females (1.24%, 95% CI: 0.94–1.54%); our results differed slightly in gender distribution. According to Dikova et al., 42 (2.5%) of the 1,675 children had amblyopia, of whom 3% had deprivation amblyopia, 59% had anisometropic amblyopia (25), 31% had isoametropic amblyopia (25), and 7% had strabismic amblyopia. 73% of the cases (27) had unilateral amblyopia, and 27% had bilateral amblyopia (15). The findings of our study corresponded closely to their observations. Our results were also consistent with the description of childhood amblyopia provided by the Pediatric Eye Disease Investigator Group (PEDIG), which stated that the most common causes of amblyopia were strabismus and anisometropia. A quarter of the youngsters were also discovered to have components of both, such as strabismus and untreated refractive defects.
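To illustrate the post-stratification chi-square comparison referenced in the methods above, here is a minimal sketch (our reconstruction in Python, not the authors' SPSS analysis) that tests the association between squint type and amblyopia using the counts reported in this study; the 2x2 layout and variable names are our own.

```python
# Minimal sketch of a chi-square test of squint type vs. amblyopia,
# using the counts reported above (not the authors' original analysis).
from scipy.stats import chi2_contingency

# Rows: esotropia, exotropia; columns: amblyopia present, absent
table = [
    [107, 146 - 107],  # esotropia: 107 amblyopic of 146
    [54, 91 - 54],     # exotropia: 54 amblyopic of 91
]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")  # p <= 0.05 => significant
```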
Our study results paralleled those of Flom and Neumaier's study of amblyopic kids in kindergarten through sixth grade, in which 38% had strabismus, 34% had anisometropia of 1 diopter or higher, and 28% had both strabismus and anisometropia. According to the results of the current investigation, anisometropia and strabismus are the main causes of unilateral amblyopia. A dramatic decline in the prevalence of unilateral amblyopia among young adults, as well as in the prevalence of current strabismus, was found over the course of a generation. The prevalence of severe (both unilateral and bilateral) strabismus remained steady. These changes coincided with the implementation of the national screening programme for children and the increased usefulness of amblyopia and strabismus treatments. The "natural history" of these diseases and their prevalence in adolescence may have been altered as a result of these early therapies. Strabismic amblyopia is a preventable condition if early attention and diagnosis are ensured. We need to implement screening programmes at the school and Madrasa level to prevent lifelong visual impairment. Conflict of interest The authors state no conflict of interest. Informed Consent and Human and Animal Rights statement Informed consent has been obtained from all individuals included in this study. Authorization for the use of human subjects Ethical approval: The research related to human use complies with all the relevant national regulations and institutional policies, is in accordance with the tenets of the Helsinki Declaration, and has been approved by the review board of Hayatabad Medical Complex Peshawar, Pakistan. Acknowledgements None. Sources of Funding This study did not receive any financial grant from funding agencies in the public, commercial, or non-profit sectors. Disclosures None.
Position Paper: SGIM Sex- and Gender-Based Women’s Health Core Competencies
e1b0130e-aae1-4e93-88ce-b0cfccddb350
10117249
Internal Medicine[mh]
Twenty-five years ago, the American Board of Internal Medicine (ABIM) published the first core competencies in women's health. They were developed in response to accumulating data demonstrating gaps in the knowledge and skills of internal medicine (IM) residents in women's health. The competencies addressed a broad spectrum of domains, including medical knowledge; interviewing and counseling skills; clinical skills and procedures; and professionalism. Included under clinical skills and procedures was the requirement, for the first time, that residents demonstrate competency in conducting a breast and pelvic exam, and a Pap test. To emphasize the importance of these recommendations, the ABIM identified women's health as a separate content area in the ABIM certification examination blueprint, as well as in the Board certifying examination. Coincident with the ABIM publication, other national policy-making organizations were documenting deficiencies in the education and training of physicians in women's health and advocating for change. In 1996, the American College of Physicians (ACP) stated explicitly that "all physicians who provide primary care to women should be competent to diagnose and manage the most common conditions women present in the ambulatory setting." They acknowledged the interdisciplinary nature of women's health and the need to collaborate with other disciplines, especially gynecology, in physician education and training. These landmark publications occurred during concerted efforts by professional organizations, and by the federal government, to increase research on women's health, and to create new models of physician education and clinical care. The signature national program was the federally funded National Academic Centers of Excellence in Women's Health. Key areas of change that make it imperative to readdress core competencies in women's health include: Advances in our understanding of the effects of sex and gender on conditions that cause the greatest morbidity and mortality in women. Research initiated by the National Institutes of Health, and further inspired by the 2001 Institute of Medicine Report, "Does Sex Matter?," provided pivotal new data on conditions present in both women and men but that disproportionately affect women and/or have important implications for patient care. Advances were initially most notable in cardiovascular disease but now extend widely to other conditions, such as autoimmune disorders, osteoporosis, and dementia. Advances in our understanding of health conditions that affect women. Research on reproductive health conditions across the lifespan, notably the menopause, changed our understanding of reproductive aging and the effects of exogenous hormones, and fundamentally changed the practice of medicine. Other areas of important progress include advances in breast cancer prevention, detection, and treatment, and in cervical cancer prevention and screening. Adoption of a more inclusive concept of women. New, more inclusive terminology extends the term "women" to refer to individuals assigned female sex at birth, as well as to those who self-identify as women, irrespective of sex assigned at birth, and includes transgender individuals and gender-diverse people. These individuals often avoid care due to discrimination and adverse experiences related to inadequate care from uninformed physicians.
Internal Medicine physicians must be trained to provide informed care, including reproductive and preventive health care; organ-specific cancer screening; and knowledge of the medical effects of gender-affirming hormone therapy and surgery. Awareness of health disparities in racial/ethnic minority women and gender-diverse individuals. The Covid-19 pandemic highlighted disparities in care to these vulnerable populations, related partly to economic/social issues, access to health care, and explicit and implicit bias. Black women as a group, for example, have the lowest life expectancy of US women due to an undue burden of chronic diseases, such as hypertension, diabetes, and CVD. They also have the highest rates of maternal mortality and pregnancy complications, such as preeclampsia. Transgender individuals have higher rates of mood disorders than cisgender women and have unique health needs that are often not addressed. Internal Medicine physicians play a key role in reducing barriers to care and must be deliberate in incorporating knowledge of disparities into residency education and patient care. Evolution of reproductive health. Unacceptably high rates of maternal morbidity and mortality in the USA highlight the need for training on effective preconception counseling. Given the health risks posed by pregnancy, and particularly undesired pregnancy, internists must be well versed in the full range of contraception options, including emergency contraception. As the prevalence of vascular risk factors is elevated in many patients cared for by internists, it is imperative that internists are able to effectively counsel patients about estrogen-free contraceptives, including intrauterine contraceptives (IUDs) and subdermal contraceptive implants (e.g., Nexplanon). To decrease barriers to long-acting, effective contraception use, all residents should receive training in the placement and removal of subdermal contraceptive implants and in the removal of IUDs. Training in IUD placement should be made available to interested residents. In addition, IM physicians must be familiar with medication regimens that can be used to manage early pregnancy loss and abortion. In regions with limited access to legal abortion services, awareness of complications that may follow attempts at pregnancy termination is warranted. Despite these advances and progress in our understanding of women's health, studies demonstrate continuing deficiencies in IM residents' knowledge and skills. Reasons for the slow progress are complex and include uncertainty about what should be included in the curriculum and which disciplines are responsible for teaching. In addition, many model interdisciplinary or stand-alone, non-gynecology women's health services that were important clinical teaching sites closed when federal funding for the National Centers of Excellence in Women's Health ceased in 2007 and market forces favored other service models. Women's health leaders and faculty involved in curriculum development turned to other activities when outside funding and institutional support waned. The ACP provided guidance in a 2018 Position Paper on the scope of sex- and gender-based women's health conditions that IM training should encompass.
The purview includes medical and mental health conditions that are more prevalent or manifest differently in individuals who identify as women; routine office gynecological, reproductive, and peri-partum care; interpersonal and sexual violence; and health disparities in racial/ethnic minority women and LGBTQIA individuals. The ACP was also explicit in its belief that internists are responsible for the primary and comprehensive care of women across the lifespan. The Women and Medicine Commission and the Sex- and Gender-Based Women's Health Education Interest Group appreciate that in 2022–2023, the SGIM Council has committed to: reaffirm the role, and the responsibility, of general internal medicine physicians in providing comprehensive sex- and gender-based care across the lifespan to all patients who identify as women, including reproductive care, breast care, and care of pregnant and menopausal patients, and endorse core competencies in sex- and gender-based women's health that align with advances in medical knowledge and an understanding of the full context in which women of diverse race/ethnicity, socioeconomic status, sexual orientation, and gender lead their lives. Table provides a broad overview of core competencies that IM residents should demonstrate to provide comprehensive care to women and gender-diverse individuals in a general medicine clinical setting; the supplementary material provides a more detailed scope of content. These documents were developed using the approach detailed below: Competencies were developed using several sources, including the 2021 Accreditation Council for Graduate Medical Education (ACGME) Program Requirements for Internal Medicine and the 2023 ABIM Certification Examination Blueprint. Competencies are directed to IM residents; however, they are applicable to general medicine and primary care physicians in practice. Content is organized around the six domain competencies recommended by ACGME and is intended to supplement or reinforce existing ACGME-guided residency program curricular guidelines in the areas of women's and gender health. It does not include more global competencies residents need to attain to graduate. Women's health is defined in the document as "the unique manifestations of conditions in individuals assigned female sex at birth, or who self-identify as women or non-binary, irrespective of sex at birth." Content acknowledges health disparities in racial/ethnic and gender-diverse minority populations served by many IM training programs and the imperative to create education and training experiences for residents that address the needs of women and gender-diverse individuals in the community. Other SGIM groups are working on specific competency recommendations for these special populations; the present work is complementary to those efforts. The competencies affirm and promote the importance of using a sex and gender lens when conducting research and in the application of research to all aspects of medical education and clinical care. The textbook, "Sex- and Gender-Based Women's Health: A Practical Guide for Primary Care," published in 2021 and edited and authored by SGIM members, is an important complementary resource to the recommended competencies: Sex- and Gender-Based Women's Health: A Practical Guide for Primary Care. Tilstra, S., Kwolek, D., Mitchell, J.L., Dolan, B.M., Carson, M.P. (Eds.) Springer Nature. Springer, 1st ed. 2020 edition (January 20, 2021) 10.1007/978-3-030-50695-7_29.
This textbook is an evidence-based guide with current clinical guidelines for general internists and learners at all levels of training. For clinician-educators, each chapter provides a curriculum for recommended ACGME-based core competency topics outlined in the Table and the supplementary material, with measurable learning objectives and multiple-choice questions that can aid in teaching. Additional resources are provided in the References. Practical steps to facilitate the integration of sex- and gender-based women's health competencies into resident education and training are provided below. Identify current program-specific strengths and deficiencies in residents' knowledge and skills to inform curriculum development (i.e., a needs assessment). Identify existing programs and potential partners within the department, medical school, or wider institution that offer similar content. Developing collaborations is key for curriculum development, teaching, and clinical skill-building, and for modeling the central role of IM in managing women's and gender health issues with the appropriate use of consultants. Identify protected resident educational time for dedicated sex- and gender-specific educational activities, as well as other teaching venues for integrating this content (e.g., conferences, resident report, Grand Rounds). Protected educational sessions during residents' ambulatory rotations are a particularly opportune time to teach sex- and gender-based women's health and help ensure a unified educational experience for all residents. Identify, mentor, and support core faculty/teachers. Some may come from other disciplines. Partner with community members to ensure that community needs are reflected in curriculum development and teaching. The inclusion of community members as teachers highlights the value of learned experience and is a powerful way to increase residents' awareness of disparities in care and engage them in advocacy. Build capacity at clinical teaching sites to expedite and model comprehensive care to women and gender-diverse individuals. Provide enhanced sex- and gender-based women's health educational, clinical, teaching, leadership, and scholarship opportunities for selected interested residents. Provide sex- and gender-based women's health education and development to all faculty ("teach the teachers") through continuing medical education courses, retreats, and other venues. Develop sex- and gender-specific Entrustable Professional Activities (EPAs) to measure residents' learning progress that can be mapped to required ACGME competency domains. Provide training and credentialing of IM faculty to perform and teach the placement and removal of subdermal contraceptive implants and IUDs. Unnecessary barriers to the delivery of comprehensive contraceptive care in primary care outpatient settings must be removed. Recommended initial credentialing of faculty includes: Subdermal contraceptive implant (e.g., Nexplanon) placement and removal: complete FDA-mandated training; demonstrate competence with the supervised placement of one implant and the supervised removal of one implant; attestation of competency by staff (MD, NP, PA, or CNM) credentialed in Internal Medicine, Family Medicine, Pediatrics, or Gynecology. IUD placement and removal: complete training and demonstrate competence with the supervised removal of one IUD and the placement of five IUDs, with attestation of competency by staff (MD, NP, PA, or CNM) credentialed in Internal Medicine, Family Medicine, Pediatrics, or Gynecology.
The SGIM sex- and gender-based core competencies address a decades-old gap and changing health care landscape in the care of women and gender-diverse individuals. The competencies fulfill a societal and training need and offer practical guidelines for residency training programs to help prepare IM residents to care for women of diverse race/ethnicity, sexual orientation, and gender.
Self-reported oral hygiene practice and utilization of dental services by dental technology students in Port Harcourt, Rivers State, Nigeria
15d4e621-528b-4740-900e-dc37663ca4bc
10117480
Dental[mh]
Oral health is an essential part of general health, as poor oral health has been associated with systemic diseases such as cardiovascular disease, diabetes mellitus, etc. Hence, maintaining good oral hygiene is vital to sound general health. Oral health is described by the American Dental Association as being multifaceted and includes the ability to speak, smile, smell, taste, touch, chew, swallow, and convey a range of emotion through facial expressions with confidence and without pain, discomfort, and disease of the craniofacial complex. The two most common oral diseases of public health importance are dental caries and periodontal disease, both of which are caused by plaque. Elimination of plaque is essential in the maintenance of good oral hygiene and hence good oral health. Brushing of teeth twice a day with fluoridated toothpaste and flossing in-between the teeth can help reduce plaque accumulation. Also, regular dental check-ups improve oral health. Some studies, however, revealed that most individuals utilise dental services when they are in pain or in need of emergency treatment. Dental auxiliaries are dental personnel that assist the dentist in the management of dental patients. They are classified as non-operating and operating dental auxiliaries. The operating dental auxiliaries (dental therapists) help with professional scaling of teeth, while the non-operating dental auxiliaries (dental surgery technicians) assist the dentists by arranging instruments for dental procedures and giving oral hygiene instructions to patients after dental procedures; hence, dental auxiliaries contribute immensely to the care of patients. The dental technology students are dental surgery technicians under training. A study by Akpata reported that dental auxiliaries will contribute to the realisation of the oral health vision in Nigeria. They are therefore expected to maintain good oral health behaviour, hence becoming role models for the less educated and the society at large. There is a paucity in the literature on the oral hygiene practices of dental technology students. The aim of this study, therefore, is to assess the oral hygiene practices of dental technology students and their level of utilization of dental services. This was a descriptive cross-sectional study conducted among dental technology students of the Rivers State School of Health Technology, Rivers State, Nigeria. Ethical approval was obtained from the Health Research and Ethics committee of the University of Port Harcourt Teaching Hospital (UPTH/ADM/90/S. II/VOL.XI/1033), followed by participants' consent. The study population consisted of 109 dental technology students in the second and final year of the Rivers State School of Health Technology, Rivers State, Nigeria. The inclusion criteria were students at least 18 years old, dental technology students of the Rivers State School of Health Technology, and those who gave consent. The investigator distributed 110 self-administered questionnaires, of which 109 were correctly filled, giving a response rate of 99%. The questionnaires were pre-tested among dental students of the University of Port Harcourt, Rivers State, to ensure simplicity and ease of understanding by the participants. The questionnaire had three sections. Section A had questions on sociodemographics, section B included questions on self-reported oral hygiene practices, and section C had questions on the pattern of dental service utilization.
Data analysis was done using the Statistical Package for the Social Sciences (SPSS) version 20.0 (IBM SPSS for Windows, Armonk, New York, IBM Corp). Descriptive statistics of frequency and percentage were used to present the results. The mean age of the subjects was 23.66 years, with an age range of 18–42 years. There were 20 males and 89 females, with an M:F ratio of 1:4.45. The mean age for females was 23.27 years, while that for males was 25.40 years. 64.22% of the study population were in 200 level while 35.78% were in 300 level/final year, and 11.93% of the study population were married. This is as shown in . Almost all the participants (95.41%) used toothbrush and paste, none used chewing stick alone, while 4.59% used both chewing stick and toothbrush as cleaning aids. 99.1% brushed their teeth twice daily; also, 99.1% of the participants used a soft/medium-textured toothbrush, while 0.9% did not know the type of toothbrush being used. Twenty-two percent (22%) of the participants used the horizontal/scrub method of toothbrushing, 10.1% used the modified Bass method, and 59.6% used both horizontal and modified Bass methods, whereas 5.0% did not know which method of toothbrushing they used. The majority of the participants (89.9%) changed their toothbrush every 1–3 months, 8.3% changed their toothbrush after 3 months, while 1.8% did not know when they changed their toothbrush. A large proportion of the participants (97.2%) used dental floss as an interdental cleaning aid, while 2.8% used a toothpick. Concerning visits to the dentist, 93.6% of the participants had visited the dentist previously; the majority of these visited every 6 months and mainly for routine dental check-ups (81.37%), while 10.71% visited for dental pain. 87% of the participants had had scaling and polishing done previously, while 23% had not had scaling and polishing done before. Oral health is a significant aspect of the general well-being of an individual, as it affects daily activities and productivity such as learning and work. Awareness about oral health has increased recently, as global policy has been made to improve oral health. Dental surgery technicians are important in helping dental patients achieve ideal oral health; hence, this study reviewed the oral hygiene practices and utilization of dental services of dental technology students, who are dental surgery technicians under training. Also, very few studies have been done among this group. In this study, the majority of the participants cleaned their teeth with toothbrush and paste; this finding is similar to a study carried out among university students, in which all the students cleaned their teeth with toothbrush and paste. This finding probably reflects their level of education, as the students examined were in 200 level and 300 level (finalists) and had already been exposed to sufficient dental education, making it easier for them to impart to the public what is already part of their own lifestyle. A few of the participants used both toothbrush and chewing stick to brush; this may be because some people believe that the additional use of a chewing stick with a toothbrush produces a better result in reducing the amount of food debris. Chewing stick is known to contain antimicrobial agents that are beneficial for the prevention and treatment of periodontal disease.
In addition, a study by Al-Otaibi et al. reported that the use of the chewing stick appears to be more effective than toothbrushing for removing plaque from the embrasures, hence improving interproximal health. Nearly all the participants brushed their teeth twice daily; this result is in agreement with a previous study conducted among dental auxiliaries in Nigeria, which reported that 71.9% brushed their teeth twice daily, but in contrast to other studies conducted among other groups of students, in which less than half of the population brushed their teeth twice daily. Nearly all the participants used a medium toothbrush to clean their teeth; only 10.1% of the population used the modified Bass brushing method, and 6.4% did not know which brushing method they used. This finding is not encouraging, and it may result from too little emphasis being placed on brushing technique by the participants' teachers, who might have taken it for granted that everyone, including the public from whom these students were drawn, ought to be familiar with the ideal brushing method. Various toothbrushing methods such as the Fones, horizontal scrub, Charters, and Stillman techniques have been reported as effective, while the Bass or modified Bass technique was reported to be more effective in other studies. It has been reported globally that no particular toothbrushing technique completely removes plaque; however, there seems to be a consensus that the Bass technique cleans the sulcus and subgingival areas. The majority of the participants changed their toothbrushes every three months, which follows the trend of a previous study conducted in Italy; a few changed their toothbrush after three months, while others did not know when they ought to change their toothbrush. This finding calls for more emphasis on the importance of changing toothbrushes after three months, as it has been documented that toothbrushes become frayed after 3 months of use, and old, frayed brushes are less effective. The interdental region is an important area of the oral cavity because it harbours the highest amounts of food and plaque deposits, and gingivitis and periodontitis start and occur more often there. It is therefore essential to clean between the teeth, as regular toothbrushes cannot clean this area adequately. Interdental cleaning has been recommended as a supportive aid to toothbrushing to reduce interproximal dental plaque accumulation, thereby improving oral health. In this study, almost all the participants used dental floss to clean between their teeth; this is in contrast to previous studies in which only a very small percentage of the participants used dental floss. This difference may result from the dental technology students' exposure to dentistry. The utilisation of dental services is commonly measured by an individual's annual number of dental visits. The majority of the participants had visited the dentist before; this is in contrast to a previous study among undergraduate students, in which the majority had never attended a dental clinic. In the present study, most of the participants had attended a dental clinic within the last six months and for routine dental check-ups; this is very commendable and should be encouraged among students. This finding is in contrast to previous studies among dental auxiliary students, in which the majority attended the dental clinic for scaling and polishing.
Also, in a study by Braimoh et al., the few undergraduate students who attended a dental clinic did so because of dental pain and extractions. The result from this study could reflect the participants' exposure to dentistry; it could also be that routine dental check-ups were a requirement for the students in the school. The majority of the participants had had scaling and polishing done in the past. However, a few had never had scaling and polishing, which suggests that knowledge of the importance of scaling and polishing is not adequate among dental technology students, who will become dental surgery technicians in the future and are expected to educate patients on the importance of oral hygiene and preventive dental treatment, especially scaling and polishing, which helps reduce the incidence of preventable oral diseases, particularly dental caries and periodontal disease. Hence, the dental technology students need to be educated on the importance of scaling and polishing. Dental technology students are the potential dental surgery technicians who are important in assisting patients to maintain good oral health; hence, they are expected to have optimal oral hygiene. The self-reported oral hygiene practices of the dental technology students in this study are commendable; however, further education on the importance of scaling and polishing is advocated.
Social and self-stigma during COVID-19 pandemic: Egyptians’ perspectives
35c5e7bb-e961-4110-ae02-36dc22e08236
10118092
Health Communication[mh]
The social stigma associated with diseases has been defined as a negative social judgment that results in refusal and/or rejection. This negative judgment may be directed toward those suffering from the disease or toward oneself (self-stigma). In addition to discrimination, stigmatization causes many other harmful effects, such as concealing disease and delaying access to, and consequently utilization of, health care. Over the last decades, social stigma has been associated with various mental disorders and sexually transmitted diseases. Recently, however, social stigma has spread to pandemic-causing infections, with the emergence and re-emergence of several infectious diseases in small outbreaks and even pandemics. Stigma has long been a dominant concern of people with infectious diseases owing to fears of contagion and risk of death. In the past ten years there have been outbreaks of the Ebola and Zika viruses, in addition to the influenza A (H1N1) pandemic of 2009, all with reported associated stigma. This outbreak-associated stigma was an obstacle that hindered the adoption of preventive behaviours, which led to more severe consequences, increased virus transmission, and a challenge to the prevention and control efforts for these outbreaks. Recently, the COVID-19 pandemic has affected the whole world; it is caused by the SARS-CoV-2 virus, has produced multiple genetic variants and a myriad of related symptoms, and has carried a rising death toll. Fear of COVID-19 infection is a major driver of disease-related discrimination and stigma, adding to the fears generated by the uncertainties around the disease's transmission and fatality and by the perception of guilt at being a source of infection. The double burden of quarantine previously enforced by law and of dis/misinformation on social media are also direct causes of stigma; other drivers include a lack of awareness of one's right not to be stigmatized, weak policies against discrimination, and the reluctance of the authorities to enforce them. Most research has reported stigmatization not only of individuals with current COVID-19 infection but also of recovered patients and their families, Asian minorities, and healthcare workers (HCWs). This study aims to assess social and self-stigma related to COVID-19 infection and other associated factors among a sample of Egyptians during the pandemic. Study design and sample size A cross-sectional study was conducted in July 2021 among 553 adult Egyptians. A sample size of 334 participants was calculated at a 95% confidence level and an alpha error of 5% using Epi Info 7 software, based on the prevalence of stigma among Egyptian healthcare workers (HCWs) involved in treating COVID-19 patients, as they were the closest population in culture and perceived attitudes to our target population. Data collection An online questionnaire was designed following the standard approach suggested in the behavioural insights research for COVID-19 in the WHO guidance to Member States in the European Region, the modified Berger HIV stigma scale, and a review of previously published studies on infectious disease-related stigma. The online questionnaire was distributed through various social networking packages using the snowball sampling method, starting from known individuals, who were then asked to share the survey with others. The estimated time to complete the questionnaire was about 10–15 minutes.
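The sample-size figure above follows the standard single-proportion formula that tools such as Epi Info implement. A minimal sketch is given below, with the caveat that the prevalence estimate the authors used is not stated in this excerpt, so the value of p here is purely a placeholder chosen to land near the quoted figure.

```python
import math

def sample_size_single_proportion(p: float, d: float = 0.05, z: float = 1.96) -> int:
    """Minimum n for estimating a proportion p with absolute
    precision d at confidence given by z (1.96 ~ 95% CI)."""
    return math.ceil((z ** 2) * p * (1 - p) / (d ** 2))

# p = 0.68 is an illustrative placeholder, not the study's actual
# prevalence input; it yields 335, close to the 334 reported above.
print(sample_size_single_proportion(p=0.68))
```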
The questionnaire started with an introduction that explained the study's rationale and objective, stating that participation was voluntary, ensuring the confidentiality of responses, and noting that completing the questionnaire implied the participant's consent to take part in the study. The questionnaire was designed in Arabic, the native language of Egypt, and comprised three sections: (1) sociodemographic questions, covering age, gender, education, and residence; (2) sources of knowledge regarding COVID-19, in a multiple-option format that included social networks, the Egyptian Ministry of Health and Population website, healthcare workers, television and radio, posters and brochures, and newspapers and magazines; and (3) questions about COVID-19 infection stigma, formed of three sub-scales: social stigma toward current COVID-19 patients (12 questions), social stigma toward recovered COVID-19 patients (6 questions), and negative self-image if being a COVID-19 patient (perceived self-stigma; 7 questions). Respondents answered on a three-point Likert scale (agree, not sure, disagree). Each response denoting stigma scored one point, while responses denoting no stigma scored 0. The total score was calculated by adding up the responses denoting stigma in each sub-scale (social stigma toward current COVID-19 patients, social stigma toward recovered COVID-19 patients, and negative self-image if being a COVID-19 patient; self-stigma) and then converting it into a percent-score (by dividing by the maximum score and multiplying by 100). Then, using the same method, the three sub-scales were added to calculate the total COVID-19-related stigma score. Owing to the lack of universal cut-off points for stigma scores, the current study followed the methodology of previous studies in categorizing stigma: each sub-scale and the total stigma score were categorized as mild, moderate, or severe using the 33rd and 66th percentile cut-off values of the score distribution. Validity and reliability The content validity of the questionnaire was evaluated by ten experts from different disciplines (family medicine, public health, and tropical medicine) at Suez Canal University. After analysing the experts' views, the authors revised language and questions that were unclear, confusing, or potentially unacceptable to the participants. The questionnaire was then examined for face validity in a pilot study with 15 participants to ensure the clarity, appropriateness, and simplicity of the questions and their multiple-choice format. Minor modifications of a few questions followed feedback from the pilot study. Confirmatory factor analysis (CFA) was conducted to assess construct validity; the CFA findings revealed a statistically significant association between the survey items. Cronbach's alpha values were used to assess the questionnaire's internal consistency. In the pilot trial, Cronbach's alpha was 0.753 for the full scale, indicating acceptable reliability; for the three sub-scales, it was 0.667, 0.779, and 0.642, respectively. Statistical analysis Data were coded, entered, and analysed using Microsoft Excel version 2016. Data analyses were performed using IBM-SPSS software version 22.0. Sociodemographic characteristics, sources of knowledge regarding COVID-19, and answers to stigma questions were presented using descriptive statistics, including frequencies and percentages.
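Before turning to the test statistics, the scoring and categorization scheme described above can be made concrete. The following minimal Python sketch converts item responses into sub-scale percent-scores and applies the 33rd/66th-percentile cut-offs; the data are made up, and the study itself used Excel and SPSS rather than this code.

```python
import numpy as np

# Each row: one respondent; each column: one item scored 1 (stigma) or 0.
rng = np.random.default_rng(0)
current = rng.integers(0, 2, size=(100, 12))   # social stigma, current patients
recovered = rng.integers(0, 2, size=(100, 6))  # social stigma, recovered patients
self_img = rng.integers(0, 2, size=(100, 7))   # perceived self-stigma

def percent_score(items: np.ndarray) -> np.ndarray:
    """Sum of stigma-denoting answers, rescaled to 0-100."""
    return items.sum(axis=1) / items.shape[1] * 100

total = percent_score(np.hstack([current, recovered, self_img]))

# Mild / moderate / severe via the 33rd and 66th percentiles of the scores.
p33, p66 = np.percentile(total, [33, 66])
category = np.where(total <= p33, "mild",
           np.where(total <= p66, "moderate", "severe"))
print(dict(zip(*np.unique(category, return_counts=True))))
```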
The scores for the stigma total and subscales were presented as mean ± SD. A Chi-square test was used to test the statistical significance of categorical data. Fisher's exact test was used whenever the Chi-square test assumptions were violated (i.e., when more than 20% of the expected values were less than five or when cell values equalled 0). Multivariable regression analysis was used to assess factors affecting the overall COVID-19-related stigma score. A P-value < 0.05 was considered statistically significant. Ethical considerations The study was approved by the Research Ethics Committee (REC) of the Faculty of Medicine, Suez Canal University, with the code 4203. The study participants provided electronic written informed consent after being informed about the purpose of the study and the importance of the online form before data collection.
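The Chi-square/Fisher decision rule quoted in the statistical analysis above can be written out explicitly. A minimal sketch using scipy, for illustration only (the authors worked in SPSS, and scipy's fisher_exact handles 2x2 tables only):

```python
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact

def association_test(table: np.ndarray):
    """Chi-square test, falling back to Fisher's exact test (2x2 only)
    when >20% of expected counts are < 5 or any observed cell is 0."""
    chi2, p, dof, expected = chi2_contingency(table)
    if (expected < 5).mean() > 0.20 or (table == 0).any():
        if table.shape == (2, 2):
            _, p = fisher_exact(table)
            return "Fisher's exact", p
    return "Chi-square", p

# Hypothetical 2x2 table: stigma category (rows) by information source (columns).
table = np.array([[30, 10],
                  [ 5, 15]])
print(association_test(table))
```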
Participants' general characteristics A total of 553 participants were studied. Most of the participants were within the age range of 18–65 years. Most of them (n = 399, 72.2%) were female and had a university level of education (n = 349, 63.1%). The most reported sources of knowledge regarding COVID-19 were, in order, social networks (n = 408, 73.8%), the website of the Egyptian Ministry of Health and Population (n = 290, 52.4%), and healthcare workers (n = 284, 51.4%), while the least reported source was newspapers and magazines (n = 35, 6.3%). Social stigma toward current COVID-19 patients Regarding social stigma toward current COVID-19 patients, the most reported situations were: not dealing with people coming back from abroad even when they observed precautionary measures (n = 143, 25.9%), being afraid of dealing with people who had just come back from abroad (n = 127, 23%), not supporting a close friend who had been infected with COVID-19 (n = 104, 18.8%), being afraid of a neighbour, close relative, or friend working in the medical field (n = 77, 13.9%), and agreeing that victims of COVID-19 should be buried away from the usual burial places (n = 68, 12.3%). However, there was no stigmatization directed towards poor and uneducated people. The least reported stigmatizing attitudes were that COVID-19-infected persons should feel ashamed (n = 12, 2.2%) or be blamed for their illness (n = 3, 0.5%). Social stigma toward recovered COVID-19 patients Regarding social stigma toward recovered COVID-19 patients, the most reported situations were: not being able to deal normally with COVID-19 patients released from quarantine (n = 179, 32.4%) and not allowing family members to do so (n = 245, 44.3%). All other situations were reported by less than 10% of participants. Negative self-image if being a COVID-19 patient Regarding negative self-image if being a COVID-19 patient (self-stigma), the most reported situations were: feeling that it would be catastrophic if people suspected one's infection with COVID-19 (n = 215, 38.9%), keeping the infection of a family member secret (n = 123, 22.2%), keeping one's own infection secret (n = 99, 17.9%), feeling guilty if infected and blaming the infection on careless behaviour and failure to follow social distancing (n = 91, 16.5%), and keeping it secret if a family member died of the infection (n = 69, 12.5%). All other situations were reported by less than 10% of participants. COVID-19-related stigma score The mean overall COVID-19-related stigma score was 4.7±3.1. The mean scores for the subscales were 1.35±1.6 for social stigma toward current COVID-19 patients, 1.1±1.5 for social stigma toward recovered COVID-19 patients, and 2.1±1.4 for negative self-image if being a COVID-19 patient.
The highest reported stigma category, for the sub-scales and the total, was mild stigma, as follows: social stigma toward current COVID-19 patients (n = 488, 88.2%), social stigma toward recovered COVID-19 patients (n = 355, 64.2%), negative self-image for being a COVID-19 patient (perceived self-stigma) (n = 396, 71.6%), and total stigma score (n = 488, 88.2%), respectively. The relation between general characteristics and total stigma score was not statistically significant (P-value >0.05). The relation between sources of information and total stigma score showed that participants getting their information from social networks had a higher total stigma score (P-value <0.05). In the multivariable regression models for predicting the total stigma score, participants' overall COVID-19-related stigma score was negatively associated with a higher level of education (β = -0.091, 95% CI: -0.802, -0.037, p = 0.032), positively associated with using social networks for information (β = 1.545, 95% CI: 0.951, 2.139, p = 0.001), and negatively associated with getting information from healthcare workers (β = 0.271, 95% CI: -1.093, -0.026, p = 0.04). These relations were all statistically significant (P-value <0.05).
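For readers who want to reproduce this kind of model, the sketch below fits a multivariable linear regression of a total stigma score on the predictors discussed above. The variable names, coefficients, and data are invented for the example; the original analysis was run in SPSS, and its exact model specification is not given in this excerpt.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic stand-in data; columns mirror the predictors in the text.
rng = np.random.default_rng(1)
n = 300
df = pd.DataFrame({
    "higher_education": rng.integers(0, 2, n),  # 1 = university or above
    "social_networks": rng.integers(0, 2, n),   # 1 = uses social media for info
    "hcw_source": rng.integers(0, 2, n),        # 1 = gets info from HCWs
})
df["stigma_score"] = (2.0
                      - 0.5 * df["higher_education"]
                      + 1.5 * df["social_networks"]
                      - 0.6 * df["hcw_source"]
                      + rng.normal(0, 1.5, n))

X = sm.add_constant(df[["higher_education", "social_networks", "hcw_source"]])
model = sm.OLS(df["stigma_score"], X).fit()
print(model.summary())  # coefficients (beta), 95% CIs, and p-values
```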
The results of the current study showed a mild degree of stigmatization among more than three-quarters of the studied participants toward current COVID-19 patients (88.2%), about two-thirds toward recovered COVID-19 patients (64.2%), and about three-quarters for negative self-image for being a COVID-19 patient (perceived self-stigma; 71.6%). This is consistent with the findings of Sangma et al. in India, who reported a higher proportion of significant social stigma and discrimination in terms of personalized stigma and negative self-image. Patel et al., also in India, reported similar stigmatization, although in a different population (healthcare workers), as did Lu et al. among Taiwanese frontline healthcare workers. However, Jayakody et al. in Sri Lanka reported a lower proportion of the population suffering COVID-19-related stigma, despite severe adverse consequences at the individual and family levels. The relatively low degree of stigma, affecting a large proportion of the studied population, could be explained by the fact that, by the time of data collection, the pandemic was in its second year, with active cases widely distributed all over the country. This made the population more informed, more accepting, and in contact with active cases almost everywhere. In Egypt, a recent survey conducted by the Central Agency for Public Mobilization and Statistics (CAPMAS) reflected the effect of the slowdown in economic and business activity, as more than a quarter of respondents reported losing their jobs due to the consequences of the COVID-19 pandemic. This rise in financial hardship due to increased unemployment has increased grievances and social stigma. Stigma towards COVID-19 differs from that towards HIV in one significant aspect.
COVID-19-related stigma is mainly due to fear of the disease's ease of transmission, rapid spread, and fatality, while HIV-related stigma is linked to the unacceptable breaking of religious boundaries in Egypt, where the main religions are Islam, followed by Christianity. This fact is a possible explanation for the paradoxical finding of a mild degree but a large proportion of COVID-19-related stigma in the current study. COVID-19-related stigma was negatively associated with a higher level of education in the current study. This association is consistent with the findings of the Abdelhafiz et al. study on the knowledge, perceptions, and attitudes of Egyptians towards COVID-19, in which less educated people thought that infection was associated with stigma, and with the Yuan et al. systematic review and meta-analysis. A higher level of education provides enhanced access to correct and precise information and, as a result, builds a well-established foundation of knowledge about infectious diseases, infectiousness, and preventive measures. This, in turn, facilitates the ability to discriminate between accurate information and misinformation. Conversely, those with poor knowledge, disadvantaged by a lower level of education, are misled by fictitious news and wrong information. COVID-19-related stigma was, by contrast, positively associated with using social networks for information and negatively associated with getting information from healthcare workers in the current study. This finding again agreed with the Yuan et al. systematic review and meta-analysis, and with Islam et al., who showed that a one-point increase in baseline knowledge is associated with a 0.51-point decrease in the baseline stigma index. Social media provides both an opportunity and a challenge for the public during the COVID-19 pandemic: the opportunity is that it spreads information about the disease and its prevention, but it also poses the great challenge of exposure to misinformation, especially for those not competent enough to distinguish it, becoming a direct source of fear, anxiety, and stigma. Limitations The current study findings should be viewed in light of some limitations. The study was observational in nature, conducted to explore the situation in this new area of inquiry, and cannot be used to infer causal relationships. In addition, owing to the critical COVID-19 situation and the need for social distancing, the researchers used an online data collection method. Consequently, the researchers recommend conducting further studies using face-to-face interviews and a probability sampling technique. Social and self-stigma related to COVID-19 infection were mild from the Egyptian perspective but were found in a large proportion of the population. Perceived stigma was mainly affected by the source from which the population obtained their health-related information: getting information from social media worsened, while getting it from healthcare workers improved, perceived stigma.
Finally, lower education levels make the population more likely to stigmatize COVID-19 cases. Recommendations Based on the current study's findings, we recommend stronger legislative action to control the dissemination of health-related information on social media, together with increased awareness campaigns to counteract the adverse effects of obtaining such information from social media rather than through the appropriate channels.
Bacterial and fungal community composition and community-level physiological profiles in forest soils
2697347e-8513-4fab-91fb-a26e9db5c59d
10118104
Microbiology[mh]
An estimated 80%–90% of the processes in soils may be mediated by microorganisms. Therefore, the composition and functioning of soil microbial communities have significant implications for carbon (C) and nutrient cycling, especially in forest soils rich in organic matter. The composition of plant litter (e.g., the relative proportions of cellulose and lignin) may affect the structure and functioning of microbial communities. In addition, soil properties, such as nutrient concentrations and pH, influence these aspects of the microbial communities. However, it remains important to develop an improved understanding of (i) the factors that strongly influence the composition and functioning of microbial communities, (ii) the relationships in potential functioning and community composition between the soil O and A horizons, and (iii) the degree to which microbial community diversity is translated into functional diversity in forest soils. The BIOLOG microplate technique and polymerase chain reaction–denaturing gradient gel electrophoresis (PCR-DGGE) analysis of rDNA fragments have often been used to characterize the potential functioning and community composition of microbial communities, respectively. In response to criticism that community-level physiological profiles (CLPPs) based on the BIOLOG microplate provide a biased representation of the functional ability of culturable bacteria capable of rapidly growing on the substrates in the BIOLOG plate, Lladó and Baldrian suggested that CLPP is able to evaluate the functional potential of fast-growing copiotrophic bacteria, which are active or potentially active bacteria that contribute substantially to nutrient cycling in soils. For fungal CLPPs, the limitation that the BIOLOG plate selects only culturable microorganisms is minor because the majority of fungi are culturable, except for obligate symbionts. To date, many studies have supported the utility of CLPP for studying the potential functioning of microbial communities and for comparing microbial communities in different samples. Although DGGE analysis does not provide direct phylogenetic information and underestimates total microbial diversity, it is a rapid and inexpensive method that quantitatively detects differences in the diversity and composition of microbial communities, with results comparable to those of high-throughput sequencing. Using the BIOLOG microplate technique and PCR-DGGE analysis of 16S and 18S rDNA fragments, we characterized two attributes of the bacterial and fungal communities in the O and A horizons of forest soils, namely their potential functioning and composition. In this study, the following three hypotheses were tested. We hypothesized that microbial community composition and potential functioning would differ distinctly between the O and A horizons because of differences in organic matter composition and pH (Hypothesis 1), but that these microbial characteristics would covary between the O and A horizons owing to a possible linkage in organic matter characteristics and pH between the two horizons at each site, such that differences in the O horizon between sites are accompanied by corresponding differences in the A horizon between sites for both community composition and potential functioning (Hypothesis 2). In addition, we hypothesized that the potential functioning of the microbial community is associated with community composition (Hypothesis 3).
Soils Litter and soil samples were collected from the O and A horizons, respectively, at each of 12 forest sites (35.81°N–36.34°N, 137.81°E–137.99°E) in Nagano Prefecture, Japan, in November 2009 (n = 6 for Andosols; n = 6 for Cambisols). The altitude of the sampling sites ranged from 700 to 2,045 m. Most of the sites were situated in conifer forests (the vegetation at most sites was Larix kaempferi, with Pinus densiflora and Cryptomeria japonica at some sites). Because the vegetation was the same at most sites, we did not examine the effect of vegetation on microbial community composition and potential functioning. At each site, samples were collected from five plots, pooled and mixed to form a composite sample, and sieved through a 2 mm mesh. The litter and soil samples were stored at −20 °C for DNA extraction and at 4 °C for BIOLOG and microbial biomass C measurements. A portion of each soil sample was air-dried for chemical analyses, whereas a portion of each litter sample was dried at 70 °C and then ground to powder using a blender (Osaka Chemical WB-1, Osaka, Japan). For pH measurement, the dried litter samples were not ground. The pH was measured in a soil–water suspension (1:2.5, w/v) or a litter–water suspension (1:50, w/v) with a glass electrode. Organic C and total nitrogen (N) contents were determined by dry combustion using an elemental analyzer (Thermo Finnigan Flash EA1112, Waltham, MA, USA). The ground litter sample was fractionated into water-soluble polysaccharide, hemicellulose, cellulose, lignin, and lipids at Createrra, Inc. (Tokyo, Japan) using the proximate analytical method of Waksman and Stevens with some modifications. Organically bound forms (Al_p and Fe_p) and non-crystalline plus organically bound forms (Al_o and Fe_o) of aluminum (Al) and iron (Fe) were extracted with 0.1 M sodium pyrophosphate (pH 10) and 0.2 M acid ammonium oxalate (pH 3), respectively. Aluminum and Fe were analyzed using flameless and flame atomic absorption spectrometry (Perkin Elmer 5100 PC, Tokyo, Japan), respectively. All data are expressed on a dry weight basis. All the analyses, including the microbial analyses described below, were performed in 2009 and 2010. PCR-DGGE analysis of 16S and 18S rDNA fragments We analyzed the composition of the bacterial and fungal communities in the litter and soil samples by PCR-DGGE. Total soil DNA was extracted from 0.4 g of each sample using the FastDNA SPIN Kit for Soil (MP Biomedicals, Illkirch-Graffenstaden, France) in accordance with the manufacturer's instructions. Given the difficulty of extracting DNA from the Andosol samples, heat-treated skim milk (treated at 115 °C for 5 min) was added before cell lysis to inhibit DNA adsorption to humic acid and allophane in the soil. However, DNA could not be extracted from two samples from the A horizon of the Andosols, so four samples from the A horizon of the Andosols were used for the subsequent analyses. The bacterial 16S rDNA fragment for DGGE analysis was amplified by PCR with the primer set 968f-GC and 1378r. After initial denaturation at 94 °C for 2 min, 34 amplification cycles were performed (denaturation at 94 °C for 15 s, annealing at 55 °C for 30 s, and extension at 68 °C for 30 s). The fungal 18S rDNA fragment was amplified with the primer set NS1 and GCFung. After denaturation at 94 °C for 2 min, 30 amplification cycles were performed (denaturation at 94 °C for 15 s, annealing at 50 °C for 30 s, and extension at 68 °C for 30 s).
The PCR products were purified using the QIAquick PCR Purification Kit (Qiagen, Valencia, CA, USA). The DGGE analysis was performed with a DCode Universal Mutation Detection System (BioRad Laboratories, Hercules, CA, USA). For bacterial community analysis, a 6% polyacrylamide gel with a linear denaturing gradient of 50%–70% was used to separate the 16S rDNA PCR products (100% denaturant is defined as 7 M urea and 40% [v/v] formamide). Each lane was loaded with 200 ng of purified PCR products. The PCR products were separated by electrophoresis at 58 °C and 50 V for 18 h. For fungal community analysis, a 7% polyacrylamide gel with a linear denaturing gradient ranging from 20% to 45% was used, and the running condition was 60 °C and 50 V for 20 h. In the bacterial and fungal DGGE analyses, DGGE Markers III and IV (Nippon Gene, Toyama, Japan) were used as the molecular markers, respectively. After electrophoresis, the gels were stained with SYBR Green I Nucleic Acid Gel Stain (Cambrex Bio Science, Rockland, ME, USA) and scanned with a ChemiDoc XRS system (BioRad Laboratories). Gel images were analyzed using Fingerprinting II software (BioRad Laboratories). Community diversity was evaluated using the number of DGGE bands (species richness) and the Shannon–Wiener diversity index (H′). H′ was calculated using the following equation: H′ = −Σ p_i (ln p_i), where p_i is the proportion of the intensity of each band relative to the sum of intensities per profile. The proportion of the intensity of each band was also used in the principal coordinate analysis (PCoA). BIOLOG and microbial biomass C measurements CLPPs of bacteria and fungi, based on the BIOLOG ECO plate and SFN2 plate (BIOLOG, Hayward, USA), respectively, were determined to assess potential functioning and functional diversity. The CLPP was conducted within one week after sampling. For the fungal CLPP, the inoculation solution was supplemented with antibiotics (10 mg L−1 streptomycin sulfate and 5 mg L−1 chlortetracycline) to inhibit bacterial growth. The ECO plates were incubated at 28 °C for 72 h, and absorbances at 595 nm and 750 nm were measured with a microplate reader (Model 680XR, BioRad). After correcting for the absorbances at 595 nm and 750 nm in each well at 0 h and in the water well at 72 h, the value for each well used for subsequent analysis was the 595 nm absorbance (color development plus turbidity) minus the 750 nm absorbance (turbidity) at 72 h. The SFN2 plates were incubated for 168 h; the absorbance was measured at 750 nm and corrected for the readings in each well at 0 h and in the water well at 168 h. Well optical density values of less than 0.1 were set to zero. The overall color development in each plate was expressed as the average well color development (AWCD). Potential functional diversity was calculated using the Shannon–Wiener index: H′ = −Σ p_i (ln p_i), where p_i is the proportion of the absorbance value of the i-th substrate relative to the sum of absorbance values of all substrates in a plate. The proportion of the absorbance value of each substrate was also used in the PCoA. Microbial biomass C in the soils was measured using the chloroform fumigation–extraction method as described previously. Soil was fumigated with ethanol-free chloroform for 24 h at 25 °C and then extracted with 0.5 M K2SO4 for 30 min. The organic C content in the extracts was measured with an organic C analyzer (Shimadzu TOC-V, Kyoto, Japan).
Soil microbial biomass C (C_mic) was calculated using a conversion factor (k_EC = 0.49) as follows: C_mic (μg g−1) = E_C / k_EC, where E_C = (amount of C (μg g−1) extracted by 0.5 M K2SO4 from fumigated soil) − (amount of C (μg g−1) extracted by 0.5 M K2SO4 from non-fumigated soil). Statistical analyses Welch's t-test was used to detect significant differences between the means of two samples. Scheffé's test, together with one-way analysis of variance (ANOVA), was used to evaluate significant differences in multiple-group comparisons. Pearson correlation analysis was performed to measure the strength of associations between variables. These analyses were conducted using BellCurve for Excel (Social Survey Research Information, Tokyo, Japan). The PCoA was performed on the data from the DGGE analysis and the CLPP using Bray–Curtis dissimilarity matrices. Procrustes analysis, a method of comparing two sets of configurations, was performed with 999 permutations to assess the extent to which the CLPP and DGGE data yielded similar PCoA ordinations among samples. The PCoA and Procrustes analyses were conducted with the vegan package in R (version 4.1.2).
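To make the index and biomass calculations defined in the methods above concrete, here is a minimal Python sketch with illustrative values (the authors used Fingerprinting II, plate-reader software, and BellCurve for Excel, not this code):

```python
import numpy as np

def shannon_index(values):
    """H' = -sum(p_i ln p_i), with p_i the proportion of each DGGE band
    intensity (or BIOLOG substrate absorbance) within the profile."""
    v = np.asarray(values, dtype=float)
    v = v[v > 0]
    p = v / v.sum()
    return float(-(p * np.log(p)).sum())

# DGGE profile: band intensities in one lane (made-up numbers).
bands = [12.0, 5.5, 8.2, 0.9, 3.3]
print(f"richness = {len(bands)}, H' = {shannon_index(bands):.2f}")

# BIOLOG ECO plate: corrected absorbances for the 31 substrates;
# wells below 0.1 are set to zero, as described above.
od = np.clip(np.random.default_rng(0).normal(0.4, 0.3, 31), 0, None)
od[od < 0.1] = 0.0
print(f"AWCD = {od.mean():.2f}, H' = {shannon_index(od):.2f}")

# Fumigation-extraction biomass C: C_mic = E_C / k_EC with k_EC = 0.49.
c_fumigated, c_unfumigated = 250.0, 120.0   # ug C per g soil (made up)
c_mic = (c_fumigated - c_unfumigated) / 0.49
print(f"C_mic = {c_mic:.0f} ug g-1")
```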
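The ordination workflow just described can also be sketched in code. The authors used the vegan package in R; the Python version below is an illustrative stand-in that builds a Bray–Curtis dissimilarity matrix, runs classical PCoA on it, and compares two ordinations with a Procrustes superimposition (vegan's protest() additionally supplies the 999-permutation significance test).

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.spatial import procrustes

def pcoa(dissim: np.ndarray, k: int = 2) -> np.ndarray:
    """Classical PCoA (metric MDS) on a square dissimilarity matrix."""
    n = dissim.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    B = -0.5 * J @ (dissim ** 2) @ J             # Gower-centered matrix
    w, v = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:k]                # top-k eigenvalues
    return v[:, idx] * np.sqrt(np.clip(w[idx], 0, None))

rng = np.random.default_rng(3)
dgge = rng.random((12, 30))    # e.g., relative band intensities per site
clpp = rng.random((12, 31))    # e.g., relative substrate use per site

coords_dgge = pcoa(squareform(pdist(dgge, metric="braycurtis")))
coords_clpp = pcoa(squareform(pdist(clpp, metric="braycurtis")))

# Procrustes disparity near 0 indicates congruent ordinations.
_, _, disparity = procrustes(coords_dgge, coords_clpp)
print(f"Procrustes disparity (M^2): {disparity:.3f}")
```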
DGGE analysis of bacterial and fungal communities Bacterial richness (i.e., the number of bands in a 16S rDNA DGGE fingerprint) in the A horizon was significantly higher in the Andosols than in the Cambisols, but no significant difference was observed in the O horizon between the Andosols and Cambisols. Both soil types exhibited significantly higher bacterial richness in the A horizon than in the O horizon (p < 0.05). Bacterial H′ was also significantly greater in the A horizon than in the O horizon (p < 0.05), whereas no significant difference was observed between the soil types in either the O or the A horizon. Only a significant negative influence of hemicellulose content on bacterial H′ in the O horizon (p < 0.05) and a significant positive influence of Fe_p content on bacterial H′ in the A horizon (p < 0.05) were observed. Contrary to the observations for the bacterial community diversity, no significant differences in fungal community richness and H′ between soil types or between soil horizons were observed. Altitude had a significant positive influence on the fungal community richness and H′ in the O horizon (p < 0.05). Fungal H′ was significantly positively correlated with lignin content (p < 0.05) and negatively correlated with water-soluble polysaccharide content (p < 0.05) in the O horizon.
No significant influence of soil properties on the fungal community diversity was observed in the A horizon. In the PCoA ordination, the DGGE profiles were clearly divided between the O and A horizons for both the bacterial and fungal communities. In both horizons, no explicit distinction between the soil types was observed for either microbial community. Altitude and pH influenced the bacterial DGGE profiles in both the O and A horizons, whereas a significant influence of the C/N ratio on the fungal DGGE profiles was discerned in both horizons. BIOLOG analysis of bacterial and fungal communities The AWCD and metabolic diversity, H′, of the bacterial community based on the BIOLOG ECO plate analysis exhibited no significant differences between the soil types or between the soil horizons. Organic C (p < 0.05) and water-soluble polysaccharide contents (p < 0.05) negatively affected the bacterial AWCD, whereas lignin content (p < 0.05) was positively correlated with the bacterial AWCD in the O horizon. Water-soluble polysaccharide content negatively affected the potential functional diversity, H′, of the bacterial community in the O horizon (p < 0.05). No significant influence of litter properties on the AWCD and potential functional diversity of the fungal community was observed in the O horizon. In the A horizon, no soil properties were significantly correlated with the AWCD and metabolic diversity of the bacterial and fungal communities, other than a significant negative influence of soil pH on the fungal potential functional diversity (p < 0.05). It is noteworthy that microbial biomass C was not significantly correlated with the AWCD and metabolic diversity of the bacterial and fungal communities. In the PCoA ordination, the bacterial CLPP tended to be separated between the O and A horizons, but such a trend was not observed for the fungal CLPP. In each of the O and A horizons, no distinct difference was detected between the soil types for either microbial CLPP. In the O horizon, the same indicators of litter nutrient status (i.e., C/N ratio, lignin, lipids, total N, and water-soluble polysaccharide contents) significantly affected the CLPP for both the bacterial and fungal communities. In the A horizon, none of the soil properties examined influenced the CLPP for the bacterial or fungal communities, other than pH, which affected the fungal CLPP.
Bacterial CLPP, and bacterial and fungal community composition, showed distinct differences between the O and A horizons in the PCoA ( ). In addition, the bacterial species richness and H′ based on the DGGE profile were significantly greater in the A horizon than in the O horizon ( p <0.05; ). These findings supported Hypothesis 1 that microbial community composition and potential functioning differ between the O and A horizons. This might be attributed to the fact that the composition of organic matter, the substrate of microorganisms, largely differs between plant litter in the O horizon (mainly cellulose, lignin, and hemicellulose) and soil organic matter in the A horizon (primarily humic substances) .
According to Kirchman, high proportions of cellulose and hemicellulose would select for certain bacteria in the O horizon , which would lower the bacterial species richness and H′ in the O horizon and differentiate the microbial community composition and potential functioning between the two horizons. However, with regard to the fungal CLPP, no distinct difference was observed between the O and A horizons in the PCoA ( ). We do not have a reasonable explanation for this fungal CLPP result. With respect to the bacterial DGGE profiles in the O and A horizons (Figs and ), pH, nutrient status (e.g., water-soluble polysaccharide, lignin, organic C, and total N contents, and C/N ratio), and altitude had significant influences. Significant effects of nutrient status and, to a greater extent, pH on bacterial community composition have been reported by several studies [ – , – ]. For the fungal community, the C/N ratio was a common driver that shaped community composition in both horizons (Figs and ). Lauber et al. and Bao et al. also observed a significant effect of the C/N ratio on soil fungal community composition. For the bacterial and fungal CLPPs, several nutrient status indicators had significant influences in the O horizon ( ), but such effects were not observed in the A horizon ( ). Similarly, Klimek et al. reported that the bacterial CLPP was influenced by nutrient status in the O horizon but not in the A horizon in temperate forest soils . In contrast, significant effects of nutrient status on the bacterial CLPP were observed in a pine forest soil and in Mediterranean forest soils . Hence, the contribution of nutrient status to the CLPP is likely to vary between sites, and the factor(s) influencing the CLPP might be site-specific. No strong influence of soil type on the DGGE profiles and the CLPP was detected in either the O or A horizon (Figs and ). This result differed from previous observations for arable soils, where soil type was the primary determinant of bacterial community composition . This inconsistency might be ascribed to the greater variation in litter and soil properties between sites in forests than in arable fields. It should be noted that although mycorrhizal fungi play important roles in forests, the approach used in this study was not able to resolve their composition and potential functioning. We used Procrustes analysis to explore the link between the O and A horizons for the DGGE profiles and for the CLPPs, but observed no significant correlations between the horizons ( ). This observation did not substantiate Hypothesis 2 that differences in the O horizon between sites are accompanied by corresponding differences in the A horizon between sites for both community composition and potential functioning. The present results suggest that different factors had considerable influences on microbial communities in the O and A horizons. For example, for the fungal DGGE profiles, significant influences of total N, organic C, and pH were apparent only in the O horizon (Figs and ). Interestingly, Procrustes analysis revealed significant couplings between the bacterial and fungal communities for both community composition and potential functioning, i.e., bacterial DGGE profile−fungal DGGE profile ( p <0.05 for the O horizon; p <0.01 for the A horizon) and bacterial CLPP−fungal CLPP ( p = 0.001 for the O horizon; p <0.01 for the A horizon) in each of the O and A horizons ( ).
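The coupling tests reported in this paragraph follow the Bray–Curtis PCoA and Procrustes/permutation workflow outlined in the statistical methods; a minimal sketch with the vegan package is given below, in which `profile_x` and `profile_y` are hypothetical sample-by-variable matrices (e.g., DGGE band intensities or CLPP substrate proportions), not the study's actual data objects.

```r
# Minimal sketch (vegan in R) of the PCoA and Procrustes analyses used here.
# profile_x and profile_y are hypothetical sample-by-variable matrices.
library(vegan)

d_x <- vegdist(profile_x, method = "bray")   # Bray-Curtis dissimilarities
d_y <- vegdist(profile_y, method = "bray")

pcoa_x <- cmdscale(d_x, k = 2)               # principal coordinate ordination
pcoa_y <- cmdscale(d_y, k = 2)

# Procrustes rotation and its permutation test (999 permutations);
# a small significance value indicates congruent ordinations
fit  <- procrustes(pcoa_x, pcoa_y)
test <- protest(pcoa_x, pcoa_y, permutations = 999)
test$signif                                  # permutation p-value
```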
In addition, the potential functional diversity (i.e., H′ based on the BIOLOG results) of the bacterial and fungal communities was significantly correlated in each horizon ( p <0.001 for the O horizon and p <0.01 for the A horizon; Tables and ). Coelho et al. reported a significant congruence between bacterial and microeukaryotic PCoA ordinations based on pyrosequencing data for 16S/18S rDNA in several sediment samples . Singh et al. reported a significant correlation between bacterial and fungal terminal restriction fragment length polymorphisms in a Procrustes analysis of grassland soils . A possible explanation for the present result is that common factors had significant influences on the profiles of both bacterial and fungal communities: pH in the bacterial and fungal DGGE profiles ( ) and total N, C/N ratio, lignin, and lipids in the bacterial and fungal CLPPs ( ) in the O horizon, and C/N ratio in the bacterial and fungal DGGE profiles ( ) in the A horizon (no common factors were detected for the bacterial and fungal CLPPs in the A horizon). It should be noted that there are likely unconsidered factors that explain microbial community composition and potential functioning, especially in the A horizon, because, contrary to the results for the O horizon ( ), most of the examined factors had no significant influence on the DGGE and CLPP except for the bacterial DGGE profile in the A horizon ( ). Although a significant coupling was observed between the bacterial community composition and potential functioning (i.e., bacterial DGGE profile−bacterial CLPP) in the A horizon ( p <0.01; ), such a result was not obtained for the fungal community (i.e., fungal DGGE profile−fungal CLPP) in the A horizon, or for either the bacterial or fungal community in the O horizon ( ). In addition, the genetic diversity (i.e., species richness and H′ based on the DGGE profile) was not significantly correlated with the potential functional diversity (H′ based on the BIOLOG results) for the bacterial and fungal communities in the O and A horizons (Tables and ). These results did not support Hypothesis 3 that the potential functioning of the microbial community is associated with the community composition. The present observations are probably affected, to a substantial degree, by the fact that the BIOLOG results reflect only rapidly growing microorganisms, whereas the cultivation-independent DGGE analysis reflects the entire microbial community in the soil. However, in a previous cluster analysis, bacterial DGGE and bacterial CLPP results were similar for tea plantation and forest soils . Furthermore, strong couplings have been reported for bacterial CLPP and phospholipid fatty acid profiles in a montane area , and for bacterial CLPP and bacterial community composition in pitcher plant microcosms . Further research is warranted to understand the correspondence between microbial community composition and potential functioning.

Both the CLPP and the DGGE profile were clearly separated between the O and A horizons in PCoA ordinations, except for the fungal CLPP. No significant couplings of the CLPP between the O and A horizons, or of the DGGE profile between the two horizons, were detected by Procrustes analysis for either the bacterial or the fungal community. Also, no significant coupling was observed between community composition and potential functioning, except for the bacterial community in the A horizon.
In addition, the community diversity was not associated with the potential functional diversity for the bacterial and fungal communities. Unexpectedly, significant links between the bacterial and fungal DGGE profiles, and between the bacterial and fungal CLPPs, were observed in each of the O and A horizons. The present results do not fully unravel the factors that shape the composition and potential functioning of microbial communities in forest soils, especially in the A horizon, and further studies are warranted. Nevertheless, the present study clearly showed that different factors have substantial influences on microbial communities in the O and A horizons, but that common factors affect both the bacterial and fungal communities in each horizon. These results have important implications for C and nutrient cycling in forest soils rich in organic matter.
Pharmacogenomic profile of a central European urban random population-Czech population
b046b51e-f62f-45af-8152-1cf397e16b5f
10118108
Pharmacology[mh]
Evidence shows a genetic component of inter-individual variability in drug response, which has been linked to ethnicity . However, self-described ethnicity does not predict an individual patient's genotype or response to medication, making drug prescription based on ethnicity an oversimplification of the multifaceted interplay between ancestry and drug response . Pharmacogenomics (PGx) aims to explain individual differences in drug response, both in terms of efficacy and toxicity, by defining genetic profiles. Increasing evidence suggests that variants in genes encoding drug metabolizing enzymes/transporters directly influence their function, which in turn results in adverse drug reactions (ADRs) and/or altered efficacy . Consequently, most medications are beneficial to a subset of the treated patients, while the remaining ones will either not respond to the medications or develop ADRs, which are significant causes of mortality and morbidity . The Kardiovize study is a prospective epidemiological survey of 1% (n = 2160) of the urban population of Brno, the second city of the Czech Republic, a Central European country . The Kardiovize study aimed to determine the presence and burden of cardiovascular risk factors in adults aged 25–64 years and to conduct genetic risk analyses. We took advantage of DNA extracted from blood samples of a subset of the Kardiovize participants, a subset representative of the drug consumption of the overall cohort. We genotyped 59 SNPs within a selected but comprehensive array of genes associated with different drug metabolizing rates. Although some previous studies provided reference frequencies of the alleles tested in this study in the Czech Republic or neighboring European countries , most of them focused on specific diseased patient cohorts and not on the general population. In this study, our goal was to contribute to the definition of a PGx profile of a general Czech population, in particular for the most commonly used medications. To this purpose, by cross-referencing validated medicine use from the Kardiovize population surveys, we report here, in a general Czech population sample, PGx outcomes for widely used drugs (warfarin, atorvastatin and metoprolol). Moreover, by comparing genotyping results to a Finnish reference population (SUPER-Finland study) , we investigate significant differences in PGx metabolic activities between these two European ethnicities.

Study design

The Kardiovize study has been previously described . Recruitment and baseline examinations were completed in 2014 with planned follow-up at 5-year intervals through 2030. The baseline study protocol was approved by the ethics committee of St Anne's University Hospital, Brno, Czech Republic (reference 2 G/2012), following the Declaration of Helsinki. Written consent was obtained from all participants. Data were stored using the web-based research electronic data capture (REDCap) system . For the current analysis we used data from participants with complete anthropometric measurements, sociodemographic and life-style information, and genotype data (subjects with missing genotypes for > 4 SNPs were excluded).

DNA extraction and sequencing

DNA was extracted from participants' blood samples using the DNeasy Blood & Tissue Kit from QIAGEN (Germany), according to the manufacturer's instructions.
We selected 59 SNPs in 13 loci identified by genome-wide association studies as being associated with different drug metabolizing rates ( BCHE , CYP1A2 , CYP2C9 , CYP2C19 , CYP2D6 , CYP3A5 , F2 , F5 , IFNL3 , SLCO1B1 , TPMT , UGT1A1 , VKORC1 ). All DNA samples were quantified with a Nanodrop Lite before any further processing. For SNP genotyping with the OpenArray plate, a total of 60 ng of sample DNA was loaded onto each OpenArray sub-array (Thermo Fisher Scientific, Vienna, Austria). 2x TaqMan™ OpenArray™ Real-Time PCR Master Mix was used according to the manufacturer's protocol (Thermo Fisher Scientific, Vienna, Austria). OpenArray plate preparation was done using the AccuFill system according to the manufacturer's protocol (Thermo Fisher Scientific, Vienna, Austria). SNP calling was done using TaqMan Genotyper Software (Thermo Fisher Scientific, Vienna, Austria). The auto-calling function was used, and each assay was manually checked. For CNV determination, two detection assays were used: CYP2D6 exon 9 in combination with RNase P, and CYP2D6 intron 2 in combination with RNase P (Thermo Fisher Scientific, Vienna, Austria). All reactions were carried out according to the manufacturer's protocol on 384-well plates, each reaction in quadruplicate. In each reaction, 10 ng of DNA was used. The real-time PCR run was done in a QuantStudio 12K Flex Real-Time PCR System (Thermo Fisher Scientific, Vienna, Austria). Data analysis was done using CopyCaller® Software (Thermo Fisher Scientific, Vienna, Austria). For each run, a reference sample with known CNV count was used to determine sample calls.

GeneRx database

The GeneRx database ( https://www.generx.fi/ ) has been developed by Abomics ( www.abomics.fi ) and includes information about genotypes that are associated with clinically relevant variation for >200 drugs, considering either responsiveness to the drugs or drug-induced adverse effects. The database is a collection of recommendations for the most clinically relevant and actionable pharmacogenetic drug-gene pairs. The contents of the database reflect published expert-opinion pharmacogenetic recommendations and the contents of commercial pharmacogenetic test panels. The database is regularly updated based on newly published literature, which is reviewed for each gene-drug pair, and recommendations are changed when needed. Recommendations are mostly based on published expert opinions and pharmacogenetic recommendation articles, e.g., by the Clinical Pharmacogenetics Implementation Consortium (CPIC). The FDA's list of pharmacogenetic biomarkers in drug labels is also followed for the update process. In the database, the prediction of phenotypes from genotypes considers four metabolizer types for drug metabolizing enzymes: 1) Poor Metabolizer (PM): medication is broken down very slowly; patients may experience side effects at standard doses. 2) Intermediate Metabolizer (IM): slow rate of metabolism; patients may be exposed to excessive drug plasma concentrations at standard doses, potentially causing side effects. 3) Normal Metabolizer (NM): normal rate of metabolism; normal efficacy is expected at standard doses. 4) Ultrarapid Metabolizer (UM): medication is rapidly broken down, potentially leading to lack of efficacy. These descriptions of changes in metabolism are reversed in the case of prodrugs, for which the risk of over-exposure and adverse effects is more pronounced in UMs, while lack of efficacy is expected in IMs and PMs. For transporter protein genes (such as SLCO1B1 ), the predicted phenotype classes are increased, normal, decreased, and poor function.
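To make this genotype-to-phenotype translation concrete, a simplified R sketch is shown below; the lookup uses published CPIC assignments for a few common CYP2C19 diplotypes purely as an illustration and is not the GeneRx/Abomics implementation.

```r
# Illustrative diplotype-to-phenotype lookup for CYP2C19, following
# published CPIC assignments; a simplified stand-in, not the GeneRx engine.
cyp2c19_phenotype <- c(
  "*1/*1"   = "Normal Metabolizer",
  "*1/*17"  = "Rapid Metabolizer",
  "*17/*17" = "Ultrarapid Metabolizer",
  "*1/*2"   = "Intermediate Metabolizer",
  "*2/*17"  = "Intermediate Metabolizer",
  "*2/*2"   = "Poor Metabolizer"
)

diplotypes <- c("*1/*2", "*1/*1", "*17/*17")  # hypothetical study calls
unname(cyp2c19_phenotype[diplotypes])         # predicted phenotypes
```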
For the blood coagulation factors F2 and F5, the phenotype stratification considers risk classification for venous thromboembolism (increased or significantly increased risk of venous thromboembolism). IFNL3 phenotypes are associated with the response to antiviral hepatitis C treatment (favorable or unfavorable response genotypes). For VKORC1 , the phenotypes represent the expression levels of the enzyme (normal or reduced expression), which links it to warfarin sensitivity. The raw genetic data were interpreted to diplotypes and assigned to predicted phenotypes by Abomics' in-house interpretation software. The allelic phenotypes are represented in the . Matching of diplotypes to phenotypes was done according to CPIC guidelines for CYP2C9, CYP2C19, CYP2D6, CYP3A5, IFNL3, SLCO1B1, TPMT and UGT1A1. For VKORC1, heterozygous carriers of rs9923231 (-1639G>A) were given the phenotype "reduced expression" and homozygous carriers the phenotype "remarkably reduced expression" of the enzyme.

Statistical analyses

Descriptive statistics were used to summarize the dataset and the distribution of genetic variants. Continuous variables were expressed as mean and standard deviation (SD) and tested for differences with independent-sample t tests. Categorical variables were expressed as absolute frequencies and percentages and tested for differences with chi-squared tests. The distributions of specific SNPs between the Czech and the Finnish populations were compared using the chi-squared test. Possible deviations from Hardy-Weinberg equilibrium for all detected haplotypes in the Czech population were tested with the HardyWeinberg package for R . All other statistical analyses were performed with SPSS software (version 26.0, SPSS, Chicago, IL, USA), and p-values < 0.05 were considered statistically significant.
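As a minimal illustration of the Hardy-Weinberg check described above, the following R sketch uses the HardyWeinberg package; the genotype counts are hypothetical and serve only to show the call.

```r
# Minimal sketch of the Hardy-Weinberg test with the HardyWeinberg R package.
# Genotype counts for a single SNP are hypothetical, not study data.
library(HardyWeinberg)

x <- c(AA = 132, AB = 96, BB = 22)  # observed genotype counts
HWChisq(x)                          # chi-squared test for deviation from HWE
```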
Characteristics of the study populations

The current analysis was conducted on a subset of a total of 2160 participants with complete health interviews, anthropometric assessment and genotyping, who satisfied the inclusion/exclusion criteria . The prevalence of hypertension and dyslipidaemia in this overall cohort was 38.7% and 67.1%, respectively . Accordingly, describes succinctly the medications (by broad classes) taken by the overall baseline population (n = 2160; men = 977, women = 1183). A total of 250 randomly selected participants, aged 25 to 64 years (mean = 47.9 years; SD = 11.3), were included for PGx profiling. No statistically significant indications of deviation from Hardy-Weinberg equilibrium were detected, suggesting a genetically balanced, random sample ( Table) . The prevalence of hypertension, hyperlipidaemia, and diabetes mellitus in this sub-cohort was 38.9%, 71.5%, and 4.4%, respectively, similar to the overall cohort. The prevalences of the combinations of diabetes mellitus with hypertension, diabetes mellitus with hyperlipidaemia, and hypertension with hyperlipidaemia were 3.6%, 3.2%, and 36.9%, respectively.

Single nucleotide polymorphisms (SNPs) in drug metabolizing genes

The list of the 59 SNPs that we detected in 13 drug-metabolizing genes, previously identified by numerous genome-wide association studies as being associated with different drug metabolizing rates , is summarized in . Detected haplotype frequencies, as well as proportions of homozygotes and heterozygotes, are presented in . The 13 genes include BCHE (pseudocholinesterase); CYP1A2, CYP2C9, CYP2C19, CYP2D6, CYP3A5 (all members of the cytochrome P450 mixed-function oxidase system family); F2, F5 (coagulation factors); IFNL3 (interferon lambda 3); SLCO1B1 (a solute carrier organic anion transporter family member); TPMT (thiopurine methyltransferase); UGT1A1 (a uridine diphosphate glucuronosyltransferase); and VKORC1 (a subunit of the vitamin K epoxide reductase complex).

Use of selected widely used drugs among different PGx profiles in the Czech population

Subsequently, we focused our attention on the medications most commonly used by our participants. In our study cohort, 8.5% of individuals used the anticoagulant warfarin. The warfarin dosage required to reach therapeutic drug levels is heavily affected by two enzymes, CYP2C9 and VKORC1 (Tables and ). The CYP2C9 enzyme metabolizes warfarin, while VKORC1 (an enzyme involved in vitamin K recycling) is the target of warfarin.
A common variant in the VKORC1 gene (tested in the gene panel) is associated with increased sensitivity to the drug. Only 42.9% of individuals using warfarin exhibited normal expression of VKORC1 , with 38.1% exhibiting reduced expression and 19% exhibiting markedly reduced expression. Among the warfarin users, 61.9% were normal metabolizers and 38.1% were intermediate metabolizers for CYP2C9 . Genetic analyses of SLCO1B1, a molecular transporter involved in the hepatic uptake of statins, showed that 70% of atorvastatin users (16% of the total study population) had normal transporter function, while the remaining 30% of users exhibited decreased function . Finally, the use of metoprolol by 22% of the study population was associated with CYP2D6-dependent normal metabolism in 92.8% of individuals, with only 5.3% and 1.9% of individuals displaying intermediate and poor metabolism, respectively .

Comparison between Czech and Finnish PGx profiles

As a comparative, ethnically different study population, we considered a Finnish reference population consisting of 9262 non-related individuals participating in the SUPER-Finland study, previously described . We used the same list of the 59 SNPs that we detected in 13 drug-metabolizing genes, which is summarized in . We detected significant pharmacogenomic differences in the SNPs, specifically for CYP2D6 , CYP2C19 and UGT1A1 , between the Czech and the Finnish populations . In particular, for CYP2D6 the Czech population contained fewer ultrarapid metabolizers (UM) (p<0.001); for CYP2C19 the Czech population contained more normal metabolizers (NM) (~45% versus 40%) and more intermediate metabolizers (IM) (~29% versus ~26%) (p<0.001); for UGT1A1 the Czech population contained ~37% NM, while the Finns included 32% (p<0.001) . CYP2C19 is a key marker for rationalizing clopidogrel treatment, CYP2D6 is responsible for metabolizing anti-depressants and anti-psychotics, while UGT1A1 metabolizes many endogenous substances and clinical drugs, such as steroids, bilirubin and irinotecan.
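The between-population comparisons reported above reduce to chi-squared tests on phenotype-frequency tables; a minimal R sketch follows, in which the counts are hypothetical, chosen only to mirror the approximate CYP2C19 percentages, and are not the actual Kardiovize or SUPER-Finland data.

```r
# Sketch of the Czech-vs-Finnish phenotype comparison via chi-squared test.
# Counts are hypothetical, chosen to mirror the approximate percentages only.
cyp2c19 <- rbind(
  czech   = c(UM = 10,  RM = 50,   NM = 112,  IM = 72,   PM = 6),
  finnish = c(UM = 420, RM = 2180, NM = 3700, IM = 2400, PM = 562)
)
chisq.test(cyp2c19)               # do the phenotype distributions differ?
round(prop.table(cyp2c19, 1), 3)  # row-wise phenotype frequencies
```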
In a Czech random population sample, around 20–30% of patients showed a PGx profile predicting impaired metabolism of warfarin and atorvastatin, exposing them to risk of ADRs. While these data may reflect the inter-individual variability in genetic responses to warfarin, atorvastatin and metoprolol that has been previously reported in several European countries , at the PGx level they reveal that warfarin and atorvastatin have been prescribed routinely to an alarmingly high number of Czech individuals in the Kardiovize cohort who carry actionable pharmacogenetic variants.
Recently, PGx has acquired importance in defining individual differences in drug efficacy and toxicity by taking into account the underlying genetic profile . Closely connected to PGx is the discipline of personalized medicine, which uses an individual's genetic profile to guide the most suitable therapeutic choice by predicting whether that patient will benefit from a medication or, conversely, suffer important side effects . In several European populations, particularly in countries with lower income, information about the prevalence of pharmacogenomic biomarkers is incomplete or lacking. Studies have shown that ~91% of ADRs are type A, which are directly related to drug metabolism and hence likely identifiable by genetic testing . The clinical relevance of our findings showing a high prevalence of impaired metabolism in association with warfarin treatment is consistent with prior work highlighting an increased risk of bleeding in patients carrying variants of CYP2C9 . Accordingly, the GIFT study reported that genotype-guided warfarin dosing within the therapeutic international normalized ratio range improves outcomes such as bleeding and death . In addition to CYP2C9, other similar enzymes, such as CYP2C8, serve as minor pathways to metabolize warfarin. Interestingly, it has been shown that genetic variation in the gene encoding the CYP2C8 drug metabolizing enzyme can lead to clinical differences in drug metabolism and, ultimately, variations in drug effectiveness and toxicity within populations in the same country. This is observed in different populations living in Jordan , which include Chechens (~1%) and Circassians, genetically isolated groups . Chechens display several pharmacogenomic variants (i.e., ABCB1 , VDR ) resembling those present in Europeans and Finns . Our Kardiovize study was conducted in the South Moravia region (Brno) of the Czech Republic. In a similar fashion, future studies should assess differences in these important pharmacogene polymorphisms in a larger Czech population representative of the Czech (~65%), Moravian (~5%) and Slovak (~1%) ethnicities. PGx variants (SNPs) may impact the structure and activity of the protein/enzyme, and they are predicted to produce normally functioning, or less functional, enzymes upon transcription/translation, as has been described, among others, for CYP2D6 , CYP2C19 and UGT1A1 . An additional finding of our study is the interethnic differences in the latter common pharmacogenetic variants ( CYP2D6 , CYP2C19 and UGT1A1 ) between the Czech and Finnish population studies, suggesting the utility of PGx-informed prescribing based on variant genotyping. For CYP2C19 , our data are consistent with the differences observed in the prevalence of high-risk genotypes in a large study assessing the allelic spectrum of pharmacogenomic biomarkers in 18 European populations by analyzing 1,931 pharmacogenomic biomarkers in 231 genes . The latter was a pan-European PGx biomarker spectrum including Croatian, Czech, Dutch, German, Greek, Hungarian, Maltese, Polish, Serbian, Slovenian, Turkish, Cypriot, Italian, Lithuanian, Russian, Slovakian, Spanish and Ukrainian populations, but not Finnish . Other reports have also demonstrated that the Finnish pharmacogenome is rather distinct from that of non-Finnish Europeans . Our data show that Czech individuals are more often NM and less often RM for CYP2D6 (which metabolizes psychoactive medications such as SSRIs and tricyclic antidepressants) compared with Finnish individuals.
In this respect, according to the SUPER-Finland study, based on the imputation of 9262 individuals, there is a higher frequency of CYP2D6 UMs in Finland compared with non-Finnish Europeans . The Finns have been shown to have a high frequency of CYP2D6 UM phenotypes compared with the ancestral European population . A recent pan-European survey comprising 258,888 individuals demonstrated that the prevalence of depression in Finns is double that in Czechs . CYP2D6 genotype has a substantial clinical effect on antipsychotic exposure and on therapeutic failure . Pre-emptive CYP2D6 genotyping would therefore be valuable for personalizing antidepressant and antipsychotic dosing and treatment. UGT1A1 metabolizes chemotherapeutic drugs such as irinotecan, used against colorectal cancer. Our data show a lower proportion of NMs in Finns compared to Czechs; the SUPER-Finland study showed a 22-fold enrichment of the UGT1A1 decreased-function variant rs4148323 (UGT1A1*6) in Finland compared with non-Finnish Europeans from the GnomAD v2.1.1 database . Interestingly, the decrease in colorectal cancer mortality in Finland, as monitored by a retrospective analysis of the WHO mortality database, is one of the lowest in Europe despite constant improvements in screening programs and detection . The World Health Organization's (WHO) European region includes 53 states with diverse sociopolitical and economic backgrounds. In general, comparisons among countries can help to identify opportunities for the reduction of inequalities in health management and outcomes. For instance, by comparing two urban population-based samples from Central Europe (Czech and Swiss), we found that increasing age and being male were the main determinants of poor metabolic health independent of obesity status . In terms of CVD, the first cause of death worldwide, the Central and Eastern European (CEE) countries have the highest CVD mortality in the EU, which also occurs at younger ages . The European region is considered "a natural epidemiologic laboratory" that, owing to its enormous diversity, can provide useful lessons . Innovative solutions must be sought to improve cardiovascular, mental and general health in CEE countries. These efforts should take into consideration not only the local context, idiosyncrasy, traditions, social factors and equity implications , but also genetic variability. Although few pharmacogenetic tests have been implemented as the standard of care in health systems worldwide, an evidence-based pharmacogenetic approach that rigorously interrogates whether a genetic test genuinely improves the quality of care in a cost-effective and country-specific manner is warranted. Our PGx study on the Kardiovize Brno 2030 database shows heterogeneity in the metabolic profile of the population for the most widely used medications. Of note, by cross-referencing validated medication use, we showed that some widely used medications, i.e., warfarin and atorvastatin, are in a relevant proportion of cases administered to patients with high-risk genotypes, exposing them to risk of ADRs. Finally, by comparing our findings in the Czech population with a Finnish reference population, we showed interethnic differences in some common pharmacogenetic variants between ethnically different populations. Our results may help facilitate the European integration of PGx and support pre-emptive PGx testing.

S1 Table. Allelic phenotypes of the raw genetic data. (XLSX)
Implementing case-based collaborative learning curriculum via webinar in internal medicine residency training: A single-center experience
a353a5c0-f6b6-4237-8703-ae0ff4efca34
10118346
Internal Medicine[mh]
Case-based collaborative learning (CBCL) is a structured, student-centered approach that incorporates pre-session reading, readiness assessment, and interactive case-based sessions, and it has been shown to improve medical students' knowledge. CBCL has also been used in resident training, for example in a dermatology residency program. A previous study assessed residents' knowledge using content-related questions and surveyed the acceptance of CBCL. The results showed that CBCL improved the residents' knowledge and was found superior to traditional didactic teaching. Adding CBCL courses to standard residency training may therefore improve the quality of education. At present, in China, standardized residency training in internal medicine is a uniform, 3-year program. The rotation in the cardiology department usually lasts 4 months. Traditional resident training relies largely on bedside teaching and on classroom-based teaching in the format of didactic teaching, case discussion, and journal club. However, this training model has several limitations. Firstly, although bedside teaching provides opportunities for in-depth learning, the type and severity of cases vary, and the quality of rotations differs considerably. Secondly, large tertiary educational hospitals may concentrate on certain subspecialty areas, while access to other common subspecialty topics may be limited. Thirdly, didactic teaching mainly concentrates on simple knowledge transfer, and its efficiency and acceptability among learners have been questioned. As for case discussions, although regularly held in Chinese educational hospitals, they usually serve clinical purposes and are not carefully designed to meet the educational needs of residents. Therefore, adding a systematically designed CBCL course with selected topics may fill this gap. However, during the coronavirus disease 2019 (COVID-19) pandemic in early 2020, indoor gatherings were not recommended. Webinars have become a reliable means of delivering courses and have proven to be a valuable teaching method that can fulfill a variety of educational needs, such as resident training, continuing medical education, patient education, and more. The use of webinars increased drastically during the COVID-19 pandemic, as indicated by the number of publications. Webinars provide the opportunity for remote meetings, save time that would otherwise be spent commuting, and are particularly beneficial for residents who may be post-call or unable to leave clinical areas. In this study, we aimed to test the influence of a CBCL curriculum in webinar format on the quality of residency training and on residents' satisfaction.

2.1. Patient and public involvement statement

Patients or the public were not involved in the design, conduct, reporting, or dissemination plans of our research.

2.2. Study design

Between February and April 2020, we implemented 8 CBCL sessions in webinar format in an internal medicine residency training program at Beijing Tsinghua Changgung Hospital, Beijing, China. In total, 9 residents were invited to participate in the study and were recruited on a voluntary basis. At baseline, residents participated in learning activities for about 5 h/wk, including didactic teaching, case discussion, and journal club, which were delivered by peers or faculty members. No preexisting session used any group discussion format, including CBCL.

2.3. Curriculum development and implementation
The need assessment for the curriculum development included a survey of faculty regarding the knowledge gaps of rotating residents and an analysis of end-of-rotation examination results from the training program. A total of 8 topics were chosen: angina pectoris, acute myocardial infarction, heart failure, hypertension, atrial fibrillation, infective endocarditis, cardiomyopathy, and myocarditis. Under these topics, cases with typical presentations were prepared for discussion. The learning objective was to improve residents' knowledge on these topics.

2.3.1. Reading materials and readiness assessment

Pre-session reading materials included lecture slides, the latest local guidelines, and guidelines published by the American College of Cardiology/American Heart Association and the European Society of Cardiology. A readiness assessment with 10 to 20 multiple choice questions (MCQs) was conducted online 1 day prior to the session. Answers and detailed explanations were released immediately upon completion of the questions to consolidate the relevant concepts and knowledge. Notification messages were sent weekly to learners to remind them about the readiness assessment prior to the webinar session.

2.3.2. The CBCL sessions in webinar format

The structure of the discussion session fulfilled the principles of the CBCL teaching model, combining small-group and large-group discussions, conducted online using Tencent Meeting software. Residents were divided into 3 groups during small-group discussion using WeChat software. Residents were instructed to keep video on and audio off when they were silent. However, they could turn on the audio for questioning and speaking at any time. Residents were encouraged to lead the discussion on most occasions. Two faculty members facilitated the entire session and summarized the answers to the questions. Residents were required to submit a summary report after each session.

2.4. Outcomes

Fifty MCQs were delivered to assess residents' knowledge before and after the curriculum. The MCQs were randomly selected from an online standardized question bank developed previously by hospital faculty for the assessment of internal medicine residents. Difficulty levels were balanced, and 80% had 1 correct answer while 20% had multiple correct answers. Two surveys, at the end of the second session and the last session, were delivered to residents to assess their satisfaction with CBCL. The surveys used 5-point Likert scale scores to evaluate residents' attitudes toward CBCL, self-assessed improvement, satisfaction with case selection, satisfaction with the teaching method, and their attitude regarding participation in similar courses in the future.

2.5. Statistical analysis

Changes in knowledge were assessed using the paired t test to compare the mean values of the MCQ scores before and after the curriculum. The Wilcoxon signed-rank test was used to compare the 5-point Likert scale scores obtained from the 2 surveys. The statistical analysis was performed using SPSS 23.0 software (IBM, Armonk, NY).
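The analyses themselves were run in SPSS 23.0; for readers who wish to reproduce the two tests, an equivalent minimal sketch in R with hypothetical scores is shown below.

```r
# Equivalent sketch in R of the two tests (the study itself used SPSS 23.0).
# All scores below are hypothetical illustrations, not the study data.
pre  <- c(54, 62, 58, 70, 78, 66, 82, 74, 68)  # pre-curriculum MCQ scores
post <- c(62, 70, 68, 74, 88, 75, 90, 76, 73)  # post-curriculum MCQ scores
t.test(post, pre, paired = TRUE)               # paired t test on MCQ scores

likert_wk2 <- c(3, 4, 4, 3, 5, 4, 3, 4, 3)     # 5-point Likert, week 2
likert_wk8 <- c(4, 4, 5, 4, 5, 4, 4, 5, 3)     # 5-point Likert, week 8
wilcox.test(likert_wk8, likert_wk2, paired = TRUE)  # Wilcoxon signed-rank
```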
A total of 9 residents participated in the CBCL curriculum in webinar format, of whom 3 (33.3%) were male. Among them, 6 residents were postgraduate year one (PGY-1), two were PGY-2, and one was PGY-3. Participants' demographic characteristics and course details are shown in Table . The majority of residents had inpatient cardiology rotation experience (0.9 ± 1.2 months). The overall course attendance rate was 94.4%. The average time for CBCL study was 6.3 ± 4.1 h/wk. Pre-session learning time was 4.1 ± 2.1 h/wk. Post-session summary writing took 2.2 ± 2.3 h/wk.

3.1. Changes in knowledge

All residents completed the pre- and post-curriculum assessments with 50 MCQs. The total score was 100, with 2 points allotted to each question. The mean scores before and after the curriculum were 68.0 ± 12.3 and 75.1 ± 9.9, respectively ( P = .029).

3.2. Satisfaction assessment

Survey results for the satisfaction assessment are presented in Figure . All 9 residents responded to the surveys (response rate, 100%). Among them, 5 (55.6%) residents selected "like" or "extremely like" for overall satisfaction at week 2, while in the second survey conducted at week 8, the number increased to 8 (88.9%). In terms of self-assessed improvement, most residents ranked positively at week 2 (6, 66.7%) and week 8 (8, 88.9%). In both surveys, the majority of participants (88.9% and 100%) reported satisfaction with the cases used for teaching. When queried about their attitude toward the teaching methodology, 6 (66.7%) residents responded positively at week 2 and 7 (77.8%) at week 8. Only 4 (44.4%) residents agreed to participate in similar courses in the future in the first survey, and the number improved to 7 (77.8%) at week 8. To compare the results from the 2 surveys, the answers on the 5-point Likert scale were graded with values ranging from 1 for "extremely dislike/disagree" to 5 for "extremely like/agree"; a higher score indicates a higher level of satisfaction. The median scores from the initial survey were 4 for overall attitude, self-evaluated improvement in clinical reasoning abilities, satisfaction with case selection, and satisfaction with teaching methods. The participants gave a median score of 3 for agreement on future participation. The median scores from this survey indicated overall satisfaction of the residents with the course.
A total of 9 residents participated in the CBCL curriculum in webinar format, of whom 3 (33.3%) were male. Among them, 6 residents were postgraduate year 1 (PGY-1), 2 were PGY-2, and 1 was PGY-3. Participants' demographic characteristics and course details are shown in Table . Most residents had some inpatient cardiology rotation experience (0.9 ± 1.2 months). The overall course attendance rate was 94.4%. The average time spent on CBCL study was 6.3 ± 4.1 h/wk: pre-session learning took 4.1 ± 2.1 h/wk, and the post-session summary took 2.2 ± 2.3 h/wk. 3.1. Changes in knowledge All residents completed the pre- and post-curriculum assessments with 50 MCQs. The total score was 100, with 2 points allotted to each question. The mean scores were 68.0 ± 12.3 before and 75.1 ± 9.9 after the curriculum ( P = .029). 3.2. Satisfaction assessment Survey results for the satisfaction assessment are presented in Figure . All 9 residents responded to the surveys (response rate, 100%). Among them, 5 (55.6%) residents selected "like" or "extremely like" for overall satisfaction at week 2; in the second survey, conducted at week 8, this number increased to 8 (88.9%). In terms of self-assessed improvement, most residents responded positively at week 2 (6, 66.7%) and week 8 (8, 88.9%). In both surveys, the majority of participants (88.9% and 100%) reported satisfaction with the cases used for teaching. When queried about their attitude toward the teaching methodology, 6 (66.7%) residents responded positively at week 2 and 7 (77.8%) at week 8. Only 4 (44.4%) residents agreed to participate in similar courses in the future in the first survey; this number improved to 7 (77.8%) at week 8. To compare the results from the 2 surveys, the answers on the 5-point Likert scale were graded from 1 for "extremely dislike/disagree" to 5 for "extremely like/agree," with higher scores indicating greater satisfaction. The median scores from the initial survey were 4 for overall attitude, self-evaluated improvement in clinical reasoning abilities, satisfaction with case selection, and satisfaction with teaching methods. The participants gave a median score of 3 for agreement on future participation. These median scores indicated overall satisfaction of the residents with the course. Additionally, the repeated survey at week 8 remained consistent with the first survey and even showed a tendency toward improvement in overall attitude and agreement on future participation (Fig. ). In the present study, we assessed the effects of the CBCL curriculum in webinar format on internal medicine residents' knowledge and their attitudes toward this teaching module. We found that the CBCL sessions in webinar format were associated with improved mastery of knowledge on cardiovascular diseases, as evidenced by improved MCQ scores. The teaching module was relatively well accepted by residents, and the acceptance rate improved over the course of the curriculum. Collectively, these results demonstrate that CBCL sessions in webinar format can be advantageous for training internal medicine residents. CBCL uses a flipped-classroom model, integrating elements of case-based learning and problem-based learning. The flipped-classroom model has been adopted by several medical centers worldwide. Allenbaugh et al tested a flipped-classroom curriculum in cardiology residency training, randomizing 98 internal medicine residents into a flipped-classroom group (with weekly case discussion) and a control group. Despite positive perceptions in the flipped-classroom group, the survey found no significant difference in residents' knowledge, attitudes, or preparedness. These results were highly consistent with those of other studies and highlighted the different effects of the flipped-classroom curriculum on postgraduate versus undergraduate medical students.
The authors attributed the differences to the residents themselves (i.e., residents face difficulty balancing their limited time between work responsibilities and pre-session reading). As a novel teaching method, CBCL lacks robust data for cardiology residency training and may face similar difficulty engaging residents in the same settings. However, the study by Krupat et al demonstrated the advantages of CBCL, which students described as "engaging," "fun," and "thought-provoking." Furthermore, we minimized the pre-session reading materials to reduce the study load in our curriculum. The survey showed that the overall time needed per week was 6.3 ± 4.1 hours, which seemed acceptable, judging by the positive feedback obtained from residents and the overall improvement in their performance. Adopting CBCL with a minimized study load might be essential to maintaining participants' willingness to study. CBCL was originally designed as a classroom-based group discussion in a shared physical space, with each small group sitting around a table. The current CBCL curriculum was delivered in webinar format primarily because of the COVID-19 pandemic, to reduce commuting time, especially for post-call residents, and to improve the attendance rate. This advantage was well reflected in the high attendance rate (94.4%) of the course. Using the CBCL curriculum in webinar format may also provide other benefits. We found that learners who were reluctant to speak up in public expressed their thoughts in the chat box during the discussion, which encouraged them to communicate with peers and tutors. Furthermore, the CBCL curriculum in webinar format makes it possible to involve tutors and learners from different geographic locations. Establishing a standardized CBCL curriculum in webinar format may help integrate educational resources across centers and improve the homogeneity of residency training even in the post-pandemic era. However, the CBCL sessions in webinar format still have some disadvantages. For instance, there is a concern about impaired engagement when learners study separately; this drawback matters most when didactic lectures are delivered. CBCL is considered engaging because of the active thinking and communication it provokes. Thus, the CBCL sessions in webinar format enabled residents to learn actively and retained the advantages of active learning strategies, such as improved knowledge mastery and a high acceptance rate, which were also evident in our study. In addition, hardware- and software-related concerns are noteworthy: a technical pretest is warranted to ensure the smooth conduct of the course. There were several limitations in the present study. First, it was a single-center study that only involved residents in the internal medicine training program, which may limit the generalizability of the findings. Second, the course assessment was conducted using MCQs and lacked a control group for comparison. The satisfaction surveys may also carry bias, as the authors are faculty from the same hospital. Third, the number of participants was small, which precluded a randomized trial and allowed only a preliminary study.
An assessment initially planned for 6 months after the course was interrupted by the pandemic and could not be conducted before these residents completed their training. A study with more participants and a longer follow-up period is now being conducted for further investigation. Implementing the CBCL curriculum in webinar format for cardiology residency training resulted in improved knowledge mastery and a high acceptance rate. Conceptualization: Rong He. Data curation: Ying Xie, Fang Liu, Ou Zhang, Wei Xiang, Le Miao. Formal analysis: Wei Xiang, Le Miao, Ping Zhang. Funding acquisition: Rong He. Investigation: Lanting Zhao, Lingyun Kong. Methodology: Rong He, Ying Xie. Project administration: Ying Xie. Resources: Fang Liu, Ou Zhang. Supervision: Lanting Zhao, Lingyun Kong. Writing – original draft: Rong He, Ying Xie. Writing – review & editing: Ping Zhang.
Evaluating the readability, quality and reliability of online patient education materials on transcutaneous electrical nerve stimulation (TENS)
5d2aa085-4088-4218-95ca-bfbe371ba299
10118348
Patient Education as Topic[mh]
Pain, an important health concern, restricts the activities of daily life and causes loss of work productivity. Pain symptoms are observed in 50% of hospitalized patients and in >70% of accident victims presenting to the emergency department. The prevalence of chronic pain among adults is estimated to be 45% worldwide, and it causes severe disability in 15% of this patient population. In addition, pain leads to financial problems, both in terms of treatment costs and time lost from work. According to a previous study, healthcare and work productivity losses in the United States are estimated to cost an average of 600 billion US dollars annually. Pain management treatments are often not effective to the desired extent in clinical practice. In a previous study, 40% of patients with chronic pain believed that the treatments were inadequate, and >60% believed that the administered medications were ineffective. Medication or interventional therapies for pain management are tailored to individual needs. In addition to improving quality of life, these treatments help patients return to normal work productivity. Early diagnosis followed by early treatment also prevents the development of chronic pain. Transcutaneous Electrical Nerve Stimulation (TENS) is a neuromodulation method used worldwide for treating acute and chronic pain. TENS activates nerve fibers by delivering electrical impulses to the skin surface, increases endogenous opioid release, and modulates the transmission of pain signals, thereby reducing pain. TENS is recommended by healthcare providers for treating inflammatory, neuropathic, and musculoskeletal pain, either alone or in combination with other treatment modalities. TENS is a popular treatment method preferred by patients and healthcare providers because it is inexpensive, available without a prescription, easy and reliable, and has fewer side effects and complications. Contraindications include cardiac pacemakers and implanted cardiac defibrillators. Caution should be exercised in patients with active tumors, epilepsy, and deep vein thrombosis, as well as in pregnant patients. Among the several applications of TENS, the most commonly used are conventional TENS, which generates strong and painless stimuli, and acupuncture-like TENS, which generates strong and painless pulsed stimuli. In the digital age, patients search for information about their diseases and treatment options using internet-based patient education materials (PEMs). Although there is a considerable amount of information available on the internet, patients' utilization of this information relies on their health literacy. Health literacy refers to the ability of individuals to perform basic reading and numerical tasks in a healthcare setting. In the US, approximately one-third of adults, and two-thirds of those aged ≥ 65 years, have only basic health literacy. To ensure that the information available on the internet is easy for readers to understand, it must be prepared in simple language. The US Department of Health and Human Services and the National Institutes of Health have recommended that internet-based PEMs be written in language below the sixth-grade level. Websites written above this level are unlikely to benefit readers, and site developers should assess readability beforehand.
In addition to readability, the reliability and quality of information in the digital environment have been extensively studied in the literature. Patients may obtain information about TENS device features and usage methods on the internet. TENS is frequently recommended by health professionals for treating acute and chronic pain and is available without a prescription. In this study, we aimed to evaluate the readability, quality, and reliability of TENS-related internet-based PEMs. Further, we attempted to identify the website typologies that provide highly reliable information about TENS. For this observational and cross-sectional study, 2 independent authors (V.H. and E.O.) searched the term "Transcutaneous Electrical Nerve Stimulation" on Google ( https://www.google.com ), the most popular search engine, on September 15, 2022. A neutral term (Transcutaneous Electrical Nerve Stimulation) was used to obtain a larger sample of websites. In cases of discrepancy between the authors during the evaluation of the websites, a third independent author (Y.E.) made the final decision. Google was used in this study because, as of July 2022, it was the leading search engine, with a market share of 83.84%. 2.1. Website selection criteria The web browsing history and cookies were deleted before the search so that the results of the research would not be affected. In addition, the search was performed after signing out of all Google accounts and using Google Incognito mode. After the search, the uniform resource locators (URLs) of the first 200 websites were recorded, as per the methodologies described in similar studies. The 10 websites listed on the first page of the Google search engine were considered the most viewed websites. Websites unrelated to TENS, those marked as "Ad," those requesting subscriptions or registrations, those with non-English content, duplicate websites, and websites without text content but with video or audio content were not included in this study. As PEMs were examined in this study, journal articles were not included. In addition, videos, images, figures and tables in the text, website URLs, punctuation marks, references, telephone numbers, addresses, and author information were excluded from the evaluation to avoid false results. Of the first 200 websites obtained using the search term, 24 duplicate websites were removed, resulting in 176 websites. After rigorous evaluation according to the exclusion criteria, a further 74 websites were removed, and 102 websites were finally included in this study (Fig. ). During the evaluation of websites, there may be cases where an evaluation criterion cannot be found on the home page. According to the 3-click rule used in such cases, the website user should be able to access any information within 3 mouse clicks. Although this is not an official rule, it is believed that if the information cannot be accessed in 3 clicks, the user has not achieved their purpose and will leave the site.
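The screening workflow above is essentially a deduplication-and-exclusion pipeline. A minimal sketch follows; the URL list and the exclusion test are hypothetical placeholders, not the study's actual data.

```python
# Minimal sketch of the website screening workflow described above.
# The URL list and the exclusion test are hypothetical placeholders.

def screen(urls, is_excluded):
    """Deduplicate the first 200 recorded URLs, then apply the exclusions."""
    seen, unique = set(), []
    for url in urls[:200]:            # first 200 Google results recorded
        if url not in seen:           # drop duplicate websites
            seen.add(url)
            unique.append(url)
    # drop ads, subscription walls, non-English or video-only pages, etc.
    return [url for url in unique if not is_excluded(url)]

# In the study: 200 recorded URLs - 24 duplicates - 74 exclusions = 102 kept.
```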
2.2. Ethical considerations The University of Dokuz Eylul Noninterventional Research Ethics Committee approval was obtained before the study (7443-GOA 2022/29-20). 2.3. Website typology Websites were classified into 6 categories according to ownership and type. In addition, URL extensions (.com, .org, .edu, .gov, and .net) were assessed during the evaluation. Based on these extensions, it was determined whether the websites originated from a professional, commercial, or government organization. Typologies include commercial (websites created to make a profit or sell products, with URL extensions of .com and .net), government (websites created and managed by an official government agency, with a URL extension of .gov), health portal (websites providing useful health-related information, with URL extensions of .com and .net), news (websites created by newspapers and magazines, with URL extensions of .com and .net), nonprofit organization (charitable/supportive/educational websites created by nonprofit organizations, with a URL extension of .org), and professional (websites created by individuals or organizations with professional medical qualifications, with URL extensions of .edu and .com) websites. 2.4. Reliability of websites The Journal of the American Medical Association (JAMA) benchmark criteria examine online information and resources based on 4 criteria: authorship, references/sources, indication of date, and ownership. Each criterion is scored between 0 and 1 point, and the final score ranges from 0 to 4. A score of 4 indicates the highest reliability. Websites with a JAMA score of ≤2 are considered to have low reliability, whereas those with a JAMA score of ≥3 are considered to have high reliability (Table ). The Health on the Net Foundation (HON) was established to promote the online distribution and efficient use of reliable and useful health information. This organization aims to standardize the reliability of information on the internet. To meet the HONcode criteria, author credentials should be stated, the privacy policy should be disclosed, website financing and advertising policies should be stated, the patient–physician relationship should be complemented, and contact information should be disclosed (Table ). In this study, we examined whether the website homepage or associated URL had a HONcode stamp. 2.5. Quality of websites The DISCERN instrument, comprising 16 questions, was used to assess website quality. Questions are scored between 1 and 5 points. Two authors independently evaluated the websites according to the DISCERN criteria, and the final score was determined as the average. The final DISCERN score ranges from 16 to 80, and website quality is classified as follows: 16–27, very poor; 28–38, poor; 39–50, fair; 51–62, good; and 63–80, excellent (Table ). The Global Quality Score (GQS) is a 5-point scale used to assess the overall quality of websites and to determine whether a website is useful for readers. A score of 5 indicates excellent quality, whereas a score of 1 indicates poor quality (Table ). Websites were classified by GQS as follows: 4–5, high quality; 3, medium quality; and 1–2, low quality.
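The cut-offs above map instrument totals to categorical labels. A minimal sketch of that mapping is shown below; the band boundaries follow the Methods, while the function names are ours, for illustration only.

```python
# Minimal sketch mapping instrument totals to the categorical labels
# defined above. Band boundaries follow the Methods; function names
# are ours, for illustration only.

def jama_reliability(score: int) -> str:     # JAMA benchmark, 0-4 points
    return "high reliability" if score >= 3 else "low reliability"

def discern_quality(total: int) -> str:      # DISCERN, 16 items, 16-80 points
    for upper, label in [(27, "very poor"), (38, "poor"), (50, "fair"),
                         (62, "good"), (80, "excellent")]:
        if total <= upper:
            return label
    raise ValueError("DISCERN total must lie between 16 and 80")

def gqs_quality(score: int) -> str:          # GQS, 1-5 points
    return {1: "low", 2: "low", 3: "medium", 4: "high", 5: "high"}[score]

# Example: the mean scores reported in the Results (JAMA 2.00, DISCERN 39,
# GQS 2, rounded) map to:
print(jama_reliability(2), "|", discern_quality(39), "|", gqs_quality(2))
# -> low reliability | fair | low
```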
2.6. Readability The readability of the websites was determined by evaluating the texts using " https://readabilityformulas.com/free-readability-formula-tests.php ". The Flesch Reading Ease Score (FRES), Flesch–Kincaid grade level (FKGL), Simple Measure of Gobbledygook (SMOG), Gunning Fog (GFOG), Coleman–Liau score (CL), automated readability index (ARI), and Linsear Write (LW) were used as readability formulas (Table ). All websites were sorted and saved, and the texts were copied and saved using Microsoft Office Word 2007 (Microsoft Corporation, Redmond, WA). The target readability level for all readability formulas was the sixth-grade level recommended by the American Medical Association and the National Institutes of Health. The score in the FRES formula decreases as the average readability scores in the SMOG, GFOG, CL, FKGL, ARI, and LW formulas increase. The accepted readability level in the FRES formula is ≥60.0, whereas it is <7 for the other 6 formulas. Thus, higher FRES scores and lower scores in the other formulas indicate better readability.
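Although the study used readabilityformulas.com, the two Flesch formulas are standard and can be computed directly from word, sentence, and syllable counts. The sketch below uses a crude vowel-group syllable counter, so its output only approximates dedicated tools; the sample sentence is ours.

```python
# Minimal sketch of two of the readability formulas listed above, using
# their standard published definitions. The vowel-group syllable counter
# is a crude approximation of what dedicated tools compute.
import re

def count_syllables(word: str) -> int:
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fres_fkgl(text: str):
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    wps = len(words) / sentences       # mean words per sentence
    spw = syllables / len(words)       # mean syllables per word
    fres = 206.835 - 1.015 * wps - 84.6 * spw
    fkgl = 0.39 * wps + 11.8 * spw - 15.59
    return fres, fkgl

fres, fkgl = fres_fkgl("TENS units deliver mild electrical pulses to the skin to ease pain.")
print(f"FRES = {fres:.1f} (acceptable if >= 60), FKGL = {fkgl:.1f} (acceptable if < 7)")
```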
2.7. Content analysis We investigated whether the websites included topics related to TENS (indications, contraindications, procedure, complications, or TENS types) according to their typology. 2.8. Statistical analysis IBM SPSS Software 25.0 (SPSS Inc., Chicago, IL) was used to perform the statistical analysis. Independent variables included website typologies and top 10 versus other websites. The dependent variables were the DISCERN, JAMA, and GQS scores and the presence of HONcode. Frequency data are expressed as number (n) and percentage (%), and continuous data as mean ± standard deviation. The chi-square or Fisher exact test was used to compare frequency variables. The Kruskal–Wallis or Mann–Whitney U test was used to compare groups on continuous data, such as readability indices and the sixth-grade level. A P value of <.05 was considered to indicate statistical significance.
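The tests named above have direct SciPy equivalents; the sketch below illustrates each on hypothetical values (the study itself used SPSS 25.0).

```python
# Minimal sketch of the group comparisons named above, using SciPy
# instead of SPSS 25.0. All values are hypothetical illustrations.
from scipy import stats

# Mann-Whitney U: readability (e.g., FKGL) of top 10 vs. other websites.
top10_fkgl = [10.2, 12.1, 11.5, 9.8, 13.0, 11.1, 10.7, 12.4, 11.9, 10.5]
other_fkgl = [11.8, 9.5, 14.2, 12.7, 10.9, 13.5, 11.2, 12.0]  # truncated
print(stats.mannwhitneyu(top10_fkgl, other_fkgl))

# Kruskal-Wallis: DISCERN scores across more than 2 website typologies.
commercial, portal, nonprofit = [22, 30, 25], [61, 55, 66], [52, 47, 58]
print(stats.kruskal(commercial, portal, nonprofit))

# Chi-square: HONcode frequency in the top 10 vs. other websites.
#            [with HONcode, without HONcode]
table = [[6, 4],      # top 10 (hypothetical counts)
         [10, 82]]    # other websites
print(stats.chi2_contingency(table))
```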
3.1. Website typologies Of the 200 websites ranked by the Google search engine, the 102 websites that met the inclusion criteria were compared according to their typology. Commercial websites (n = 36, 35.5%) constituted the highest proportion, followed by professional websites (n = 17, 16.7%) (Fig. ). The 10 search results that appeared on the first page of Google were considered the first websites to attract the attention of users. When the top 10 and other search results were evaluated by typology, no statistically significant difference was noted ( P = .661). Among the top 10 websites, professional, nonprofit, health portal, and government organizations were evenly represented (2 each), and news and commercial sites were also equally represented (1 each) (Fig. ). 3.2. Comparison of readability, quality, and reliability scores of the top 10 and other websites The readability levels of the top 10 websites were compared with those of the other 92 websites, and no statistically significant difference was observed (FRES, P = .987; GFOG, P = .778; FKGL, P = .897; CL, P = .315; and SMOG, P = .897). There was also no statistically significant difference in the DISCERN quality ( P = .586), GQS quality ( P = .274), or JAMA reliability ( P = .809) scores between the top 10 and other websites. In contrast, a statistically significant difference was noted in the presence of HONcode between the top 10 and other websites ( P = .008) (Table ). 3.3. Reliability and quality evaluation Among the 102 websites, the mean JAMA score was 2.00 ± 0.99, the mean GQS score was 2.22 ± 1.18, and the mean DISCERN score was 39.01 ± 20.42. Based on these results, the websites included in this study had moderate reliability and moderate-to-poor quality. Significant associations were noted between website typology and the JAMA reliability scores ( P < .001), DISCERN quality scores ( P < .001), and GQS ( P < .001). This difference could be explained by the fact that websites developed by health portals received higher JAMA, DISCERN, and GQS scores.
These scores were lower for commercial websites. Overall, 8.8% of the websites received a JAMA score of ≥3, indicating high reliability; moreover, 16.7% of the websites were identified as high quality according to the GQS values. The presence of HONcode was detected in 16 (15.7%) websites, with the highest number detected in health portals (n = 7) (Table ). A significant association was observed among the JAMA, DISCERN, and GQS scores according to website typology ( P < .001). Accordingly, commercial websites were found to be of low reliability and very poor quality, whereas high-quality content predominated on health portal and nonprofit websites. When the JAMA scores were evaluated according to website typology, the reliability of websites ranked from high to low in the following order: health portal > nonprofit organization > professional > news > government > commercial. When the GQS results were evaluated according to website typology, the quality of websites ranked from high to low in the following order: health portal > nonprofit organization > professional > government > news > commercial (Fig. ). 3.4. Readability evaluation In the text readability analysis of the 102 websites, the mean FRES was 47.91 ± 13.79 (difficult) and the mean GFOG was 14.04 ± 2.74 (very difficult). The mean FKGL and SMOG were 11.20 ± 2.85 and 10.53 ± 2.11 years of education, respectively. The CL index was 11.74 ± 1.94 years of education, whereas the ARI was 11.52 ± 3.26 years of education. When site typologies were compared with the readability indices, no statistically significant association was noted except for the CL index ( P = .025) (FRES, P = .050; GFOG, P = .205; FKGL, P = .156; SMOG, P = .177; ARI, P = .190; and LW, P = .438) (Fig. ). According to the CL index, government websites were more readable than the others, although even these were significantly more difficult to read than the sixth-grade level. When the mean readability indices of all websites were compared with the sixth-grade reading level, a statistically significant difference was observed ( P < .001) (Table ): the mean readability indices of all websites were significantly higher than the sixth-grade level. 3.5. Correlation analysis Correlation analysis revealed strong positive correlations between the JAMA scores and the DISCERN scores ( R = 0.935, P < .001) and GQS ( R = 0.927, P < .001), and a weak positive correlation with the presence of HONcode ( R = 0.351, P < .001). The DISCERN scores also correlated with the GQS ( R = 0.940, P < .001) and the presence of HONcode ( R = 0.405, P < .001) (Table ). 3.6. Content analysis Content analysis revealed the number of websites covering each TENS-related topic as follows: indications, 89 (87%); contraindications, 30 (29.4%); procedures, 76 (74.5%); complications, 68 (66.7%); and TENS types, 20 (19.6%). Comparison of the content between the top 10 and other websites revealed statistically significant differences in the coverage of contraindications ( P = .035) and complications ( P = .002). This difference may be because 60% of the top 10 websites mentioned contraindications and 80% mentioned complications. When website typologies and content were compared, statistically significant differences were observed in the coverage of complications ( P = .001) and TENS types ( P = .003).
Complications were not mentioned in 90% of commercial websites, and TENS types were mentioned more often on professional and health portal websites (Table ).
In this study, we aimed to investigate the readability, quality, and reliability of internet-based PEMs on TENS. We also attempted to determine the types of websites that are easily readable and highly reliable, investigated the association of the readability of sites with their reliability and quality, and compared the top 10 websites on the first page with the other websites in terms of quality, readability, and reliability. TENS, the neuromodulation of central nociceptive stimulation, is known to be superior to other stand-alone treatments for acute pain and more effective than placebo for osteoarthritis and musculoskeletal pain. According to the report by the French National Authority for Health, TENS is useful and beneficial only in those cases of chronic pain in which pharmacological treatment has failed. The advantages of TENS include an excellent safety profile when recommended by a healthcare provider, a prepurchase trial phase of the device, and a reasonable price compared with other treatments. As TENS is available without a prescription, it is easily accessible and allows patients to use it without the guidance of a health professional. However, patients often lack sufficient knowledge regarding the use of health products and turn to the internet as a source of information.
Information obtained from the internet also has several disadvantages in terms of reliability and quality, and the text is usually written in language that is difficult for readers to understand. The internet has become an important source of health-related information. According to a 2018 study, 9 out of 10 Americans use the internet, and 72% of them search for health-related content. Well-informed patients are known to have increased satisfaction and overall better health outcomes, emphasizing the importance of quality patient education. Patients should have access to reliable, quality information on any health topic, written in language they can understand, so that they can readily evaluate the prevention, diagnosis, and treatment of diseases. In this study, when the websites were compared according to their typologies, no statistically significant difference was observed between the top 10 and other websites. Commercial websites constituted the highest proportion of all websites, whereas the 10 search results on the first page of Google were relatively evenly distributed across typologies. There was a statistically significant difference between website typologies in reliability and quality scores. This difference was associated with the presence of HONcode as well as high JAMA, DISCERN, and GQS values on the websites of health portals and nonprofit organizations; these scores were lower for commercial websites. There were no statistically significant differences between the top 10 and other websites in terms of reliability, quality, and readability. When the readability indices of all websites were evaluated by typology, no statistically significant difference was found except for the CL index. Based on the CL indices, websites from government sources were found to be more readable than the other typologies. There was no statistically significant difference in readability scores between the top 10 and other websites. In this study, websites developed by commercial sources accounted for the highest proportion, similar to the results reported by several studies investigating online information on different topics. We found that commercial websites had lower JAMA, DISCERN, and GQS values. These data indicate that commercial websites do not provide sufficiently reliable and high-quality information on TENS. In this study, only 1 of the top 10 websites had commercial content. Bağcier et al found 2 commercial websites among the top 10 websites evaluated in their study on myofascial pain syndrome. In contrast to the results of our study, Koçyiğit et al found no commercial content in the first 10 websites obtained in their study on ankylosing spondylitis. In general, internet users act based on the top 10 websites that appear on the first page of Google. According to the results of previous studies and the current study, there are no or few websites with commercial content among the top 10 websites. It is possible that Google restricts these low-quality, low-reliability commercial sites to prevent users from being misinformed. In our study, the presence of HONcode, a quality indicator, was detected in 16 (15.7%) websites. Kocyigit et al found HONcode in 17.9% of the websites and Ahmadi et al found it in 12.9% of the websites analyzed in their respective studies; the results of the current study are similar. Correlation analysis revealed a weak positive correlation between the presence of HONcode and the DISCERN, JAMA, and GQS values.
Most patients consider health professionals to be their first and most trusted source of information. Therefore, health professionals have a duty to provide quality, reliable information. Further, patients should be aware that they can find more reliable, higher-quality information by favoring websites bearing the HONcode during their searches. The mean DISCERN score (39.01 ± 20.42) in our study indicated fair quality. Wrigley et al and Killip et al reported mean DISCERN scores of fair quality (43 and 43.8, respectively), similar to our results. The mean JAMA score in our study was 2.00 ± 0.99. Similarly, Goldenberg et al and Halboub et al reported mean JAMA scores of 2.34 ± 1.11 and 2.08 ± 1.05, respectively. In our study, 8.8% of the websites were found to be highly reliable according to the JAMA scores. Arif et al found 37% of sites to be highly reliable, whereas Basavakumar et al detected highly reliable JAMA scores in 43% of websites. The higher number of reliable websites in both studies compared with ours likely reflects their inclusion of websites containing scientific journals. Further, as we focused on TENS in the current study, there were a large number of commercial sites selling devices, resulting in lower reliability rates. Although academic sites have highly reliable content, readers have low interest in websites that contain complex medical terminology. For this reason, we excluded academic websites in our study to match the general interest of users. When all website typologies were compared with their readability, no statistically significant difference was noted except for the CL index, according to which government websites are easier to read than other websites. When we evaluated the readability results of our study against the sixth-grade level recommended by the American Medical Association and the National Institutes of Health, we found that the results were higher than the recommended level. Risoldi Cochrane et al and Kecojevic et al indicated that government websites are easier to read than commercial websites. Although Kecojevic et al did not use academic websites in their study, they found that readability levels were higher than the sixth-grade level, similar to our results. It is important to understand that highly readable information can be presented to a wider audience, allowing more people to benefit from it. In this study, content analysis revealed that indications (87%) and procedures (74.5%) were the most commonly covered topics. When the content was analyzed according to source, we found that few commercial sites reported content related to complications. Complications related to devices or procedures are rarely mentioned on commercial or product-sales-oriented websites, reflecting the financial goals of such websites. Considering that users order TENS devices online without a prescription, they would be advised to be careful and obtain information from trusted sources, ideally healthcare providers, to avoid adverse effects. The results of our study are influenced by several factors. The increase in digitalization and the ease of rapid access to information mean that people try to reach health-related information without contacting a healthcare provider. However, it is very difficult for users to reach accurate, quality information in the ocean of the internet.
Users can easily be misled on a platform where all kinds of information, right or wrong, can be posted without any control mechanism, where financial interests are at the forefront, and where advertising is everywhere. Verification of this information by the authorities, with penalties where appropriate, would help the public reach accurate information. Our results reflect the gaps created by this lack of control. In this study, we determined that the reliability and quality of internet-based patient education materials related to TENS are low and that they are difficult to read. Governments, in cooperation with international and national health institutions and organizations, should take the initiative and adopt measures to ensure the dissemination of accurate and reliable information on the internet. Users are advised to continue their research on websites that are safe, of high quality, suitable for public readability, and free from financial interests. 4.1. Limitations of this study The limitations of the study include the fact that only 1 search term (Transcutaneous Electrical Nerve Stimulation) was used, and that only English-language websites were searched using a single search engine, Google. In addition, as there is no consensus on which readability index provides the most accurate results, the indices commonly used in the literature were considered. The reliability, readability, and quality of the websites were assessed using a set of scales and criteria, and the results showed 97% agreement between the authors; the remaining 3% difference indicates potential rater bias, which can be stated as another limitation of the study. 4.2. Strengths of this study We determined the reliability, quality, and readability of online information on TENS, which patients with common pain symptoms can obtain through online sources without a prescription. With the increase in digitalization, we revealed the challenges associated with TENS information, as patients prefer to obtain information from digital media instead of healthcare institutions. For this purpose, we excluded academic sites and searched for websites that would be of interest to the average educated population.
We found that internet-based PEMs on TENS had low reliability and moderate-to-poor quality content. Websites were found to have reading levels significantly higher than the sixth-grade level recommended by the National Institutes of Health. In the analysis of quality and reliability, commercial websites showed lower scores, whereas health portals and nonprofit organizations showed higher scores. The limited content related to complications on commercial websites was attributed to the financial goals of these websites. In the digital age, quality, reliable, and readable websites developed by health professionals can contribute significantly to providing accurate and understandable information to a wider population. The authors thank Enago https://www.enago.com.tr for their assistance in manuscript translation and editing. Conceptualization: Yüksel Erkin, Erkan Özduran. Data curation: Yüksel Erkin, Volkan Hanci, Erkan Özduran. Formal analysis: Yüksel Erkin, Volkan Hanci. Funding acquisition: Yüksel Erkin. Investigation: Yüksel Erkin. Methodology: Erkan Özduran. Project administration: Erkan Özduran. Resources: Erkan Özduran. Supervision: Volkan Hanci. Writing – original draft: Volkan Hanci, Erkan Özduran. Writing – review & editing: Volkan Hanci, Erkan Özduran.
The challenge continues
8672ccf1-dac8-4075-b57e-5b775ed17a8b
10118673
Physiology[mh]
The state of Sergipe contribution to GH research: from Souza Leite to Itabaianinha syndrome
1290bf1a-59b6-422d-ab5b-2ff7c8d4d76a
10118753
Physiology[mh]
Before recounting the history from Souza Leite to Itabaianinha syndrome, we need to clarify some concepts used in the text. While the ability to grow is a characteristic of all living beings, growth hormone (GH) is an achievement of vertebrates that increases body size, enhancing the ability to reproduce and to obtain food. We therefore use the term "somatotrophic system" for all the mechanisms involved in growth, that is, the somatotrophic axis and the extrapituitary circuits. The first, which is critical for body size, includes the hypothalamic factors GH-releasing hormone (GHRH), somatostatin, and ghrelin, pituitary GH, and circulating (or "endocrine") insulin-like growth factor 1 (IGF1). The second, the extrapituitary circuits, which are relevant for body functions, comprise insulin, IGF2, and the local production of GH, IGF1, IGF2, IGF binding proteins (IGFBPs), and several growth factors, such as fibroblast growth factor (FGF), vascular endothelial growth factor (VEGF), platelet-derived growth factors (PDGFs), transforming growth factor-β (TGF-β), and connective tissue growth factor (CTGF), acting in different tissues. In this article, we highlight the roles of the components of the somatotrophic system in relation to the findings of our studies performed on a cohort of Itabaianinha subjects presenting with congenital, severe, isolated GH deficiency (IGHD) due to a homozygous inactivating mutation in the GH-releasing hormone receptor gene. Sergipe, the smallest Brazilian state, is located in the northeast of the country. Perhaps due to its small size, its endocrinological vocation is linked to the study of GH disorders. One of the pioneers of studies on acromegaly, the physician José Dantas de Souza Leite (1859-1925), was born in the southwest of Sergipe in the city of Santa Luzia do Itanhy. He graduated in Medicine at the first Faculty of Medicine in Brazil ( Faculdade de Medicina da Bahia ), which was created in 1808 by Prince João VI after the transfer of the Portuguese throne to Brazil, when Portugal was about to be invaded by Napoleon Bonaparte's troops. Afterwards, Souza Leite graduated again in Medicine in Paris, where he attended the service of Prof. Charcot at the Salpêtrière Hospital and became a disciple of the renowned neurologist Pierre Marie. In 1886, Pierre Marie coined the term "acromegaly" to describe a deforming condition associated with the growth of the extremities, a condition already described in the 16th century by the Dutch physician Johannes Wier but poorly understood until then. Four years later, Souza Leite presented his doctoral thesis "De l'acromégalie: maladie de Marie". He thoroughly described the clinical picture, evolution, differential diagnosis, prognosis, and treatment of acromegaly, as well as the pathology of the pituitary gland in this disease. In 1891, the New Sydenham Society of London published the book "Essays on Acromegaly", authored by Pierre Marie and Souza Leite and translated into English. Souza Leite died at age 66 in Rio de Janeiro as a full Member of the National Academy of Medicine, and a century after his death he remains internationally acknowledged for his contribution to the initial characterization of acromegaly.
In 2014, the Brazilian Society of Endocrinology and Metabolism (Sociedade Brasileira de Endocrinologia e Metabologia/SBEM) created the "José Dantas de Souza Leite Award", which is granted every two years to a Brazilian researcher who has contributed to the development of Endocrinology. Its first winners included Licio Augusto Velloso (Campinas University, São Paulo), Berenice Bilharinho de Mendonça (São Paulo University, São Paulo), and Ana Luiza Maia and Poli Mara Spritzer, both from the Federal University of Rio Grande do Sul. In 2022, Manuel Hermínio de Aguiar Oliveira from the Federal University of Sergipe received this award. The link between acromegaly and the hypersecretion of GH had to await the demonstration of the existence of GH by Evans and Long in 1921. Evans' contribution was crucial not only for demonstrating its existence but also for establishing the first bioassays for GH. The isolation and molecular characterization of GH were accomplished by Li and Evans in 1944. The concept of a circulating "sulfation factor" mediating the effects of GH in peripheral tissues was proposed in the 1950s by Salmon and Daughaday. Subsequently, it was proven that this sulfation factor is, in fact, a somatomedin capable of competing for insulin binding sites, implying a structural and functional homology between somatomedin and insulin. This led to the "somatomedin hypothesis" and, later, to the characterization of both IGF1 and IGF2. The existence of GH-releasing hormone (GHRH) was suggested in the early 1960s by Reichlin, who lesioned the hypothalamus of rats and demonstrated that the GH content of the pituitary gland decreased, suggesting the presence of a hypothalamic GHRH. GHRH was initially isolated from pancreatic tumours causing acromegaly, and hypothalamic human GHRH was later shown to be identical to the one isolated from pancreatic tumours. Subsequently, in the 1970s, Guillemin and Schally identified several hypothalamic factors, including somatostatin, for which they won the Nobel Prize in Physiology or Medicine, although Samuel McCann (1925-2006) had demonstrated its existence. McCann, along with Geoffrey Harris, established the theory of hypothalamic factors. McCann contributed to the training of José Antunes-Rodrigues (also in ) and Ayrton Moreira from the Faculty of Medicine of Ribeirão Preto at USP (FMRP/USP), who were mentors of the first author of this article. Moreira was the main inspiration behind the creation of the Endocrinology Service at the Federal University of Sergipe, where most of the following data were produced. In this scientific family tree, it is also possible to demonstrate cross talk with the South American Nobel laureate Bernardo Alberto Houssay (1887-1971), recognized for his discovery of the role of the anterior pituitary in the regulation of carbohydrate metabolism. Houssay trained Miguel Rolando Covian (1913-1992). Already internationally recognized, Covian accepted the invitation of Dr. Zeferino Vaz to join the faculty of the newly founded FMRP/USP as head of the Department of Physiology, bringing his prestige to this nascent institution, where he worked from 1955 to 1992. In his work at FMRP/USP, he successively led a group of notable researchers: José Venâncio Pereira Leite, Renato Hélios Migliorini, Carlos Renato Negreiros de Paiva, César Timo-Iaria, Andrés Negro-Vilar, Maria Carmela Lico, Anette Hoffmann, José Antunes Rodrigues, Ricardo Marseillan, and Aldo Bolten Lucion, among others.
In 1954, Zeferino Vaz invited another eminent teacher, physician, and researcher, Hélio Lourenço de Oliveira, to head the newly created Department of Internal Medicine at FMRP/USP. Hélio Lourenço brought in José Veríssimo, who introduced Ayrton Moreira to Antunes-Rodrigues. Since then, physiology and internal medicine have maintained a fertile connection, sharing basic and clinical research and education with a humanistic approach. Before leaving Argentina due to problems with the Peronist government, Covian had completed a postdoctoral fellowship at Johns Hopkins University in Baltimore, as the first author of this article later did at the same institution under the supervision of Roberto Salvatori, the second author of this paper. Herbert Evans, in turn, had obtained his medical degree at Johns Hopkins University many years earlier. Thus, people and institutions cross the paths of science, with the common objective of acquiring knowledge for the benefit of humanity.
While our understanding of the roles of the actors in the somatotrophic system has expanded greatly, the actual impact of GH deficiency (GHD) on the body remains controversial. Idiopathic GHD, an important cause of short stature in childhood, may disappear in adulthood, raising doubts about its nature or relevance. On the other hand, acquired GHD is often part of hypopituitarism of different aetiologies, mainly pituitary tumours, surgery, or irradiation, and is frequently associated with deficits in other pituitary hormones, with lacking or inadequate replacement therapies. These circumstances make it difficult to separate the role of GHD from these confounding influences. Genetic isolated GHD (IGHD) may be an alternative model to evaluate the biological impact of GH, but it is rare, and in other cohorts of congenital IGHD a significant number of affected individuals receive GH replacement during childhood. Nearly 30 years ago, we described a large cohort of individuals with severe short stature due to congenital IGHD caused by the homozygous c.57+1G>A mutation in the GHRH receptor (GHRHR) gene (GHRHR, OMIM no. 618157), most of them residing in the city of Itabaianinha, located just 60 km (37 miles) from Santa Luzia do Itanhy, Souza Leite's hometown. These subjects exhibit a classical IGHD type 1B phenotype, with very low (but detectable) serum levels of GH accompanied, in most cases, by IGF1 concentrations close to or below the detection limit, and an autosomal recessive mode of inheritance. Michael Thorner, who isolated GHRH from a pancreatic tumour causing pituitary hyperplasia and acromegaly in the 1980s, emphasized that this experiment of nature demonstrates the vital importance of GHRH beyond its role in growth. Moreover, this experiment of nature is writing the natural history of IGHD through the description of Itabaianinha syndrome. As is typical of IGHD type 1B, patients with Itabaianinha syndrome show low but not absent serum GH levels. GH peaks were lower than 1 ng/mL in both clonidine and insulin tolerance tests, and no response to GHRH was observed. This is combined with a life-long severe reduction of circulating IGF1 and considerable IGF2 upregulation, shown by an increase in the IGF2/IGFBP-3 ratio, a measure of IGF2 bioavailability.
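To make the biochemical profile just described concrete, here is a minimal sketch in Python of a rule that flags this profile. The 1 ng/mL peak GH cut-off comes from the text above; the numeric IGF1 detection limit, the function name, and the example values are illustrative assumptions, not criteria from the original study.

```python
# Minimal sketch: flagging the biochemical profile of severe IGHD type 1B
# described above. The 1 ng/mL GH peak cut-off comes from the text; the
# numeric IGF1 detection limit is assay-dependent and purely illustrative.

def severe_ighd_profile(gh_peaks_ng_ml, igf1_ng_ml, igf1_detection_limit=25.0):
    """True when all GH stimulation peaks are < 1 ng/mL and IGF1 is at or
    near the assay detection limit (an illustrative rule, not a guideline)."""
    low_gh = all(peak < 1.0 for peak in gh_peaks_ng_ml)
    low_igf1 = igf1_ng_ml <= igf1_detection_limit
    return low_gh and low_igf1

# Example: peaks from clonidine and insulin tolerance tests, plus serum IGF1
print(severe_ighd_profile(gh_peaks_ng_ml=[0.4, 0.7], igf1_ng_ml=12.0))  # True
```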
We hypothesize that both residual GH secretion (allowing some residual GH functions and immune tolerance to exogenous GH) and IGF2 upregulation (contributing IGF bioavailability to some vital tissues, such as the brain, eyes, and teeth) may have physiological implications. In fact, this model of IGHD, in which most adults have never received GH replacement therapy, makes it possible to analyse the effects of the somatotrophic axis (pituitary GH and circulating IGF1) and of the extrapituitary circuits (IGF2 and local production of IGF1 and IGF2) on body size and body functions. The main physical findings in the untreated IGHD adults from Itabaianinha were proportionate short stature, doll facies, high-pitched voice, central obesity, and wrinkled skin. However, these individuals had several additional phenotypic characteristics, arguably with a greater number of beneficial than harmful consequences for their health. They exhibited normal quality of life and normal longevity, with an increased healthspan, that is, the period of life without disabling morbidities. In this review, we update the consequences of IGHD on body size and body functions.
Our data show an uneven reduction in bone measures, expressed as standard deviation scores (SDSs), and in nonbone measures, corrected for body surface. This review adds very recent data on the dental arches and the mesiodistal measurements of the teeth to several previously published papers. The pattern of cephalometric measures explains the doll facies and the high-pitched voice. The reduction in tooth width is of lesser magnitude than the reductions in height and cephalometric measurements, the latter two reflecting postnatal growth of bone tissue. The less marked reduction in the size of the teeth, coupled with a greater reduction in most jaw dimensions, can have deleterious consequences, such as crowding, malocclusion, and periodontal disease, but it can also provide a masticatory advantage. Accordingly, tooth growth parallels ocular axial length and head circumference (brain) development, other important elements of environmental adaptation and survival capacity. The growth of the eyes and of the brain seems to be minimally affected by GHD: while the mean stature of affected individuals was 78% of that of the controls, their ocular axial length was 96% and their head circumference 92% of the values in normal local controls. Indeed, ocular axial length reaches its final dimension at approximately 13 years of age, before the maximal activation of the somatotrophic axis, while the brain completes 83.6% of its growth within the first year of life and essentially all of it during the first 3 years. Tooth growth seems to be mostly a prenatal process that is partially independent of stature. Therefore, teeth, eye, and brain growth may involve different patterns of temporal regulation than whole-body growth, suggesting other regulatory mechanisms in addition to the somatotrophic axis. On the other hand, some organs show size reduction (corrected for body surface): thyroid, heart, uterus, and spleen. Conversely, ovary and prostate sizes were similar to those of controls.
Skin functions
It is intuitive that the skin, being the covering of the body, is significantly influenced by the somatotrophic axis that controls body size. Skin has many functions, some protective (against microorganisms, dehydration, ultraviolet light, and mechanical damage) and others homeostatic (sweating and production of vitamin D).
A mutual influence exists between the skin, growth, and the somatotrophic axis, as skin produces IGF1 and vitamin D, and GH and IGF1 exert several actions on the skin. These untreated IGHD subjects exhibited a reduction in sweating but had normal vitamin D levels and phosphorus-calcium homeostasis. In addition, their skin appeared prematurely wrinkled and remained susceptible to cancer, as detailed later in this article.
Muscle function and balance
Although these IGHD individuals had small bones and muscles, their volumetric bone mineral density, corrected for bone size, was normal. Additionally, they had better muscle strength parameters (adjusted for weight and fat-free mass) and greater peripheral resistance to fatigue than controls. Not surprisingly, there were no reports of spontaneous fractures in this cohort, and the prevalence of vertebral fracture was reduced in older IGHD individuals compared with age-matched controls. They presented satisfactory walking and postural balance with no increased risk of falling, although they had moderate peripheral vestibular impairment without clinical consequences, as they were quite active in agriculture, horseback riding, and sports.
Quality of life, reproduction, sleep, and sensory perception
IGHD individuals exhibit normal quality of life, despite shorter and more fragmented sleep. The external and internal genitalia are essentially normal, which permits a sexual life with partners of normal stature and preserved reproductive capacity. The sense organs show generally very satisfactory performance (little, if any, vision impairment), with mild changes in cochlear function (mild high-tone sensorineural hearing loss) and labyrinth function (moderate peripheral vestibular impairment). These minor problems do not disturb their normal quality of life.
Body composition, cardio-metabolism, vascular, immune, and cancer data
The changes in body composition include decreased fat-free mass and increased percent body fat. IGHD subjects eat proportionally more, but healthier, food than local controls matched for age and gender. In fact, their estimated energy intake corrected for body weight is higher than that of controls. In addition, they consume, in percentage terms, more protein, fewer carbohydrates, and equal amounts of lipids. They show increased areas under the curves of GLP-1 and ghrelin and hunger attenuation in response to a mixed meal. They also exhibit reduced FGF21 and β-Klotho levels. These FGF21 and β-Klotho levels may not have been significantly influenced by the test meal but rather reflect their spontaneous morning secretion, suggesting that lower FGF21 and β-Klotho secretion is compatible with healthy status and longevity. Together, these “enteroendocrine” connections may result in a favourable outcome in terms of environmental adaptation, ensuring adequate food intake, and may confer metabolic and vascular benefits. Despite visceral adiposity, these IGHD subjects have increased insulin sensitivity, accompanied by high serum adiponectin. Insulin sensitivity may contribute to normal longevity but does not prevent the development of diabetes, which is present in 15% of adult IGHD subjects when assessed by OGTT, likely due to reduced β-cell function. Diabetes has also been reported in patients from the Israeli cohort with Laron dwarfism due to GH insensitivity caused by mutations in the GH receptor gene, while there was no self-reported diagnosis of diabetes in the Ecuadorian cohort with the same genetic defect.
Metabolic fatty liver disease is more prevalent in IGHD adults than in local controls, without progression to advanced forms of hepatitis. These IGHD subjects had high serum total and LDL cholesterol levels. They also exhibited higher circulating C-reactive protein, an increase in systolic blood pressure in adulthood, and arterial hypertension at older ages, without evidence of cardiac hypertrophy or of an increase in carotid intima-media thickness or in coronary and abdominal aortic atherosclerosis. Cerebral vasoreactivity, a surrogate marker of cerebrovascular disease, was not impaired in these subjects, and IGHD did not affect quantitative measures of the vascular and neural retina. Therefore, retinal development, like that of the teeth, eyes, and brain, may involve different patterns of regulation than whole-body growth, suggesting other regulatory mechanisms in addition to the somatotrophic axis. All these systems are extremely important for environmental adaptation and are responsible for hierarchical functions.
Immune function is also very important for environmental adaptation and survival capacity. Accordingly, we did not observe significant immune deficits in this cohort, especially for the most prevalent pathogens in the region. We observed no differences between IGHD subjects and controls regarding history of infectious diseases, baseline serology, response to hepatitis B, tetanus, and bacillus Calmette-Guérin vaccination, or positivity to PPD, streptokinase, or candidin skin tests. These IGHD subjects have a higher prevalence of periodontal disease than local controls, probably caused by their dental crowding. The apparently normal immune function suggests that many immune cells use extrapituitary circuits (local GH/IGFs), independent of the somatotrophic axis. We also found that macrophages from IGHD subjects are less prone to Leishmania amazonensis infection than those from GH-sufficient controls and that they appear to cope better with SARS-CoV-2 infection than controls. Resistance to Leishmania infection may be one of the reasons for the spread of this mutation in the Itabaianinha region.
In the entire IGHD Itabaianinha cohort, during 28 years of medical care, our team did not diagnose any cases of breast, colon, or prostate cancer. The absence of these common neoplasms suggests that GH and IGF1 deficiency protects against DNA damage and favours apoptosis of damaged cells, thus reducing the risk of cancer. Thus far, we have found one IGHD subject with a skin tag, shown to be a fibroepithelial polyp on pathological examination, and seven epidermoid skin cancers, one of them lethal, indicating a vulnerability of the skin to tumour development. Additionally, a 25-year-old woman who had intermittently received GH replacement therapy from age 11 to 18 developed an ependymoma extending from the fourth ventricle to the end of the thoracic spine. She underwent three surgical procedures, without obvious evidence of tumour recurrence during 10 years of follow-up.
Healthspan and lifespan
Although it is intuitive that geriatric medicine seeks to extend lifespan, in the last three decades its main strategy has been the compression of morbidity. This strategy delays the age of onset of chronic disease and disability rather than increasing survival, limiting morbidity to a shorter period closer to the end of life and thus reducing the total amount of disease and disability.
More recently, the theory of morbidity compression has evolved to promote the concept of healthspan, that is, the period of life free from major chronic clinical diseases and disabilities. To achieve optimal longevity (long life, but primarily well-being), the duration of life without significant comorbidities (the healthspan) must be significantly extended. IGHD individuals from Itabaianinha are very active throughout their lives and generally have a healthy old age, with an extended healthspan and a lifespan comparable to that of their relatives without GHD. Some are centenarians, and many of those who die at an advanced age die from external causes, such as accidents, or from preventable conditions. Therefore, these individuals constitute a model of optimal longevity in light of modern geriatrics (long life, but mainly with well-being). These data complement the extensive experimentation led by Dr Andrzej Bartke of Southern Illinois University School of Medicine, Springfield, which showed that mice with IGHD due to GHRH or GHRH receptor mutations, and mice with GH resistance, live longer than their normal siblings, with an extended healthspan.
MicroRNA signatures
MicroRNAs (miRNAs) are important regulators of metabolism and healthy ageing. They are short noncoding RNA segments that can induce target mRNA cleavage and translational repression and play a central role in the posttranscriptional regulation of cell function. They can be measured in the systemic circulation, where they can act as endocrine hormones regulating various physiological processes. Circulating miRNAs can also target genes in cells of different tissues and organs, and the signature of circulating miRNAs can potentially serve as a noninvasive diagnostic tool for chronic diseases such as cancer, diabetes, and cardiovascular disease. We found significant regulation of age-related miRNAs in Itabaianinha IGHD subjects. These miRNAs overlap substantially with serum-regulated miRNAs in GH-deficient mice, which show a remarkable extension of healthspan and lifespan. Of note, the target genes predicted for serum-regulated miRNAs in IGHD subjects contribute to insulin-, inflammation-, and ageing-related pathways, such as the mTOR and FoxO pathways. The main upregulated age-related miRNAs, miR-100-5p, miR-195-5p, miR-181b-5p, and miR-30e-5p, have been found to regulate the in vitro expression of the age-related genes mTOR, AKT, NFκB, and IRS1. Therefore, normal longevity is mirrored by a favourable miRNA signature.
Roles of the components of the somatotrophic system in body size and body functions
The accompanying figure shows, in simplified form, the roles of the components of the somatotrophic system in body size and body functions. The somatotrophic axis is crucial for body size, body composition, and the skin, and is important for some body functions, such as metabolism, voice production, and auditory and vestibular function. On the other hand, the extrapituitary circuits are crucial for the growth of some organs, such as the teeth, eyes, and brain. In conclusion, Sergipe has contributed to the study of GH excess (through Souza Leite) and of GH deficiency (with the description of Itabaianinha syndrome). This last line of research, spanning almost thirty years, has sought to establish the roles of the components of the somatotrophic system in body size and body functions. The balance of conditions associated with this severe congenital IGHD indicates that the benefits outweigh the harms.
Our hypothesis is that having very little exposure to GH throughout life may be more advantageous than having normal GH secretion followed by a decline caused by an acquired pituitary insult. It is intuitive that the skin, being the covering of the body, is significantly influenced by the somatotrophic axis that controls body size. Skin has many functions, some protective (against microorganisms, dehydration, ultraviolet light, and mechanical damage) and others homeostatic (sweating and production of vitamin D). A mutual influence exists between the skin, growth and the somatotrophic axis, as skin produces IGF1 and vitamin D, and GH and IGF1 exert several actions on the skin . These untreated IGHD subjects exhibited a reduction in sweating but had normal vitamin D levels and phosphorus-calcium homeostasis . In addition, their skin appeared prematurely wrinkled and remained susceptible to cancer , as detailed later in this article. Although these IGHD individuals had small bones and muscles, their volumetric bone mineral density, corrected for bone size, was normal . Additionally, they had better muscle strength parameters (adjusted for weight and fat-free mass) and greater peripheral resistance to fatigue than controls . Not surprisingly, there were no reports of spontaneous fractures in this cohort, and the prevalence of vertebral fracture was reduced in older IGHD individuals compared to age-matched controls . They presented satisfactory walking and postural balance with no increased risk of falling , although they had moderate peripheral vestibular impairment without clinical consequences, as they were quite active in agriculture, horseback riding, and sports. IGHD individuals exhibit normal quality of life , despite shorter and more fragmented sleep . The external and internal genitalia are essentially normal, which guarantees sexual life with a person of normal stature, with preserved reproductive capacity . The organs of sense present a generally very satisfactory performance (little, if any, vision impairment), with mild changes in cochlear function (mild high-tone sensorineural hearing loss) and labyrinth function (moderate peripheral vestibular impairment) . These minor problems do not disturb their normal quality of life. The changes in body composition include decreased fat-free mass and increased percent body fat . IGHD subjects eat proportionally more but healthier food than local controls matched for age and gender. In fact, their estimated energy intake corrected by body weight is higher than controls. In addition, they consume, in percentage, more proteins, less carbohydrates, and equal amounts of lipids . They show increased areas under the curves of GLP-1 and ghrelin and hunger attenuation in response to a mixed meal . They also exhibit reduced FGF21 and β-Klotho levels. These FGF21 and β-Klotho levels may not have been significantly influenced by the test meal but rather reflected their spontaneous morning secretion. This suggests that lower FGF21 and β-Klotho secretion is compatible with healthy status and longevity . Together, these “enteroendocrine” connections may result in a favourable outcome in terms of environmental adaptation, ensuring adequate food intake, and may confer metabolic and vascular benefits . Despite visceral adiposity , these IGHD subjects have increased insulin sensitivity , accompanied by high serum adiponectin . 
Insulin sensitivity may contribute to normal longevity but does not prevent the development of diabetes, which is present in 15% of adult IGHD subjects when assessed by OGTT , likely due to reduced β-cell function . Diabetes has also been reported in patients from the Israeli cohort with Laron dwarfism due to GH insensitivity caused by mutations in the GH receptor gene , while there was no self-reported diagnosis of diabetes in the Ecuadorian cohort with the same genetic defect . Metabolic fatty liver disease is more prevalent in IGHD adults than in local controls, without progression to advanced forms of hepatitis . These IGHD subjects had high serum total and LDL cholesterol levels . They also exhibited higher circulating C-reactive protein, an increase in systolic blood pressure in adults, and arterial hypertension in older age, without evidence of cardiac hypertrophy or an increase in carotid intima media thickness or coronary and abdominal aortic atherosclerosis . Cerebral vasoreactivity, a surrogate marker of cerebrovascular disease, was not impaired in these subjects, and IGHD did not affect quantitative measures of the vascular and neural retina . Therefore, retinal development, such as in the teeth, eye, and brain, may involve different patterns of regulation than whole-body growth, suggesting other regulatory mechanisms in addition to the somatotrophic axis. All these systems are extremely important for environmental adaptation and are responsible for hierarchical functions. Immune function is also very important for environmental adaptation and survival capacity. Accordingly, we did not observe significant immune deficits in this cohort, especially for the most prevalent pathogens in the region. We observed no difference between IGHD and controls regarding a history of infectious diseases, baseline serology, and in the response to hepatitis B, tetanus, and bacillus Calmette-Guérin vaccinations or in the positivity to PPD, streptokinase or candidin skin tests . These IGHD subjects have a higher prevalence of periodontal disease than local controls, probably caused by their dental crowding . The apparently normal immune function suggests that many immune cells use extrapituitary circuits (local GH/IGFs), independent from the somatotrophic axis. We also found that macrophages from IGHD subjects are less prone to Leishmania amazonensis infection than GH-sufficient controls and that they appear to cope better with SARS-CoV-2 infection than controls . Resistance to Leishamnia infection may be one of the reasons for the spread of this mutation in the Itabaianinha region. In the entire IGHD Itabaianinha cohort, during 28 years of medical care, our team did not diagnose any cases of breast, colon, or prostate cancer . The absence of these common neoplasms suggests that GH and IGF1 deficiency protects against DNA damage and favours apoptosis of damaged cells, thus reducing the risk of cancer. Thus far, we have found one IGHD subject with a skin tag, which was found to be a fibroepithelial polyp by pathological examination, and seven epidermoid skin cancers, one lethal, indicating a vulnerability of their skin to tumour development . Additionally, a 25-year-old woman who had intermittently received GH replacement therapy from age 11 to 18 developed an ependymoma extending from the fourth ventricle to the end of the thoracic spine. She underwent three surgical procedures without obvious evidence of tumour recurrence during the 10-year follow-up. 
Although it is intuitive that geriatric medicine seeks to extend lifespan, in the last three decades, its main strategy has been the compression of morbidity. This strategy delays the age of onset of chronic disease and disability rather than increasing survival, limiting morbidity to a shorter period and closer to the end of life, thus reducing the total amount of disease and disability. More recently, the theory of morbidity compression has evolved to promote the concept of healthspan, that is, the period of life free from major chronic clinical diseases and disabilities. To achieve optimal longevity (long life, but primarily well-being), the duration of life without significant comorbidities (healthspan) must be significantly extended . IGHD individuals from Itabaianinha are very active throughout their lives and generally have a healthy old age, with an extended healthspan and a lifespan comparable to that of their relatives without GHD . Some are centenarians, and many of those who die at an advanced age die from external causes, such as accidents or preventable conditions . Therefore, these individuals constitute a model of optimal longevity in light of modern geriatrics (long life, but mainly with well-being). These data are complementary to the extensive experimentation led by Dr Andrej Bartke of Southern Illinois University School of Medicine, Springfield, which showed that IGHD mice due to GHRH or GHRH receptor mutations and mice with GH resistance live longer than their normal siblings with an extended healthspan . MicroRNAs (miRNAs) are important regulators of metabolism and healthy ageing . MicroRNAs are short noncoding RNA segments that can induce target mRNA cleavage and translational repression and play a central role in the posttranscriptional regulation of cell function . They can be measured in the systemic circulation, where they can act as endocrine hormones regulating various physiological processes. Circulating miRNAs can also target genes in cells of different tissues and organs. The signature of circulating miRNA can potentially serve as a noninvasive diagnosis of chronic diseases, such as cancer, diabetes, and cardiovascular disease . We found a significant regulation of age-related miRNAs in Itabaianinha IGHD subjects . These miRNAs have an important overlap with serum-regulated miRNAs in GH-deficient mice, which have a remarkable extension of healthspan and lifespan . Of note, the target genes predicted for serum-regulated miRNAs in IGHD subjects contribute to insulin-, inflammation-, and ageing-related pathways, such as the mTOR and FoxO pathways. The main upregulated age-related miRNAs, miR-100-5p, miR-195-5p, miR-181b-5p and miR-30e-5p, have been found to regulate the in vitro expression of the age-related genes mTOR, AKT, NFκB and IRS1. Therefore, normal longevity is mirrored by a favourable miRNA signature. shows in simplified form the roles of the components of the somatotrophic system in body size and body functions. The somatotropic axis is crucial for body size and composition and skin and is important for some body functions, such as metabolism, voice production and auditive and vestibular functions. On the other hand, extrapituitary circuits are crucial for the growth of some organs, such as teeth, eyes and the brain. In conclusion, Sergipe has contributed to the study of GH excess (Souza Leite) and GH deficiency (with the description of Itabaianinha syndrome). 
This last line of research, lasting almost thirty years, has sought to establish the role of the components of the somatotrophic system in body size and body functions. The balance of conditions associated with this severe and congenital IGHD shows that the benefits outweigh the harms. Our hypothesis is that having very little exposure to GH throughout life may be more advantageous than having normal GH secretion followed by a decline caused by an acquired pituitary insult.
Management of gestational hypothyroidism: results of a Brazilian survey
62c10ad0-4e10-49cd-a269-74e9f894ca87
10118917
Gynaecology[mh]
A normal pregnancy entails a number of important physiological and hormonal changes that modify thyroid function, resulting in greater susceptibility to hypothyroidism in women with reduced thyroid reserve or iodine deficiency. Both overt and subclinical hypothyroidism are common in pregnancy. Hypothyroid women are more prone to infertility and have an increased prevalence of miscarriage, anemia, gestational hypertension, placental abruption, and postpartum bleeding ( - ). Untreated overt maternal hypothyroidism is associated with adverse events for the neonate: preterm birth, low birth weight, acute respiratory distress syndrome, and neurocognitive deficits ( ). Pregnant women with subclinical hypothyroidism seem more prone to preterm delivery, and their newborns are more frequently admitted to intensive care ( ). Given the preventable obstetric and neonatal risks involved, recent publications have introduced recommendations for the management of gestational hypothyroidism. In order to determine the impact of those recommendations as well as the most prevalent clinical practices, the Brazilian Society of Endocrinology and Metabolism (Sociedade Brasileira de Endocrinologia e Metabologia – SBEM) made a survey available to its members.
An electronic questionnaire survey was sent by e-mail to SBEM members, followed by two reminder e-mails. The survey was based on clinical case scenarios and inquired about clinical practices related to the management of hypothyroidism during pregnancy, including questions pertaining to diagnostic evaluation, choice of therapy, and follow-up. Most questions required a single best response to be selected from multiple choices; respondents were asked to indicate either a single (10 questions) or multiple (2 questions) answers. All frequencies were adjusted on a 100% basis, excluding the non-respondents (a minimal worked sketch of this adjustment follows below). Survey responses were anonymously collected and stored electronically by the survey service provider.
Respondent profile
Our survey included 406 respondents with the following regional distribution: 57% from the southeast region of Brazil, 18% from the south, 14% from the northeast, 8% from the central-west, and 3% from the north. Respondent distribution by field of practice was as follows: 89% endocrinologists, 8% gynecologists/obstetricians, 2.5% general practitioners, and 0.5% from other specialties. Six respondents did not provide care to fertile-aged or pregnant women and, for that reason, were excluded from the study. In both public and private Brazilian health care centers, most pregnant women with hypothyroidism receive care from an endocrinologist alone (57%), an endocrinologist with an obstetrician (39%), or an obstetrician alone (4%).
Screening
Eighty-one percent of the endocrinologists chose universal thyroid dysfunction screening, while 13% advocated selective screening based on risk factors; another 4% did not conduct any systematic screening. The most frequently cited risk factors in the stratification of pregnant women at high risk of thyroid dysfunction, and the laboratory tests indicated by the respondents for hypothyroidism screening in pregnant women, are presented in the accompanying tables. The pregestational visit was indicated as the best time point for screening (73%), followed by the first prenatal visit (24%); the remaining 3% of the respondents reported no specific time.
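To illustrate the frequency adjustment described in the methods above, here is a minimal sketch in Python. The counts, option labels, and function name are illustrative assumptions, not the survey's raw data.

```python
# Minimal sketch of the frequency adjustment described above: percentages
# are computed on a 100% basis after excluding non-respondents. The counts
# below are illustrative, not taken from the survey data.

def adjusted_frequencies(counts, total_surveyed):
    """Return percentages relative to actual respondents only."""
    respondents = sum(counts.values())  # non-respondents excluded
    assert respondents <= total_surveyed
    return {option: 100.0 * n / respondents for option, n in counts.items()}

answers = {"universal screening": 304, "selective screening": 49, "none": 15}
print(adjusted_frequencies(answers, total_surveyed=406))
# e.g. universal screening = 304/368 ≈ 82.6% of those who answered
```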
Most of the physicians (67%) stated that they follow up their pregnant patients even when TSH levels were normal on the first-trimester screening tests. Of these physicians, 26% remeasure thyroid function in all their pregnant patients in the second and third trimesters of pregnancy, 25% reassess thyroid function only in pregnant patients with risk factors, and 16% measure thyroid antibodies. The remaining 33% of the surveyed clinicians do not monitor their patients in this setting.
Diagnosis and treatment
Eighty-two percent of the respondents consider a TSH level of > 2.5 mIU/L the threshold to initiate treatment of gestational hypothyroidism. For 9% of those surveyed, the cut-off point is a TSH of > 5 mIU/L, and 7% adopt the laboratory reference range to guide their management strategy. The most widely used laboratory tests for monitoring the treatment of pregnant women are reported in the accompanying table. The goal of treatment in gestational hypothyroidism is to maintain TSH concentrations < 2.5 mIU/L in the first trimester and < 3.0 mIU/L in the second and third trimesters, according to 94% of the surveyed physicians (a minimal sketch of these trimester-specific goals follows below). Considering women with a history of hypothyroidism due to chronic autoimmune thyroiditis, TSH levels of < 2.5 mIU/L (receiving LT4), and planning pregnancy, 96% of the respondents are aware of the importance of adjusting the dose of LT4 as soon as pregnancy is confirmed. Among these, 48% increase the LT4 dose before reassessing thyroid function, whereas 47% reassess thyroid function first. For the remaining 4%, it is only necessary to reevaluate thyroid function in the second trimester of gestation. Most physicians monitor their pregnant patients with hypothyroxinemia by measuring TSH and FT4 levels (35%) or thyroid antibodies (28%) each trimester; 22% chose LT4 treatment, and 15% do not follow or treat these patients. Some clinical scenarios were presented in our survey; the approaches followed by the respondents are described in the accompanying table.
We observed a high degree of agreement between the consensus guidelines and the daily practice of most Brazilian respondents to our electronic survey. However, some aspects are contradictory, in particular the choice by the majority of endocrinologists of universal screening for thyroid dysfunction. The guidelines recommend an active search for high-risk pregnant women, although the evidence to support this recommendation is limited, which justifies the disparity in practices. The leading argument in favor of selective screening based on risk factors is the scarcity of evidence regarding the benefits of using LT4 to treat subclinical gestational hypothyroidism ( ). Nevertheless, Dosiou and cols. demonstrated that even when those benefits are lacking, universal screening remains a cost-effective practice ( ). In a recent review on this topic, Vila and cols. concluded in favor of universal screening based on the high prevalence of thyroid dysfunction during pregnancy, the simple diagnosis, the availability of effective and safe treatment, and the possibility of early intervention, thus reducing the burden of disease ( ). It is possible that our endocrinologists were influenced by this literature. The presence of goiter or obesity, a personal history of thyroid disease or surgery, cervical radiation therapy, miscarriage, or infertility, and a family history of autoimmune or thyroid diseases in first-degree relatives are the risk factors indicated in our survey as well as in the guidelines.
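As a compact restatement of the trimester-specific TSH goals reported above (< 2.5 mIU/L in the first trimester, < 3.0 mIU/L thereafter), here is a minimal sketch in Python; the function name and structure are illustrative, not from the survey.

```python
# Minimal sketch of the trimester-specific TSH treatment goals reported
# above: < 2.5 mIU/L in the first trimester, < 3.0 mIU/L in the second
# and third trimesters.

def tsh_within_goal(tsh_miu_l: float, trimester: int) -> bool:
    """Check a TSH value against the trimester-specific goal."""
    if trimester == 1:
        return tsh_miu_l < 2.5
    elif trimester in (2, 3):
        return tsh_miu_l < 3.0
    raise ValueError("trimester must be 1, 2, or 3")

print(tsh_within_goal(2.8, trimester=1))  # False: above first-trimester goal
print(tsh_within_goal(2.8, trimester=2))  # True: below the 3.0 mIU/L goal
```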
Regarding the laboratory test for screening, there was a slight preference for TSH plus thyroid antibodies over TSH alone. Thyroid autoimmunity can be found in up to 10% of pregnant women ( , ) and has a positive association with miscarriage and prematurity ( , , ); however, this may not be a causal relationship. Furthermore, to date, no studies have evaluated the efficacy of screening by measuring anti-TPO antibodies (TPOAb) ( ). The preferred screening test should be sensitive, specific, simple, inexpensive, and widely available ( ). These characteristics are consistent with the recommendation of TSH as the screening test of choice, provided that trimester-specific reference ranges are applied.
Sixty-seven percent of the surveyed physicians follow up their pregnant patients with normal TSH on the screening tests by reassessing thyroid function (most of the respondents) or measuring thyroid antibodies. This is an important issue: as demonstrated by Vermiglio and cols. ( ) in mildly iodine-deficient regions, screening performed only during the first trimester of gestation could entail more than 40% missed diagnoses of hypothyroidism.
The TSH cut-off point of > 2.5 mIU/L for treatment of gestational hypothyroidism is widely accepted among those surveyed. However, 9% of them use the reference ranges provided by the laboratories to guide their therapeutic management – a fact to be regarded with caution, because those ranges may not be adjusted to the population of pregnant women. Regarding the tests for monitoring treatment, the leading choices were 1) TSH and FT4, 2) TSH alone, and 3) TSH and total T4 (TT4). Determination of TSH levels is the most sensitive method for diagnosis and monitoring of hypothyroidism. Measuring FT4 can also be helpful, but the most common determination methods are influenced by the low serum albumin levels and high TBG levels typical of pregnancy. Alternatively, TT4 could be determined and interpreted after multiplying its reference range by 1.5 ( ).
Thyroxine requirements increase in the initial stages of pregnancy, and this need is progressive up until 16 to 20 weeks of gestation ( ). Therefore, hypothyroid pregnant women who are already on LT4 replacement will typically require an increase in the preconception LT4 dose. According to the guidelines, pregestational TSH levels should ideally be maintained < 2.5 mIU/L ( , , - ). However, newly pregnant hypothyroid women without a previous TSH determination or immediate access to their physician should increase their daily dose of LT4 by 30% or include 2 additional LT4 tablets in their weekly dosage ( , ) (a worked dosing sketch appears after this discussion). If TSH is first measured after pregnancy is confirmed, the LT4 dosage can be increased on the basis of the variation in TSH levels ( ). In our survey, 47% of the respondents indicated that they reassess thyroid function as soon as pregnancy is confirmed, while 48% increase the dose of LT4 after pregnancy confirmation but before remeasuring thyroid function. It is likely that access to health care services has an influence on the chosen course of action.
Maternal hypothyroxinemia is defined as a normal TSH concentration with an FT4 level below the 5th percentile of the reference range. The current consensus recommends that hypothyroxinemia should not be treated ( ), since FT4 measurements can be inconsistent during pregnancy and evidence of the impact of this condition on the neuropsychological development of the offspring is scarce ( ).
Nevertheless, approximately 22% of those surveyed would treat this clinical condition. A number of clinical scenarios were proposed in our survey. In the case of pregnant women with subclinical hypothyroidism diagnosed in the first trimester, negative for TPOAb, and with no previous history of LT4 replacement, 55% of the clinicians chose treatment, while 37% would monitor these women with TSH and FT4 measurements in each trimester of gestation. The consensus statements admit the treatment of this condition, considering its potential benefits and low risk, but the data are insufficient to support a recommendation ( , ); thus, the choice of more than one-third of those surveyed is also justified. In the setting of subclinical gestational hypothyroidism (without previous LT4 treatment) and positive TPOAb, the vast majority of the practitioners initiate replacement with LT4, which is in agreement with the guidelines ( , ). For pregnant women with no previous use of LT4 who are euthyroid on the screening test yet TPOAb-positive, 82% of the surveyed physicians maintain follow-up with TSH and FT4 determinations. The consensus statements corroborate this approach, to be conducted every 4 weeks up to mid-pregnancy and at least one more time between 26 and 32 weeks of gestation ( ). Some clinical issues addressed in our survey were largely consistent with the current recommendations: the choice of the pregestational or first prenatal visit for screening ( ), the cut-off point of TSH > 2.5 mIU/L for diagnosis and treatment of hypothyroidism during pregnancy, and the goal of maintaining TSH levels within the target range for each trimester of gestation when LT4 is used ( , , ). Consensus statements are important tools in modern medicine and have been incorporated into the daily practice of the respondents. However, a few controversial aspects persist, such as universal screening versus aggressive case finding.
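Returning to the LT4 dose-adjustment rules discussed above, here is a minimal sketch in Python of the two empirical options (a 30% daily-dose increase, or 2 extra tablets per week). The doses and function names are illustrative assumptions; this is not a dosing recommendation.

```python
# Minimal sketch of the two empirical LT4 dose-adjustment rules discussed
# above for newly pregnant hypothyroid women: increase the daily dose by
# 30%, or add 2 extra tablets to the weekly dosage (9 instead of 7 daily
# doses per week). Values are illustrative.

def adjust_lt4_daily(dose_ug_per_day: float) -> float:
    """Increase the daily LT4 dose by 30%."""
    return round(dose_ug_per_day * 1.30, 1)

def adjust_lt4_weekly(dose_ug_per_day: float) -> float:
    """Add 2 tablets to the weekly dosage: 9 instead of 7 daily doses."""
    return dose_ug_per_day * 9 / 7  # average daily equivalent

print(adjust_lt4_daily(100.0))            # 130.0 µg/day
print(round(adjust_lt4_weekly(100.0), 1))  # 128.6 µg/day on average
```

Note that the two rules are roughly equivalent: adding 2 tablets per week corresponds to an average daily increase of about 29%, which is why the guidelines present them as interchangeable empirical options.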
Facts about the history of the AE&M
a8732cf0-fc4f-4019-ae4e-57df7176ad85
10118926
Physiology[mh]
Health-related quality of life associated with diabetic retinopathy in patients at a public primary care service in southern Brazil
2b0a56a3-10ea-447c-80e7-ef368ef16ad3
10118975
Ophthalmology[mh]
Diabetic retinopathy (DR) is one of the leading causes of preventable visual impairment and blindness worldwide, despite existing accurate diagnostic technologies and effective treatments ( , ). It is usually asymptomatic until late stages and can lead to sudden visual loss affecting the individual's functional capabilities (e.g., mobility, independence, self-care, and the ability to perform daily activities such as work and leisure) ( ). In Brazil, DR accounts for approximately 25% of all years lived with disability from diabetes mellitus ( ). The impact of DR on quality of life (also referred to as health-related quality of life [HRQoL]) has been reported in several countries ( , ). The results suggest that DR severity has a significant negative impact on HRQoL among patients with diabetes; however, this finding is not consistent across studies ( ). HRQoL is a complex construct involving an individual's perception of his or her health state (including physical, mental, and social domains) ( ). Sociocultural differences may influence HRQoL perception ( ). Therefore, it is particularly important to provide information on the impact of DR on HRQoL in light of the social context ( ).
The EuroQol five dimensions (EQ-5D) ( ) is the most commonly used generic preference-based measure of HRQoL (other measures include the SF-36 and the HUI-3) ( ). It encompasses five health dimensions (mobility, self-care, usual activities, pain/discomfort, and anxiety/depression), each with three severity levels, resulting in 243 possible health states. The health states can be converted into utility values (a single value representing an individual's preferences for a given health state) based on the preferences of the general population (also referred to as tariffs). Utility values range from 1 (equaling full health) to 0 (equaling death); negative values may also occur, indicating that a person's health state is worse than death ( ). The obtained utility values can be used to calculate the quality-adjusted life-year (QALY) measure by multiplying the values by the amount of time spent in a specific health state ( ); a minimal worked sketch of this calculation follows below. Many national guidelines (such as those from Brazil and the UK) ( , ) recommend using QALYs in economic evaluations to compare benefits from health technologies ( ). To the best of our knowledge, no study has been conducted using Brazilian EQ-5D tariffs to describe utility values according to DR health states. Therefore, this study aimed to establish the utility values for different health states associated with DR in a Brazilian sample, to provide input for model-based economic evaluations, and to explore potential differences in HRQoL among DR health states.
Study design and population
This was a cross-sectional study including a convenience sample of patients with type 2 diabetes mellitus (T2D) who underwent teleophthalmology screening at a public primary care service in southern Brazil from 2014 to 2016. Patients with T2D who were registered at the service were invited to participate by phone calls or were referred by the service's family physicians for screening. Individuals with T2D who were older than 18 years were included. Patients were excluded if they had type 1 diabetes (T1D; n = 5, 2%), cognition problems (n = 0), blindness due to a disease other than T2D (n = 1, 0.4%), or unreadable retinal photographs due to lens opacity (n = 20, 8.6%). Patients with T1D were not included because of the low prevalence of this disease in primary care.
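As a concrete illustration of the utility-to-QALY conversion described in the introduction above, here is a minimal sketch in Python. The durations are illustrative assumptions, and the utility values merely echo the cohort means reported later in this article; they are not tariff outputs.

```python
# Minimal sketch of the EQ-5D utility -> QALY calculation described above:
# QALYs = sum of (utility x years spent in that health state).
# Durations are illustrative; utilities echo means reported in this study.

def qalys(utility_by_period):
    """Sum utility x duration (in years) over successive health states."""
    return sum(utility * years for utility, years in utility_by_period)

# Example: 3 years without DR (utility ~0.77), then 2 years with bilateral
# blindness (utility ~0.36)
print(qalys([(0.77, 3.0), (0.36, 2.0)]))  # 3.03 QALYs
```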
Prior to the measurements, study requirements were explained to the patients by one of the three trained family physicians performing the teleophthalmology screening ( ). Patients who agreed to participate provided written informed consent; a legal guardian signed the written informed consent in cases of blindness. A sample size of 126 patients was required to detect a difference of 0.1 in mean utility value between two DR health states with α = 0.05 and a power of 80% (a verification sketch of this calculation follows below). The study was approved by the Ethics Committee of the Hospital de Clínicas de Porto Alegre.
Teleophthalmology screening
Retinal photographs were taken by the aforementioned trained family physicians. Images of two fields of each eye were captured using the Canon CR-2 Digital Retinal camera (Canon U.S.A., Inc., Melville, NY, USA). Retinal photographs were remotely evaluated and classified by two ophthalmologists of the teleophthalmology screening program based on the International Clinical Diabetic Retinopathy and Diabetic Macular Edema Disease Severity Scale ( ). More details about the teleophthalmology screening training and work process have been described by other authors ( ).
Diabetic retinopathy health states
Four DR health states were defined based on economic evaluation models previously published in the literature ( - ): absent (NoDR), non-sight-threatening (Non-STDR), sight-threatening (STDR), and bilateral blindness (BB). The Non-STDR health state included mild and moderate nonproliferative DR. The STDR health state included severe nonproliferative DR, proliferative DR, and clinically significant macular edema. The categorization as Non-STDR or STDR was based on the worse eye. Patients were asked to report previously diagnosed eye conditions. Patients reporting complete vision loss in both eyes due to T2D and presenting retinographic findings suggestive of vision loss due to DR were classified as BB.
Measure of Health-Related Quality of Life – Utility values
The EQ-5D is a standardized, generic preference-based measure of HRQoL developed by the EuroQol Group ( ). The three-level version of the EQ-5D consists of two pages: the EQ-5D descriptive system and the EQ visual analogue scale (EQVAS). The descriptive system comprises five HRQoL dimensions (mobility, self-care, usual activities, pain/discomfort, and anxiety/depression), each with three severity levels (no problems, some problems, and extreme problems). The EQVAS records the patient's self-rated health on a vertical visual analogue scale ranging from 0 to 100, with the endpoints labeled “best imaginable health state” and “worst imaginable health state” ( ). The visual analogue scale can be used as a quantitative measure of health outcome that reflects the patient's own judgment ( ). The patients completed the questionnaire before the retinal photographs were taken. The three-level version of the EQ-5D has been validated in Portuguese, and Brazilian tariffs have been published ( ).
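The sample-size statement above can be checked with a standard two-sample power calculation. In the sketch below, the standard deviation of ~0.2 is our assumption (the paper does not state it here), giving a standardized effect size of 0.5; the resulting total of ~128 is close to the reported 126.

```python
# Minimal sketch verifying a sample-size calculation like the one above.
# The study reports n = 126 for detecting a 0.1 utility difference with
# alpha = 0.05 and 80% power; the SD of ~0.2 is our assumption.
from statsmodels.stats.power import TTestIndPower

effect_size = 0.1 / 0.2  # mean difference / assumed SD
n_per_group = TTestIndPower().solve_power(
    effect_size=effect_size, alpha=0.05, power=0.80
)
print(round(n_per_group))  # ~64 per group, i.e. ~128 patients in total
```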
Description of variables
Demographic and clinical variables of interest for the descriptive analysis were collected from electronic medical records: age (years), sex, self-reported skin color (white or non-white), education level (no/primary education, secondary education, higher education), diabetes duration (years), diabetes treatment, glycated hemoglobin (HbA1c), diagnosis of hypertension, creatinine, albuminuria, dialysis, low-density lipoprotein cholesterol (LDL), triglycerides (TG), high-density lipoprotein cholesterol (HDL), presence of foot ulcers or lower-extremity amputation, previous coronary heart disease, stroke, ophthalmic diseases, and other self-reported comorbidities. We collected the most recent laboratory results available in the patients' electronic medical records within the 12 months prior to the screening. Diabetes control was defined as an HbA1c level ≤ 7.0% ( ). Hypertension was defined as current antihypertensive therapy and/or a hypertension diagnosis reported in the medical record; systolic and diastolic blood pressure below 140 mmHg and 90 mmHg, respectively, was classified as controlled hypertension. Chronic kidney disease was defined as any abnormal albuminuria in a spot urine sample (≥ 17 mg/L or 20–200 mg/g Cr) or a glomerular filtration rate < 90 mL/min/1.73 m² ( ). Dyslipidemia was defined as LDL cholesterol ≥ 160 mg/dL, TG ≥ 150 mg/dL, or HDL < 40 mg/dL (men) and < 50 mg/dL (women) ( ). Dialysis, lower-extremity amputation, foot ulcers, coronary heart disease, stroke, and ophthalmic disease were assessed through direct questions. Ophthalmic diseases included refractive errors, cataract, glaucoma, ocular toxoplasmosis, and other self-reported ocular conditions. Age-related macular degeneration was assessed by two of the aforementioned ophthalmologists through digital retinal photographs and by patient self-report.
Statistical analyses
Missing data related to demographic and clinical variables (214 [2.6%] out of 8026 values) were imputed by means of regression models. Descriptive statistics were calculated using pooled data from the 10 imputed data sets. Means ± standard deviations (SD) were used to describe normally distributed variables, and medians and interquartile ranges (IQR) were used for nonparametric variables. The normality of variables was evaluated by histograms and the Kolmogorov-Smirnov test. The utility values for the different health states associated with diabetic retinopathy were compared with adjustment for potential confounders using analysis of covariance (ANCOVA); an illustrative sketch of such a model follows below. The variables included in the adjusted analysis were selected from two sources: a) from a theoretical model based on the current literature, variables associated with HRQoL, such as age, sex, other comorbidities, ophthalmic diseases, and macrovascular and microvascular complications ( , ); and b) from a previous univariate analysis, variables found to be associated with utility values (p ≤ 0.05). Diabetes duration, HbA1c, diabetes control, and type of treatment were not included in the adjusted analysis because they are usually not associated with HRQoL, despite their strong association with DR ( , , ). An additional adjusted analysis was performed after excluding the cases of BB because this group was very small (two cases). We opted to perform ANCOVA because there was homogeneity of variances in utility values at each level of DR (Levene's test, p = 0.27) and the residuals followed an approximately normal distribution.
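For readers unfamiliar with ANCOVA in this setting, here is an illustrative sketch (not the authors' code, which used IBM SPSS) of comparing EQ-5D utility across DR health states while adjusting for covariates. The data frame, column names, and values are hypothetical.

```python
# Illustrative sketch of an ANCOVA comparing EQ-5D utility across DR
# health states with covariate adjustment, using statsmodels. The data
# and column names are hypothetical placeholders.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "utility":  [0.80, 0.75, 0.70, 0.62, 0.35, 0.78, 0.74, 0.68, 0.55, 0.40],
    "dr_state": ["NoDR", "NoDR", "NonSTDR", "STDR", "BB",
                 "NoDR", "NonSTDR", "NonSTDR", "STDR", "BB"],
    "age":      [58, 63, 66, 70, 72, 60, 65, 61, 69, 74],
    "female":   [1, 0, 1, 1, 0, 0, 1, 0, 1, 1],
})

# Utility modeled as a function of DR state plus covariates (ANCOVA)
model = smf.ols("utility ~ C(dr_state) + age + female", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # type II ANCOVA table
```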
For the adjusted analysis, we grouped chronic kidney disease, foot ulcers, and lower-extremity amputation into a single variable named “microvascular complications”. Coronary heart disease and stroke were grouped into a variable named “macrovascular complications”. Cataract, glaucoma, ocular toxoplasmosis, age-related macular degeneration, and other self-reported ocular diseases were grouped into a variable named “ophthalmic diseases”. A variable named “other comorbidities” included other self-reported diseases not included in the three previous variables, such as cancer and rheumatologic and dermatologic disorders. Additional interaction analysis was undertaken considering all possible interactions between variables included in the adjusted analysis. IBM SPSS Statistics version 24.0 was used to perform all analyses. This was a cross-sectional study including a convenience sample of patients with type 2 diabetes mellitus (T2D) who underwent teleophthalmology screening at a public primary care service in Southern Brazil from 2014 to 2016. Patients with T2D who were registered at the service were invited to participate by phone calls or were referred by the service’s family physicians for screening. Individuals with T2D who were older than 18 years were included. Patients were excluded if they had type 1 diabetes (T1D; n = 5, 2%), cognition problems (n = 0), blindness due to a disease other than T2D (n = 1, 0.4%), and unreadable retinal photographs due to lens opacity (n = 20, 8.6%). Patients with T1D were not included because of the low prevalence of this disease in primary care. Prior to the measurements, study requirements were explained to the patients by one of the three trained family physicians performing the teleophthalmology screening ( ). Patients who agreed to participate provided written informed consent. A legal guardian signed the written informed consent in case of blindness. A sample size of 126 patients was required to detect a difference of 0.1 in mean utility value between two DR health states with an α of 0.05 and power of β = 80%. The study was approved by the Ethics Committee of the Hospital de Clínicas de Porto Alegre . Retinal photographs were taken by the aforementioned trained family physicians. Images of two fields of each eye were captured by using the Canon CR-2 Digital Retinal (Canon U.S.A., Inc., Melville, NY, USA). Retinal photographs were remotely evaluated and classified by two ophthalmologists of the teleophthalmology screening based on the International Clinical Diabetic Retinopathy and Diabetic Macular Edema Disease Severity Scale ( ). More details about the teleophthalmology screening training and work process have been described by other authors ( ). Five DR health states were defined based on economic evaluation models previously published in the literature ( - ): absent (NoDR), non-sight-threatening (Non-STDR), sight-threatening (STDR), and bilateral blindness (BB). The Non-STDR health state included mild and moderate nonproliferative DR. The STDR health state included severe nonproliferative, proliferative DR, and clinically significant macular edema. The categorization as Non-STDR and STDR was based on the worse eye. Patients were asked to report previously diagnosed eye conditions. Patients reporting a complete vision loss in both eyes due to T2D and presenting retinographic findings suggestive of vision loss due to DR were classified as BB. The EQ-5D is a standardized, generic preference-based measure of HRQoL developed by the EuroQol Group ( ). 
The three-level version of the EQ-5D consists of two pages: the EQ-5D descriptive system and the EQ visual analogue scale (EQVAS). The descriptive system comprises five HRQoL dimensions (mobility, self-care, usual activities, pain/discomfort, and anxiety/depression). Each dimension has three severity levels (no problems, some problems, and extreme problems). The EQVAS records the patient’s self-rated health on a vertical visual analogue scale ranging from 0 to 100, where the endpoints are labeled “best imaginable health state” and “worst imaginable health state” ( ). The visual analogue scale can be used as a quantitative measure of health outcome that reflects the patient’s own judgment ( ). The patients completed the questionnaire before the retinal photographs were taken. The three-level version of the EQ-5D has been validated in Portuguese, and Brazilian tariffs have been published ( ). Demographic and clinical variables of interest for descriptive analysis were collected from electronic medical records: age (years), sex, self-reported skin color (white or non-white), education level (no/primary education, secondary education, higher education), diabetes duration in years, diabetes treatment, glycated hemoglobin (HbA1c), diagnosis of hypertension, creatinine, albuminuria, dialysis, low-density lipoprotein cholesterol (LDL), triglycerides (TG), high-density lipoprotein cholesterol (HDL), presence of foot ulcers or lower-extremity amputation, previous coronary heart disease, stroke, ophthalmic diseases, and other self-reported comorbidities. We collected the most recent laboratory results available in the patients’ electronic medical record within 12 months prior to the screening. Diabetes control was defined as an HbA1c level ≤ 7.0% ( ). Hypertension was defined as current antihypertensive therapy and/or a hypertension diagnosis reported in the medical record. Levels of systolic and diastolic blood pressure below 140 mmHg and 90 mmHg, respectively, were classified as controlled hypertension. Chronic kidney disease was defined as any abnormal albuminuria in a spot urine sample (≥ 17 mg/L or 20–200 mg/g Cr) or a glomerular filtration rate < 90 mL/min/1.73 m² ( ). Dyslipidemia was defined as values of LDL cholesterol ≥ 160 mg/dL, or TG ≥ 150 mg/dL, or HDL < 40 mg/dL (men) and < 50 mg/dL (women) ( ). Dialysis, lower-extremity amputation, foot ulcers, coronary heart disease, stroke, and ophthalmic disease were ascertained through direct questions. Ophthalmic diseases included refractive errors, cataract, glaucoma, ocular toxoplasmosis, and other self-reported ocular conditions. Age-related macular degeneration was assessed by two of the aforementioned ophthalmologists through digital retinal photographs and by patient self-report. Missing data related to demographic and clinical variables (214 [2.6%] out of 8026 values) were imputed by means of regression models. Descriptive statistics were calculated using pooled data from the 10 imputed data sets. Means ± standard deviations (SD) were used to describe normally distributed variables, and medians and interquartile ranges (IQR) were used for nonparametric variables. The normality of variables was evaluated by histograms and the Kolmogorov-Smirnov test. The utility values for different health states associated with diabetic retinopathy were assessed with adjustment for potential confounders using analysis of covariance (ANCOVA).
The variables included in the adjusted analysis were selected from the following two sources: a) from a theoretical model based on the current literature, those variables associated with HRQoL, such as age, sex, other comorbidities, ophthalmic diseases, and macrovascular and microvascular complications ( , ); and b) from a previous univariate analysis, those variables found to be associated with utility values (p ≤ 0.05). Diabetes duration, HbA1c, diabetes control, and type of treatment were not included in the adjusted analysis because they are usually not associated with HRQoL, despite their strong association with DR ( , , ). Additional adjusted analysis was performed after excluding the cases of BB because this group was very small (two cases). We opted to perform ANCOVA because there was homogeneity of variances in utility values at each level of DR (Levene’s test, p = 0.27) and the residuals followed an approximately normal distribution. For the adjusted analysis, we grouped chronic kidney disease, foot ulcers, and lower-extremity amputation into a single variable named “microvascular complications”. Coronary heart disease and stroke were grouped into a variable named “macrovascular complications”. Cataract, glaucoma, ocular toxoplasmosis, age-related macular degeneration, and other self-reported ocular diseases were grouped into a variable named “ophthalmic diseases”. A variable named “other comorbidities” included other self-reported diseases not included in the three previous variables, such as cancer and rheumatologic and dermatologic disorders. Additional interaction analysis was undertaken considering all possible interactions between variables included in the adjusted analysis. IBM SPSS Statistics version 24.0 was used to perform all analyses. We included 206 out of the 232 patients who underwent the teleophthalmology screening. The mean age of the patients included was 63.5 ± 10.6 years, 60.7% (n = 125) were female, 85% (n = 175) were of white ethnicity, and 50.5% (n = 104) had secondary education. The patients included in the study had a statistically significantly higher mean utility value compared with those who were excluded due to unreadable retinal photographs (0.765 ± 0.19 vs. 0.636 ± 0.18, respectively, p = 0.004). However, there were no significant differences between included and excluded patients regarding HbA1c (7.5% vs. 7.0%, respectively, p = 0.13), diabetes control (68.4% vs. 71.1%, respectively, p = 0.49), and diabetes duration (8.7 vs. 8.2 years, respectively, p = 0.71). The overall prevalence of DR was 23.8% (n = 49). In all, 15.5% (n = 32) of the patients had Non-STDR, 7.3% (n = 15) had STDR, and 1% (n = 2) had BB ( ). The percentage of patients reporting full health was 25.7% (n = 53). The mean utility was 0.773 ± 0.17 in patients with NoDR and 0.739 ± 0.24 in those with DR. Patients with DR and no BB presented a mean utility of 0.755 ± 0.23, whereas those with BB presented a mean utility of 0.356 ± 0.21. Appendices 1 and 2 provide a more detailed description of the sample regarding the five EQ-5D dimensions of quality of life according to DR health states. The accompanying table shows the utility values for the different health states related to DR with and without adjustment for potential confounders. The mean utility value of the various DR health states decreased after adjustment.
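As an illustration of the covariate-adjusted comparison described in the methods above, a minimal sketch in Python is shown below. The original analyses were performed in IBM SPSS 24, so the file name, column names and model layout here are illustrative assumptions only, not the authors' actual code.

```python
# Sketch of an ANCOVA comparing EQ-5D utility across DR health states,
# adjusted for the confounders named in the methods. All identifiers
# ("eq5d_dr.csv", "utility", "dr_state", ...) are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import levene

df = pd.read_csv("eq5d_dr.csv")  # one row per patient (hypothetical file)

# Precondition stated in the text: homogeneity of variances in utility
# at each level of DR (Levene's test).
groups = [g["utility"].values for _, g in df.groupby("dr_state")]
print(levene(*groups))

# ANCOVA via OLS: utility ~ DR health state + covariates (age, sex,
# other comorbidities, ophthalmic diseases, macro-/microvascular
# complications, as in the adjusted analysis described above).
model = smf.ols(
    "utility ~ C(dr_state) + age + C(sex) + C(other_comorbidities)"
    " + C(ophthalmic_diseases) + C(macrovascular) + C(microvascular)",
    data=df,
).fit()
print(model.summary())

# Adjusted (marginal) means per DR state can then be derived, e.g. with
# model.get_prediction() on a covariate grid held at reference values.
```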
The adjusted mean utility was 0.748 (95% CI, 0.698 – 0.798) for the NoDR health state, 0.752 (95% CI, 0.679 – 0.825) for Non-STDR, 0.628 (95% CI, 0.521 – 0.736) for STDR, and 0.355 (95% CI, 0.105 – 0.606) for BB. The adjusted analysis performed after excluding the two cases of BB showed a statistically significant utility decrement between patients at NoDR and STDR health states (0.748 vs. 0.628, respectively, p = 0.04). No significant differences were found between NoDR and non-STDR health states (0.748 vs. 0.752, respectively, p = 1.0) or between non-STDR and STDR health states (0.752 vs. 0.628, respectively, p = 0.07). The interaction analysis showed statistically significant interactions between DR health states and other comorbidities (F(3,66) = 3.679, p = 0.01); between sex, skin color, and DR health states (F(1,66) = 6.020, p = 0.01); between other comorbidities, macrovascular complications, and DR health states (F(1,66) = 85.96, p < 0.001); and between skin color and microvascular complications (F(1,66) = 3.974, p = 0.05). The interaction between DR health states and the variables included in the adjusted analysis is presented in the accompanying table. This study established the utility values for different health states associated with DR in a sample of patients with T2D undergoing teleophthalmology screening at a public primary care service in Southern Brazil. The results suggest that a later DR health state is associated with a significant decrement in HRQoL compared to the absence of retinopathy in patients with T2D. Additional interaction analysis suggests that the utility values for different health states associated with DR may depend on a combination of DR with other factors such as sex, skin color, other comorbidities, and macrovascular complications. Additional research is needed to further establish this association. Research exploring the utility values of different health states associated with DR has provided mixed results ( - ). Heintz et al. ( ) found no difference in mean utility values across various levels of DR severity in a sample of Swedish patients with T1D and T2D. Fenwick et al. also found no differences in a sample of Australian patients with T1D and T2D ( ). Our study differs from both these studies because it was based on a different population (comprising only individuals with T2D) and included different variables in the adjusted analysis (i.e., it did not include variables strongly associated with DR, such as diabetes duration and HbA1c, which are usually not directly related to HRQoL). Similar to our results, Polack et al. found that late DR health states were associated with a lower mean utility value compared with the absence of DR in a sample of patients with T2D in India ( ). Lloyd et al. found significant utility value decrements associated with lower visual acuity in a sample of T1D and T2D patients in the UK ( ). The present study also found no HRQoL differences in early DR health states (i.e., between patients without DR and Non-STDR), which is in agreement with other studies suggesting that early DR health states are unlikely to be strongly correlated with any of the dimensions of HRQoL ( , , ). This study has a number of limitations that need to be discussed. First, the EQ-5D may not be sensitive enough to detect small differences in HRQoL during early DR health states ( ). The validity of the EQ-5D compared to other generic preference-based measures of HRQoL (e.g., HUI-3) regarding DR progression is controversial ( , ).
Researchers have proposed adding bolt-ons to expand EQ-5D descriptive systems considering visual symptoms; however, this is still under investigation ( ). Second, the convenience sample only allowed us to assess patients registered at a primary care service, thus potentially reducing the generalizability of the findings. Therefore, we would advise researchers to only use these numbers as preliminary input to model-based economic evaluations. Nonetheless, it is noteworthy that the mean overall utility value reported in our study (0.76 ± 0.19) was similar to values found in developed countries, such as the UK (0.77 ± 0.27) ( ) and the Netherlands (0.74 ± 0.27) ( ). Third, this study did not directly assess visual acuity, which is known to be associated with lower HRQoL in late DR health states ( , ). Consequently, we had to rely on DR diagnosis/classification by image without adjustment for the potential confounder of visual acuity. Nevertheless, to be able to populate model-based economic evaluations, the utility values should be classified according to DR health states instead of visual acuity ( ). Fourth, patients with unreadable photographs were excluded from the study, which may have biased the results due to selective patient exclusion. However, the lower HRQoL presented by the excluded patients compared with those included in the study may be related to the lens opacity instead of DR, since there was no difference regarding diabetes control and duration between them. Bearing in mind that HRQoL could differ across country populations and that one of the main outcomes of economic evaluations (i.e., QALYs gained) relies on utility values, this study was the first attempt to describe HRQoL associated with DR health states in a Brazilian primary care setting based on general population preferences. These results may be useful as preliminary input to model-based economic evaluations. Further research is needed to investigate the impact of DR progression on HRQoL in a representative sample of the Brazilian population. In conclusion, this study established the utility values for different health states associated with DR in a Southern Brazilian sample of patients with T2D undergoing teleophthalmology screening at a public primary care service. The results suggest that a late DR health state is associated with decrements in HRQoL. The findings may be useful as preliminary input to model-based economic evaluations.
New update to the guidelines on testing predictive biomarkers in non-small-cell lung cancer: a National Consensus of the Spanish Society of Pathology and the Spanish Society of Medical Oncology
Non-small cell lung cancer (NSCLC) is the group of tumours with the greatest number of identified therapeutic targets, some of which have clinical utility from the earliest stages. Undoubtedly, a correct molecular diagnosis is required to offer the best therapeutic option to each patient, and it should be applied as widely as possible. Fortunately, in recent years, important advances in both molecular diagnostic techniques and personalized therapies have been achieved. This document aims to offer new recommendations for the detection of predictive biomarkers of NSCLC and is an update to those already published in 2012, 2015 and 2020, as a result of this consensus between the Spanish Society of Medical Oncology (SEOM) and the Spanish Society of Pathology (SEAP). Several types of samples can be useful for the study of biomarkers, such as biopsies, surgical specimens and/or cytology, as long as they contain a sufficient number of tumour cells and have been correctly processed. The decision about which one to use will depend on the experience and technologies available at each laboratory. In general, using the most recent sample is recommended, especially in previously treated patients. The sample should be stored in 10% neutral buffered formalin for 6–48 h depending on its size (6–12 h for small samples and 24–48 h for surgical resections), and a minimum of 50–100 cells that are viable for immunohistochemistry (IHC) and fluorescence in situ hybridization (FISH) studies should be present. The use of alternative fixatives, such as mercury-based or alcohol-based fixatives, should be avoided. For cytological samples, the cell block is processed exactly like a biopsy. Smears are fixed in 96% alcohol and should be stained with Papanicolaou. From these materials, most biomarker studies can also be performed. For techniques based on nucleic acid extraction, the threshold of analytical sensitivity, the limit of detection (LOD), of the method used must be known. Each type of technique has different minimum requirements, including 30% tumour cells for direct sequencing, 5% for real-time polymerase chain reaction (PCR) and 20% for next-generation sequencing (NGS). In addition, the type of mutation can change the sensitivity threshold. Thus, a tumour nucleic acid content of 5–10% is required to detect point mutations and small insertions or deletions, and up to 30% is needed to correctly analyse copy number alterations. Two alternative methods for redundant molecular detection are recommended, if necessary. Regarding the management of all types of biological samples, protocols that allow both anatomopathological diagnosis and biomarker detection are required. Table shows the biomarkers that must be detected in patients with NSCLC, and Table provides other biomarkers of interest in these patients. EGFR Mutations in the epidermal growth factor receptor (EGFR) gene are identified in approximately 10–16% of NSCLCs, with a higher frequency among patients with adenocarcinoma and nonsmoking patients. The most frequent mutations that are directly related to sensitivity to anti-EGFR tyrosine kinase inhibitors (TKIs) affect exon 19 and consist of deletions that preserve the reading frame (in-frame deletions) between codons 746 and 759 (amino acids leucine, arginine, glutamate and alanine, LREA) (45–50%), followed by missense point mutations in exon 21, with substitution of the amino acid leucine with arginine at position 858 (L858R) (35–45%).
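Before turning to the individual genes, the sample-adequacy thresholds quoted above can be made concrete with a small sketch. The minimum tumour-cell fractions per method come from the consensus text; the mapping of tumour-cell fraction to an expected variant allele fraction (VAF of roughly half the tumour fraction for a clonal heterozygous point mutation) is our own illustrative assumption.

```python
# Rough adequacy check against the stated analytical-sensitivity thresholds:
# 30% tumour cells for direct (Sanger) sequencing, 5% for real-time PCR,
# 20% for NGS.
MIN_TUMOUR_CELLS = {"sanger": 0.30, "real_time_pcr": 0.05, "ngs": 0.20}

def adequate_methods(tumour_cell_fraction: float) -> list[str]:
    """Return the methods whose minimum tumour-cell content is met."""
    return [m for m, cutoff in MIN_TUMOUR_CELLS.items()
            if tumour_cell_fraction >= cutoff]

def expected_vaf(tumour_cell_fraction: float) -> float:
    """Expected VAF of a clonal heterozygous mutation (our assumption)."""
    return tumour_cell_fraction / 2

print(adequate_methods(0.25))  # ['real_time_pcr', 'ngs']
print(expected_vaf(0.25))      # 0.125 -> compare against the assay's LOD
```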
Several EGFR-TKIs have been approved for the first-line treatment of patients with metastatic disease and EGFR-activating mutations (exon 19 deletion, L858R), including osimertinib (the preferred option in most guidelines), gefitinib, erlotinib, afatinib and dacomitinib. Osimertinib is also approved as an adjuvant treatment after complete surgical resection in adult patients with EGFR-activating mutations. Other mutations, such as insertions in exon 20, should also be detected due to their different effects on the disease and on treatment requirements. In relation to the methodology, the clinical tests for EGFR detection should be able to detect all the individual mutations that have been reported with a frequency of at least 1% in EGFR-mutated NSCLC. Highly sensitive methods are recommended. Regarding the result reports, the mutations that have been detected and the sensitivity of the detection methods used should be specified, among other data. The initial recommendations for the diagnosis of mutations in EGFR have undergone some changes, including the fact that any cytological sample with adequate cellularity and preservation can be used, the need to use techniques with high sensitivity compared to Sanger sequencing as the reference method, and the lack of sensitivity of IHC for the diagnosis of mutations in clinical practice. Most patients with sensitizing EGFR mutations (exon 19 deletion and exon 21 mutations, L858R) receive an anti-EGFR TKI regimen, with the most frequent molecular mechanism of acquired resistance being the EGFR T790M mutation in patients receiving first- or second-generation anti-EGFR TKIs (50–60% of cases). Techniques that can detect this mutation in at least 5% of viable cells should be used, including new digital PCR methods. The mechanisms leading to acquired resistance against TKIs vary, including intragenic mutations, gene amplification or fusion, and functional adaptation with histological transformation. Accordingly, mechanisms of acquired resistance should be monitored using tumour biopsy or liquid biopsy (LB). ALK Anaplastic lymphoma kinase (ALK) rearrangements are present in 2–5% of advanced NSCLCs. These tumours arise more often in younger patients and in females, with no or minimal prior tobacco smoking exposure. The disease is frequently aggressive in its clinical course and presents with thromboembolic events, and metastases in the liver, serosal surfaces and brain are common. Outcomes, including survival, are dramatically improved with specific ALK TKIs and, at present, median overall survival for stage IV patients frequently exceeds 5 years. Crizotinib was the first drug approved in this context and, since then, second- (ceritinib, alectinib and brigatinib) and third-generation (lorlatinib) TKIs have become available in the European Union for the treatment of untreated patients and for those following progression on prior inhibitors. The benefit of individual drugs in these pretreated patients depends on the mechanism of resistance, which frequently involves acquired mutations in the ALK kinase. The histological types eligible for ALK rearrangement tests should include all adenocarcinomas, carcinomas with non-squamous histology, and squamous tumours in patients younger than 50 years of age and/or with low or no tobacco exposure (i.e. < 10 pack-years). The key methods for detecting ALK gene rearrangement are IHC, FISH and NGS. At present, IHC represents a fast, reliable and cost-effective method to detect ALK fusions.
Its use in cytology smears is quite controversial, although recent studies have proven the suitability of the method. The most commonly used antibodies for detecting rearrangements are D5F3 (Ventana® ALK [D5F3] CDx Assay, Tucson, Arizona, USA) and 5A4 (Novocastra®, Leica Biosystems®, Buffalo Grove, Illinois, USA), although the latter is not included in a diagnostic kit. The cecal appendix is suitable as both a positive and a negative control. It must be fixed and processed under the same conditions as the patient sample. A positive tumour case can also be used as a control. The role of FISH as the optimal standard methodology is currently controversial, although there are automated reader algorithms approved by the United States Food and Drug Administration (FDA) that greatly increase reliability. When there is a positive IHC result, as manifested by strong granular cytoplasmic staining with either the 5A4 or D5F3 antibody, confirmation by a second technique is not mandatory. However, it is strongly advisable in cases that are inconclusive. This diagnostic redundancy is also helpful if unusual FISH staining is found. Lastly, the methods based on NGS and RNA assays are highly specific, and numerous studies demonstrate their value for detecting fusions in patients who show negative results with other techniques. Variant testing for specific rearrangements in ALK, which may provide some useful information in terms of predicting response to specific inhibitors, does not yet have sufficient data for recommendation, although it could be useful in the future. In some circumstances, LB may replace tissue tumour biomarker analysis, and ALK profiling in circulating tumour DNA (ctDNA) may serve as a treatment-guiding tool. ALK mutations are emerging as important resistance mechanisms to ALK TKIs, and ALK mutation testing in this scenario may provide crucial treatment-guiding information, as newer-generation ALK TKIs display different efficacies against different ALK mutations. ROS1 The c-ros oncogene 1 (ROS1) encodes a receptor with tyrosine kinase activity. Activating gene rearrangements with several partner genes are found in approximately 1% of NSCLCs, particularly those arising in young, nonsmoking patients. These tumours are frequently associated with thrombotic events and have the propensity to develop central nervous system (CNS) metastases. ROS1 fusions occur almost exclusively in adenocarcinomas, frequently in those with a solid component and signet-ring cells. This histological profile is also typical of tumours harbouring an ALK translocation. Indeed, both receptors have a 77% similarity in their ATP-binding domain. Crizotinib was the initial TKI approved for the first- or second-line treatment of stage IV lung cancer patients with ROS1 rearrangement. More recently, TKIs such as lorlatinib, entrectinib and repotrectinib are being studied but are not yet approved for this indication. Currently, it is recommended to carry out ROS1 testing in patients with advanced-stage lung adenocarcinoma, regardless of the clinical characteristics. ROS1 testing is not recommended in squamous cell carcinoma, except in low or light smokers. Three technologies are used to detect ROS1 rearrangements: IHC; cytogenetic techniques, particularly FISH; and molecular techniques such as reverse transcription PCR (RT-PCR) and, particularly, NGS.
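Returning to the ALK work-up above, the confirmation logic (clearly positive IHC not requiring a mandatory second technique, inconclusive IHC calling for an orthogonal method, and NGS/RNA assays rescuing negatives when suspicion persists) can be summarized as a small decision function. The three-way coding of the IHC result is our own simplification for illustration.

```python
# Sketch of the ALK confirmation logic described in the text above.
def alk_next_step(ihc_result: str) -> str:
    if ihc_result == "positive":       # strong granular cytoplasmic staining
        return "report ALK-positive (confirmation not mandatory)"
    if ihc_result == "inconclusive":
        return "confirm with an orthogonal method (FISH or NGS)"
    if ihc_result == "negative":
        return ("report ALK-negative; consider an NGS/RNA assay "
                "if clinical suspicion remains")
    raise ValueError(f"unknown IHC result: {ihc_result!r}")

print(alk_next_step("inconclusive"))
```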
IHC is generally recommended as a screening method, and positive cases should be confirmed with another orthogonal method (e.g., FISH or NGS) due to the variable specificity of the two commercially available antibodies (D4D6, Cell Signalling Technology, and SP384, Ventana Medical Systems®). The specimen for analysis should include at least 20 tumour cells, and each laboratory should validate its own interpretation range. An external control other than cecal appendix must be available, and it is also advisable to have a positive tumour control. The existence of positive peritumoural reactive pneumocytes has also been considered as a control. Of note, ROS1 expression, typically focal, can be found in up to one-third of tumours without underlying ROS1 rearrangements but with other genomic alterations (e.g., mutations of EGFR, Kirsten rat sarcoma virus [KRAS], BRAF or human epidermal growth factor receptor 2 [HER2], and ALK rearrangements). In addition, non-specific immunostaining has also been observed in the histological subtype of infiltrating mucinous adenocarcinoma and in non-tumour tissue. FISH is one of the reference techniques. It uses dual-colour break-apart probes, and a count of at least 50 tumour cells is recommended. A tumour should be considered positive when at least 50% of tumour cells have break-apart signals (separated by ≥ 1 signal diameter) and/or isolated 3' signals (frequently marked with green fluorochrome). False positives and false negatives have been described, attributable to both methodological and biological causes. Last, NGS technologies (DNA- or RNA-based) have shown high sensitivity and specificity in tumour samples and also in ctDNA. BRAF Mutations of the B-Raf proto-oncogene (BRAF) are observed in 2% of lung carcinomas, are mutually exclusive with other oncogenic alterations, and appear mostly in adenocarcinomas, especially of the papillary type (80%). The most frequent mutation is BRAF V600E (Val600Glu) (50%), which predominates in women and may entail greater tumour aggressiveness, while the rest are more common in men or patients who smoke. The European Medicines Agency (EMA) and the FDA approved dabrafenib and trametinib after their efficacy was demonstrated in phase II clinical trials in patients with the BRAF V600 mutation. In the case of the FDA, the approval includes the need to detect the mutation with the NGS Oncomine Dx Target Test® panel. Currently, any PCR method with adequate sensitivity and quality to identify BRAF mutations is allowed. However, detection of this mutation individually is not recommended, so it is usually studied in NGS panels, which analyse at least exons 11 and 15 of that gene. PD-L1 Immune checkpoint inhibitors (ICIs), namely programmed cell death protein-1/ligand-1 (PD-1/PD-L1) inhibitors and, to a lesser extent, cytotoxic T-lymphocyte-associated protein 4 (CTLA-4) blockers, have proven to be an effective strategy in the treatment of lung cancer, both NSCLC and small cell lung cancer (SCLC), over the past 15 years. At present, randomized clinical trial data support them as the standard treatment for patients with locally advanced or metastatic NSCLC, whether in monotherapy or in combination with chemotherapy. More recently, PD-1/PD-L1 blockade has been shown to also be effective in the adjuvant and neoadjuvant settings in patients with early disease. Although PD-L1 is far from being an ideal biomarker, the magnitude of benefit from PD-1/PD-L1 blockers in monotherapy is related to the tumour expression of PD-L1.
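As an illustration of the ROS1 break-apart FISH read-out described above (at least 50 evaluable tumour cells; positivity when the stated proportion of cells shows split or isolated 3' signals), here is a hypothetical scoring sketch. The per-cell record format is invented for the example, and the threshold is simply the figure quoted in the text.

```python
# Sketch of ROS1 break-apart FISH scoring per the criteria above.
MIN_CELLS = 50
POSITIVITY_THRESHOLD = 0.50  # proportion quoted in the text above

def ros1_fish_call(cells: list[dict]) -> str:
    """Each cell dict flags 'break_apart' and/or 'isolated_3prime' signals."""
    if len(cells) < MIN_CELLS:
        return "inadequate: fewer than 50 evaluable tumour cells"
    positive = sum(1 for c in cells
                   if c.get("break_apart") or c.get("isolated_3prime"))
    fraction = positive / len(cells)
    return "positive" if fraction >= POSITIVITY_THRESHOLD else "negative"

cells = [{"break_apart": True}] * 30 + [{}] * 25
print(ros1_fish_call(cells))  # 30/55 ~= 0.55 -> 'positive'
```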
In contrast, PD-L1 expression does not predict the efficacy of combination regimens of chemotherapy plus PD-1/PD-L1 inhibitors, PD-1 plus CTLA-4 blockers, or chemotherapy plus PD-1 plus CTLA-4 blockers. PD-L1 testing is based on IHC, currently the only validated predictive test. The diversity of IHC assays and cut-off points that define a positive result has been a source of confusion and has driven a number of harmonizing efforts by the scientific community. Current guidelines for the determination of the PD-L1 biomarker recommend the usual pre-analytical conditions of IHC testing. PD-L1 expression is evaluated by determining the percentage of tumour cells with partial or full membrane staining of any intensity. There are several PD-L1 clones available for IHC testing. The four most widely used in pathology laboratories are 22C3 and 28-8 by Agilent (which share the Autostainer LINK 48® diagnostic platform by Agilent®), SP263 by MedImmune®/Ventana®, and SP142 by Spring Bioscience®/Ventana® (which share the Ventana® BenchMark diagnostic platform). The performance characteristics of the 22C3 and 28-8 assays appear to be similar based on side-by-side evaluation in retrospective cohorts. SP263 and E1L3N, used in routine practice but not approved as companion diagnostic tests, can show staining patterns comparable to the approved assays when properly validated. The one consistent outlier has been the SP142 assay, which shows lower tumour cell staining, despite the fact that the SP142 antibody recognizes identical or nearly identical epitopes as SP263 and E1L3N. The SP142 assay was reportedly optimized for both tumour cell and immune cell scoring. However, its performance as an immune cell marker is further confounded by poor interobserver agreement in the interpretation of immune cell expression. Regarding sample selection, if more than one tissue block is available for a given tumour, the most representative sample should be tested. More than one block may be tested when the reporting pathologist determines that additional testing is necessary to establish the tumour's PD-L1 status. If additional blocks from the same sample are tested, the results from all tested blocks should be combined as though testing had been carried out in a single paraffin block. It is not uncommon for the only material available to come from cytology samples. In these cases, PD-L1 IHC kits that are validated for formalin-fixed paraffin-embedded (FFPE) biopsy samples, and not specifically for cytology samples, may be used if the cytology samples were processed according to the same pre-analytical conditions as required by the kits. NTRK Fusions of the neurotrophic tyrosine receptor kinase (NTRK) genes can be present in a wide variety of tumours, in both adults and paediatric patients, with the estimated frequency in NSCLC being less than 1%. Most are found in adenocarcinomas, and the most frequently involved gene is NTRK1. No clinical or pathological data characterise affected patients, but identifying them is crucial since tropomyosin receptor kinase (TRK) inhibitors are available, such as larotrectinib and entrectinib, which have been approved by the EMA and the FDA for the treatment of tumours with NTRK fusions. Despite the marked efficacy, resistance often develops, and clinical results for second-generation TRK inhibitors are already available.
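Returning to the PD-L1 scoring rule described above (the percentage of tumour cells with partial or full membrane staining of any intensity), this amounts to a tumour proportion score (TPS). The sketch below also shows the 1% and 50% cut-offs widely used for monotherapy decisions; treat those as general clinical conventions rather than figures taken from this consensus.

```python
# Sketch of PD-L1 tumour proportion score (TPS) computation and banding.
def pd_l1_tps(stained_tumour_cells: int, total_tumour_cells: int) -> float:
    """Percentage of viable tumour cells with membrane staining."""
    return 100 * stained_tumour_cells / total_tumour_cells

def tps_category(tps: float) -> str:
    # 1% and 50% cut-offs: widely quoted conventions, not consensus figures.
    if tps >= 50:
        return "high (TPS >= 50%)"
    if tps >= 1:
        return "intermediate (TPS 1-49%)"
    return "negative (TPS < 1%)"

tps = pd_l1_tps(stained_tumour_cells=320, total_tumour_cells=500)
print(tps, tps_category(tps))  # 64.0 high (TPS >= 50%)
```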
The following two strategies are recommended to detect these alterations: NGS with a panel that includes the three genes (i.e., NTRK1, NTRK2 and NTRK3) and an adequate number of rearrangement partners, or screening by IHC with mandatory subsequent confirmation by NGS of all positive results. RET Fusions of the gene rearranged during transfection (RET) are observed in different types of tumours, with a frequency of 1–2% in NSCLC, mainly in adenocarcinomas in nonsmoking patients. KIF5B is the most frequent rearrangement partner. The presence of calcifications in the form of psammoma bodies should suggest RET fusions. Currently, selective inhibitors are available, such as selpercatinib and pralsetinib, which have high response rates, although their approval by regulatory agencies for first-line treatment in patients with advanced disease is conditional on the results obtained in ongoing phase III studies. The optimal detection method for RET fusions is NGS, but FISH or PCR can also be used. Regarding the design of an efficient fusion search algorithm for RET, it is important to consider the following: (1) NGS based on the study of RNA is more sensitive than if only DNA is studied, and (2) the results of FISH can be difficult to interpret. KRAS Mutations in KRAS are identified in 25% of patients with NSCLC. They are found in all histological subtypes of adenocarcinoma, although they are more common in the invasive mucinous variant. They are also detected in 5% of squamous tumours. Their presence confers biological and clinical heterogeneity and may have no prognostic value. Mutations in KRAS are usually located in codons 12, 13 and 61. Mutations in codon 12 account for 80% of cases and are generally substitutions of glycine by cysteine (KRAS G12C), valine (KRAS G12V) or aspartic acid (KRAS G12D), with frequencies of 10–13%, 5% and 4%, respectively. KRAS G12C and KRAS G12V mutations are usually related to smoking and activate the RalGDS/Ral/FLIP pathway, while the KRAS G12D mutation is more typical in nonsmoking patients and seems to activate the PI3K/AKT/mTOR and RAF/MEK/ERK pathways. In addition, KRAS G12C shows greater phosphorylation of ERK1/2. More than 50% of NSCLCs with KRAS mutations present another mutation, and three subgroups can be established: the KP subgroup has mutations in tumour protein 53 (TP53) and represents 40% of cases; the KL subgroup, where serine/threonine kinase 11 (STK11), Kelch-like ECH-associated protein 1 (KEAP1) or liver kinase B1 (LKB1) is identified, is usually associated with low percentages of PD-L1; and the KC subgroup, which is characterized by inactivation of CDKN2A/B, is associated with a mucinous histology. In contrast, it is very rare to find concomitant EGFR mutations; thus, KRAS and EGFR mutations are considered mutually exclusive. The technique usually used to identify KRAS mutations is PCR. Thus far, their detection in isolation is not recommended; rather, KRAS should be included in NGS panels. After years without effective therapies, several inhibitors have been developed that have shown activity in phase II trials against the KRAS G12C mutation, such as sotorasib and adagrasib; therefore, they have been approved by the FDA, although the benefit of these agents in monotherapy or in combination, and in which patients they should be administered, is still being studied in phase III trials.
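The KP/KL/KC co-mutation subgrouping described above lends itself to a small classification sketch. The precedence order applied when several co-alterations coexist is our own simplification, not a rule stated in the consensus.

```python
# Sketch of the KRAS co-mutation subgroups described above:
# KP (TP53), KL (STK11/KEAP1/LKB1, often PD-L1-low) and
# KC (CDKN2A/B inactivation, mucinous histology).
def kras_subgroup(co_altered_genes: set[str]) -> str:
    if "TP53" in co_altered_genes:
        return "KP"
    if co_altered_genes & {"STK11", "KEAP1", "LKB1"}:
        return "KL"
    if co_altered_genes & {"CDKN2A", "CDKN2B"}:
        return "KC"
    return "unclassified"

print(kras_subgroup({"STK11"}))  # KL
```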
MET Oncogenic activation of the mesenchymal epithelial transition factor (MET) gene in NSCLC can occur mainly by amplification (1–5%) or by the presence of mutations in exon 14 (3–4%) that reduce degradation of the MET protein. In NSCLC with a sarcomatoid morphology, the frequency of mutations in exon 14 can be up to 22%. Between 5 and 20% of patients with EGFR mutations acquire resistance to EGFR-TKIs through MET amplification. Currently, the results of several clinical trials demonstrate the activity and tolerability of oral drugs such as capmatinib, tepotinib and savolitinib in patients with mutations in exon 14, and capmatinib and tepotinib have already been approved by the EMA. Antibody-drug conjugates such as telisotuzumab vedotin, bispecific antibodies such as amivantamab, and others are also being studied in patients with MET amplification. The technique of choice to study MET amplification is FISH, since it allows a more precise estimation of the increase in copy number and of clonal amplification. Due to the heterogeneity of mutations in exon 14, the optimal detection method in this case is NGS. An NGS panel with sufficient coverage should be used. To avoid false-negative results, an RNA panel is recommended. MET overexpression by amplification or mutation of the gene can be detected by IHC, but the predictive value of MET IHC is still controversial. HER2 Overexpression, amplification and mutations of HER2 can be found in NSCLC, identified in 3–38%, 3% and 1–4% of patients, respectively. The most frequent mutations are insertions in exon 20 (the tyrosine kinase domain), with the insertion/duplication of the four amino acids tyrosine, valine, methionine and alanine (YVMA) at codon 776 (YVMA 776–779 ins) being the most frequent (80–90%). These mutations are mainly associated with adenocarcinoma, nonsmoking patients and women. The most recent data suggest that these mutations are the best predictors of a clinical benefit with anti-HER2 therapies (e.g., trastuzumab deruxtecan), regardless of the type of mutation and the presence of overexpression and amplification. Regarding the methodologies for evaluating HER2 status, DNA- or RNA-based NGS is the most appropriate method to select patients compared to IHC and FISH. Amplification has been described as a resistance mechanism following targeted treatments.
The tumour mutation burden (TMB) refers to the number of somatic mutations present in the tumour, excluding polymorphisms and germline mutations from all variants, expressed per megabase in the studied exome. The mutations acquired by tumour cells may lead to abnormal protein structure and, consequently, to the expression of neoantigens that can elicit an immunotherapy response. Interestingly, there is no clear correlation between the expression of PD-L1 and TMB.
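The TMB definition above reduces to a simple rate: somatic mutations divided by the megabases of genomic territory covered. Real pipelines apply assay-specific filters (synonymous variants, driver exclusion, VAF cut-offs) that vary between panels and are deliberately omitted from this sketch.

```python
# Sketch of the TMB definition given above: somatic, non-germline
# mutations per megabase of sequenced exome/panel territory.
def tmb_per_mb(somatic_mutations: int, covered_bases: int) -> float:
    megabases = covered_bases / 1_000_000
    return somatic_mutations / megabases

# e.g. 350 somatic mutations over a 30-Mb exome -> ~11.7 mutations/Mb
print(round(tmb_per_mb(350, 30_000_000), 1))
```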
Many studies have shown that a high TMB in tumours is associated with a better therapeutic effect of anti-PD-1/PD-L1 immunotherapy, including in some lung cancers, but there is no definitive validation for the use of this biomarker in clinical practice. However, an exploratory analysis of the KEYNOTE-042 trial suggests that among patients with tumours expressing PD-L1 in ≥ 50% of cells, only those whose TMB was higher than the median exhibited any therapeutic benefit with PD-1/PD-L1 inhibitors as compared to chemotherapy. In fact, TMB is not a reliable predictor of outcome for NSCLC or SCLC treated with chemotherapy plus ICI blockade or dual immuno-oncology regimens. With regard to testing for TMB, targeted NGS is considered to be a good alternative to more complex massive sequencing approaches, and some recent data have validated the use of large panels. Harmonization studies are still required to validate the comparability of different NGS assays, given the heterogeneity in the number of included genes, horizontal coverage, required optimal depth, sequencing chemistry and bioinformatic algorithms used. If drugs are eventually approved based on TMB cut-offs, the harmonization efforts under way could be very useful. Detection of TMB in the blood (bTMB) is feasible, but robust data on its clinical utility are still lacking. Microsatellite instability-high/deficient mismatch repair (MSI-H/dMMR) predicts the efficacy of ICIs in gastric cancer and colon cancer. However, the incidence of MSI-H/dMMR in lung cancer is low, and further investigation is needed to determine whether MSI-H/dMMR can be used as a predictive biomarker in this context. At present, the standard measure commonly used to judge MSI-H is the Bethesda method. Of note, patients with MSI-H have a higher probability of having a high TMB, but not vice versa. The predictive role of genomic aberrations underlying lung cancer has also been investigated. Gene alterations typically perceived as associated with immunotherapy response include those in TP53 or KRAS; on the contrary, aberrations affecting EGFR, ALK, ROS1, RET, KEAP1 or LKB1 are less likely to be associated with checkpoint blockade benefit. In any case, at present, the available data do not support treatment recommendations based only on these genomic determinations. Tumour inflammatory biomarkers, such as immune-related gene signatures or tissue cell content (T-cell subtypes, myeloid cells, etc.), remain investigational at present, as do peripheral blood immunotherapy efficacy biomarkers and the microbiome. A high percentage of patients with NSCLC are diagnosed at advanced stages. These patients are not eligible for surgery; thus, the diagnosis is established through small biopsies and cytological samples. The development of imaging techniques that guide fine needle aspiration (FNA) and fine needle aspiration biopsy (FNAB) allows the acquisition of high-quality samples in the quantities necessary to perform a complete diagnosis, both morphologically and via biomarkers (Fig. ). International guidelines recommend that, regardless of the type of sample: (1) a precise morphological diagnosis should be determined (i.e., NSCLC subtype); (2) the diagnosis of NSCLC not otherwise specified (NSCLC-NOS) should account for less than 10% of diagnoses with small samples/cytology; (3) IHC/immunocytochemistry (ICC) should be used judiciously; and (4) samples should be saved for biomarker studies.
In the case of FNA/FNAB guided by imaging techniques, the use of rapid on-site evaluation (ROSE) is recommended. In addition to assessing whether a sample is sufficiently adequate to facilitate a diagnostic approach, ROSE also allows control of the entire pre-analytical phase and in situ preparation of the sample for analysis of the necessary biomarkers according to the preliminary diagnostic impression. In the case of cytological samples, specimens should be handled in situ for proper processing by the following: (1) smears with immediate 96% alcohol fixation; (2) air-dried smears stained with Giemsa ® /Diff-Quik ® ; (3) preparation of cell blocks; and (4) washing of the needle in liquid cytology fixative (which provides good RNA conservation). All these types of cytological samples are useful for biomarker detection by IHC/ICC, FISH and PCR-based techniques. IHC/ICC offers excellent results, which are comparable to those obtained on cell blocks and on smears previously stained with Papanicolaou. Unstained cytological smears, as well as smears stained with Giemsa ® /Diff-Quik ® or Papanicolaou, are excellent substrates for FISH. Whole nuclei are analysed and, therefore, the signals observed are real, with no truncation effect due to the sectioning of paraffin samples. Moreover, DNA and RNA are of better quality in samples not fixed in formalin. Molecular studies are usually less challenging on surgical specimens due to the greater amount of tissue. However, difficulties remain, and surgical specimens should be adequately fixed within 24–48 h. Necrotic areas should be avoided, and detection should be performed on samples with at least 30% viable tumour cellularity. In addition, the specimens should be adequately cut according to macroscopic protocols (e.g., white paper on pathology), including a sufficient number of sections of the tumour. Tumours no larger than 3 cm should be included in their entirety. A good histological study is the first biomarker, since it will determine the histological subtype and guide further molecular detection (similar to that indicated above for small biopsies and cytology). Some histological subtypes are associated with different molecular alterations, although no clinical or histological variable should be considered exclusionary for biomarker detection in lung cancer. Another debate concerns molecular detection in tumours with different histological subtypes (a very common occurrence in adenocarcinomas). In these cases, it would be convenient to test samples corresponding to the most frequent subtype, and detection in secondary subtypes can be added, especially if they present mucinous differentiation, clear cells, signet-ring cells or psammoma bodies. For all the biomarker analysis methods cited above, Fig. shows an update of the protocol to be followed to analyse a biological sample of NSCLC. In recent years, the need for multigene testing in patients with lung cancer has increased, encompassing alterations in oncogenic drivers, co-mutations and resistance mechanisms. NGS allows long, complex genes and multiple genes to be sequenced in a single patient sample to identify alterations in drivers and targets, minimizing tissue consumption and turnaround time and enabling its use in daily clinical practice. The European Society for Medical Oncology (ESMO) has established recommendations on whether multigene tumour NGS can be used and how to profile metastatic cancers following the classification of the Scale for Clinical Actionability of Molecular Targets (ESCAT).
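Before turning to ESCAT, the 30% viable tumour cellularity threshold cited above has a simple arithmetic rationale worth making explicit: in a diploid tumour with no copy-number change, a clonal heterozygous mutation is expected at a variant allele frequency of roughly half the tumour cell fraction. The sketch below is a back-of-the-envelope illustration under those simplifying assumptions, not part of the consensus protocol.

```python
# Expected variant allele frequency (VAF) of a clonal, heterozygous
# mutation as a function of tumour cellularity, assuming a diploid
# tumour and diploid contaminating normal cells (a simplification).

def expected_vaf(tumour_cellularity: float) -> float:
    """One mutant allele out of two per tumour cell, diluted by normal cells."""
    return tumour_cellularity / 2.0

for cellularity in (0.10, 0.30, 0.60):
    print(f"cellularity {cellularity:.0%} -> expected VAF ~{expected_vaf(cellularity):.0%}")

# At the 30% cellularity cut-off, a clonal heterozygous variant sits
# near 15% VAF, comfortably above typical NGS detection limits; well
# below 30% cellularity, variants approach assay noise.
```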
ESCAT is a framework that classifies the correspondence between a drug and genomic alterations according to their actionability at the following three levels: (1) from the perspective of public health; (2) from the perspective of academic clinical research centres; and (3) at the level of each individual patient. With regard to lung cancer, the general recommendations for daily practice are that a tumour or plasma sample from a patient with advanced nonsquamous NSCLC be profiled using NGS technology to detect ESCAT level I alterations (alteration–drug pairings associated with a better outcome in clinical trials and ready for routine use). The tests should include EGFR, ALK, ROS1, BRAF, RET, HER2, NTRK, KRAS and MET. In addition, for clinical research centres, multigene sequencing is strongly recommended in the context of innovative drugs and clinical trials, as opposed to use in individual patients, for whom few clinically significant findings are expected with NGS. Selection of the size of the NGS panel depends on the type of alterations to be studied, the response time required and the costs that can be afforded. Each service should implement the panel that best meets its needs and be very familiar with its coverage to be able to expand molecular studies if the initial results are completely negative. Thus, to identify treatable fusions, the use of RNA panels is recommended because they are more sensitive than those that exclusively use DNA. The starting nucleic acids are obtained mainly from formalin-fixed, paraffin-embedded samples (including both tissue samples and cell blocks from cytological samples) or from the nucleic acids present in plasma. In the first stage of sample preparation, an exhaustive review of all the material from each patient is essential for the selection of the most appropriate paraffin block, considering the pre-analytical variables (insufficient fixation must be avoided, as must all fixatives other than 10% neutral buffered formalin) and the percentage of tumour cellularity (optimal cut-off point: equal to or greater than 30%). The two NGS methodological approaches most widely implemented in clinical practice are bridge amplification (Illumina ® , San Diego, CA, USA) and emulsion PCR (ThermoFisher Scientific ® , Waltham, MA, USA), each of which has strengths and weaknesses. Illumina ® bridge amplification allows the identification of unknown alterations and the use of larger gene panels, while ThermoFisher Scientific ® emulsion PCR requires less starting material and yields molecular results with shorter turnaround times. The molecular findings obtained should be reflected in the NGS results report, together with the relevant conclusions regarding the tumour of each patient. The results should be discussed by a multidisciplinary committee, since increasing evidence indicates that this practice improves clinical outcomes. The concept of LB encompasses tests performed on a sample of peripheral blood or another biological fluid with the objective of detecting circulating tumour cells (CTCs) or fragments of nucleic acids from a tumour, such as circulating free DNA (cfDNA), circulating tumour DNA (ctDNA), circulating exosomes, platelet RNA and circulating tumour RNA (ctRNA), which can be isolated from blood (plasma), urine, pleural fluid, ascites, cerebrospinal fluid (CSF) and saliva. LB has high specificity (96%), but its sensitivity is only 66%. Therefore, a negative result is not definitive and requires tissue confirmation.
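Part of this limited sensitivity is simply statistical: at low allele frequency and modest sequencing depth, too few mutant reads may be sampled to call a variant at all. The sketch below illustrates this with the binomial distribution; the depths and the 5-read calling threshold are assumptions for the example, not recommended settings.

```python
# Probability of observing at least k mutant reads at depth n for a
# variant at allele frequency f, from the binomial distribution.
# Depths and the 5-read calling threshold are example assumptions.
from math import comb

def detection_probability(f: float, depth: int, min_reads: int = 5) -> float:
    """P(X >= min_reads) for X ~ Binomial(depth, f)."""
    return 1.0 - sum(comb(depth, k) * f**k * (1 - f)**(depth - k)
                     for k in range(min_reads))

for depth in (500, 2000, 10000):
    p = detection_probability(f=0.005, depth=depth)   # 0.5% VAF
    print(f"depth {depth:>5}: P(>=5 mutant reads) = {p:.2f}")

# At 0.5% VAF, 500x depth is usually insufficient, which is part of
# why very deep sequencing with error suppression (e.g., UMIs) or
# digital PCR is needed for low-frequency plasma variants.
```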
The role of LB is therefore complementary to that of tissue biopsy, and current recommendations cover two clinical contexts in which a tissue sample is limited or insufficient: (a) detection of sensitizing molecular alterations and (b) detection of resistance mechanisms after progression on a TKI. A third context would be monitoring of treatment efficacy based on the ctDNA load for minimal residual disease (MRD), an attractive approach that is not yet technically well established (quantification units need to be defined) and in which fluctuating levels of circulating nontumour DNA can affect the results. Its use in early diagnosis presents difficulties due to low sensitivity in localized disease. Essentially, the greatest development has been in EGFR mutations, but LB is currently being incorporated into tests for other molecular alterations, although the detection of gene rearrangements from circulating RNA continues to be a technical challenge awaiting resolution. Among the technical requirements of LB is the need for larger than usual blood volumes (two 10-ml tubes). Plasma is preferred over serum for nucleic acid extraction. The maximum waiting time until plasma separation is 2 h for tubes with ethylenediaminetetraacetic acid (EDTA) and 3 days for tubes with special preservatives (Streck ® ). Blood should not be frozen before plasma separation. DNA extraction should be performed with protocols designed for small, fragmented DNA. Notably, up to 10% of people over 65 years of age exhibit clonal haematopoiesis, which may be misinterpreted as tumour-derived variants and yield false-positive findings. The use of techniques with high sensitivity, such as digital PCR, is recommended. In the case of NGS, good agreement with tissue results is evident, except for variants found at an allele frequency lower than 1%. However, tools such as unique molecular identifiers (UMIs) can be used to optimize detection. Two commercial NGS platforms (Guardant360 ® and FoundationOne Liquid CDx) have FDA approval for the analysis of solid tumours, including lung carcinomas. In short, LB will be progressively incorporated into molecular diagnosis, treatment monitoring, MRD detection and early diagnosis as prospective studies confirm its clinical utility. Molecular testing is becoming an essential diagnostic tool and a part of standard management in cancer patients. Both laboratories and pathologists face new challenges in order to meet this novel requirement in patient care. Pathology laboratories must incorporate reliable methods to ensure optimal sample quality and processing to reduce the risk of errors in molecular biology tests. Furthermore, practicing pathologists need to go beyond diagnosis and classification in order to produce the information required to guide treatment accurately, and to do so in a timely manner. The results of predictive biomarkers often determine which therapy (e.g., chemotherapy, immunotherapy, or targeted therapy) patients receive. Laboratory errors may therefore result in wrong or suboptimal treatment decisions and consequently harm the patient. To assure high-quality testing, laboratories must have a quality assurance system in place and comply with relevant international standards from certified organizations such as the International Organization for Standardization (ISO), the College of American Pathologists (CAP), or the Clinical Laboratory Improvement Amendments (CLIA) (Table ).
Progress in personalized medicine is limited, in part, by the lack of standardized European and international documentation and insufficient guidelines for pre-analytical workflows. The pre-analytical process has recently been examined in some detail by the SPIDIA ® project ( http://www.spidia.eu ). The following data were considered necessary to issue a report in accordance with good practice guidelines (Table ): (1) patient identification: the patient must be identified correctly—laboratories require a minimum of two unique patient identifiers plus a unique sample identifier on the request form and the report; (2) reporting style and content: long reports are rarely read in full, and length matters; one-page or, better still, single-screen reports are preferred, provided that they are legible. Clear presentation of the results, the test(s) performed, and any limitations of the tests (e.g., whether all possible mutations or a selection of the more common ones were tested) must be included; (3) interpretation: the result of the test must be described and provided with an appropriate interpretation, particularly when this involves a treatment decision; (4) integrated reporting: the need to integrate patients' results is widely acknowledged. As gene panel testing becomes more widespread, the results of different gene tests should be merged into a single report. Also, results from several pathology specialties on individual patients need to be integrated into the same report. Surgical pathology services should obtain quality assurance accreditation. We believe that all laboratories providing molecular pathology services should have laboratory accreditation according to ISO 15189 or its national equivalent. Accreditation provides patients, staff, service users and commissioners with evidence of laboratory competence. Currently, in patients with NSCLC, a clear genomic diagnostic strategy that allows the establishment of optimal therapeutic indications for each patient must be defined. With this objective, the following recommendations are proposed in the new consensus of the SEOM and the SEAP: (1) EGFR, BRAF, KRAS and MET mutations, ALK, ROS1, RET and NTRK translocations and PD-L1 expression must be detected in NSCLC; (2) testing for other emerging biomarkers, such as HER2 mutations, and immune biomarkers, such as TMB, MSI, STK11 and KEAP1, is recommended, especially if NGS is available; (3) molecular detection can be performed at any stage of NSCLC or in clinically selected patients, and new therapeutic indications that require this information can be established; (4) the availability of NGS strongly facilitates molecular diagnosis in a precise and effective manner, and its use should be generalized without delay; (5) LB has an increasing role in molecular diagnosis, especially if tissue is limited, and its role in treatment follow-up is also promising, both in MRD detection and in early diagnosis; (6) a tumour sample must be appropriately analysed for correct prioritization of the molecular detection test to be performed, and good quality control throughout the process is essential; (7) adequate multidisciplinary collaboration between the different professionals involved is needed to achieve the highest quality in the diagnostic process and in the detection of the best therapeutic approach for each patient with NSCLC at any stage of the disease.
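As a practical illustration of recommendations (1) and (2), the required biomarkers can be formalized as data and any candidate panel checked against them. The gene sets below follow the lists quoted above (with HER2 under its ERBB2 symbol); the example panel contents are hypothetical.

```python
# Check a laboratory's NGS panel against the mandatory biomarkers in
# recommendation (1) and the emerging markers in recommendation (2).
# The example panel below is hypothetical.

MANDATORY_MUTATIONS = {"EGFR", "BRAF", "KRAS", "MET"}
MANDATORY_FUSIONS = {"ALK", "ROS1", "RET", "NTRK1", "NTRK2", "NTRK3"}
EMERGING = {"ERBB2", "STK11", "KEAP1"}   # ERBB2 = HER2; TMB/MSI are panel-level metrics

panel_genes = {"EGFR", "BRAF", "KRAS", "ALK", "ROS1", "TP53"}  # hypothetical panel

for label, required in (("mandatory mutations", MANDATORY_MUTATIONS),
                        ("mandatory fusions", MANDATORY_FUSIONS),
                        ("emerging markers", EMERGING)):
    missing = required - panel_genes
    status = "OK" if not missing else "missing: " + ", ".join(sorted(missing))
    print(f"{label}: {status}")

# Note: PD-L1 expression is assessed by IHC rather than by the NGS panel.
```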
Neutron tomography of sealed copper alloy animal coffins from ancient Egypt
The mummification of animals was a widespread practice in ancient Egypt. Animal remains, believed to be physical incarnations of deities, votive offerings or part of a ritual performance, have been discovered inside many religious complexes, mostly dating to the 1st millennium BCE. Remains were sometimes placed within statues of animals or inside boxes featuring a representation of the animal on the top. Such boxes are interchangeably referred to in Egyptological literature as 'animal coffins' or 'votive boxes', although it is not always clear if they systematically contain the remains of an animal, or if they were votive in nature rather than performing some ritual function. Many animal species were depicted on the boxes, including falcons, cats, mongooses, snakes, eels, lizards, and shrews. Different materials were used in the manufacture of the boxes, including wood, limestone, and copper alloy (notably bronze or leaded bronze), and they vary widely in their shape and size. A small limestone box topped by a shrew figure, discovered in Saqqara, was found to contain a shrew mummy using X-radiography. No remains of wrappings were discovered in the box, although their original presence could not be discounted. X-radiography has also been used to uncover a snake mummy inside a wooden box topped by a snake figure. Copper alloy votive boxes were typically made by casting, with an opening at one end which was subsequently sealed with a plaster plug and metal panel. Many of these boxes were not intact when discovered, and typically no animal remains were present inside. Fragments of cat bone were found inside an opening in a bronze box with a seated bronze cat figure on top (British Museum EA65795). In some cases, only pieces of textile were found inside the boxes, possibly the remains of animal wrappings. X-ray computed tomography (CT) has been successfully applied to the non-invasive study of wrapped and unwrapped animal mummies, using medical CT scanners, laboratory-based microfocus X-ray CT systems, and synchrotron microtomography. A limitation of X-ray imaging is the presence of metal—particularly lead or leaded bronze—or other very dense materials in the beam path. X-ray attenuation by dense objects leads to image artefacts in reconstructed X-ray CT volumes, such as streaking and beam hardening. These artefacts can obscure features of interest, particularly those in lower density materials. A previous study applied X-radiography and X-ray CT to a group of eight intact copper alloy votive boxes in the British Museum collection, topped by depictions of eels, reptiles, and human–eel–cobra hybrids. X-ray CT scans gave evidence for animal bones inside some of the boxes, although the image quality was hampered by strong X-ray attenuation from the metal, despite the use of X-ray tube voltages up to 450 kV. Indications of the manufacturing process of the boxes and the methods used to seal the open ends were observed in both X-radiography and X-ray CT. Inside three of the boxes, very dense objects were identified from the intense streaking artefacts in the data. These were suspected to be made from copper alloy or lead, but no further detail could be revealed due to their thickness and density. Neutron imaging has been established as a complementary non-invasive technique to X-ray imaging.
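The complementarity of the two probes can be made quantitative with the Beer–Lambert law, I = I0·exp(−μx), where the linear attenuation coefficient μ of the same material differs enormously between X-rays and neutrons. The coefficients in the sketch below are rough, order-of-magnitude values chosen only for illustration; they vary strongly with beam energy and are not measurements from this study.

```python
# Beer-Lambert transmission I/I0 = exp(-mu * x) through 5 mm of material.
# The attenuation coefficients are rough illustrative values (they depend
# strongly on beam energy), chosen only to show the X-ray/neutron contrast.
from math import exp

THICKNESS_CM = 0.5   # a 5 mm wall or lead piece

mu = {  # linear attenuation coefficients in 1/cm (order-of-magnitude)
    ("lead",    "xray_100keV"): 60.0,
    ("lead",    "cold_neutron"): 0.4,
    ("textile", "xray_100keV"): 0.2,
    ("textile", "cold_neutron"): 3.0,   # hydrogen-rich, strongly attenuating
}

for (material, probe), coeff in mu.items():
    transmission = exp(-coeff * THICKNESS_CM)
    print(f"{material:7s} / {probe:12s}: T = {transmission:.2e}")

# 5 mm of lead is essentially opaque to X-rays but transparent to
# neutrons, while hydrogen-rich organics show the opposite behaviour --
# the basis for imaging organic contents inside dense metal casings.
```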
Whilst X-rays are generally more strongly attenuated by elements with a greater atomic number, neutrons show no such correlation; notably, in contrast to X-rays, neutrons are strongly attenuated by hydrogen and weakly attenuated by metals, including lead. Neutron imaging can therefore be particularly effective for detecting organic material, and light materials more generally, enclosed within dense casings—for example, water inside porous rock. These properties have successfully been applied to cultural heritage studies: revealing the organic content of bronze Tibetan Buddha statues and relics inside an altar stone; examining the manufacture of a bronze ship model; and mapping corrosion phases in iron swords. Neutron tomography has also been compared with X-ray CT in the study of a wrapped cat mummy. In a 1985 study by Jett, Sturman, and Drayman-Weisser, a bronze falcon statue in the Walters Art Museum, Baltimore, was found to contain bird bones by use of an endoscope inserted through a small opening in its head. X-radiography was unable to reveal further information on the remains due to strong X-ray attenuation from the bronze; neutron radiography gave a much-improved image of the bones inside the statue. This article details the neutron tomography of six copper alloy votive boxes from the British Museum collection previously studied with X-radiography and X-ray CT imaging. These rare examples of still-sealed boxes were selected based on the potential presence of interesting features/material within, as suggested by the previous study. There is a debate among Egyptologists regarding the nature and function of such boxes, hence there is an interest in checking for the presence or absence of animal mummies within. Many boxes are very small and would not have been able to accommodate the body of the mummified animal represented on top, or at least not the complete body. It is also difficult to prove that these boxes were exclusively votive objects. Recent discussions regarding mass-produced Egyptian bronzes—which are contemporaneous with this group of six boxes, and often found in similar archaeological contexts—suggest that some could have been used during rituals, which may have also been the case for some of these boxes. Building on the past work, the aims of the experiment were to identify organic remains inside the boxes, to gain insight into the box manufacture, and to identify the unknown dense objects previously observed inside three of the boxes. Three of the votive boxes examined in this study—British Museum accession numbers EA27584, EA49144, and EA49146—were discovered in Naukratis in the western Nile Delta in 1885. Naukratis was an international harbour founded in the late seventh century BCE and was a key part of a trade network between the Mediterranean world and the Nile Valley. The boxes feature depictions of lizards and eels, and are thought to date from 500 to 300 BCE. Box EA36167, topped by a lizard figure, was discovered in Tell el-Yehudiyeh in the eastern Nile Delta and was purchased by the British Museum in 1876. The box is attributed to the Late Period (664–332 BCE), and no further information is known on its findspot or context. A narrow crack runs along the base of the box, but it does not appear to go all the way through to the interior. Boxes EA71428 and EA36151 (each with unknown provenance) are both topped with a part-eel, part-cobra figure, with a human head.
Box EA71428 was registered in the British Museum collection in 1989 and box EA36151 was purchased by the Museum in 1867. They are both likely dated to the Late Period, early Ptolemaic period at the latest, mid-seventh to third century BCE. Votive boxes with depictions of eels and lizards were associated in ancient Egypt with the solar and creator god Atum. Atum is often represented in anthropomorphic form, as a human-headed part-eel, part-cobra creature wearing a double crown. All six votive boxes studied are made from copper alloy, and are still sealed by a plaster plug; a drill hole is present in the plug of box EA49144, although it does not fully penetrate it. Details and photographs of the six boxes are given in Table . In the results detailed below, the following convention has been used for the orientation of tomographic slices: right/left—corresponding with the proper right/left of the creature surmounting the box; front—the end of the box nearest to the head of the creature (opposite the sealed opening). Top view slices are oriented such that the right wall is at the bottom and the front wall is to the right of the slice. Side view slices are oriented such that the box base is at the bottom and the front wall is to the right of the slice. Front view slices are viewed from the front of the box, such that left and right walls are mirrored. The uncertainties in distance measurements are ± 0.2 mm for the first five boxes listed (0.055 mm reconstructed voxel size), and ± 0.4 mm for box EA36151 (0.103 mm voxel size). EA27584 The box topped by two lizards was found to contain animal remains and textile pieces seemingly used to wrap the remains prior to their placement in the box (Fig. ). Although the animal remains are in a fragmentary condition, a long bone measuring approx. 8.1 mm can be seen (Fig. a). The textile could be linen, cotton or wool, but is thought to be linen, since linen is commonly used in animal mummy wrappings. The textile has a loose weave, with 1–2 mm spacing between the threads. Evidence of the lost-wax casting technique can be seen in the presence of chaplets and a layer of core material (possibly clay) covering the internal walls (Fig. b). The chaplets are embedded in the box walls and core material and strongly attenuate neutrons (linear attenuation coefficients of 1.5–1.6 cm −1 ), likely due to hydrogen-containing metal corrosion products such as hydroxides. The significantly higher amount of corrosion on the chaplets suggests they were made of iron, since iron is more likely to oxidise than copper alloys, and thus corrodes more quickly. Further evidence of corrosion is visible on the outer layers of the box and lizard figures. There is no visible discontinuity between the box and the loops or lizard figures, suggesting that all were manufactured in a single casting. Neutron CT volume renders of box EA27584 are given in Supplementary Information Fig. . EA49144 Neutron tomography of the box topped by the eel figure and two suspension loops revealed the presence of a plaster plug on one side and fragments and concretions on the opposite end, but no identifiable animal remains (Fig. ). The most interesting feature is the presence of a textile fragment in the plaster plug, clearly showing the individual threads, spaced approx. 0.8 mm apart and arranged in a plain weave (i.e., the warp and weft threads cross at right angles) (Fig. a). The plug continues up to 46 mm into the box. There is a sharp drop in the neutron attenuation of the plug from approx.
0.3 to 0.1 cm −1 beyond a depth of 27 mm (at the boundary between denoted regions 1 and 2 in Fig. b,c). The cause of this difference in attenuation is not clear; one possible explanation is that the plaster in the plug only extends to this depth, and beyond it is only the more weakly attenuating textile. The 3.5 mm wide drill hole in the plug, visible from outside the box, extends approx. 5 mm into the plug. Another narrow void, running along the length of the plug, was revealed by neutron CT. This void is not straight, and varies from 0.2 to 0.9 mm in width, suggesting that it is not a drill hole, but perhaps a gap from the folding of the textile inside the plug. The fragmented material in the box comprises a matrix containing multiple rounded objects up to 1.2 mm in width with neutron attenuations ranging from 0.6 to 0.9 cm −1 . An additional unidentified object is present inside the box, close to the plug (Fig. b). The object is approx. 7.7 × 1.5 × 0.3 mm in size, with the shape of a curled piece of paper or fabric, and has a neutron attenuation of 0.9–1.0 cm −1 . EA49146 Several small vertebrae measuring approx. 1 × 1.5 × 2 mm were revealed by the neutron CT of box EA49146 (Fig. ), as well as multiple loose fragments, possibly bone. The neutron attenuation coefficients of the vertebrae and the fragments were measured to be 0.3–0.5 cm −1 , comparable with the long bone in box EA27584 (0.4–0.5 cm −1 ). A small textile fragment was seen near the top of the box interior, measuring approx. 3 × 3 mm. Three corroded chaplets are present, one of which has broken from the right wall and is lying loosely inside the box. EA36167 Box EA36167 has different regions and various materials throughout its interior. Immediately behind the metal panel sealing the opening is a plaster layer of 13–20 mm length, followed by a textile bundle measuring approximately 50 × 20 × 11 mm. Beyond the bundle, at the end of the box furthest from its opening, is a low-attenuating material at the interface with the box walls, with a void in its centre (Fig. ). Three corroded chaplets—showing as bright areas—are present inside the walls of the box, one in the wall opposite the opening, and one in each of the side walls. Within the textile bundle there seem to be several small bones and bone fragments, including a complete long bone of 6.2 mm length, and vertebrae measuring approx. 1.5 × 1.5 × 2.0 mm. Towards the end of the bundle furthest from the box opening appears to be an intact lizard skull, with the mandible and orbits seen in top-view CT slices (Fig. ). The mandible is approx. 8.5 mm wide × 10.4 mm long, and the orbits approx. 1.8 mm wide × 3.1 mm long. Although it is not possible to identify the species of lizard from the neutron tomography data due to the variability of sizes within species, the sizes of the bones are consistent with lizards of the Mesalina genus, several species of which are endemic to northern Africa. The lizard figure on top of the votive box is decorated with spots and stripes running along the length of the back; several species of the Mesalina genus are also spotted and/or striped. An X-ray micro CT scan of M. rubropunctata was accessed on the MorphoSource repository for comparison with the remains found inside EA36167; the sizes of the orbits, mandible, C1 vertebra and long bones of this specimen are given in Supplementary Information Table , and are broadly similar to those measured from the neutron CT scan of the votive box.
The material at the end of the box furthest from the opening can be identified as lead due to its low neutron attenuation, compared with the strong X-ray attenuation previously reported in this region. Based on the shape of the lead, and the fact that it surrounds two chaplets, it is assumed that the lead was introduced to the box in a molten state. Given the void in the lead, we cannot exclude the possibility that something was originally inside. A small region of concretion, possibly clay/soil with mineral inclusions, is seen inside the lead, in contact with the textile bundle. Regions of strong attenuation, due to corrosion, are present in the lead at opposite ends of the void (see Supplementary Information Fig. ). A potential cause of this corrosion is decaying animal matter in proximity to the lead. EA71428 Previous X-radiography and X-ray CT scans of box EA71428 uncovered two long objects of high density, each spanning the length of the box interior; neutron tomography shows low attenuation from these objects (0.2–0.4 cm −1 ), suggesting they are made from lead (Fig. ). The upper lead object (approx. 179 mm long, 13 mm tall) fits inside the eel figure on the box, matching its sinuous form on its upper surface and spreading to a flat, wide base (Figs. b and ). Due to its shape, it is assumed that the upper lead piece was poured into the box in a molten state whilst the box was upside down, and it subsequently detached from the internal box walls once solidified. On the underside of the upper lead piece, across most of its length, is a layer of corrosion which could have originated from proximity to decaying organic material. This corrosion penetrates more deeply into the lead where it is in contact with the plaster and textile plug, and at two points further into the box, where it appears to have spread into the lead from distinct small areas on the surface (Fig. c,d). The lower lead object is rectangular in cross section (approx. 166 × 13 × 5 mm), and roughly flat along its length, leaning at an angle against the internal wall of the box (Figs. b and e). This lower lead object appears to act as a support for the upper, preventing it from falling to the base of the box interior. A plug containing folded textile is seen immediately inside the square opening at the rear of the box (Figs. b and e). It is thought that this textile is surrounded by the plaster which can be seen on the box exterior behind the damaged metal plate covering the opening. A small amount of fragmentary material (approx. 23 × 23 × 13 mm) is visible at the front end of the box (Fig. b). The larger fragments show a strong neutron attenuation of 2.0–3.2 cm −1 , much higher than that measured for the bones in the other boxes. It is not known what these fragments are, but the attenuation appears to be too strong to be from mineralised bone. Evidence of the manufacture of box EA71428 seen in neutron tomography supports the conclusions drawn in the previous study: the box and the eel part of the figure were hollow cast together; the cobra hood with the crowned human head was separately solid cast and attached to the box, with a small support at the back of the cobra hood. These attachments appear to have been made by fusion welding (joining the molten pieces together, sometimes using a molten filler alloy of the same composition as the pieces) or hard soldering (joining using a molten alloy of slightly lower melting temperature and a flux), since no material of significantly different neutron attenuation is present at the joins.
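The identifications above rest on combining the two modalities: lead is dense to X-rays but nearly transparent to neutrons, bone shows moderate neutron attenuation, and hydrogenous corrosion products attenuate neutrons strongly. The sketch below condenses that reasoning into illustrative rules; the neutron attenuation ranges are those reported in this article, the X-ray behaviour is treated qualitatively, and the function is an explanatory aid rather than a pipeline used by the authors.

```python
# Qualitative material triage from a region's measured cold-neutron
# attenuation (in 1/cm, ranges as reported in this article) combined
# with its qualitative X-ray behaviour. Illustrative rules only.

def triage(neutron_mu: float, xray_dense: bool) -> str:
    if xray_dense and neutron_mu < 0.5:
        return "lead (dense to X-rays, transparent to neutrons)"
    if 0.3 <= neutron_mu <= 0.5 and not xray_dense:
        return "possible bone"
    if neutron_mu >= 1.3:
        return "hydrogen-bearing corrosion products (e.g. a corroded iron chaplet)"
    return "indeterminate: needs context (plaster, textile, concretion...)"

print(triage(0.3, xray_dense=True))    # lead pieces in EA71428
print(triage(0.4, xray_dense=False))   # long bone in EA27584
print(triage(1.6, xray_dense=False))   # corroded chaplets
```

Note that neutron attenuation alone cannot separate lead from bone (their ranges overlap); it is the contrast with the earlier X-ray results that makes the identification secure.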
Lost-wax casting of the box is evidenced by corroded metal chaplets which are seen throughout: three in the side walls, one in the base, and one in the top (passing through the eel figure). Two detached chaplets are present inside the box, thought to have been originally located near the box opening, in the base and the side wall, respectively. Neutron CT revealed a filled round hole, approx. 3.6 mm in diameter, on the box near a point at which the eel body meets the top of the box (Fig. c). EA36151 Neutron tomography provided evidence for the manufacture, contents and later additions to EA36151, the largest votive box studied in this work. Between the eel figure and the top of the box, a faint boundary is visible in the tomographic data (Fig. a). This indicates that the box and figure were cast separately and later joined together, possibly using hard soldering. There are 11 chaplets present in the box, remnants of the casting process: four in each side wall, roughly evenly spaced across the length of the box, and three in the base. The attenuation coefficients of the chaplets (1.3–2.0 cm −1 ) are comparable with those in the other five boxes examined in this study, and thus they are also assumed to be corroded iron. A filled hole of approx. 3 mm diameter is present in the base of the box, where a chaplet may previously have been present. From microscopic imaging of the fill material, it appears to be made from wax, resin or plaster—or a mixture containing multiple components (Fig. a). The neutron attenuation of this filler (2.4–2.6 cm −1 ) is markedly greater than for the 11 chaplets. Microscopic imaging and the measured neutron attenuation (2.5–2.6 cm −1 ) of the top of the double crown worn by the figure also indicate that wax or resin has been applied to this area. It is likely that these two additions to the box were made after its discovery. Several areas of repair to the box are seen on the neutron CT images. A repair on the left side of the box appears to have been made during the original manufacturing process, possibly to cover a hole which formed during casting (Fig. b). A larger repair, at the front-right corner of the box, is part of a conservation treatment undertaken in the British Museum in 1977: Bondapaste (a polyester resin) was used to repair a region of corrosion and cracking in this corner. Neutron tomography of this corner (Fig. c–e) reveals damage in the top, right and base walls, and the relatively high attenuation of the resin (1.4–2.0 cm −1 ) compared with the copper alloy walls (0.6–0.7 cm −1 ). A neutron CT volume render image with microscopy highlighting this conservation treatment is given in Supplementary Information Fig. . A region comprising lead is present inside the box, directly behind the repaired surface. The lead contains several rounded voids, up to 5.3 mm wide, and envelops two of the chaplets which penetrate the box interior, implying that the lead was poured into the box in a molten state. It is possible that the lead was added as part of an ancient repair, to provide support to this area of the box. Box EA36151 also contains loose, fragmented material (approx. 60 × 35 × 22 mm). There are no complete bones that could be identified, although several potential bone fragments are present.
The additional presence of highly attenuating objects (1.0–1.6 cm −1 ) in the loose material could indicate soil or sand containing bound water or hydrates. Further neutron CT slice images of the fragments inside box EA36151 are given in Supplementary Information Fig. . A summary of the findings inside the six votive boxes from the neutron CT study is given in Supplementary Information Table . Corroded chaplets were found in each box, as summarised in Supplementary Information Fig. . The lead regions found in three of the boxes were segmented and their volumes calculated. The masses of lead in boxes EA36167, EA71428 and EA36151 were calculated to be 54 ± 3 g, 338 ± 15 g and 351 ± 14 g, respectively, using an assumed density for lead of 11.3 g cm −3 (further details in Supplementary Information Table ); these values are likely to be slight overestimates due to the presence of lead corrosion.
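The quoted masses follow directly from the segmentation arithmetic: voxel count × voxel volume gives the lead volume, which multiplied by the density of lead gives the mass. The sketch below reproduces this for box EA36167 (0.055 mm voxels); the voxel count shown is back-calculated from the published 54 g figure and is therefore only approximate.

```python
# Lead mass from a segmented neutron CT volume: count the segmented
# voxels, convert to volume, multiply by lead density. The voxel count
# below is back-calculated from the published 54 g figure and is only
# approximate; corrosion makes such masses slight overestimates.

VOXEL_SIZE_CM = 0.055 / 10           # 0.055 mm reconstructed voxels
VOXEL_VOLUME_CM3 = VOXEL_SIZE_CM ** 3
LEAD_DENSITY = 11.3                  # g/cm^3, the density assumed in this article

segmented_voxels = 2.87e7            # approximate count for box EA36167

volume_cm3 = segmented_voxels * VOXEL_VOLUME_CM3
mass_g = volume_cm3 * LEAD_DENSITY
print(f"lead volume ~{volume_cm3:.1f} cm^3, mass ~{mass_g:.0f} g")
```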
In this work, neutron tomography was utilised to non-invasively study the contents of six copper alloy votive boxes from ancient Egypt, to search for the presence of faunal remains and to understand the manufacture of the containers, building on a previous X-ray imaging study. In addition, neutrons were used to identify as lead the dense material present inside three of the six boxes, as previously observed with X-rays. It was difficult to isolate and identify the animal remains in the boxes due to the complexity of the internal assembly and the comparable neutron attenuation of the loose material, textile and plaster also present. Nonetheless, bones were observed in three of the six votive boxes studied (EA27584, EA49146 and EA36167), with possible broken-down bones also present in the larger two boxes (EA71428 and EA36151). Most of the bones are in a fragmentary condition; however, complete long bones were observed in boxes EA27584 and EA36167. The neutron CT images also revealed an apparently intact lizard skull inside box EA36167. The dimensions of this skull and the styling of the lizard figure in the casting atop the box are similar to those of lizards of the Mesalina genus; however, the variability of skeleton sizes between species and ages of lizards makes it difficult to determine the species from the neutron CT scan. Skulls in the remaining boxes were not identified, and it is assumed that either they have broken down over time or they were not initially present. Textile fragments were observed inside the three boxes in which animal bones were also present, suggesting that the animals were wrapped before being placed inside the boxes. Within boxes EA36167, EA71428 and EA36151 are significant amounts of lead. Because lead has a much lower melting point than copper and its alloys, the lead must have been placed inside the boxes after they were cast. The shape and distribution of the lead in boxes EA36167 and EA36151 indicate that the lead was molten when introduced to the box, whereas the long, rectangular lower piece in EA71428 was likely solid when inserted. The upper lead piece in EA71428 is probably the result of molten lead being poured into the box whilst it was inverted, since its shape closely follows that of the void inside the eel figure surmounting the box. In ancient Egypt, lead held a magical status and was a material of choice in the manufacture of love charms, in rituals of execration of enemies, or, particularly interestingly in our case, in the protection of mummies. Horus eye incision-plates applied over the incision by the embalmer could be made from lead, although other materials are attested. A lead core was also previously discovered inside a bronze falcon figure from Saqqara. It seems that only a small range of Egyptian divine figures or sacred items were regularly made out of lead, perhaps due to symbolic connotations of this material rather than its low economic cost (see lead figures of Nefertum and of child deities, as well as models of the Osirian processional barge in lead from Thonis-Heracleion). Neutron tomography of box EA71428 revealed no inscriptions on the surfaces of the lead pieces.
It is plausible that the addition of the lead could also have been prompted by practical uses, such as lowering the centre of mass of boxes with tall, solid metal figures at one end, or providing additional support to weak or damaged areas, as may be the case in EA36151. Such explanations would appear less valid for EA36167, however. Of the six boxes studied in this work, there are loops for suspension on the three boxes without lead inside, and lead is present in each of the three boxes without loops. It is surmised that boxes featuring loops would have been suspended from shrine or temple walls, cult statues or sacred boats used in procession, rather than placed on a surface. Neutron imaging also reveals corrosion on the surface of the lead in boxes EA36167 and EA71428. The higher neutron attenuation seen in the regions of corrosion suggests the presence of hydrogen-containing corrosion products, possibly a result of contact of the lead with the air and decaying animal remains, and with the plaster plug in the case of box EA71428. Evidence for the use of lost-wax casting in the votive box manufacture is seen in the presence of multiple chaplets in each box. The strong neutron attenuation of the chaplets indicates that they contain hydrogen-bearing corrosion products; it is proposed that iron chaplets were used, since iron is less resistant to corrosion than copper alloys. The number of chaplets present in each box is roughly proportional to the box dimensions, since they were intended to ensure structural stability of the core materials inside the mould after the wax was melted and removed. Core material remains inside box EA27584, with the chaplets embedded inside it. In most cases, the animals depicted on top of the boxes appear to have been cast together with the box. The largest boxes (EA36151 and EA71428) have features suggesting that part or all of the animal figure was soldered or fusion welded to the top surface of the box. The variety of techniques used to make the boxes, in addition to the variety in their dimensions, suggests that there was not a standardised production method, although comparable manufacturing techniques seem to have been used for small-sized boxes. In this work we show that neutron CT is an effective alternative or complementary technique to X-ray CT for the non-destructive examination of ancient Egyptian copper alloy votive boxes, given their often high lead content, and the presence of lead and/or organic material contained within. While the presence of lead created streaking and beam hardening reconstruction artefacts in X-ray CT, the use of neutrons allowed us to virtually unseal the votive boxes and reveal their organic/low density content, including faunal remains and textile wrappings. Neutron CT also revealed repairs and damage to box EA36151 which were not detected by X-ray CT, due to the proximity of lead to the damaged area and to the low-density, but strongly neutron-attenuating, later additions in wax or resin. In box EA36167, the lead region obscured the animal remains and wrappings on the X-ray CT scan, which were subsequently revealed with neutrons in this work. This work provides further evidence for the use of copper alloy votive boxes in ancient Egypt, showing that animal remains were wrapped in linen and placed inside the boxes before they were sealed, and that the cast animal figures upon the boxes were potentially intended to correspond to the remains within.
Neutron tomography of the votive boxes was conducted on the IMAT beamline at the ISIS pulsed neutron and muon source (Rutherford Appleton Laboratory, UK). IMAT is a cold neutron instrument which captures images of objects based on their neutron attenuation. The tomography process on IMAT is similar to that for laboratory-based X-ray CT and synchrotron radiation CT scans: a series of radiographs is acquired throughout a step-by-step rotation about the vertical axis. These projections are used to create a volumetric reconstruction of the object in which every voxel describes the local neutron attenuation coefficient as a grayscale value. For our set-up we used a 40 mm pinhole at a pinhole-to-sample distance of 10 m, giving a best achievable resolution of about 100 μm. The votive boxes were mounted with their longest axis aligned vertically. This configuration allowed the boxes to be positioned as close to the detector as possible, minimising image blurring and variations in beam path length through the box during scanning. The mounts were made of aluminium, with Teflon tape used to protect the surface of the boxes; both materials have low neutron attenuation coefficients. Projection images were acquired using an ANDOR Zyla sCMOS 4.2 PLUS camera (2048 × 2048 pixels) coupled with an optical lens and a 100 µm thick ZnS/LiF scintillator sheet. An acquisition time of 30 s per projection was selected for each scan, as a compromise between improved counting statistics and the total time available for the experiment. Each box was rotated through 360° during the scan. The number of projections was selected for each box based on the Nyquist–Shannon sampling theorem—i.e., S × (π/2), where S is the number of horizontal pixels covered by the box at its maximum width throughout the scan. The scan parameters used for each votive box are given in Supplementary Information Table. Flat field (illumination of the detector with the sample outside the field of view) and dark field (with the beam shutter closed) images were acquired for each scan and used to correct the projections for inhomogeneity in the neutron beam intensity, detector pixel response, and camera noise. The projections were corrected using the Fiji distribution of the ImageJ software package. Bright spots on the projections due to high-energy gamma interactions were removed using the "remove outliers" function in Fiji. CT reconstruction was performed in the Octopus Reconstruction software package, using a parallel-beam filtered back projection algorithm. Due to the larger size of boxes EA71428 and EA36151, they were scanned in two overlapping vertical sections; the resulting reconstructed volumes were subsequently stitched together using the "Pairwise Stitching" plugin in Fiji. Segmentation and volume rendering of the tomographic datasets were performed using VGStudio MAX 3.3 (Volume Graphics GmbH, Germany). Maximum intensity projection (MIP) images—i.e., two-dimensional visualisations of the highest-attenuating voxels across multiple tomographic slices—have been used in this article to highlight three-dimensional features in the boxes, including textiles and metal corrosion. The MIP images were made using the "Z Project" function in Fiji. The volumes of segmented lead pieces were calculated using the "Voxel Counter" plugin in Fiji. In addition, microscopy of specific details highlighted by the imaging investigation was conducted with a VHX-5000 digital microscope (Keyence, Japan), operated in reflective light mode, without filters.
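As an illustration of three of the steps just described, the sketch below computes the Nyquist–Shannon projection count, applies the standard flat/dark-field normalisation, and forms a maximum intensity projection. It is a generic NumPy outline under stated assumptions, not the actual IMAT, Fiji or Octopus pipeline; the numerical example is hypothetical.

```python
import numpy as np

def n_projections(sample_width_px: int) -> int:
    """Nyquist-Shannon criterion used above: N = S * (pi / 2),
    where S is the widest horizontal pixel extent of the object."""
    return int(np.ceil(sample_width_px * np.pi / 2))

def correct_projection(raw: np.ndarray, flat: np.ndarray,
                       dark: np.ndarray) -> np.ndarray:
    """Standard flat/dark-field normalisation of a radiograph:
    transmission = (raw - dark) / (flat - dark)."""
    denom = np.clip(flat - dark, 1e-6, None)   # avoid division by zero
    return np.clip((raw - dark) / denom, 0.0, None)

def mip(volume: np.ndarray, axis: int = 0) -> np.ndarray:
    """Maximum intensity projection over a stack of tomographic slices,
    analogous to Fiji's "Z Project" (max) function."""
    return volume.max(axis=axis)

# Hypothetical example: a box spanning 900 px needs ~1414 projections
print(n_projections(900))
```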
Supplementary Information.
Barriers and applied activity, quality of life and self-efficacy in prostate cancer survivors 1 year after completing radiotherapy
8b3a4615-d3ca-4338-83ef-22ea4f3d9b2c
10119230
Internal Medicine[mh]
The constant improvement of cancer treatments and diagnostic methods has significantly increased the life expectancy of cancer patients. Survival after a cancer diagnosis is expected to be greater than 60%, which represents a major health challenge. A considerable number of cancer patients experience comorbidities and symptoms secondary to cancer, even years after initial treatment. Patients who survive cancer treatment often experience persistent side effects such as sleep disturbances, pain and fatigue. In addition, they experience other comorbidities such as diabetes, osteoporosis, cardiovascular disease, functional impairment and, ultimately, an increased risk of new primary cancers. Prostate cancer is a significant health burden expected to increase over the next years in light of recent survival data. Despite earlier detection, prostate cancer patients typically receive treatment and exhibit side effects of therapy during long-term survival. A relevant aspect of cancer survivorship relates to lifestyle behaviours, among which physical activity plays a key role. According to previous studies, physical activity can improve survival, reduce the risk of cancer recurrence and improve the quality of life of cancer survivors. Despite these benefits, most survivors do not engage in regular physical activity, and fewer than 30% achieve minimum recommended levels. Different studies have explored factors related to physical activity after a cancer diagnosis, finding education, age, body mass index, occupation and receipt of specific cancer therapies among the most important. Results obtained in various meta-analyses have shown an inverse association between the amount of physical activity after diagnosis and cancer-specific mortality in prostate cancer survivors. Those systematic reviews indicate that the highest levels of total, recreational, non-sedentary occupational and vigorous physical activity, including higher metabolic equivalent (MET) hours per week, were significantly related to a reduced risk of all-cause mortality. Despite the volume of evidence indicating the benefits of regular physical activity for health and functioning, people with cancer are far less likely to engage in physically active lifestyles, and the enrolment of these patients in physical activity (PA) programs remains unsuccessful. Little is known about why the majority of people with cancer fail to integrate regular physical activity into their lifestyle. It has been suggested that an understanding of the potential barriers that affect participation by cancer patients could provide important information for developing interventions with a greater likelihood of success. Previous research has identified different aspects related to physical activity levels such as pain, cancer treatment-related side effects, fatigue, motivation, comorbid medical conditions and time. Despite this, the literature examining barriers to physical activity in prostate cancer survivors is very limited and has not explored the specific profile of long-term patients after radiotherapy. The objectives of our study were to (i) measure self-reported PA levels, (ii) assess perceived barriers to PA, and (iii) determine quality of life and self-efficacy to manage chronic disease of prostate cancer survivors 1 year after radiotherapy treatment. All these factors are determinants in improving the enrolment of prostate cancer survivors in PA programs. Design and ethics A cross-sectional study was conducted between January 2022 and April 2022.
Before being included in the study, patients received detailed information about the study goals and procedures and gave their informed consent to participate. The study was approved by a local committee on research ethics. Population Prostate cancer survivors treated with radiotherapy were recruited from the Radiation Oncology Service of the "Complejo Hospitalario Universitario" (Granada). The eligibility criteria for the prostate cancer patients included histologically documented prostate cancer, 1 year since completion of radiotherapy treatment and no ongoing cancer treatment. The control cohort included age-matched healthy men with similar body weight and height and no previous history of cancer. Control participants were recruited by word of mouth and were excluded if they exhibited any history of cancer. Matching for age and BMI was achieved by individually selecting the control subject with the closest available match for age and BMI to each prostate cancer survivor. Case and control participants were excluded if they met any of these conditions: under 18 years of age, neurologic pathologies limiting voluntary mobility, orthopaedic and cardiovascular pathologies, learning disability, or if telephone contact was inappropriate due to dementia or other cognitive or communication impairment. An a priori power analysis based on a pilot study (unpublished) of 10 subjects (effect size of 0.80) was performed with the G*Power 3.1.9.2 software (3.1.9.2v; Statistical Power Analyses for Windows, Universität Düsseldorf, Germany), resulting in a sample size of 104 patients (52 per group) and a statistical power of 90%. Considering a hypothetical dropout rate of 10%, 58 patients were needed in each group. Recruitment ended when the required sample size was reached for each group. Measurements Participants were always assessed by telephone by the same previously trained investigators. An initial assessment interview was conducted to confirm that the patients met the inclusion criteria. Data regarding comorbidities, anthropometric data, prostate cancer characteristics and cancer treatment were obtained from the medical history. The Charlson index, which has been validated in several disorders and is one of the most widely used scoring systems for assessing comorbidities, was used to assess comorbidities. Participants' perceptions of the benefits of physical activity and potential barriers were measured with the Spanish version of the Exercise Benefits/Barriers Scale (EBBS). The scale includes 43 items separated into two subscales: 14 items refer to barriers and 29 items refer to benefits. The scale is based on a 4-point Likert scale: strongly disagree (1), disagree (2), agree (3), strongly agree (4). For the benefits subscale, the score ranges between 29 and 116, and the higher the score, the more positively the individual perceives exercise. For the barriers subscale, the score ranges between 14 and 56, and the higher the score, the more negatively the individual perceives exercise. When all items are summed to obtain a total score, the barriers subscale items are reverse scored. In contrast, when only the barriers subscale is calculated, no reverse scoring is applied to these items. When barriers and benefits are summed together, the total score can range from 43 to 172; in this case, the higher the score, the more positively the individual perceives exercise.
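A minimal scoring sketch for the EBBS, following the rules just described (4-point items, 29 benefit and 14 barrier items, reverse scoring of barrier items only when computing the total); the split of responses into the two item lists is assumed to be done upstream.

```python
def score_ebbs(benefit_items: list[int], barrier_items: list[int]) -> dict:
    """Score the EBBS as described above.

    benefit_items -- 29 Likert responses (1-4)
    barrier_items -- 14 Likert responses (1-4)
    """
    assert len(benefit_items) == 29 and len(barrier_items) == 14
    benefits = sum(benefit_items)              # 29-116, higher = more positive
    barriers = sum(barrier_items)              # 14-56, higher = more negative
    # For the total score the barrier items are reverse scored (1<->4, 2<->3):
    reversed_barriers = sum(5 - x for x in barrier_items)
    total = benefits + reversed_barriers       # 43-172, higher = more positive
    return {"benefits": benefits, "barriers": barriers, "total": total}
```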
The physical activity levels were evaluated with the Spanish version of the International Physical Activity Questionnaire (IPAQ), which has been validated and previously used in cancer patients. This questionnaire was designed to quantify physical activity in transportation, household chores, work and leisure time. Subjects are asked to report both the frequency and the duration of activities performed during the last week, divided into three categories: walking, moderate activities and vigorous activities. Activity is calculated as the total time spent in the three activity categories. A metabolic equivalent (MET) is used to weight the total task time, resulting in an estimate of activity that is expressed as MET-min/week and adjusted for body weight. To assess quality of life, the five-dimension, three-level EuroQol (EQ-5D-3L) was used in its Spanish version, which is divided into two distinct sections. The first section consists of 5 items related to mobility, usual activities, self-care, anxiety/depression and pain/discomfort. Each of the items has three response levels corresponding to "no problems", "some problems" or "extreme problems". The second part of the scale consists of a visual analogue scale (VAS) on which respondents self-assess their current health status by assigning a score between 0 (worst imaginable health status) and 100 (best imaginable health status). The EQ-5D-3L has previously been used in prostate cancer patients. The Spanish version of the Self-Efficacy to Manage Chronic Disease scale (SEMCD-S) was used to assess self-efficacy. The scale consists of 4 items, each answered with a score from 1 (no confidence) to 10 (total confidence). To obtain the result of the scale, the mean of the 4 items is calculated; if more than one of the items is not answered, the final score cannot be calculated. The SEMCD-S has been used previously in cancer patients. Data analysis Statistical analysis was performed with IBM SPSS Statistics software for Windows, Version 20.0 (IBM Corp. Released 2011; Armonk, NY: IBM Corp). Descriptive statistics were used to describe sample baseline characteristics. Categorical variables are presented as percentages, and continuous variables are presented as the mean ± standard deviation. The Kolmogorov–Smirnov test was performed to assess the normality of continuous data prior to statistical analysis. Student's t test was performed for data with a normal distribution, a Wilcoxon test for non-parametric variables and a χ2 test for nominal variables. The statistical analysis was conducted at a 95% confidence level, and a p value < 0.05 was considered statistically significant.
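The following sketch illustrates the scoring and analysis conventions described above: an IPAQ MET-min/week computation (the MET weights 3.3/4.0/8.0 are the commonly used IPAQ scoring-protocol constants, an assumption here since the exact values are not reported), the SEMCD-S mean-of-four rule, and a group comparison that falls back from Student's t test to a rank-based test when normality fails, mirroring the stated analysis plan; it is not the authors' SPSS pipeline.

```python
from statistics import mean
from scipy import stats

# MET weights from the standard IPAQ scoring protocol (an assumption here;
# the authors do not report the exact constants used).
MET = {"walking": 3.3, "moderate": 4.0, "vigorous": 8.0}

def ipaq_met_min_week(minutes_per_day: dict, days_per_week: dict) -> float:
    """MET-min/week = sum over categories of MET x min/day x days/week."""
    return sum(MET[c] * minutes_per_day[c] * days_per_week[c] for c in MET)

def semcd_score(items: list) -> float | None:
    """SEMCD-S: mean of 4 items (1-10); invalid if more than one is missing."""
    answered = [x for x in items if x is not None]
    return mean(answered) if len(answered) >= 3 else None

def compare_groups(a, b, alpha: float = 0.05):
    """Choose Student's t test or a rank-based (Wilcoxon rank-sum) test based
    on a Kolmogorov-Smirnov normality check of the z-scored data, mirroring
    the analysis plan described above (an approximation, not exact K-S)."""
    normal = (stats.kstest(stats.zscore(a), "norm").pvalue > alpha and
              stats.kstest(stats.zscore(b), "norm").pvalue > alpha)
    return stats.ttest_ind(a, b) if normal else stats.ranksums(a, b)

# Example: 30 min of walking on 5 days and 20 min of moderate activity on
# 3 days gives 3.3*30*5 + 4.0*20*3 = 735 MET-min/week
print(ipaq_met_min_week({"walking": 30, "moderate": 20, "vigorous": 0},
                        {"walking": 5, "moderate": 3, "vigorous": 0}))
```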
A total of 120 men, 60 prostate cancer survivors treated with radiotherapy and 60 age-matched controls, were finally included (Fig.). The characteristics of the study population are summarized in Table. Demographic characteristics were similar in both groups. The mean number of comorbidities was similar in the two groups. The cancer survivors group presented with a higher BMI. Diagnoses of stage II (76.66%) and stage III (20%) cancer were the most commonly identified. In addition to radiotherapy, almost the entire sample indicated that some type of cancer-related treatment had been received, with hormonal therapy being the most reported (30%), followed by surgery (18.33%). Barriers and applied activity measures are presented per group in Table. Regarding the perception of the benefits of physical activity and potential barriers, significant differences were observed between groups, with worse results in the cancer patient group for the benefits and barriers subscales and the overall score ( p < 0.001). There were significant differences in total physical activity levels ( p = 0.018), with higher levels of physical activity in the control group. Differences between groups in quality of life and self-efficacy to manage chronic disease are presented in Table. Significant differences were found: the cancer patient group presented poorer results in the following EQ-5D subscales: self-care ( p = 0.045), usual activities ( p < 0.001), pain/discomfort ( p < 0.001), anxiety/depression ( p = 0.026) and VAS ( p < 0.001).
Regarding self-efficacy, significant differences were also observed between groups ( p = 0.040), with a greater score in the control group. This cross-sectional study aimed to measure self-reported PA levels of prostate cancer survivors after radiotherapy treatment, assess perceived barriers to PA in cancer survivors and determine quality of life and self-efficacy to manage chronic disease. Those aspects can be related to PA levels after prostate cancer radiotherapy treatment. The findings of this study suggest that self-reported PA levels after radiotherapy in prostate cancer survivors were lower than those of age-matched control men with similar body weight and height, and that survivors perceived more barriers to physical activity. The population characteristics in our study are similar to those of other studies, since the mean age of the samples studied is representative of candidates for radiotherapy. In addition, the inclusion and exclusion criteria of this study tend to exclude older people, who are more likely to present comorbidities that could significantly influence the study variables. A diagnosis of prostate cancer commonly leads to radiotherapy treatment. This treatment can substantially impair health-related quality of life and associated lifestyle behaviours, impacting the current and future health of patients. Along these lines, it is necessary to identify concrete variables in the prostate cancer patient profile that can impact morbidity and mortality. Regarding the first aim, our results revealed that self-reported PA levels were lower in prostate cancer survivors after radiotherapy than in age-matched control men, agreeing with previous studies which showed that the proportion of prostate cancer patients who undertake regular exercise is low. Although the American College of Sports Medicine recommends 150 min of moderate-intensity or 75 min of vigorous physical activity per week for cancer patients to improve their overall health, prostate cancer survivors showed fewer minutes of moderate ( p < 0.020) and vigorous ( p < 0.036) physical activity than controls. In line with our results, Ozdemir K et al. observed that only 20.7% of the prostate cancer patients in their study were physically active. Our second aim was to analyze whether cancer survivors presented barriers and knew the benefits of PA. Our findings clearly demonstrate that prostate cancer patients after treatment presented more barriers and lower knowledge about the benefits of PA than controls. Our study is in line with previous reviews that explored the influence of the benefits of and barriers to PA in prostate cancer survivors; the importance of understanding the characteristics of physical activity participation, the perceived barriers to exercise and the benefits of exercise is well known. These showed that the key facilitators of participation in PA include advice and guidance from healthcare professionals or specialists, avoiding the 'rest-paradigm'. The study of Min J et al. explored the relationship between PA levels and the most common barriers in prostate cancer, consistent with our results showing that prostate cancer patients perceive more barriers to activity than healthy controls. Our study shows that 1 year after diagnosis, prostate cancer patients remain inactive when compared to controls of similar age and gender; this is notable because the control subjects have a similar number of comorbidities.
One reason for these differences in PA levels between groups may be the information provided to subjects about the relevance of PA to their clinical profile; another may be differences in overall health behaviour among major cancer survivor groups. While a cancer diagnosis has been referred to as a possible 'teachable moment', when cancer patients can be more motivated to make lifestyle changes to improve health outcomes, physical activity has been reported to be under-considered among prostate cancer survivors in the long term after diagnosis. The third aim was to determine quality of life and self-efficacy to manage chronic disease after prostate cancer radiotherapy treatment. Although quality of life covers a broad spectrum and numerous factors can condition its estimation, low physical activity levels negatively influence quality of life. Our results showed that prostate cancer survivors with low moderate and vigorous physical activity levels presented a worse self-perceived health status. Along the same lines, previous studies observed that higher PA levels in prostate cancer survivors are associated with better self-perceived quality of life. Similarly, levels of self-efficacy were low in prostate cancer survivors. Mosher CE et al. showed that self-efficacy plays an important role in PA and health promotion. The study of Yang R et al. observed that an information support program improved self-efficacy during oncological medical treatment; nevertheless, it is also necessary to provide information support after adjuvant treatment. Study limitations We must take into account some factors to properly interpret the results of the study. To begin with, as this is a cross-sectional study with cross-sectional data collection, it is impossible to establish a direction of causality. In addition, although the number of participants was sufficient for an adequate sample size, the individuals in the convenience sample came from only one region, which may limit the external validity of the results. Finally, the adjuvant treatment that patients received may have interfered with the results of the study. Specifically, hormone therapy can be of interest, but the possible long-term impacts of such treatments have been reported as minimal. On the other hand, other authors have described no significant long-term differences in clinical profile according to adjuvant treatments in prostate cancer. Even so, this is an aspect that may be relevant, and future studies comparing patients receiving hormone therapy in addition to radiotherapy with those without hormone therapy are necessary to contrast the results.
In conclusion, the results of this study reveal that self-reported PA levels, as measured using the IPAQ, were low in prostate cancer survivors after treatment. Results also showed a worse perception of the benefits of PA and greater perceived barriers among the cancer survivors. Similarly, the quality of life and self-efficacy to manage chronic disease of prostate cancer survivors were lower. These results support the need to design intervention programs focusing on these outcomes.
A Mouse Model of Multiple System Atrophy: Bench to Bedside
02f81a61-7bbd-499e-a9ae-d10cbd1c462b
10119356
Pathology[mh]
The term multiple system atrophy (MSA) was introduced in 1969 by Oppenheimer and Graham. They observed overlapping clinical presentations in the syndromes of sporadic olivopontocerebellar atrophy (OPCA), striatonigral degeneration (SND), and Shy-Drager syndrome and therefore suggested the unifying diagnosis of MSA. The accuracy of this suggestion was confirmed 20 years later by the neuropathological observation of argyrophilic inclusion bodies with "tubular structure" in the oligodendrocytes of patients with different combinations of MSA syndromes. These oligodendroglial aggregates were named glial cytoplasmic inclusions (GCIs). Another 9 years were needed to identify filamentous alpha-synuclein (a-syn) as a component of GCIs, linking MSA with Parkinson's disease (PD) and dementia with Lewy bodies (DLB) within the group of a-synucleinopathies. MSA is a rare, rapidly progressive neurodegenerative disorder with the profile of an orphan disease. The usual symptom onset is in the fifth decade of life. Its incidence ranges between 0.1 and 2.4/100,000 per year, increasing with age, and the estimated prevalence is 4.4/100,000. Men and women are similarly affected. The disease duration after first diagnosis is 7.9 ± 2.8 years, i.e., much shorter than in PD. The spectrum of symptoms in MSA includes parkinsonism, ataxia, autonomic dysfunction, and pyramidal signs in various combinations. Studies on the natural history of the disease indicate that non-motor symptoms are usually the first to be reported. These may include orthostatic hypotension, urogenital dysfunction, and sleep and respiratory disorders (stridor). The actual clinical diagnosis of MSA according to the current criteria is possible only after the onset of motor symptoms, which, based on the predominant syndrome, define the Parkinsonian variant (MSA-P) or the cerebellar variant (MSA-C). The clinical diagnosis can be set with different degrees of certainty (possible or probable), but the final diagnosis is currently possible only postmortem with the demonstration of GCIs in the brain. The diagnostic accuracy is often reduced by overlapping symptoms with other disorders like PD, DLB, or progressive supranuclear palsy (PSP). The clinical decline of MSA patients is very rapid, and they may reach milestones of disability within a short period of about 5 years after the first diagnosis. The devastating character of MSA is determined not only by its rapid progression but also by the lack of effective therapy. Symptomatic treatments usually have limited benefit and are unable to stop the progression of the degeneration. In summary, MSA presents a serious medical and social problem, with delayed diagnosis according to the current criteria, low diagnostic accuracy, rapid progression, and early disability of the patients, in parallel with the lack of effective therapy. For all these reasons, it is of paramount importance to understand the disease mechanisms and define molecular targets that may support improved early diagnosis as well as serve as the key towards disease modification in MSA. One of the main sources for a glimpse into the disease mechanisms is usually the neuropathological examination of the brain. The neuropathological milestones of MSA include widespread pathognomonic a-syn-positive GCIs, selective neurodegeneration with SND and OPCA of various severity and combinations, and degeneration of autonomic CNS centers in the brainstem and spinal cord.
In the postmortem brain, gliosis, myelin changes, and demyelination usually accompany the neurodegeneration. Neuroinflammatory signatures in the MSA brain include microglial upregulation of toll-like receptor 4 (TLR4), myeloperoxidase (MPO), and inflammasome-related proteins like NLRP3, ASC, and caspase 1. In addition, the pro-inflammatory cytokines TNF-α, IL-1β, and IL-6 are found increased in the cerebrospinal fluid (CSF) of MSA patients. Therefore, neuroinflammatory changes linked to microglial activation have been suggested as a possible player in MSA pathogenesis. Myelin dysfunction with possible early relocation and accumulation of p25a/TPPP from the myelin sheaths to the oligodendroglial soma has indicated a possible primary oligodendrogliopathy, which may be an early pathogenic event in the disease cascade. Finally, a specificity of a-synucleinopathy has been described at the cellular and molecular level in MSA as compared to Lewy body (LB) disorders. A-syn in GCIs is rich in post-translational modifications like phosphorylation at Ser129 and widespread nitration. The disease-specific widespread ectopic aggregation of a-syn in oligodendrocytes initially shown in MSA is accompanied by neuronal cytoplasmic and nuclear a-syn inclusions, which structurally differ from LBs. Recent findings have suggested a different structure of the a-syn fibrils in MSA, with a different seeding profile as compared to other a-synucleinopathies. It is as yet unclear whether the disease-specific a-syn strains are causative for the different a-synucleinopathies or rather represent a secondary event of specific misfolding within a different pathogenic environment. It is known that a-syn fibrils are not the only constituent of GCIs, and the structure of these pathological aggregates includes a large number of other components. Neuropathological analysis has suggested disruption of the ubiquitin–proteasome system (UPS) and the autophagy-lysosomal pathway (ALP) in MSA, but it remains unclear whether these defects have a causative role in GCI formation or rather represent a consequence of the effects of misfolded a-syn on the function of the UPS and ALP. Finally, it is still under debate whether the inclusion pathology in MSA plays a detrimental role in the disease pathogenesis or represents a rescue mechanism of the cells, acting as a "trash bin" for the misfolded proteins accumulating in the cell. The origin of a-syn in oligodendrocytes in MSA is largely uncertain. Earlier studies have claimed that a-syn is a neuronal protein, which is not expressed in mature oligodendroglia. However, laser dissection of oligodendroglia from MSA and control brains has suggested that MSA oligodendrocytes show a tendency to express more SNCA mRNA than control oligodendrocytes, supporting a possible oligodendroglial a-synucleinopathy. Oligodendroglial progenitor cells (OPCs) are known to express SNCA mRNA, but maturation to oligodendrocytes is associated with a physiological decline of a-syn expression. A postmortem analysis in MSA patients has proposed an increased number of striatal OPCs, which may indicate dysfunctional maturation of the oligodendroglial lineage in MSA. Although OPC density has been found to be increased in the white matter of the MSA brain, it has been linked to demyelination, but not to accumulation of a-syn in OPCs.
The findings in minimal change MSA cases, in which the disease is characterized by widespread GCIs, restricted neuronal loss, and short duration, have suggested that a-syn-associated oligodendroglial pathology may lead to neuronal dysfunction sufficient to cause clinical symptoms before overt neuronal loss in MSA. Finally, the neuropathological examination of peripheral and autonomic nerves in MSA has evidenced the presence of Ser129-phosphorylated a-syn and a-syn oligomers with nerve fiber degeneration in the skin. Importantly, Schwann cells in cranial and spinal nerves, spinal and sympathetic ganglia, but only rarely in visceral nerves, have been shown to form filamentous inclusions of phosphorylated a-syn in MSA. In addition, myenteric neurons in MSA have been reported to present with shrinkage of the soma without phosphorylated a-syn accumulation. Although the number of studies on peripheral a-synucleinopathy in MSA is limited, the notion is that Schwann cell synucleinopathy may precede nerve dysfunction, similar to the oligodendroglial a-syn pathology in the CNS. So far, the cause of MSA is unknown. No mutations have been identified in the coding region of SNCA. COQ2 mutations, associated with mitochondrial dysfunction, have been linked to familial cases of MSA in the Japanese population, but not in other cohorts. Genome-wide association studies (GWAS) identified several potentially interesting gene loci, including FBXO47, ELOVL7, EDN1, and MAPT, but no association of SNCA and COQ2 variants with MSA. Importantly, MSA and inflammatory bowel disease have been reported to share common genetics, including common variants of the C7 gene, supporting immune dysfunction in both disorders. The current understanding is that a certain genetic background may predispose to MSA. On the other hand, environmental factors associated with oxidative stress and toxicity, like those linked to an occupational history of farming, may be more common in MSA cases than in controls. In summary, MSA is a multifactorial disorder with rapid progression and selective neuronal loss possibly mediated by a-syn pathology, oligodendroglial dysfunction, and neuroinflammatory signaling. Disease models are instrumental in understanding the contribution of each of these components to the pathogenesis of the disease and provide a testbed for novel therapeutic approaches for MSA. Modelling MSA has been approached both in vitro in cell culture and in vivo in rodents and non-human primates. The early in vitro models have been based mostly on a-syn overexpression in glial cells (primary cells or cell lines) to study the effects of a-syn on their biology with respect to survival, susceptibility to oxidative stress and pro-inflammatory signals, and the role of p25a/TPPP in GCI formation. Such models have been further relevant as biosensor systems to study the seeding properties of MSA-derived a-syn oligomers versus those derived from PD brains. Recently, induced pluripotent stem cells (iPSCs) have been reprogrammed from somatic cells (fibroblasts or blood cells) of MSA patients and further differentiated into neurons, disclosing possible mitochondrial dysfunction in MSA as compared to cells of healthy controls. MSA iPSC-derived neural progenitor cells (NPCs), which give rise to both neuronal and glial cells, have been found to functionally compensate for the putative mitochondrial deficit at baseline as compared to cells of healthy controls.
However, the MSA cellular pathology becomes apparent in the dish after exposure to very low doses of oxidative stress, further consolidating the idea of a multifactorial origin of MSA with a combined role of genetic predisposition and environmental trigger. The in vivo models of MSA have focused on replicating the neuropathological and symptomatic phenotype of the disease. Initial neurotoxin models tried to replicate the SND by combining selective striatal and nigral neurotoxins. These models have been instrumental for studying the pathophysiology of SND, but their major limitation has been the lack of a-syn pathology. Recently, the finding of MSA-specific a-syn strains and their prion-like spreading has not only triggered significant interest in this rare disease, but has also opened a new avenue for preclinical in vivo modelling based on the strain-specific spreading of a-syn. However, typical GCIs, which are widely spread in the human MSA brain, are not readily seen in the rodent brain after intracerebral inoculation of a-syn fibrils. This discrepancy may be due to the different neurobiology of mice and humans, the limited experimental observation periods, differences between in vitro generated PFFs and the pathological a-syn fibrils in patients, or other technical issues. Similar observations have been made when a-syn fibrils have been introduced through the external urethral sphincter or detrusor, propagating to the CNS. The general notion from an increasing number of studies using a-syn spreading models confirms that healthy oligodendrocytes do not readily accumulate fibrillar a-syn in their cytoplasm and that a preceding oligodendroglial dysfunction is possibly needed to trigger GCI formation. This has been supported by a recent experiment, in which only transgenic mice with oligodendroglial a-syn overexpression that received intracerebral inoculation of a-syn polymorphs were prone to an accelerated MSA-like phenotype in a strain-dependent manner. The a-syn spreading models are crucial for understanding the specific features of protein misfolding and protein properties in MSA versus other synucleinopathies, and thus shed light on pathogenic mechanisms involved in the progression of the disease. Unfortunately, to date, these models have not been able to convincingly recapitulate MSA symptomatology with its characteristic underlying selective neurodegenerative pathology. The third strategy to model MSA has been the overexpression of a-syn in oligodendrocytes, either in constitutive or inducible transgenic mice or by AAV-targeted a-syn overexpression in the substantia nigra and striatum of mice, rats, or primates. In all overexpression models, irrespective of the mode of overexpression, delayed progressive neurodegeneration with variable phenotype and intensity has been identified to accompany the formation of GCI-like structures, in parallel with signs of neuroinflammation. All these findings have supported the causative role of oligodendroglial a-synucleinopathy in MSA neurodegeneration. However, depending on the specific overexpression approach, different patterns of selective neurodegeneration, neuroinflammation, and specific functional phenotypes have been reported. The PLP-a-syn transgenic mouse was generated by overexpression of human wild-type a-syn under the proteolipid protein (PLP) promoter to drive the transgene in oligodendrocytes.
This genetic modification results in the progressive accumulation, oligomerization, and aggregation of a-syn in oligodendrocytes throughout the CNS, replicating the GCI pathology of human MSA. This finding is similar to observations in other transgenic mice with constitutive overexpression of human a-syn in oligodendroglia under alternative cell-specific promoters like the myelin basic protein (MBP) or the 2′,3′-cyclic-nucleotide 3′-phosphodiesterase (CNP) promoters. Intriguingly, the PLP-a-syn mouse shows progressive nigral and striatal neurodegeneration, modelling the SND of the MSA-P subtype. Nigral neuronal loss is detectable already at 4 months of age in PLP-a-syn mice, while the loss of GABAergic medium spiny neurons in the striatum is detected at 12 months of age. In comparison, the MBP-a-syn mouse model has recently been suggested to represent a model of MSA-C, with loss of Purkinje cells detected at 4 months of age. Interestingly, the PLP-a-syn mouse shows increased vulnerability of the olivopontocerebellar system to exogenous mitochondrial stress induced by 3-nitropropionic acid and to proteolytic dysfunction triggered by proteasome inhibition, leading to OPCA in this MSA model, but never in wild-type mice. Linked to this underlying neuropathology, the PLP-a-syn mouse shows progressive motor disability becoming overt at 6 months of age, including shortened stride length, elevated stride length variability, slowness, and loss of balance and coordination evidenced in beam walking and, later on, in pole climbing. In addition, the PLP-a-syn mouse model presents several non-motor deficits which replicate classical non-motor symptoms of MSA. Among these is neurogenic bladder dysfunction with a typical detrusor-sphincter dyssynergia and increased postvoid residual urine volume, associated with early loss of parasympathetic outflow neurons in the lumbosacral intermediate columns of the spinal cord already at 2 months of age and delayed degeneration of the pontine micturition center. Brainstem centers, including the locus coeruleus, the nucleus ambiguus, the laterodorsal tegmental nucleus, and the pedunculopontine tegmental nucleus, degenerate early in the life of PLP-a-syn mice. This pathology leads to cardiovascular symptoms (increased heart rate variability), respiratory deficits, and sleep disturbances including rapid eye movement (REM) sleep without atonia, replicating human MSA premotor symptoms. Intriguingly, the widespread GCI pathology in the PLP-a-syn mouse brain leads to strictly selective neuronal loss. For example, the olfactory bulbs show oligodendroglial a-syn accumulation without loss of tyrosine hydroxylase-positive neurons and no disturbances in olfactory function, which recapitulates human MSA, in contrast to the early loss of smell in PD. In summary, the PLP-a-syn mouse provides an excellent phenotypic replication of human MSA, proving the strong face validity of the model. This MSA mouse has served to study the mechanisms of MSA neurodegeneration and to understand the causative role of oligodendroglial a-synucleinopathy. It recapitulates oligodendroglial a-syn-triggered SND with increased vulnerability of the olivopontocerebellar system to exogenous stressors, as well as degeneration in autonomic centers. In addition, the PLP-a-syn mouse presents with progressive region-specific microglial activation triggered by oligomeric a-syn accumulation, reminiscent of the microgliosis reported in postmortem analyses of MSA brains.
Analysis of the model proposes an important role of microglial activation and neuroinflammatory responses in driving the progression of the disease. By contrast, demyelination is seen in end-stage MSA, while in the PLP-a-syn mouse, myelin dysfunction is detected without evident loss of myelin up to 18 months of age. Similarly, astrogliosis is not a prominent pathological feature of disease progression in the PLP-a-syn mouse model, despite the presence of astrogliosis in postmortem MSA brains. Prominent demyelination and astrogliosis may represent secondary late events in the pathogenesis of MSA and, due to the limited observation time and overall duration of the disease in the PLP-a-syn model, may not be detected in the mouse brain. However, demyelination and astrogliosis, but not microglial activation and SND, are reported as part of the neurodegenerative process in the MBP-a-syn mouse, proposing that oligodendroglial a-syn overexpression in the two transgenic models may switch on different pathogenic pathways, defining different patterns of selective neurodegeneration and phenotype. This difference has not been addressed to date, but one putative explanation may be the generation of different a-syn oligomeric polymorphs, which define each of the phenotypes. Preclinical therapeutic screening provides the rationale for any clinical trial. The relevance of the applied experimental model to the tested therapeutic target is crucial to ensure meaningful outcomes. All relevant current MSA models are based on the assumption that pathological a-syn is the cause of MSA neurodegeneration. On the one hand, such a-syn models of MSA are advantageous when testing a-syn-targeting treatment strategies, because they provide a clear-cut readout of efficacy based on modulation of a-syn pathology. Furthermore, the a-syn-based models of MSA provide the possibility to screen the targeting of other relevant pathways and disease mechanisms downstream of a-syn pathology, like neuroinflammation, neurotrophic disbalance, epigenetic impairment, or demyelination. On the other hand, the mechanistic replication of a-synucleinopathy in the rodent CNS may deviate from the actual trigger(s) of the human disease, as the etiology of the disease remains elusive. In combination, the limited knowledge of the initiation of MSA, the inter-species neurobiological differences between rodents and humans, and the common deviations in study design (drug dose, time of therapy initiation and relative duration, readouts, etc.) between preclinical studies and clinical trials may contribute in part to the still disappointing outcomes of clinical trials in MSA. The lack of relevant biomarkers, which may serve to monitor the biological activity of any intervention in relation to slowing disease progression in MSA patients, has been critical. Recent reports on possible progression biomarkers like neurofilament light chain, and neuroimaging features including advances in a-syn imaging, will be crucial to provide relevant measures of target engagement in the near future. In summary, the models with targeted oligodendroglial overexpression of wild-type human a-syn provide a good replication of MSA-like neurodegeneration induced by oligodendroglial a-synucleinopathy.
The PLP-a-syn mouse offers the most complete mechanistic recapitulation of the MSA phenotype, with neurogenic bladder dysfunction, cardiovascular and respiratory deficits, REM sleep abnormalities, and progressive motor disability with underlying striatonigral degeneration and increased susceptibility of the olivopontocerebellar system to exogenous stress factors like oxidative or proteolytic stress. The limitations of the model relate to the differences in neurobiology between mice and humans, as well as the lacking information on the initiating event(s) in human MSA, which makes their replication obscure in any of the current models. The PLP-a-syn mouse serves well to test the target engagement and efficacy of therapies targeting a-syn pathology and downstream pathways including neuroinflammation, disrupted neurotrophic support, and others (Fig.); however, its positive predictive validity has not yet been confirmed. Future development of MSA models involving human disease-specific cells may be needed to overcome the existing limitations and provide better preclinical tools for biomarker and treatment development for MSA. Below is the link to the electronic supplementary material. Supplementary file1 (PDF 778 KB)
Expansion of the sagittal suture induces proliferation of skeletal stem cells and sustains endogenous calvarial bone regeneration
35769bb8-a6c7-439b-b694-34833a49f30f
10120053
Suturing[mh]
Single cell RNA-seq Profiling Identifies a Significantly High Number of cSSCs in the Calvarial Sutures of 4-d-old Mice Versus Older Mice. To evaluate and identify different populations of cells in the calvarial sutures of young and older mice, we performed scRNA-seq analyses. We utilized sutures explanted from 4-d-old mice, an age when the suture is actively and naturally expanding, and sutures explanted from skeletally mature 2-mo-old, 4-mo-old, and 14-mo-old mice, representing ages when the sutures are functionally closed and have matured into synarthroses. In sutures of 4-d-old mice, an unbiased cluster analysis of all the isolated cells identified various cell clusters, which include cell types of the hematopoietic, epithelial, and osteogenic lineages (refer to Materials and Methods for the methodology utilized for the classification of the clusters). Three clusters, named osteogenic cells clusters 1, 2, and 3, make up all the cells of the osteogenic lineage. The analysis of the calvarial sutures of 2-mo-old, 4-mo-old, and 14-mo-old mice also reveals the presence of cells of the hematopoietic lineage and of the osteogenic lineage. However, at these ages, only one cluster containing a small number of cells could be identified as a constituent of the osteogenic lineage. A quantification of the total osteogenic cells across all ages (indicated as a percentage of the total cells evaluated, representing a more accurate quantitative parameter since the final number of cells used in each scRNA-seq assay varies from experiment to experiment) confirms that in the sutures of 4-d-old mice the osteogenic cells are ~44% of all the cells, whereas in 2-mo-old mice they decrease to ~5%, and in 4-mo-old and 14-mo-old mice they become less than 0.3% of all cells. Conversely, non-osteogenic cells increase with aging, spanning from ~56% in 4-d-old mice to ~95% in 2-mo-old mice, up to more than 99% in 4-mo- and 14-mo-old mice. We then interrogated the scRNA-seq data to distinguish and quantify, across the four different ages, cells expressing Prx1/Prrx1, Ctsk, Axin2, and Gli1, the four cSSC markers independently identified by different groups of investigators. The cluster analysis indicates that these four cSSC markers identify cells within the osteogenic cell clusters at all ages, and that these cells diminish significantly as the mice age. The quantitative analysis more specifically shows that in sutures of 4-d-old mice a significant fraction of the total cells express Prx1/Prrx1 (~41%) and Ctsk (~48%), whereas cells expressing Gli1 and Axin2 are less represented (~10% and ~8%, respectively). In contrast, Prx1/Prrx1-, Ctsk-, Axin2-, and Gli1-expressing cells are almost absent in the functionally closed sutures of all the older mice (values spanning from ~1 to 2% in 2-mo-old mice to 0.2 to 1% in 14-mo-old mice), confirming that cSSCs diminish significantly in older mice. Collectively, these data indicate that expression of the Prx1/Prrx1, Ctsk, Gli1, and Axin2 markers identifies cells with similar gene expression profiles, suggesting that these cells are all representative of the cSSC population. Furthermore, the data show a correlation between the elevated number of cSSCs and the actively expanding sutures of 4-d-old mice. Overlapping Expression of Prx1/Prrx1, Ctsk, Gli1, and Axin2 is Observed in Proliferating cSSCs.
To confirm that expression of Prx1/Prrx1, Ctsk, Gli1, and Axin2 identifies cells representative of the same cSSC population, we performed a re-clustering analysis of the osteogenic cells of the sutures of 4-d-old mice. This analysis identified five different subclusters of cells: 1) progenitor cells (PC), 2) proliferative osteogenic cells (PRO), 3) osteoblast precursors (OP), 4) mature osteoblasts (MO), and 5) osteocytes (OC) (refer to Materials and Methods for the methodology utilized for the classification of the subclusters). Expression of Prx1/Prrx1, Ctsk, Gli1, and Axin2 significantly overlaps with, and is mainly detectable within, the progenitor cells and, in part, the osteoblast precursors. A dot plot showing the quantification of the expression of Prx1/Prrx1 in each of the subclusters confirms that Prx1/Prrx1 is highly expressed in the progenitor cell subcluster (SI Appendix, Fig. S1). Similar trends are observed in the dot plots showing the quantification of the expression of Axin2, Ctsk, and Gli1 (SI Appendix, Fig. S2). To validate the subclustering analysis and to assess the grade of differentiation of the various subclusters, we performed a pseudotime analysis and visualized the distribution of the subclusters along the obtained trajectory. This analysis identified the progenitor cell subcluster as the earliest subcluster, while the mature osteoblast and osteocyte subclusters are projected at later time points, as the latest subclusters. When we overlapped the expression of Prx1/Prrx1 with the pseudotime trajectory, we found Prx1/Prrx1 to be mainly expressed within the early subclusters. Similar trends were observed for the expression of Ctsk, Gli1, and Axin2 (SI Appendix, Fig. S3). A quantitative evaluation of the Ctsk-, Gli1-, and Axin2-expressing cells within the early subclusters (progenitor cells and proliferative osteogenic cells) shows that the vast majority of these cells (from 92% to 100%) co-express Prx1/Prrx1 (SI Appendix, Table S1). Then, confirming that Prx1/Prrx1, Ctsk, Gli1, and Axin2 are most represented in early differentiation stages of the osteoblastic lineage, we compared their expression to the expression of genes that are associated with more differentiated cells of the osteoblastic lineage. Data show that these genes (Alpl, Runx2, Ibsp, Sp7, Col1a1, and Osteocalcin) are mainly expressed further along the trajectory. Since activation of Wnt signaling induces osteoblastic differentiation in cSSCs, we further evaluated the level of expression of β-catenin and Tcf7 across the subclusters (SI Appendix, Fig. S4). This analysis showed reduced expression of β-catenin and almost undetectable expression of Tcf7 in the progenitor cell subcluster, confirming that Wnt signaling is down-regulated in undifferentiated cells of the 4-d-old sutures and up-regulated as cells move along their differentiation path. These data indicate that the expression of Prx1/Prrx1, Ctsk, Gli1, and Axin2 identifies a population of undifferentiated cells within the sutures of 4-d-old mice. Finally, to assess whether the undifferentiated cells of the sutures are proliferating, we interrogated the scRNA-seq analysis to visualize the expression of Birc5, Ccnd1, Espl1, and Ki67, four genes commonly utilized to assess cell proliferation.
This analysis confirms that expression of the proliferation genes is highly detected within Prx1/Prrx1, Axin2, Ctsk, and Gli1 expressing cells of the 4-d-old sutures ( SI Appendix, Fig. S5 ). Collectively, these results indicate that expression of Prx1/Prrx1, Axin2, Ctsk, and Gli1 identifies undifferentiated and proliferating osteogenic cells within the expanding calvarial sutures of 4-d-old mice. Expansion of the Sagittal Suture Induces Proliferation of the cSSCs. Since the naturally expanding sutures of the 4-d-old mice are enriched in cSSCs, we next hypothesized that a mechanical expansion of a functionally closed suture of skeletally mature 2-mo-old (8-wk-old) mice can reverse engineer what naturally occurs in 4-d-old mice. More specifically, we hypothesized that the mechanical expansion can induce proliferation of the cSSCs. To test this hypothesis, we utilized 2-mo-old mice, expanded their sagittal suture as previously described ( ), and collected the tissue samples after 7 d of expansion ( ). A histological examination confirmed that, after 7 d, the sagittal suture is effectively expanded ( ). Then, we utilized scRNA-seq to analyze the cellular composition and the gene expression profile of cells of the non-expanded sutures (control, mock surgery) and the expanded sutures (test). As indicated by the UMAPs, the cell cluster analysis identifies a cluster of osteogenic cells in both the expanded and the non-expanded sutures ( ). However, compared to the osteogenic cells of the non-expanded sutures, the osteogenic cells of the expanded sutures quadruple after 7 d of expansion ( ). By contrast, after expansion, non-osteogenic cells diminish only by a small percentage (from approximately 99% to 95%) ( ). More specifically, we found that cells expressing Prx1/Prrx1 quadruple (from 1.4% to 5.2%), cells expressing Ctsk triple (from 2.3% to 6.9%), and cells expressing Axin2 or Gli1 double (from 0.8% to 1.6% and from 0.1% to 0.2%, respectively) ( ). Then, to test whether the observed cell increase was specific to the cSSCs expressing Prx1/Prrx1, Ctsk, Axin2, and Gli1, we also quantified cells expressing Runx2, Sp7 ( Osterix ), and Osteocalcin ( Bglap ), which represent cells more differentiated along the osteoblastic lineage. The data indicate that, contrary to the cSSCs, the more differentiated cells decrease in expanded sutures, with cells expressing Osteocalcin decreasing to almost one-third of the original number ( ). An independent experiment, utilizing intravital microscopy (IVM) to quantify Prx1/Prrx1 expressing cells by means of co-expression of enhanced green fluorescent protein (EGFP) in non-expanded and expanded sagittal sutures of 2-mo-old Prx1-creER-EGFP mice ( ), confirms the quantitative scRNA-seq analysis. In fact, when the number of green fluorescent cells was quantified in three different locations along the non-expanded and expanded sagittal sutures of 2-mo-old mice, we found that green fluorescent cells quadruple upon expansion ( ). Finally, to test whether the increase in the number of cSSCs during suture expansion is due to proliferative activity, we utilized the scRNA-seq data and quantified the expression of Birc5, Ccnd1, Espl1, and Ki67 in cells of the expanded and of the non-expanded sutures ( ). The data indicate that, compared to Prx1/Prrx1 expressing cells of the non-expanded sutures, Prx1/Prrx1 expressing cells of the expanded sutures present with higher levels of expression of all four genes ( ).
Similar results were observed in the Ctsk, Gli1, and Axin2 expressing cells ( SI Appendix, Fig. S6 ). By contrast, more differentiated osteoblastic cells expressing Runx2, Sp7, or Osteocalcin presented similar levels of expression of Birc5, Ccnd1, Espl1, and Ki67 in both non-expanded and expanded sutures ( ). An independent evaluation, using quantitative PCR and in situ hybridization, performed 2 d after expansion also confirmed that Prx1/Prrx1 expressing cells proliferate during the mechanically induced expansion of the sutures ( SI Appendix, Fig. S7 ). Importantly, since Rindone et al. ( ) recently reported that the creation of a subcritical defect of 1 mm in diameter in the parietal bone can stimulate a significant expansion of the cSSCs, we further analyzed whether the creation of the two 0.25-mm-diameter “anchoring holes” for the expansion device could, per se, induce any significant increase in the number of the cSSCs. To this end, we compared the number of osteogenic and non-osteogenic cells of the 2-mo-old non-surgery mice (as in ) with the number of osteogenic and non-osteogenic cells of the 2-mo-old mock surgery mice (no expansion device inserted, as in ) and observed no increase in the percentage of the osteogenic cells after the mock surgery. This is also the case for the Prx1/Prrx1, Ctsk, Gli1, and Axin2 expressing cells (compare with ). Overall, these data indicate that mechanical expansion of the sagittal sutures increases the number and induces proliferation of the cSSCs. cSSCs of the Mechanically Expanded Sutures and cSSCs of the Naturally Expanding Sutures Present with Similar Gene Expression Profiles. To assess the similarity between the cSSCs of the mechanically expanded sutures of the 2-mo-old mice and the cSSCs of the naturally expanding sutures of the 4-d-old mice, we first evaluated the cluster analysis of the mechanically expanded sutures. This analysis shows that, similar to what we observed for the cells of the 4-d-old sutures ( ), cells expressing Prx1/Prrx1, Ctsk, Gli1, and Axin2 of the mechanically expanded sutures are almost exclusively located in the osteogenic cells cluster ( SI Appendix, Fig. S8 ). Then, we performed a re-clustering analysis of the mechanically expanded osteogenic cells. This analysis identified only two different subclusters of cells: the progenitor cells subcluster and the osteoblast precursors subcluster ( ). Expression of Prx1/Prrx1, Ctsk, Gli1, and Axin2 is detectable in both subclusters ( ). To assess the degree of differentiation of the two identified subclusters, we performed a pseudotime analysis and visualized the distribution of the subclusters along the obtained trajectory ( ). Similar to what we found in the naturally expanding sutures of 4-d-old mice, we identified the progenitor cells as the earliest subcluster, while the osteoblast precursors are projected at a later time point ( ). As expected, when we overlapped the expression of Prx1/Prrx1 with the pseudotime trajectory, we found Prx1/Prrx1 to be expressed in both subclusters ( ), whereas Ibsp, Col1a1, and Osteocalcin, markers of more differentiated osteoblastic cells, are almost undetectable ( ).
Confirming that, in the mechanically expanding sutures, the expression of Prx1/Prrx1 overlaps with the expression of Ctsk, Gli1, and Axin2, a quantitative evaluation of the cells expressing these genes in the progenitor cells subcluster shows that the vast majority of these cells (from 96% to 100%) co-express Prx1/Prrx1 ( SI Appendix, Table S2 ). A similar result was observed in the naturally expanding sutures ( SI Appendix, Table S1 ). Finally, to confirm that the progenitor cells of the mechanically expanding sutures are similar to the progenitor cells of the naturally expanding sutures, we repeated the subclusters analysis combining the data from both samples ( ). First, this analysis does not identify any additional subcluster, indicating that there is consistency of clustering between the two samples ( ). Second, the data indicate that the progenitor cells and the osteoblast precursors of the mechanically expanding sutures overlap with the progenitor cells and the osteoblast precursors of the naturally expanding sutures ( ). Collectively, these data indicate that cSSCs of the mechanically expanding sutures present with a gene expression profile that resembles that of the cSSCs of the naturally expanding sutures of 4-d-old animals. Suture Distraction Enhances Regeneration of Calvarial Critical Size Bone Defects. Since the calvaria of newborn mice is enriched in cSSCs, which resemble, at least in terms of gene expression, the cSSCs of the mechanically expanded sutures, and since newborn mice can fully regenerate calvarial bone defects, we hypothesized that the mechanical expansion of the functionally closed sagittal suture of 2-mo-old (8-wk-old) mice could reverse engineer the spontaneously expanding suture of the newborn mice and could, consequently, sustain the complete regeneration of a c-CSD created in the parietal bone of these skeletally mature mice. To test this hypothesis, simultaneously with the expansion of the sagittal suture, we created a c-CSD within the parietal bone of the mouse calvaria, 3 mm lateral to the sagittal suture and 1 mm mesial to the lambdoid suture ( SI Appendix, Fig. S9 ). In control mice, the two small holes that would hold the expansion device, as well as a c-CSD, were created in the same locations as in the test mice, but no expansion device was inserted. Sixty days after surgery, the c-CSDs of the control group showed a limited amount of regenerated bone ( ). On the contrary, the c-CSDs of the test group regenerated up to ~100% of the missing bone ( ). Microcomputed tomography (µCT) quantification confirmed that the bone volume (BV) and the bone volume fraction (bone volume over total volume, BV/TV) of the c-CSDs created simultaneously with the suture expansion are significantly higher than the BV and BV/TV of the c-CSDs created in control mice ( ). As expected, since we previously showed that progeny of the cSSCs expressing Prx1/Prrx1 is responsible for regeneration of calvarial bone defects ( ), a lineage tracing analysis performed in Prx1-creER-EGFP;tdTOMATO mice confirmed that the defect in the test mice is regenerated by these cells ( SI Appendix, Fig. S10 ). We conclude that expansion of a functionally closed suture, by means of induced proliferation of the preexisting cSSCs, can sustain regeneration of calvarial bone defects otherwise unable to heal. Importantly, since regeneration does not occur in 10-mo-old mice ( SI Appendix, Fig.
S11 ), an age in between the 4-mo-old and the 14-mo-old mice analyzed by scRNA-seq, we also propose that the suture expansion-sustained regeneration can occur only when a certain minimum number of preexisting cSSCs is present within the suture. Wnt Signaling Regulates Prx1/Prrx1 Expressing Cells of the Expanding Suture during the Regeneration of the Calvarial Critical Size Bone Defects. Since previous studies in our laboratory have shown that Prx1/Prrx1 expressing cells are Wnt-responsive cells ( , ), and since the scRNA-seq analysis of the expanding sutures of 4-d-old mice shows that Wnt signaling is up-regulated in differentiating cells of the osteoblastic lineage ( SI Appendix, Fig. S4 ), we next sought to investigate whether Wnt signaling in Prx1/Prrx1 expressing cells influences the suture expansion-sustained regeneration of the c-CSDs. To this end, we utilized 2-mo-old Prx1-creER-EGFP;β-catenin mice to conditionally inactivate β-catenin, and therefore canonical Wnt signaling, by means of cre recombinase (creER) in Prx1/Prrx1 expressing cells during expansion and during the regeneration process. As indicated by the µCT rendering and by the histological analysis of the defects ( ), we found that the tamoxifen-induced genetic blockade of Wnt signaling in Prx1/Prrx1 expressing cells significantly impairs the capacity of the c-CSD to regenerate during expansion of the suture. The remodeling of the sagittal suture upon expansion was also impaired by the blockade ( ). µCT quantification of the regenerated bone in the c-CSDs of β-catenin inactivated mice revealed a reduction of the regenerated bone when compared to mice with active canonical Wnt signaling (control), with significant differences in BV and BV/TV ( ). The effective inactivation of Wnt signaling in Prx1/Prrx1 expressing cells was validated by analyzing the gene expression of Axin2 (a Wnt target gene) and β-catenin in FAC-sorted EGFP+ cells obtained from tamoxifen-treated Prx1-creER-EGFP;β-catenin mice ( ). On the basis of these data, we conclude that canonical Wnt signaling is required during the suture expansion-sustained bone regeneration. PRX1/PRRX1 Expressing Cells are Located in the Expanding Human Calvarial Sutures. To test the translational significance of the findings observed in the mouse model, we tested whether PRX1/PRRX1 is expressed in cells of the human sagittal suture. Previous studies in humans have shown that mutation of PRX1/PRRX1 or deletion of chromosome 1q23.3-q25.1 (the portion of chromosome 1 that carries the human PRX1/PRRX1 gene) results in pre- and postnatal growth retardation, with microcephaly, micrognathia, and other skeletal malformations ( – ), thus suggesting that PRX1/PRRX1 may be expressed in the human calvarial sutures. Confirming that PRX1/PRRX1 is expressed in the human calvarial sutures, in situ hybridization in the sagittal suture of a human fetus at 80 d post-conception shows expression of PRX1/PRRX1 in cells across the suture ( ), giving additional evidence of their role in calvarial development. To further confirm that PRX1/PRRX1 is highly expressed in cells of the human calvarial sutures, we performed quantitative PCR analysis of the PRX1/PRRX1 gene in human primary cells obtained from the parietal bone of a human fetus (180 d post-conception) and in human primary cells obtained from the fetal sagittal sutures of six different individuals (at various ages, from 79 d to 108 d post-conception).
Results indicate that, compared to the parietal bone cells, sagittal suture cells tend to express higher levels of PRX1/PRRX1 (in five out of the six tested samples) ( ). We conclude that PRX1/PRRX1 is expressed in cells of the expanding sutures of the human calvaria.
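Relative expression comparisons of this kind are commonly quantified with the comparative Ct (2^−ΔΔCt) method; the text does not state the exact scheme used, so the following is an illustrative sketch with hypothetical Ct values and a hypothetical housekeeping reference gene.

```python
def fold_change_ddct(ct_gene, ct_ref, ct_gene_cal, ct_ref_cal):
    """Relative expression by the comparative Ct (2^-ddCt) method."""
    ddct = (ct_gene - ct_ref) - (ct_gene_cal - ct_ref_cal)
    return 2.0 ** -ddct

# Hypothetical Ct values: PRX1/PRRX1 normalized to a housekeeping gene,
# with parietal bone cells as the calibrator sample.
suture = {"ct_gene": 24.1, "ct_ref": 17.9}
bone   = {"ct_gene": 27.0, "ct_ref": 18.1}

fold = fold_change_ddct(suture["ct_gene"], suture["ct_ref"],
                        bone["ct_gene"], bone["ct_ref"])
print(f"PRX1/PRRX1 in suture vs. parietal bone cells: {fold:.1f}-fold")
```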
Recent studies in the field of craniofacial bone biology have identified the calvarial sutures as reservoirs of skeletal stem cells expressing Prx1/Prrx1, Ctsk, Gli1, or Axin2 ( – ).
Specifically, we have described the presence of postnatal skeletal stem cells expressing Prx1/Prrx1 within the calvarial sutures and their requirement for calvarial bone regeneration ( ), showing that the regeneration process is abrogated when a significant number of Prx1/Prrx1 expressing cells are ablated by means of targeted expression of Diphtheria toxin. By transplanting sutures carrying traceable fluorescent Prx1/Prrx1 expressing cells, we also showed that calvarial bone regeneration occurs by means of their progeny. Building upon these studies, we now show for the first time that there is a significant overlap of expression of Prx1/Prrx1 in Ctsk, Gli1, and Axin2 expressing cells, indicating that expression of Prx1/Prrx1, Axin2, Ctsk, and Gli1 identifies the cSSCs population of the calvarial sutures. While most current approaches to bone regenerative therapies focus on transplantation of bone-competent cells or on implantation of osteoconductive or osteoinductive biomaterials ( ), here we posited that such approaches, which are not exempt from health risks, may not be necessary if the regenerative potential of the native cSSCs is fully exploited. Since in children up to 2 y of age, with expanding and osteogenically active calvarial sutures, regeneration of calvarial bone defects occurs naturally and without therapeutic aids ( , ), we hypothesized that the expanding sutures, with their high content of cSSCs, are responsible for this extraordinary regeneration potential. Therefore, using the skeletally mature 2-mo-old (8-wk-old) mouse calvaria as a model, we aimed to demonstrate that if an otherwise functionally closed suture is artificially expanded, its content of cSSCs increases, and complete regeneration of a bone defect, even one critical in size and remotely located from the suture, can occur. A scRNA-seq analysis of the cells of the calvarial sutures, performed in mice of different ages, from 4-d-old up to 14-mo-old mice, demonstrated that the naturally expanding sutures of the 4-d-old mice are highly enriched with cSSCs. Subsequently, we demonstrated that a tensile force applied to a mature, functionally closed suture can induce an enrichment in the number of the cSSCs and can sustain the regeneration of calvarial bone defects otherwise unable to regenerate. Therefore, the artificial expansion of a functionally closed, skeletally mature suture can be utilized to harness the cSSCs and foster regeneration of calvarial bone defects. Importantly, our studies also show that the suture expansion strategy has limitations, since it is not effective in 10-mo-old mice, when the number of resident cSSCs is expected to be limited. This limitation probably arises because the cSSCs present in the 10-mo-old sutures are either too few or too senescent to proliferate to a number sufficient to sustain the regeneration process. Thus, the suture expansion-sustained regeneration strategy has a limited temporal window of efficacy, although it is still effective in skeletally mature 2-mo-old mice. Of interest in the present studies is the fact that the regeneration of a c-CSD occurs even when the c-CSD is positioned at a considerable distance from the suture. In fact, Park et al.
( ) have shown that the healing capacity of a c-CSD in the mouse calvaria decreases with increasing distance from the sutures: a c-CSD 1 mm from the sagittal suture is able to regenerate ~50% of the missing bone, while a c-CSD 2 mm from the suture is able to regenerate only ~20% of the missing bone ( ). Thus, the suture expansion-sustained regeneration of a defect positioned 3 mm from the expanding sagittal suture and 1 mm from the lambdoid suture is quite remarkable. The regeneration of remotely located defects also distinguishes the suture expansion-sustained regeneration from traditional distraction osteogenesis, whereby bone formation is limited to the distraction site. While similar molecular mechanisms may regulate the osteogenic activity within the tension sites of the expanded sutures and the distracted bone segments, the ability of the cSSCs, or their progeny, to reach a distant defect during suture expansion is regulated by cell migration mechanisms not necessarily activated during distraction osteogenesis. Regardless of the mechanisms responsible for the cell migration and the remote regeneration process, the current study demonstrates that they depend on the activation of Wnt signaling within the Prx1/Prrx1 expressing cells during the regeneration process. This conclusion is supported by existing studies showing that Wnt signaling is involved in craniofacial development ( ), as well as in calvarial suture homeostasis ( , , ) and calvarial bone regeneration ( ), and that it is activated by tensile forces applied to teeth during orthodontic treatments ( ) and required for distraction osteogenesis of long bones ( ). With the goal of investigating the translational significance of the mouse studies, we investigated whether expression of PRX1/PRRX1 could also be detected in cells of the human calvarial sutures. We found that, similar to cells of the mouse sutures, cells of the human fetal sutures (in situ hybridization) and primary cells derived from the fetal sutures (in vitro qPCR assays) express PRX1/PRRX1. This result, along with the documented involvement of mutations of the PRX1/PRRX1 gene in craniofacial malformations ( , , ), indicates that PRX1/PRRX1 expressing cells have a significant role in craniofacial development and might, consequently, have a role in the regeneration of human calvarial bone defects as well. The translational significance of the present studies is considerable, since we showed that, at least in mice, the suture expansion-sustained bone regeneration process does not require transplantation of osteogenic tissue or implantation of any biomaterial or scaffold within the bone defects. Since bone distraction devices are commonly utilized in humans to correct craniosynostosis and other craniofacial malformations ( , ), and since resorbable devices have recently been developed to eliminate the need for a second operative procedure for hardware removal ( ), clinical studies could be performed to test this bone regeneration approach. Importantly, given the age-associated limitations that we observed in mice, future clinical studies should assess these limitations in humans as well. Thus, calvarial sutures could represent targetable autotherapy entities whose local stimulation may sustain regeneration of otherwise non-healing calvarial bone defects.
Finally, expanding on the calvarial clinical application, since Prx1/Prrx1 expressing cells are present within the periosteum of long bones and significantly contribute to the healing of long bone fractures ( , ), one may translate the results of the present calvarial studies to the regeneration of defects in long bones. For instance, special minimally invasive devices could be engineered to deliver a tensile tenting force to the periosteum of the long bones to induce activation of the periosteum and sustain the regeneration of otherwise non-healing fractures, even when they are remotely located (e.g., expansion of the diaphyseal periosteum to promote healing of femoral head fractures). In conclusion, our studies may lead to the development of more effective bone-regenerating autotherapies for humans, whereby the endogenous healing capacity of each patient is fully exploited by stimulating the stem cell niches, such as the calvarial sutures, to harness their content of SSCs and sustain bone regeneration, even in remotely located sites. A summary of the Methods is reported below. Please refer to SI Appendix for additional details. Animals. Experiments were conducted in compliance with the Guide for the Care and Use of Laboratory Animals at the University of Pittsburgh School of Dental Medicine (IACUC Protocol #: 20066890) and at the Harvard School of Dental Medicine (IACUC Protocol # IS00000535). To optimize the quantification of the fluorochrome expression and minimize the signal noise observed in female mice ( ), only Prx1-creER-EGFP male mice were utilized for the IVM quantification studies. Accordingly, only male mice (C57BL/6) were also utilized for the scRNA-seq studies. Male and female mice (C57BL/6 and Prx1-creER-EGFP;β-catenin) were randomly distributed in each group for the bone regeneration studies. Single Cell RNA Sequencing. Cells were isolated using collagenase digestions, and scRNA-seq was performed using the Chromium Next GEM Single Cell 3′ GEM, Library & Gel Bead Kit v3.1 (10× Genomics, USA) following the manufacturer's guidelines. The subsequent data analysis, including statistical evaluations, was performed using Partek® Flow® software, v10.0 (Partek, Inc.). Clusters and subclusters were identified first by using an unbiased bioinformatic approach, in which we attributed cell identities using the Partek® Flow® software's "Biomarker" function, obtaining a list of the top 25 most expressed genes for each cluster or subcluster ( SI Appendix, Tables S3–S10 ). Second, we compared the expression of the cell identifiers obtained by Partek® with preexisting and published data ( , ). Mouse Suture Expansion Surgery and Creation of Calvarial Bone Defects. Mice were anesthetized, a surgical incision was performed to expose the calvarial bones, and the expansion device was applied. Then, the incision was closed to fully cover the expansion device ( SI Appendix, Fig. S9 ). The control mock surgery (non-expanded group) replicated every step of the surgical procedure, but the expansion device was not inserted. For the bone regeneration studies, immediately after insertion of the suture expander, a defect 2.0 mm in diameter was manually created in the left parietal bone. In Vivo Imaging and Quantification of Mouse Prx1/Prrx1 expressing Cells. Intravital microscopy and Prx1-creER-EGFP +/− transgenic male mice were utilized for in vivo imaging and in vivo quantification of green fluorescent Prx1/Prrx1 expressing cells according to a methodology previously described ( , ).
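The cluster-identification step described in the Single Cell RNA Sequencing subsection above (the top 25 most expressed genes per cluster) can be approximated outside Partek Flow with a simple group-wise ranking. A minimal pandas sketch, using a hypothetical toy expression table and N = 2 for brevity:

```python
import pandas as pd

# Hypothetical cells-by-genes expression table with a cluster label column.
df = pd.DataFrame({
    "cluster": ["osteo1", "osteo1", "hemato", "hemato"],
    "Prrx1": [5.1, 4.2, 0.0, 0.1],
    "Ctsk":  [3.3, 2.9, 0.2, 0.0],
    "Ptprc": [0.0, 0.1, 6.4, 5.9],
})

# Mean expression per cluster; the top-N genes per cluster serve as
# candidate identity markers (N=25 in the paper; N=2 in this toy example).
means = df.groupby("cluster").mean(numeric_only=True)
top_markers = {c: means.loc[c].nlargest(2).index.tolist() for c in means.index}
print(top_markers)
```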
In Situ Hybridization in Mouse Sagittal Sutures. To perform in situ hybridization, the RNAscope Multiplex Fluorescent Reagent Kit V2 (320850, RNAscope®, Advanced Cell Diagnostics, Inc., Newark, CA) was used according to the manufacturer's recommendations. qPCR of Mouse Prx1/Prrx1 expressing Cells. EGFP+ cells, co-expressing Prx1/Prrx1, were sorted (average of 100 to 150 cells/animal), and gene expression analyses were performed using the Single Cell-to-CT kit (Thermo Fisher Scientific, City and State). Micro-CT Analyses of Mouse Cranium. Mouse skulls were scanned using a Scanco μCT40 scanner (Scanco Medical AG, Basserdorf, Switzerland). Bone segmentation was conducted at a threshold of 300 (scale: 0 to 1,000), and the volume of interest (VOI) investigated included the 2-mm segmental defect and an additional 0.5 mm in the peripheral regions. Inducible Inactivation of Canonical Wnt Signaling in Mouse Prx1/Prrx1 expressing Cells. Prx1-creER-EGFP +/− ;β-catenin +/+ mice and Prx1-creER-EGFP +/− ;β-catenin fl/fl mice (identified in the figures as Prx1-creER-EGFP +/− ;β-catenin −/− mice to indicate the cre recombinase inactivation of the β-catenin gene) were injected with tamoxifen (intraperitoneally, 40 mg/kg in sterile corn oil) 5 d before and 5 d after surgery. In Situ Hybridization of Human Sagittal Sutures. A de-identified specimen of human fetal calvarial tissue (age 80 d post-conception) was obtained from the Birth Defects Research Laboratory at the University of Washington. The RNAscope Multiplex Fluorescent Reagent Kit V2 (320850, RNAscope®, Advanced Cell Diagnostics, Inc., Newark, CA) was used according to the manufacturer's recommendations. qPCR of Human Sagittal Suture Cells. De-identified specimens of human fetal calvarial tissue (age 79 to 108 d post-conception) were obtained from the Birth Defects Research Laboratory at the University of Washington. Immediately after collection, the parietal bone tissue or the sagittal suture tissue was dissected and cells were isolated as previously described ( ). After culturing, cells were dissociated from their respective plates, and RNA was immediately isolated with the High Pure miRNA Isolation Kit (Roche, Basel, Switzerland) per the manufacturer's instructions. qPCR reactions were carried out using the TaqMan gene expression analysis kit.
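The BV and BV/TV quantities reported in the Micro-CT subsection above follow directly from voxel counting in the thresholded volume of interest. A minimal numpy sketch, assuming a reconstructed image stack on the 0-to-1,000 intensity scale; the voxel size is a hypothetical scanner setting, not a value from the paper.

```python
import numpy as np

def bv_tv(stack, threshold=300, voxel_mm=0.012):
    """Bone volume (mm^3) and BV/TV from a uCT image stack.

    `stack` is a 3D numpy array already cropped to the volume of interest
    (the 2-mm defect plus a 0.5-mm peripheral margin, as in the paper).
    """
    bone = stack >= threshold            # segment bone at the reported threshold
    bv = bone.sum() * voxel_mm ** 3      # bone volume, mm^3
    tv = stack.size * voxel_mm ** 3      # total volume of the VOI, mm^3
    return bv, bv / tv

# Toy example: random intensities standing in for reconstructed uCT data.
rng = np.random.default_rng(0)
stack = rng.integers(0, 1000, size=(50, 50, 50))
bv, ratio = bv_tv(stack)
print(f"BV = {bv:.3f} mm^3, BV/TV = {ratio:.2f}")
```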
Statistical Analysis. Student's t test was utilized to identify statistically significant differences among the analyzed groups of mice or genes.
Hydrogen stable isotope probing of lipids demonstrates slow rates of microbial growth in soil
10af5cea-dff6-4648-b8b5-83addcf88c02
10120080
Microbiology[mh]
Measuring the Growth Rates of Soil Microorganisms. We measured ²H incorporation into soil microbial PLFA components at three time points during a 7-d incubation period in the presence of a dilute heavy water (²H₂O) tracer. We detail these experiments in Materials and Methods. In brief, we incubated three soils (a sub-alpine conifer forest, a prairie grassland, and an alpine tundra soil) in the presence of 5,000 ppm (δ²H VSMOW = 31,357‰) ²H₂O, extracted intact PLFAs, and measured their abundances by gas chromatography flame ionization detection (GC-FID). The structures of PLFAs were determined by gas chromatography mass spectrometry (GC-MS), with PLFA isotopic compositions measured by gas chromatography isotope ratio mass spectrometry (GC-IRMS). We report isotopic values as fractional abundance (²F) in units of ppm, where ²F = ²H/(²H + ¹H). We observed ²H incorporation into fatty acids ranging from 0 to 2,000 ppm in the presence of the 5,000-ppm ²H₂O tracer ( ). We inferred the growth rates and apparent generation times of microbial lipids using a previously derived relationship ( ) ( SI Appendix, Supplementary Text ). In short, microbial growth (µ) is a logarithmic function of incubation time and the fractional hydrogen isotopic enrichment of new biomass relative to that of biomass at the start of the incubation. We report growth rates (d⁻¹) both at the compound-specific level and as assemblage-level (i.e., community-level) means weighted by compound abundance for each soil. We also calculate generation times (days), a derived statistic estimating the time for complete reproduction of living biomass during clonal growth ( Materials and Methods and SI Appendix, Supplementary Text ). While the majority of the PLFAs detected are sourced from bacteria ( ), PLFAs attributable to fungi ( , ) are also reported. Rates of Microbial Growth in Soil Are Slow. The grassland and conifer forest soils exhibited abundance-weighted mean microbial growth rates of 0.0358 d⁻¹ and 0.0489 d⁻¹, respectively (corresponding to mean generation times of 19.3 d and 14.1 d). Growth was far slower (0.0154 d⁻¹) in the alpine tundra (mean generation time of 44.9 d). The alpine tundra site experiences the lowest mean annual temperature (MAT) of all the sites studied here, with a MAT of −3 °C ( ), and is characterized by short (30 to 90 d) and cool growth seasons, during which soil respiration begins immediately after soil begins to thaw underneath the seasonal snowpack ( ). The generation times we observed under conditions analogous to the warm season (20 °C) suggest that, even in the warm season, the majority of the microbial community at this site may not complete a single cell cycle. Differences observed between specific compounds in all soils indicate that different taxonomic groups within a given soil may exhibit growth rates between 0.1629 and 0.0017 d⁻¹, corresponding to generation times between 4.3 and 402.1 d ( Dataset S4 ). This vast range indicates highly variable growth rates across different constituents of the soil microbiome ( ). The majority of soil microorganisms in our study appear to be growing at extremely slow rates when compared to the maximum potential growth rates of many bacteria grown in culture (where generation times typically range from <1 to 100 h) ( ).
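The growth rates above derive from an exponential-labeling model in which lipid ²H enrichment relaxes toward the composition of newly synthesized biomass. The following is a minimal sketch of that calculation, assuming new biomass acquires a fixed fraction a of its lipid hydrogen from water (the assimilation factor); the value of a and all measured abundances here are hypothetical, and the exact published relationship may include additional corrections.

```python
import numpy as np

def growth_rate(F_t, F_0, F_water, a, t_days):
    """Apparent growth rate (d^-1) from lipid 2H fractional abundances (ppm).

    Assumes fully labeled biomass approaches F_eq = a * F_water and that
    lipid 2H enrichment follows F(t) = F_eq + (F_0 - F_eq) * exp(-mu * t).
    """
    F_eq = a * F_water
    return np.log((F_eq - F_0) / (F_eq - F_t)) / t_days

F_water = 5000.0   # ppm 2H in the tracer-amended soil water
F_0 = 150.0        # ppm, natural-abundance starting composition
F_t = 400.0        # ppm, measured after incubation
mu = growth_rate(F_t, F_0, F_water, a=0.7, t_days=7)
print(f"mu = {mu:.4f} d^-1, generation time = {np.log(2) / mu:.1f} d")
```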
Although it is not surprising that the maximal growth of bacterial isolates under culture conditions does not represent in situ growth rates in soil, our finding that average, abundance-weighted generation times in soil microbial communities range from 14 to 45 d suggests that most soil microbes are oligotrophic, slow-growing, and/or dormant ( , ), and that studies of microbial growth in vitro may not be easily applicable to understanding microbial growth in situ. Despite the slow growth rates inferred from our LH-SIP (lipid hydrogen stable isotope probing) approach, these values should still be considered likely overestimates of ambient microbial growth rates. This is because the experimental conditions (conditions applied in many tracer approaches: water addition, sample homogenization, and stable temperatures) may provide more favorable growth conditions for many microbes than in situ conditions. The slow community-level and compound-specific growth rates of soil microbial communities measured here point to the importance and ubiquity of slow-growing life in soil systems. Microbial Biomass Quantity Does Not Predict Growth. In all soils examined, we find no strong relationship between compound-specific abundance and inferred growth rate ( ). Compounds exhibiting more rapid rates of production were not necessarily more abundant than compounds exhibiting slower production rates. Overall, there exists a weak negative correlation across all samples between compound abundance and growth rate (r = −0.144, p = 0.016, Pearson's). This indicates that rapidly growing taxa do not represent a large fraction of the soil microbial community at our field sites and that, instead, most of the microbes found in bulk soil are relatively slow growing or dormant. Furthermore, on the time scale of our SIP incubation (0 to 7 d), growth of certain taxa did not clearly alter the bulk fatty acid profiles of the soils ( SI Appendix, Fig. S4 ), contrary to what one might expect if a minority of taxa were overgrowing the community. The uniformity of PLFA profiles throughout the incubation supports the utility of ²H₂O as a tracer of in situ microbial growth, as there was no apparent modification of microbial growth or population structure upon addition of the tracer. Because PLFA abundance is a measure of living microbial biomass ( ), we next sought to examine whether the total quantity of microbial biomass predicted assemblage-level rates of microbial growth. In other words, do soils with more total microbial biomass also have higher rates of microbial growth? In the three soils examined here, we observed large differences between the sites in the total quantity of PLFAs, with the conifer forest and grassland soils having smaller quantities of intact lipids (150.7 and 154.3 µg g⁻¹, respectively) than the tundra soil (2,913.2 µg g⁻¹). These differences in PLFA abundances are mirrored by differences in organic matter loading for each of the soils ( Dataset S1 ). At the same time, growth rates were slower in the tundra soil and faster in the grassland and conifer forest soils ( and , Inset ). Although microbial biomass has been considered a proxy for the microbial productivity of a given soil ( – ), here we find that total microbial biomass is inversely related to the rate at which this biomass is turning over. Our dataset delineates two distinct soil microbiome profiles: a comparatively fast-growing but biomass-poor soil (typified by the conifer forest and grassland soils) and a slower-growing but biomass-rich soil (the alpine tundra) ( , Inset ).
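The assemblage-level rates and the abundance-growth correlation reported above are straightforward to compute from compound-specific data. A minimal sketch with hypothetical values:

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical compound-specific data for one soil: PLFA abundances
# (ug per g soil) and inferred growth rates (d^-1).
abundance = np.array([12.0, 3.5, 0.8, 22.0, 6.1])
mu = np.array([0.021, 0.048, 0.163, 0.009, 0.031])

# Community-level (assemblage-level) mean growth rate, weighted by
# compound abundance as described in the text.
mu_weighted = np.average(mu, weights=abundance)
print(f"abundance-weighted mean growth rate: {mu_weighted:.4f} d^-1")

# Relationship between abundance and growth rate across compounds.
r, p = pearsonr(abundance, mu)
print(f"Pearson r = {r:.3f}, p = {p:.3f}")
```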
Lipidomic Data Provides Coarse Taxonomic Growth Signals.

Our 16S rRNA gene sequencing results (Materials and Methods) show that the microbial communities in the soils examined were distinct but composed of bacterial taxa that are typically dominant in soils (SI Appendix, Fig. S3): Acidobacteria, Actinobacteria, Bacteroidetes, Chloroflexi, Proteobacteria, Planctomycetes, and Verrucomicrobia. To examine whether any of these phyla could be distinguished with our lipidomic data, we mined the fatty acid profiles of 4,959 taxa included in the Bacterial Diversity Metadatabase (BacDive). We observed that bacteria, at the phylum level, are broadly distinguishable based on their fatty acid profiles (SI Appendix, Figs. S1 and S2), a finding supported by previous characterization of microbial PLFAs. We also note that all major PLFAs we predicted to be present and representative of these phyla, based on our analysis of BacDive profiles, were indeed represented in our extracted lipid pools. We find that growth rate patterns grouped by compound class are remarkably similar across all sample sites. For example, terminally branched bacterial iso-15:0 and iso-17:0 saturated fatty acids consistently exhibited some of the fastest rates of growth in each soil. These fatty acids are closely associated with the Acidobacteria and Bacteroidetes phyla (SI Appendix, Fig. S1). Conversely, the 18:1 and 18:2 unsaturated fatty acids exhibited slower growth rates. A large portion of the 18:1 and 18:2 unsaturated fatty acids likely represents the slower growth of saprotrophic fungi, whose lifestyles may differ markedly from those of their bacterial neighbors. We note that the LH-SIP method is limited in its taxonomic specificity due to the conserved nature of many groups of lipids (i.e., multiple bacterial taxa produce the same membrane lipids) and the fact that the lipid profiles of some major soil bacterial taxa have not been well characterized. However, conservative inferences can be made regarding growth rates of bacteria and fungi at broad taxonomic levels based on the relative distributions of fatty acids, as demonstrated in previous studies. Therefore, we provide the BacDive dataset (Dataset S2) for the benefit of researchers interested in using our approach in soil and other systems. However, we emphasize that lipidomic datasets like these benefit from sequencing-based approaches that couple taxonomic information on microbial communities to observed growth signatures.
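A minimal sketch of the phylum-level ordination of BacDive fatty acid profiles (cf. SI Appendix, Fig. S2); the file name and column layout below are assumptions for illustration, not the released format of Dataset S2:

```r
# Assumed layout: a `phylum` column followed by one column per fatty acid
# (relative abundances). Invariant columns are dropped before scaling.
profiles <- read.csv("bacdive_fatty_acid_profiles.csv")
fa  <- as.matrix(profiles[, -1])
fa  <- fa[, apply(fa, 2, sd) > 0]               # drop zero-variance columns
pca <- prcomp(fa, center = TRUE, scale. = TRUE)
plot(pca$x[, 1:2], col = factor(profiles$phylum),
     xlab = "PC1", ylab = "PC2")                # phyla separate broadly
```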
Comparing Estimates of Soil Microbial Growth.

To assess how LH-SIP-derived estimates of microbial growth relate to previous estimates, we collected and compared 26 reports from prior studies that describe microbial growth rates in soil. These published estimates span disparate ranges that appear highly dependent on the methodology applied. Previously published assemblage-level growth rate/turnover estimates vary between 0.002 and 0.356 d⁻¹ (apparent generation times corresponding to 1.94 to 294 d). Much of this prior work has measured the incorporation of isotopically labeled thymidine (TdR) or leucine (Leu) and reported community-level growth rates typically between 0.08 and 0.35 d⁻¹ (generation times of 1.95 to 10 d), aside from a single study reporting doubling times of 107 to 170 d (Datasets S3 and S4). A potential source of the difference in results between the LH-SIP method and these approaches is that DNA- and protein-based methods likely bias toward faster-growing organisms, because cells undergoing substantial translation or genomic replication are necessarily captured at higher frequency and fidelity than organisms growing slowly. TdR and Leu approaches specifically could also stimulate microbial growth through the provision of carbon and nitrogen in the tracer solution. Perhaps most importantly, the short incubation times typically used with these approaches (usually <48 h) likely mean that only taxa with generation times shorter than or approaching the incubation time incorporate sufficient quantities of the isotopic tracer for detection. Microbial taxa with longer generation times may not produce enough new DNA or protein to be detectable by these methods. Following our literature survey, we sought to understand the utility and sensitivity of LH-SIP as a measurement of slow microbial growth (for more details, see SI Appendix, Supplementary Text). In brief, we calculated the propagated uncertainty in growth rates by accounting for errors inherent in measuring enriched isotopic abundances, soil moisture dilution of the tracer solution, microbial assimilation of hydrogen from water, and IRMS instrument precision. We determined that LH-SIP can accurately distinguish (with confidence ≥2σ) microbial generation times in the range of 5 to 700 d during a 7-d incubation. IRMS instruments are highly sensitive to trace changes in ²H composition and, with isotopically enriched samples, can resolve growth rate differences of 0.0069 d⁻¹ under the incubation parameters of our experiments (Materials and Methods). Interestingly, we found that hydrogen assimilation efficiency, the proportion of lipid hydrogen sourced from water as opposed to other sources (e.g., carbon sources), is the main control on uncertainty in LH-SIP measurements (SI Appendix, Figs. S7 and S10). In contrast, IRMS instrument precision and analytical corrections contribute minor components of the total uncertainty. In samples as complex as soil, it is difficult to parameterize hydrogen assimilation efficiency for an entire community of organisms, even assuming heterotrophy for the bulk of the community. We propose that LH-SIP measurements provide a more conservative estimate of soil microbial growth that captures the slow-growing majority of soil microbes. Because IRMS measurements can capture trace incorporation of ²H into the alkyl chains of membrane lipids, LH-SIP is uniquely suited to capturing slow growth rates at both the compound-specific and bulk scales.
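For readers harmonizing literature values as we did, the unit conversions reduce to two one-liners; the example inputs below are taken from the ranges quoted above:

```r
# Converting published metrics onto a common growth-rate scale (d^-1),
# following the conversions described in Materials and Methods.
rate_from_turnover_time   <- function(t_days)  1 / t_days       # assumes steady-state biomass
rate_from_generation_time <- function(Tg_days) log(2) / Tg_days

rate_from_generation_time(c(1.94, 294))  # ~0.357 and ~0.0024 d^-1, matching
                                         # the 0.002 to 0.356 d^-1 range above
rate_from_turnover_time(10)              # 0.1 d^-1
```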
Future Directions for LH-SIP.

The growth rates inferred by LH-SIP are aggregated over all living cells that share a given lipid membrane constituent. Therefore, growth rate heterogeneity at the cell-to-cell or species-to-species level cannot be resolved with LH-SIP. Single-cell methods, including Raman spectroscopy and nanoSIMS, can elucidate cell-specific variation in growth rates while also measuring local mineralogical, elemental, or isotopic features. However, single-cell methods can be resource intensive and often require separation of intact cells from environmental matrices, which can be problematic in soil. A benefit of the LH-SIP approach is that it provides community-level insights into anabolic growth activity that may be missed at the single-cell level, with the coarse taxonomic resolution that is standard for PLFA analysis. A two-pronged approach that pairs bulk-scale LH-SIP measurements with single-cell metrics of growth heterogeneity could be powerful. As noted above, LH-SIP is limited in its taxonomic specificity due to the conserved nature of many PLFA classes. Because LH-SIP yields compound-specific growth rates, the method could be applied in a highly targeted manner to systems containing less complex microbial communities or with strongly defined relationships between taxonomy and constituent lipid classes. Coupling LH-SIP with additional SIP approaches (e.g., DNA or protein) or advanced lipidomic analyses has further potential to expand our understanding of the microbial physiology of slow growth in natural systems. For instance, future LH-SIP studies could take advantage of liquid chromatographic systems to elucidate relationships between intact polar lipid (IPL) head groups and associated growth rates. The use of signature biomarkers [e.g., glycerol dialkyl glycerol tetraether lipids (GDGTs), bacteriohopanepolyols (BHPs), and sterols] for an organism or group of organisms in a given environment would allow growth rate assignments with greater taxonomic specificity, potentially down to the species or strain level. For example, LH-SIP of phospholipid ether lipids could specifically target the growth rates of archaea in natural samples. SIP of PLFAs with ¹³C has been used with great utility to identify microbial community members involved in the degradation of distinct substrates or the use of specific metabolic pathways: dual stable-isotope (²H + ¹³C) probing of PLFAs holds great potential for the study of catabolic and anabolic microbial physiology in soils. Finally, LH-SIP could be coupled with already-established ¹⁸O-DNA qSIP methods to correlate highly sensitive compound-specific growth rates with direct sequencing of the active fractions of the microbial community.

Relevance of the Slow Growth of Soil Microorganisms.

The in vitro cultivation and isolation of microorganisms from natural systems are notoriously difficult. There are numerous proposed reasons for this phenomenon, including the fact that media formulations are imperfect or select for certain taxa, but it is plausible that many "wild" microorganisms are not adapted for the rapid growth required for isolation and enrichment using standard cultivation approaches. Slow growth in soil may imply severe limitations on maximum growth rates even under ideal laboratory conditions, given that biochemical adaptations to slow growth may not be easily overcome in a lab environment. Many soil microorganisms in culture are observed to grow slowly, even under "ideal" conditions, and may in fact be inhibited by high substrate concentrations. Our observed growth rates suggest that many soil microorganisms may be fundamentally difficult to cultivate due to time constraints on culturing experiments, as the amount of time for an organism to become visible on solid or in liquid media increases exponentially as doubling time increases (SI Appendix, Fig. S6).
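A back-of-the-envelope illustration of this time constraint, assuming a single founding cell and a hypothetical visibility threshold of 10⁸ cells (a round number chosen for illustration, not a measured value):

```r
# Cells descended from one founder after a fixed 30-d plate incubation,
# for doubling times spanning the range inferred in this study.
cells_after <- function(days, doubling_d) 2^(days / doubling_d)
cells_after(30, c(0.04, 4.3, 44.9, 402.1))
# ~6e225, ~126, ~1.6, ~1.05 cells: only the fast grower would ever reach
# an assumed 1e8-cell visibility threshold within the experiment.
```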
We also emphasize that maximum potential growth rates measured in vitro likely do not reflect actual growth rates in situ, as culture conditions may not adequately replicate the availability or paucity of carbon sources, electron donors/acceptors, and interactions with other organisms. Additionally, maximum potential growth rates (estimated via genomic analyses or culture-based experiments) are fundamentally different metrics from a direct measurement of growth in an environmental setting: a microbe capable of rapid growth will not necessarily exhibit this behavior under environmental conditions. This is highlighted by the observation that a wide array of microbial groups (including those with high maximum potential growth rates in vitro) persist in a dormant or nearly dormant state in soil.
Here, we present evidence that slow microbial growth is widespread in soil systems. Across the soils analyzed, PLFA abundances and growth rates were not strongly correlated at the compound-specific level, indicating that the most abundant taxa are not necessarily the fastest growing. In addition, we observed that soils with lower total biomass exhibited higher rates of microbial growth, and vice versa. These results challenge the idea, often implicit in studies documenting microbial biomass variation across soils, that higher biomass necessarily equates to higher soil microbial productivity. Instead, our results suggest that soil microbiomes operate on a continuum of growth rate and biomass quantity, with the largest proportion of standing microbial biomass representing oligotrophic or dormant taxa adapted to slow growth. The growth rates presented here occur on the order of weeks to months, comparable to estimates generated by carbon- and nutrient-budgeting models. Our conclusion that slow-growing microorganisms appear to dominate the soil microbiome is in line with recent evidence that spatial variability in the composition of soil microbial communities typically exceeds the temporal variability observed at a given location. Slow growth rates would be expected to attenuate short-term changes in overall microbial community composition, especially in soils with longer observed generation times. As microbial growth is a key regulator of a wide array of soil biogeochemical processes, our findings warrant additional studies that take advantage of the LH-SIP method described here to quantify variation in microbial growth rates across a broader array of soil types and conditions.

Soil Sampling and Incubation.

Soils were collected from three locations in central Colorado: a conifer forest located at the Gordon Gulch Critical Zone Observatory, Boulder County, CO (40.01, −105.46); a prairie grassland located at Marshall Mesa, Boulder County, CO (39.95, −105.22); and an alpine tundra located at Niwot Ridge, Niwot Long-term Ecological Research Program (LTER) site (40.05, −105.58), near Ward, CO.
The top 10 cm of soil was excavated with a surface-sterilized trowel; soils were sieved to 2 mm to remove rocks and plant material and then homogenized. Soils were stored in the dark at 4 °C before incubations were started. For the SIP incubations, a 10-g subsample of each soil was weighed into a centrifuge tube, combined with 10 mL of filter-sterilized water containing ~5,000 ppm ²H₂O (0.5 at% ²H, δ²H_VSMOW ≈ 31,000‰), and incubated at 20 °C for 0, 3, or 7 d. The isotopic composition of the incubation water was measured at the end of the time series experiment to account for the isotopic contributions of soil water. Samples were periodically shaken over the course of the incubation period to ensure uniform distribution of the tracer solution. At the end of the incubation period, excess incubation water was separated from the soil by centrifugation, decanted, and frozen for later isotopic analysis. Soil pellets were immediately flash-frozen by submersion in a dry ice-ethanol bath and stored at −20 °C until lipid extraction.

Water Hydrogen Isotope Analysis.

The labeled incubation waters were analyzed for their H isotope composition (F_L) after gravimetric dilution with water of known isotopic composition (1:1000 w/w) to bring them into the analytical range of available in-house standards previously calibrated to Vienna Standard Mean Ocean Water (VSMOW) and Standard Light Antarctic Precipitation (SLAP). Then, 1 µL of each sample was measured on a dual-inlet Thermo Delta Plus XL isotope ratio mass spectrometer connected to an H-Device for water reduction by chromium powder at 850 °C. Measured isotope values in δ notation on the VSMOW-SLAP scale were converted to fractional abundances using the isotopic composition of VSMOW [R_VSMOW = ²H/¹H = 0.00015576] and the relation F = R/(1 + R) = (δ + 1)/(1/R_VSMOW + δ + 1), and were corrected for the isotope dilution by mass balance. The resulting isotopic composition of the tracer water was F = 5,015 ppm ²H (0.5015 at%; 31,357‰ vs. VSMOW). The isotopic composition of the labeled incubation water was diluted from this value after homogenization with the water already present in each soil, resulting in 4,363 ± 251 ppm ²H for conifer forest, 4,483 ± 49 ppm ²H for grassland, and 3,492 ± 43 ppm ²H for tundra soils. These values were taken to represent what cells encountered during tracer incubation (a combination of both the tracer and the water present in the soil) and were used for all growth rate calculations.
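The δ-to-fractional-abundance conversion above is easily scripted; a minimal R version that reproduces the 5,015 ppm tracer value:

```r
# Delta notation (permil vs. VSMOW) to fractional abundance (2F).
R_VSMOW <- 0.00015576                  # 2H/1H ratio of VSMOW
delta_to_F <- function(delta_permil) {
  d <- delta_permil / 1000
  (d + 1) / (1 / R_VSMOW + d + 1)
}
delta_to_F(31357) * 1e6  # ~5015 ppm 2H, the tracer value reported above
```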
Lipid Extraction.

Frozen soil pellets were lyophilized for 24 h. Intact polar lipids were extracted from the dry pellets using a modified MTBE-based lipid extraction method. In brief, 3.0 g of freeze-dried soil was added to a PTFE centrifuge tube. Then, 3 mL of methanol was added to the sample and vortexed, 10 mL of MTBE was added, and the mixture was incubated for 1 h at room temperature while shaking. To induce phase separation, 2.5 mL of MS-grade water was added, and the mixture was centrifuged for 10 min at 1,000 × g at room temperature. The organic phase was carefully extracted and transferred to an organic-clean glass vial. This process was repeated three times in total. The total lipid extract (TLE) was dried down under a stream of N₂ gas and stored dry at −20 °C until solid-phase chromatography. Prior to MTBE extraction, 100 µg of 23-phosphatidylcholine (23-PC) was added to all soils as an internal extraction standard. All glassware was combusted at 450 °C for 8 h prior to use. All Teflon vessels were solvent-washed by sonication in a 9:1 mixture of DCM:MeOH for two sets of 30 min. Empty vessels were extracted alongside samples to monitor for contamination; no contamination was detected in the extraction blanks.

Phospholipid Separation and Derivatization.

Phospholipid extract (PLE) was purified from the TLE using silica gel chromatography to focus isotopic analyses on lipids derived from intact cells [free phospholipids outside of cellular membranes degrade relatively rapidly, with half-life estimates of 39 h at 15 °C]. Combusted silica solid-phase extraction (SPE) columns containing 500 mg SiO₂ were conditioned by the addition of 5 mL acetone, followed by two additions of 5 mL dichloromethane (DCM). The TLE was redissolved in 0.5 mL DCM and transferred to the SPE column. Neutral lipids and glycolipids were eluted by the addition of 5 mL of DCM or acetone, respectively, then dried down under a stream of N₂ and stored dry at −20 °C. The PLE was eluted by the addition of 5 mL methanol to the column, similarly dried under a stream of N₂, and stored with an N₂-purged headspace at −20 °C. The PLE was derivatized to fatty acid methyl esters (FAMEs) via base-catalyzed transesterification using methanolic base. Transesterification was initiated by the addition of a mixture of 2 mL hexane and 1 mL 0.5 M NaOH in anhydrous methanol to the dry PLE. The reaction was allowed to proceed for 10 min at room temperature before being quenched by the addition of 140 µL of glacial (~17 M) acetic acid and 1 mL water. The organic phase was extracted three times with 4 mL hexane and dried down under a stream of N₂. A recovery standard of 10 µg 21-phosphatidylcholine (21:0 PC) was added to each PLE before derivatization to assess reaction yield, and 10 µg of isobutyl palmitate (PAIBE) was added to all samples after derivatization as a quantification standard prior to analysis.

FAME Quantification and Identification.

A Thermo Scientific Trace 1310 gas chromatograph equipped with a DB-5HT column (30 m × 0.250 mm, 0.10 µm) coupled to a flame ionization detector (GC-FID) was used to quantify FAME concentrations (µg/g soil) and total amounts (µg extracted) based on peak area relative to the 23-PC extraction and PAIBE quantification standards, respectively. FAMEs were suspended in 100 µL n-hexane, and 1 µL was injected using a split-splitless injector run in splitless mode at 325 °C; split flow was 12.0 mL/min, splitless time was 0.80 min, purge flow was 5.00 mL/min, and the column flow rate was constant at 1.2 mL/min. The GC ramped according to the following program: 80 °C for 2 min, ramp at 20 °C/min for 5 min (to 140 °C), then ramp at 5 °C/min for 35 min (to 290 °C). The FID was held at 350 °C for the duration of the run. Major peaks were identified by retention time relative to a Bacterial Acid Methyl Ester standard (Millipore-Sigma) and a 37-FAME standard (Supelco). Peak identities were confirmed using a Thermo Scientific Trace 1310 gas chromatograph coupled to a single quadrupole mass spectrometer (ISQ) under identical injection and chromatography conditions, with mass scans from 50 to 550 amu and a scan time of 0.2 s in positive ion mode (electron impact). Due to the ambiguity associated with identifying double-bond position and stereochemistry for FAMEs containing multiple double bonds, unsaturated compounds are identified only tentatively.
FAMEs are referred to using the nomenclature z-x:y, where x is the total number of carbons in the fatty acid skeleton, y is the number of double bonds (and their position, if known), and z- is a prefix describing additional structural features of the compound, such as methylation or cyclization.

FAME Hydrogen Isotope Analysis.

The isotopic composition of FAMEs was measured on a Thermo Scientific 253 Plus stable isotope ratio mass spectrometer coupled to a Trace 1310 GC via an Isolink II pyrolysis/combustion interface (GC/P/IRMS). Chromatographic conditions were identical to those used for GC-FID and GC-MS above, except for an extended temperature program to baseline-separate all major analytes (40 °C hold for 2 min, 20 °C/min to 120 °C, 2 °C/min ramp to 240 °C, 30 °C/min ramp to 330 °C, 4-min hold) and injection via a programmable temperature vaporization inlet (ramped from 40 to 400 °C) to ensure quantitative transfer from the inlet to the column. Peaks were identified based on retention order and relative height by coregistration with GC-FID and GC-MS chromatograms. Measured isotope ratios were corrected for scale compression, linearity, and memory effects using natural-abundance and isotopically enriched fatty acid esters of known isotopic composition ranging from −231.2‰ to +3,972‰ vs. VSMOW (SI Appendix, Supplementary Text). Memory (peak-to-peak carryover) effects are important to correct for given the wide range of peak areas and isotopic values encountered in this study and the known impact of memory effects on H isotope measurements. Full memory-effect corrections for standard mixtures of natural-abundance and enriched fatty acid esters led to a >44% improvement in residual SE (SI Appendix, Table S1 and Fig. S8). The multivariate linear regression calibration for the enriched samples (SI Appendix, Eqs. S1-S3 and Table S1) included standards ranging in mass-2 peak area from 1.74 Vs (~25 ng) to 69.67 Vs (~975 ng) and led to an overall calibration RMSE of 60.4‰, which stems from the substantial dynamic range of the isotope standards and is believed to accurately reflect the elevated uncertainty expected in these isotopically diverse samples. Conservative analytical SEs ranged from 46.5 to 269‰ depending on peak area, with larger error estimates for smaller peaks that were more affected by memory effects (SI Appendix, Supplementary Text). The hydrogen isotope calibration was performed in R using the packages isoreader [v1.3.0] and isoprocessor (v0.6.11), available at github.com/isoverse. Calibrated isotope ratios measured via GC/P/IRMS were further corrected for H added during derivatization to FAMEs, as well as for analytical and replicate error, as follows: first, the ²F of the methanol used for base-catalyzed transesterification was measured by taking an aliquot of the anhydrous methanol reagent (the exact stock used for transesterification) and derivatizing a phthalic acid of known H isotopic composition (Arndt Schimmelmann, Indiana University) via acid catalysis. The resulting phthalic methyl ester was analyzed by GC/P/IRMS, and a correction was applied to all values (SI Appendix, Supplementary Text).
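For orientation, the raw IRMS files can be ingested with isoreader before calibration; this is a sketch in which the directory name is a hypothetical placeholder and the full drift/linearity/memory calibration follows SI Appendix:

```r
# Entry point for IRMS data reduction; "irms_runs/" is a hypothetical
# directory of raw Isodat continuous-flow files.
library(isoreader)  # v1.3.0, github.com/isoverse
cf    <- iso_read_continuous_flow("irms_runs/")
peaks <- iso_get_vendor_data_table(cf)  # peak areas and raw ratios per analyte
```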
Growth Rate Calculations.

Calculations of biosynthetic activity focus on the isotopic composition of FAMEs because the hydrocarbon skeleton of fatty acids consists only of C-H bonds that are nonexchangeable on biological time scales, unlike the readily exchangeable H bound to O, N, P, and S in parts of lipid headgroups, proteins, and nucleic acids. The H tracer is thus stably incorporated into fatty acid tails during biological activity. The resulting isotopic enrichment of fatty acids in intact cellular lipids is described by the following equation:

$$F_t - F_0 = \left(1 - e^{-r \cdot t}\right)\left(a \cdot F_L - F_0\right),$$

where r is the specific biosynthesis rate (1/days); t is the duration of tracer exposure (days); a is the assimilation efficiency and fractionation of water hydrogen during lipid biosynthesis (SI Appendix, Fig. S7); and F_0, F_t, and F_L are the fractional abundances of ²H in fatty acids before tracer incorporation, in fatty acids at time t, and in the isotopically labeled sample water, respectively. Solving this equation for r makes it possible to infer from the incubation time and isotopic measurements how quickly cellular fatty acids turn over. Because lipid biosynthesis reflects a combination of growth and repair, r provides an upper bound for the specific growth rate µ and, correspondingly, a lower bound for the apparent generation time T_G of the microbial producers of a given lipid:

$$r = -\frac{1}{t}\ln\!\left(\frac{F_t - a \cdot F_L}{F_0 - a \cdot F_L}\right), \qquad \mu \le r, \qquad T_G = \frac{\ln 2}{\mu} \ge \frac{\ln 2}{r}.$$

These calculations yield compound-specific growth rate and generation time estimates that can be viewed by themselves or aggregated into assemblage-level estimates of community growth by calculating an abundance-weighted mean:

$$\overline{x} = \frac{\sum_{i=1}^{N} x_i w_i}{\sum_{i=1}^{N} w_i},$$

where x_i is the isotopic fractional abundance of compound i and w_i is the relative abundance (weighting) of compound i. As discussed, these estimates aggregate cell-to-cell variations in growth. Uncertainty in each set of measurements was propagated through our calculations of µ by SE propagation (SI Appendix, Supplementary Text).
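A direct R transcription of these equations, with illustrative (not measured) isotopic values; in practice, a and F_0 must be parameterized per study as described in SI Appendix:

```r
# Rate equations above. All isotopic values are fractional abundances
# (2F, as fractions rather than ppm); `a` is the water-H assimilation
# efficiency, the main source of uncertainty in LH-SIP.
biosynthesis_rate <- function(F_t, F_0, F_L, a, t_days) {
  -log((F_t - a * F_L) / (F_0 - a * F_L)) / t_days  # r (d^-1); mu <= r
}

# Abundance-weighted assemblage mean (weights = compound abundances).
weighted_mean <- function(x_i, w_i) sum(x_i * w_i) / sum(w_i)

# Illustrative values only, not measurements from this study:
r <- biosynthesis_rate(F_t = 500e-6, F_0 = 130e-6, F_L = 4400e-6,
                       a = 0.7, t_days = 7)
c(r = r, T_G_min = log(2) / r)  # ~0.019 d^-1; generation time >= ~36 d
```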
16S Ribosomal RNA Gene Sequencing.

To characterize the microbial community composition of each soil type before SIP incubation, DNA was extracted from soil subsamples in triplicate, along with a negative control, using the DNeasy PowerSoil DNA isolation kit (Qiagen) according to the manufacturer's instructions, with one minor modification: samples were heated with Solution C1 for 10 min at 65 °C in a dry heat block prior to bead beating. Extracted DNA samples were amplified in duplicate using Platinum II Hot-Start PCR Master Mix (Thermo Fisher Scientific) and the 16S rRNA gene primers 515F and 806R with Illumina sequencing adapters and unique 12-bp barcodes. The PCR program was 94 °C for 2 min, followed by 35 cycles of 94 °C (15 s), 60 °C (15 s), and 68 °C (1 min), with a final extension at 72 °C for 10 min. Amplification was verified via gel electrophoresis. Amplicons were cleaned and normalized with the SequalPrep Normalization Plate (Thermo Fisher Scientific) following the manufacturer's instructions and then pooled. Sequencing was performed on an Illumina MiSeq using a v2 300-cycle kit with paired-end reads at the University of Colorado BioFrontiers Institute Next-Gen Sequencing Core Facility. To prepare samples for analysis with the DADA2 (version 1.10.1) bioinformatic pipeline, reads were demultiplexed, and adapters and primers were removed using standard settings for cutadapt (version 1.8.1, Martin 2011). We used standard filtering parameters with slight modifications for 2 × 150 bp chemistry, whereby forward reads were not trimmed and reverse reads were trimmed (truncLen) to 140 base pairs. In addition, we truncated reads at the first nucleotide with a quality score (truncQ) below 11 and allowed a maximum number of expected errors (maxEE) of 1. These filtering parameters retained a mean of 95.7% of reads, which was visually assessed with quality profiles for each sample. Reads were dereplicated, paired ends were merged, amplicon sequence variants (ASVs) were assigned, and chimeras were removed (98.23% of reads were not chimeric). Finally, taxonomy was assigned to each ASV against the SILVA (v132) reference database. We removed all chloroplast, mitochondrial, and eukaryotic reads from the ASV table, which resulted in an average of 36,029 reads per sample (range: 27,074 to 58,528 reads), and the ASV table was subsequently rarefied to 27,000 reads per sample. Blank samples had far fewer reads than actual samples (a mean of 193 reads compared to 35,455 reads per sample), and the four genera detected in blanks (Thermus, Geobacillus, Deinococcus, and Pseudomonas) were not consistently detected and were below the 1% relative abundance threshold for inclusion in sample analyses. Taxonomic composition of the samples was compared across soil types (SI Appendix, Fig. S3).
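For concreteness, the filtering step corresponds to a dada2::filterAndTrim call of roughly the following form (the file-path variables are hypothetical placeholders):

```r
# Read filtering with the parameters described above; fnFs/fnRs and
# filtFs/filtRs are hypothetical vectors of input and output fastq paths.
library(dada2)  # v1.10.1
filterAndTrim(fwd = fnFs, filt = filtFs, rev = fnRs, filt.rev = filtRs,
              truncLen = c(0, 140),  # leave forward reads untrimmed
              truncQ   = 11,         # truncate at first base with Q < 11
              maxEE    = 1)          # max expected errors per read
```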
BacDive Database Analysis and Literature Survey.

To infer relationships between the lipid profiles observed across our SIP incubations and high-level taxonomy, we queried the Bacterial Diversity Metadatabase (BacDive) for all available fatty acid profiles using the BacDive API client implemented by the BacDiveR package. We generated a table of 4,959 fatty acid profiles indexed by the taxonomy reported in the database and used principal component analysis to generate SI Appendix, Fig. S2, grouped at the phylum level. We appended 24 manually curated fungal fatty acid profiles to this dataset; the combined table is available as Dataset S2. We examined relationships between fatty acid profiles at the phylum level (SI Appendix, Fig. S1) and conducted a principal component analysis of fatty acid composition and taxonomy (SI Appendix, Fig. S2). To compare LH-SIP-measured rates of growth to other methods, we surveyed 26 previously reported estimates of microbial growth and compared them to the abundance-weighted mean estimates of our study (Datasets S3 and S4). We collected specific passages, sentences, and tables and manually digitized the reported values, noting each specific mention of microbial growth. Studies use various terminology and units, including growth rate (day⁻¹), generation time (days), turnover rate (day⁻¹), turnover time (days), and doubling time (days). Turnover time was converted to turnover rate by taking the reciprocal value. We include both "growth rates" and "turnover rates," the distinction being that turnover explicitly assumes a steady state of biomass (loss rates equal production rates). Generation time/doubling time estimates were converted to growth rates as described in this study. We plot the mean value from each study, either as reported in the manuscript or as calculated from the upper and lower bounds reported in the manuscript.

Soil Geochemistry.

Soils were analyzed at the Colorado State University Soil, Water, and Plant Testing Laboratory for routine determination of soil characteristics. In short, a KCl extract was used to quantify soil nitrate, and an AB-DTPA extract was used to quantify soil P, Zn, Fe, Mn, Cu, and S. Organic matter percentages were calculated by determining the weight loss of samples after ignition. These data are available in Dataset S1.
To induce phase separation, 2.5 mL of MS-Grade water was added, and the mixture was centrifuged for 10 min at 1,000G at room temperature. The organic phase was carefully extracted and transferred to an organic-clean glass vial. This process was repeated three times in total. Total lipid extract (TLE) was dried down under a stream of N 2 gas and the sample was stored dry at −20 °C until solid phase chromatography. Prior to MTBE extraction, 100 μg of 23-phosphatidylcholine (23-PC) was added to all soils as an internal extraction standard. All glassware was combusted at 450 °C for 8 h prior to use. All Teflon vessels were solvent washed by sonication in a 9:1 mixture of DCM:MeOH for two sets of 30 min. Empty vessels were extracted alongside samples to monitor for contamination. No contamination was detected in the extraction blanks. Phospholipid extract (PLE) was purified from TLE using silica gel chromatography ( ) to focus isotopic analyses on lipids derived from intact cells [free phospholipids outside of cellular membranes degrade relatively rapidly with half-life estimates of 39 h at 15 °C ( )]. Combusted silica solid-phase extraction (SPE) columns containing 500 mg SiO 2 were conditioned by the addition of 5 mL acetone, then two additions of 5 mL dichloromethane (DCM). TLE was redissolved in 0.5 mL DCM and transferred to the SPE column. Neutral lipids and glycolipids were eluted by the addition of 5 mL of DCM or acetone, respectively, and then dried down under a stream of N 2 and stored dry at −20 °C. A PLE was eluted by the addition of 5 mL methanol to the column. PLE was similarly dried under a stream of N 2 and stored with an N 2 -purged headspace at −20 °C. The PLE was derivatized to fatty acid methyl esters (FAMEs) via base-catalyzed transesterification using methanolic base ( , ). Transesterification was initiated by the addition of a mixture of 2 mL hexane and 1 mL 0.5 M NaOH in anhydrous methanol to dry PLE. The reaction mixture was allowed to proceed for 10 min at room temperature before being quenched by the addition of 140 μL of glacial (~17 M) acetic acid and 1 mL water. The organic phase was extracted three times with 4 mL hexane and dried down under a stream of N 2 . A recovery standard of 10 μg 21-phosphatidylcholine (21:0 PC) was added to each PLE before derivatization to assess reaction yield. Then, 10 μg of isobutyl palmitate (PAIBE) was added after derivatization to all samples as a quantification standard prior to analysis. A Thermo Scientific Trace 1310 Gas-Chromatograph equipped with a DB-5HT column (30 m × 0.250 mm, 0.10 µm) coupled to a flame-ionization detector (GC-FID) was used to quantify FAME concentrations (µg/g soil) and total amounts (µg extracted) based on peak area relative to the 23-PC extraction and PAIBE quantification standards, respectively. FAMEs were suspended in 100 μL n-hexane, and 1 uL was injected using a split-splitless injector run in splitless mode at 325 °C; split flow was 12.0 mL per min; splitless time was 0.80 min; purge flow was 5.00 mL/min; column flow rate was constant at 1.2 mL/min. The GC ramped according to the following program: 80 °C for 2 min, ramp at 20 °C/min for 5 min (to 140 °C), and ramp at 5 °C/min for 35 min (to 290 °C). The FID was held at 350 °C for the duration of the run. Major peaks were identified by retention time relative to a Bacterial Acid Methyl Ester standard (Millipore-Sigma) and a 37 FAME standard (Supelco). 
Peak identities were confirmed using a Thermo Scientific Trace 1310 Gas-Chromatograph coupled to a single quadrupole mass spectrometer (ISQ) using identical injection and chromatography conditions with mass scans from 50 to 550 amu and a scan time of 0.2 s in positive ion mode (electron impact). Due to the ambiguity associated with identifying double-bond position and stereochemistry for FAMEs containing multiple bonds, unsaturated compounds are identified only tentatively. FAMEs are referred to using the nomenclature z-x:y, where x is the total number of carbons in the fatty acid skeleton and y is the number of double bonds and their position (if known), while z-is a prefix describing additional structural features of the compound such as methylation and cyclization. The isotopic composition of FAMEs was measured on a Thermo Scientific 253 Plus stable isotope ratio mass spectrometer coupled to a Trace 1310 GC via Isolink II pyrolysis/combustion interface (GC/P/IRMS). Chromatographic conditions were identical to those from the GC-FID and GC-MS stated above except for extension of the temperature program to baseline separate all major analytes (40 °C hold for 2 min, 20 °C/min to 120 °C, then 2 °C/min ramp to 240 °C; 30 °C/min ramp to 330 °C, 4-min hold) and injection via programmable temperature vaporization inlet (ramped from 40 to 400 °C) to ensure quantitative transfer from the inlet to the column. Peaks were identified based on retention order and relative height based on coregistration with GC-FID and GC-MS chromatograms. Measured isotope ratios were corrected for scale compression, linearity, and memory effects using natural abundance and isotopically enriched fatty acid esters of known isotopic composition ranging from –231.2 ‰ to +3,972 ‰ vs. VSMOW ( SI Appendix , Supplementary Text ). Memory (peak-to-peak carryover) effects are important to correct for given the wide range of peak areas and isotopic values encountered in this study and the known impact of memory effects on H isotope measurements ( , ). Full memory effect corrections for standard mixtures of natural abundance and enriched fatty acid esters lead to a >44% improvement in residual SE ( SI Appendix , Table S1 and Fig. S8 ). The multivariate linear regression calibration for the enriched samples ( SI Appendix , Eqs. S1 – S3 and Table S1 ) included standards ranging in mass 2 areas from 1.74 Vs (~25 ng) to 69.67 Vs (~975 ng) and lead to an overall RMSE of the calibration of 60.4‰ stemming from the substantial dynamic range of the isotope standards and believed to accurately reflect the elevated uncertainty that should be expected in the isotopically diverse samples. The conservative analytical SEs ranged from 46.5 to 269 ‰ depending on peak area with larger error estimates for smaller peaks that were more affected by memory effects ( SI Appendix , Supplementary Text ). The hydrogen isotope calibration was performed in R using the packages isoreader [v 1.3.0 ( )] and isoprocessor (v 0.6.11) available at github.com/isoverse. Calibrated isotope ratios measured via GC/P/IRMS were further corrected for H added during derivatization to FAMEs, as well as for analytical and replicate error as follows: First, the 2 F of methanol used for base-catalyzed transesterification was measured by taking an aliquot of anhydrous methanol reagent (the exact stock used for transesterification) and derivatizing a phthalic acid with a known H isotopic composition (Arndt Schimmelmann, Indiana University) via acid catalysis. 
Calibrated isotope ratios measured via GC/P/IRMS were further corrected for H added during derivatization to FAMEs, as well as for analytical and replicate error, as follows. First, the 2 F of the methanol used for base-catalyzed transesterification was measured by taking an aliquot of the anhydrous methanol reagent (the exact stock used for transesterification) and derivatizing a phthalic acid with a known H isotopic composition (Arndt Schimmelmann, Indiana University) via acid catalysis. The resulting phthalic methyl ester was analyzed by GC/P/IRMS and a correction was applied to all values ( SI Appendix , Supplemental Text ). Calculations of biosynthetic activity focus on the isotopic composition of FAMEs because the hydrocarbon skeleton of fatty acids consists only of C–H bonds that are nonexchangeable on biological time scales ( ), unlike the readily exchangeable H bound to O, N, P, and S in parts of lipid headgroups, proteins, and nucleic acids. The 2 H tracer is thus stably incorporated into fatty acid tails during biological activity. The resulting isotopic enrichment of fatty acids in intact cellular lipids is described by the following equation ( ):

$$F_t - F_0 = \left(1 - e^{-r \cdot t}\right) \cdot \left(a \cdot F_L - F_0\right),$$

where $r$ is the specific biosynthesis rate (1/days); $t$ is the duration of tracer exposure (days); $a$ is the assimilation efficiency and fractionation of water hydrogen during lipid biosynthesis (see ref. and SI Appendix , Fig. S7 ); and $F_0$, $F_t$, and $F_L$ are the fractional abundances of 2 H in fatty acids before tracer incorporation, in fatty acids at time $t$, and in the isotopically labeled sample water. Solving this equation for $r$ makes it possible to infer from the incubation time and isotopic measurements how quickly cellular fatty acids turn over. With lipid biosynthesis reflecting a combination of growth and repair, $r$ provides an upper bound for the specific growth rate $\mu$ and, correspondingly, a lower bound for the apparent generation time $T_G$ of the microbial producers of a given lipid:

$$r = -\frac{1}{t} \cdot \ln\!\left(\frac{F_t - a \cdot F_L}{F_0 - a \cdot F_L}\right), \qquad \mu \le r, \qquad T_G = \frac{\ln 2}{\mu} \ge \frac{\ln 2}{r}.$$

These calculations yield compound-specific growth rate and generation time estimates that can be viewed by themselves or aggregated into assemblage-level estimates of community growth by calculating an abundance-weighted mean:

$$\text{weighted mean} = \frac{\sum_{i=1}^{N} x_i w_i}{\sum_{i=1}^{N} w_i},$$

where $x_i$ is the isotopic fractional abundance of compound $i$ and $w_i$ is the relative abundance (weighting) of compound $i$. As discussed, these estimates aggregate cell-to-cell variations in growth. Uncertainty in each set of measurements was propagated through our calculations of $\mu$ by SE propagation ( SI Appendix , Supplementary Text ).
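To make the calculation concrete, the following sketch (not the authors' code; all input values are invented) applies the equations above to three hypothetical FAMEs and then aggregates them with an abundance-weighted mean:

```python
# Worked example of the rate equations above: infer the specific biosynthesis
# rate r, the upper-bound growth rate mu, and the lower-bound generation time
# T_G from 2H fractional abundances, then compute an abundance-weighted mean.
import numpy as np

def biosynthesis_rate(F0: float, Ft: np.ndarray, FL: float,
                      a: float, t_days: float) -> np.ndarray:
    """r = -(1/t) * ln((Ft - a*FL) / (F0 - a*FL)), in 1/day."""
    return -np.log((Ft - a * FL) / (F0 - a * FL)) / t_days

# Hypothetical inputs: natural-abundance F0, label strength FL, assumed
# assimilation/fractionation factor a, and a 7-day incubation.
F0, FL, a, t = 1.55e-4, 5.0e-2, 0.7, 7.0
Ft = np.array([2.0e-4, 8.0e-4, 1.6e-3])   # per-compound enrichment at time t
w  = np.array([0.50, 0.30, 0.20])         # relative abundances (weights)

r   = biosynthesis_rate(F0, Ft, FL, a, t) # 1/day, per compound
mu  = r                                   # upper bound: mu <= r
T_G = np.log(2) / mu                      # days, lower bound on T_G

weighted_r = np.sum(r * w) / np.sum(w)    # assemblage-level aggregate
print("r (1/day):", np.round(r, 5))
print("T_G lower bound (days):", np.round(T_G, 0))
print("abundance-weighted mean r:", round(weighted_r, 5))
```

With these invented inputs the compound-specific generation-time bounds span roughly hundreds to thousands of days, the order of magnitude typically discussed for slow-growing soil communities.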
To characterize the microbial community composition of each soil type before SIP incubation, DNA was extracted from soil subsamples in triplicate, with a negative control, using the DNeasy PowerSoil DNA isolation kit (Qiagen) according to the manufacturer's instructions with one minor modification: samples were heated with Solution C1 for 10 min at 65 °C in a dry heat block prior to bead beating. Extracted DNA samples were amplified in duplicate using Platinum II Hot-Start PCR Master Mix (Thermo Fisher Scientific) and the 16S rRNA gene primers 515F and 806R with Illumina sequencing adapters and unique 12-bp barcodes. The PCR program was 94 °C for 2 min followed by 35 cycles of 94 °C (15 s), 60 °C (15 s), and 68 °C (1 min), with a final extension at 72 °C for 10 min. Amplification was verified via gel electrophoresis. Amplicons were cleaned and normalized with the SequalPrep Normalization Plate (Thermo Fisher Scientific) following the manufacturer's instructions and then pooled together. Sequencing was performed on an Illumina MiSeq using a v2 300-cycle kit with paired-end reads at the University of Colorado BioFrontiers Institute Next-Gen Sequencing Core Facility. To prepare samples for analysis with the DADA2 (version 1.10.1) bioinformatic pipeline ( ), reads were demultiplexed, and adapters and primers were removed using standard settings for cutadapt (version 1.8.1, Martin 2011). We used standard filtering parameters with slight modifications for 2 × 150 bp chemistry: forward reads were not trimmed and reverse reads were trimmed (truncLen) to 140 base pairs. In addition, we truncated reads at the first nucleotide with a quality score (truncQ) below 11 and set a maximum allowed error rate (maxEE) of 1. These filtering parameters resulted in a mean of 95.7% of reads retained, which was visually assessed with quality profiles for each sample. Reads were dereplicated, paired ends were merged, amplicon sequence variants (ASVs) were assigned, and chimeras were removed (98.23% of reads were not chimeric). Finally, taxonomy was assigned to each ASV against the SILVA (v132) reference database ( ). We removed all chloroplast, mitochondria, and eukaryotic reads from the ASV table, which resulted in an average of 36,029 reads per sample (range 27,074 to 58,528 reads), with the ASV table subsequently rarefied to 27,000 reads per sample. Blank samples had far fewer reads than actual samples (a mean of 193 reads compared to 35,455 reads per sample), and the four genera detected in blanks ( Thermus , Geobacillus , Deinococcus , and Pseudomonas ) were not consistently detected and were below the 1% relative abundance threshold for inclusion in sample analyses. Taxonomic composition of the samples was compared across soil types ( SI Appendix , Fig. S3 ). To infer relationships between lipid profiles observed across our SIP incubations and high-level taxonomy, we queried the Bacterial Metadiversity Database (BacDive) ( ) for all available fatty acid profiles using the BacDive API client implemented by the BacDiveR package ( ). We generated a table of 4,959 fatty acid profiles indexed by the taxonomy reported in the database and used principal component analysis to generate SI Appendix , Fig. S2 , grouped at the phylum level. We appended 24 manually curated fungal fatty acid profiles to this dataset, and the combined table is available as Dataset S2 . We examined relationships between fatty acid profiles at the phylum level ( SI Appendix , Fig. S1 ) and conducted a principal component analysis of fatty acid composition and taxonomy ( SI Appendix , Fig. S2 ). To compare LH-SIP-measured rates of growth to other methods, we surveyed 26 previously reported estimates of microbial growth and compared them to the abundance-weighted mean estimates of our study ( Datasets S3 and S4 ). We collected specific passages, sentences, and tables and manually digitized the reported values, noting the specific mention of microbial growth. Studies use various terminology and units, including growth rate (day −1 ), generation time (days), turnover rate (day −1 ), turnover time (days), and doubling time (days). Turnover time was converted to turnover rate by taking the reciprocal value. We include both "growth rates" and "turnover rates" because the distinction between the two is that turnover explicitly assumes a steady state of biomass (loss rates equal production rates). Generation time/doubling time estimates were converted to growth rates as described in this study. We plot the mean value from each study ( ), either reported in the manuscript or calculated from the upper and lower bounds reported in the manuscript.
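The conversions used to harmonize these literature values are simple reciprocals and the ln(2) relation defined earlier; a minimal sketch with invented example values:

```python
# Sketch of the unit harmonization for the literature survey: converting
# reported turnover times and generation/doubling times into rates (1/day).
import math

def turnover_time_to_rate(turnover_days: float) -> float:
    """Turnover rate (1/day) is the reciprocal of turnover time (days)."""
    return 1.0 / turnover_days

def generation_time_to_growth_rate(T_G_days: float) -> float:
    """mu = ln(2) / T_G, matching the definition used in this study."""
    return math.log(2) / T_G_days

print(round(turnover_time_to_rate(120.0), 4))          # 0.0083 per day
print(round(generation_time_to_growth_rate(90.0), 4))  # ~0.0077 per day
```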
Soils were analyzed at the Colorado State University Soil, Water, and Plant Testing Laboratory for routine determination of soil characteristics. In short, a KCl extract was used to quantify soil nitrate ( ). An AB-DTPA extract was used to quantify soil P, Zn, Fe, Mn, Cu, and S ( ). Organic matter percentages were calculated by determining the weight loss of samples after ignition. These data are available in Dataset S1 . Supplementary materials: Appendix 01 (PDF); Datasets S01–S04 (XLSX).
mEYEstro software: an automatic tool for standardized refractive surgery outcomes reporting
501045d3-80df-4f64-bb37-72bb0382703f
10120175
Ophthalmology[mh]
The standardization of medical outcome reporting simplifies comparisons between clinical studies and enhances reproducibility . Waring proposed the first set of refractive surgery outcomes reporting standards in 1992, incorporating six standard graphs describing accuracy, efficacy, safety, and stability of surgical procedures . A new updated set of nine standard graphs was added to cover astigmatism outcomes . A similar set of guidelines was recently published for lens-based refractive surgery . In the Journal of Refractive Surgery (JRS) , Journal of Cataract and Refractive Surgery (JCRS) , and Cornea , these standard graphs are currently required with each submission assessing post-operative outcomes. Additional journals, including Ophthalmology, also recommend using these standard graphs as part of their author guidelines . By following these specifications, results from specific surgical techniques, studies, case reports, or case series are standardized and easily comparable within and between studies . Refractive surgery standard graphs can be made by purchasing web-based or standalone software designed for refractive surgery outcomes analysis or by downloading free macro-enabled Microsoft Excel spreadsheets . Macro-enabled spreadsheets are difficult to use because they require manual data importation, manual formatting, and manual adjustments, which are time-consuming and prone to user error. More importantly, they do not allow for automated simultaneous analyses of two comparative groups, nor for performing automated "paired" and "unpaired" statistical analyses. Consequently, specialized freeware for the rapid and automated production of all standard graphs remains unavailable, limiting their use. In this context, we introduce mEYEstro, a free standalone software program that automatically performs statistical analysis and produces standardized refractive surgery graphs. By providing high-definition standard graphs, mEYEstro can assist clinicians and researchers in understanding clinical outcomes and presenting them in accordance with current peer-reviewed journal standards for reproducible research in corneal and intra-ocular refractive surgery. Software implementation and system requirements mEYEstro is programmed and compiled in MATLAB R2023a (MathWorks Inc., Natick, MA, USA) using the MATLAB runtime compiler (MathWorks Inc.). mEYEstro is therefore an executable file (*.exe) that can be run as an independent desktop application. mEYEstro requires the MATLAB runtime compiler (MRC) to be correctly installed on the computer; the MRC installs automatically with the mEYEstro installer. mEYEstro has been tested on Windows 10 Home and Professional with a 64-bit operating system, at both 1920 × 1080 and 3840 × 2160 screen resolutions. mEYEstro and the demonstration trial datasets are available to download from https://www.lasikmd.com/media/meyestro . A tutorial video is available at this link ( https://www.youtube.com/watch?v=NFlRRHx6ZaI ) and a tutorial guideline in Supplementary File . Usage mEYEstro can be used to automate production of all of the standard refractive surgery graphs, as recommended by various ophthalmology journals . The tool was developed specifically for academic research and teaching purposes but can also be used by surgeons looking to understand and improve their clinical outcomes. mEYEstro can be used to examine the visual and refractive outcomes of any corneal or intraocular refractive procedure.
The corneal procedures include LASIK, PRK, and SMILE, as well as collagen crosslinking, incisional keratotomy, intracorneal ring segments, LASEK, etc. The lens-based procedures include cataract surgery, refractive lens exchange, phakic IOL, etc. mEYEstro can also be utilized to study outcomes of procedures used to treat the various refractive surgery complications that exist today, or any other surgical procedure involving the eye . The use of mEYEstro is completely free provided that the user cites the current manuscript when using mEYEstro results in publications, presentations, or other public communications. Input data format To automatically generate the figures, mEYEstro reads data files in Microsoft Excel format (e.g., Datafile.xlsx). Excel was used due to its widespread use and simplicity. There are 20 columns, including 15 that are mandatory for proper mEYEstro functioning. The first five columns are 1) preoperative refraction sphere, 2) preoperative refraction cylinder, 3) preoperative refraction axis, 4) preoperative refraction vertex distance, and 5) preoperative corrected distance visual acuity (CDVA). The next four columns are 6) intended postoperative refraction sphere target, 7) intended postoperative refraction cylinder target, 8) intended postoperative refraction axis target, and 9) intended postoperative refraction vertex distance. If the intended postoperative refraction is plano, columns 6, 7, and 8 should be reported as 0, 0, and 0, respectively. The next six columns are 10) postoperative refraction sphere, 11) postoperative refraction cylinder, 12) postoperative refraction axis, 13) postoperative refraction vertex distance, 14) postoperative CDVA, and 15) postoperative uncorrected distance visual acuity (UDVA). The last five columns (columns 16 to 20) are optional and allow the user to report the postoperative spherical equivalent (SEQ) at up to five different time points to generate a standard stability graph. Refraction data must be provided in decimal format (e.g., -1.50, 0.75) and using the negative cylinder (-ve) nomenclature. The negative cylinder notation was chosen since it is by far the most widespread notation used among refractive surgeons. For calculations, mEYEstro automatically converts the negative cylinder (-ve) to positive (+ve) notation. The UDVA and CDVA data must be provided as the 20/XX Snellen denominator (e.g., 20-1, 15, 20+2, 25, 30-1). An example of a representative mEYEstro data file is presented in Supplementary File . For users that use LogMAR notation in their charting, a LogMAR to Snellen denominator automatic conversion table is included in Supplementary File . This conversion table can be used as needed to automatically convert LogMAR values to 20/XX Snellen denominator values; the converted values can simply be pasted into a mEYEstro data file. Users can report their refraction data at any vertex distance (12 mm, 10 mm, 0 mm, etc.). mEYEstro will automatically convert the refractive astigmatism, generally measured at a vertex distance of 12 mm, to the corneal plane (0 mm). Data exclusion is at the user's discretion prior to data importation. When entering data in the mEYEstro datafile, if Excel automatically converts a Snellen denominator like 25-2 to a date ("25-Feb"), type '25-2 instead or set the column number format to "Text".
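As an illustration of this layout, the sketch below assembles a two-eye example file with pandas. The header names are ours (hypothetical; the text implies mEYEstro reads columns by position), and acuity entries are kept as text to avoid Excel's date coercion:

```python
# Sketch of building a 20-column mEYEstro input file, following the column
# order described above. Columns 16-20 (postop SEQ time points) are optional.
import pandas as pd

columns = [
    "preop_sphere", "preop_cyl", "preop_axis", "preop_vertex_mm", "preop_CDVA",
    "target_sphere", "target_cyl", "target_axis", "target_vertex_mm",
    "postop_sphere", "postop_cyl", "postop_axis", "postop_vertex_mm",
    "postop_CDVA", "postop_UDVA",
    "SEQ_t1", "SEQ_t2", "SEQ_t3", "SEQ_t4", "SEQ_t5",
]

rows = [
    # Negative-cylinder notation; acuities as Snellen 20/XX denominators
    # stored as strings so Excel does not coerce entries like "25-2" to dates.
    [-1.50, -0.75, 90, 12, "20", 0, 0, 0, 12,
      0.00, -0.25, 85, 12, "20", "20-1", None, None, None, None, None],
    [-3.25, -1.00, 10, 12, "25", 0, 0, 0, 12,
     -0.25, -0.50, 15, 12, "20+2", "25", None, None, None, None, None],
]

df = pd.DataFrame(rows, columns=columns)
df.to_excel("GroupA.xlsx", index=False)  # one file per group
```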
Methods and standard reporting The mEYEstro software adheres to terminology, calculations, and graphical representations originally described by Waring and Reinstein, as well as editorials by Reinstein et al. . All vectorial analyses adhere to terminology, calculations, and graphical representations originally described by Alpins . The efficacy index is calculated as the ratio of postoperative UDVA (converted to decimal format) to the mean preoperative CDVA (converted to decimal format). The safety index is the ratio of postoperative CDVA (converted to decimal format) to mean preoperative CDVA (converted to decimal format). The SEQ was calculated as the sphere power plus half of the cylinder power. The defocus equivalent was calculated as the absolute value of the SEQ plus half the absolute value of the cylinder. Negative cylinder (-ve) to positive notation (+ve) conversion and vertex distance conversions of refraction data adhere to the methodology described by Alpins . Statistical methodologies are presented in the statistical analyses reporting section of the current paper. For additional methodological details, please contact the corresponding author. Program workflow The flow chart of the mEYEstro workflow is shown in Fig. . mEYEstro is entirely controlled via a few simple steps, each triggered as the user progresses through the program workflow (Fig. ). Upon starting the application, the user must choose the type of refractive surgery procedure (LVC, RLE, ICL, CAT) (Fig. A), the study design (single group, unpaired groups, paired groups) (Fig. B), the name of the group(s), the color of the graphs, and the analysis parameters (Snellen lines to display on the UDVA/CDVA graphs, LogMAR threshold for each Snellen optotype, efficacy & safety index levels, etc.) (Fig. C). If the user wants to include a stability graph, additional choices are presented (number of time points, selection of time points to compare, etc.). Finally, the user is invited to select the Excel data file for each group (Fig. D). The selected graphs are then generated and automatically saved in a folder as high-resolution 400 dpi TIFF images (Fig. E). These individual images are ideal for PowerPoint presentations and scientific articles. In addition to the 400 dpi TIFF images, the one-page figure with all 10 standard graphs (Fig. F) is exported as an ultra-high-definition 1200 dpi TIFF image for journals with higher image quality criteria, such as the Journal of Cataract & Refractive Surgery. Efficacy reporting Efficacy analyses include the preoperative and postoperative cumulative Snellen uncorrected (UDVA) and corrected visual acuity (CDVA) graph (Panel A in Figs. , and 4) and the difference between postop UDVA and preop CDVA graph (Panel B in Figs. , and ). These two graphs allow the user to visualize and report standard visual outcomes. Panel A also includes the averages (± standard deviations) of the preoperative and postoperative UDVA and CDVA in LogMAR values. The Panel B graph also reports the average efficacy index. The number of eyes per group is also displayed in Panel B. If two groups are analyzed, the p-value and effect size between groups are also displayed. For cataract surgery, the postoperative UDVA is compared to postoperative CDVA instead of preoperative CDVA, in agreement with current journal standards.
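The scalar quantities defined above are straightforward to compute; the following sketch is our own implementation of the stated definitions, not mEYEstro's source code, and the vertex formula is the standard optical one (assumed, not mEYEstro-specific):

```python
# Sketch of the scalar calculations defined above: spherical equivalent,
# defocus equivalent, efficacy and safety indices, and vertex correction.

def seq(sphere: float, cyl: float) -> float:
    """Spherical equivalent: sphere plus half the cylinder (diopters)."""
    return sphere + cyl / 2.0

def defocus_equivalent(sphere: float, cyl: float) -> float:
    """|SEQ| + |cyl|/2, per the definition above."""
    return abs(seq(sphere, cyl)) + abs(cyl) / 2.0

def efficacy_index(postop_udva_dec: float, mean_preop_cdva_dec: float) -> float:
    """Postop UDVA / mean preop CDVA, both in decimal acuity."""
    return postop_udva_dec / mean_preop_cdva_dec

def safety_index(postop_cdva_dec: float, mean_preop_cdva_dec: float) -> float:
    """Postop CDVA / mean preop CDVA, both in decimal acuity."""
    return postop_cdva_dec / mean_preop_cdva_dec

def to_corneal_plane(power_d: float, vertex_mm: float = 12.0) -> float:
    """Propagate a spectacle-plane power to the cornea: F' = F / (1 - d*F),
    with the vertex distance d expressed in meters."""
    d_m = vertex_mm / 1000.0
    return power_d / (1.0 - d_m * power_d)

print(seq(-1.50, -0.75))                  # -1.875 D
print(defocus_equivalent(-1.50, -0.75))   # 2.25 D
print(round(to_corneal_plane(-4.00), 3))  # ~-3.817 D at the corneal plane
```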
Safety reporting Safety analyses include the change in lines of CDVA (Panel C in Figs. , and ). This graph allows the user to visualize procedure safety in terms of corrected visual acuity line gains and losses from preop to postop. The graph also reports the average safety index. The number of eyes per group is also displayed in Panel C. If two groups are analyzed, the p-value and effect size between groups are also displayed. Spherical equivalent accuracy reporting SEQ accuracy analyses include the accuracy of SEQ to intended target histogram (Panel D in Figs. , and ) and the achieved SEQ vs attempted SEQ scattergram (Panel E in Figs. , and ). These two graphs allow the user to visualize accuracy outcomes. Panel D displays the percentage of eyes within 0.25, 0.50, 0.75, and 1.00 D of the intended target, as well as the average postop SEQ relative to the intended target. The number of eyes per group is also displayed in Panel D. If two groups are analyzed, the p-value and the effect size between groups are also displayed. Panel E also displays the linear regression equation, the R 2 , the average attempted SEQ, and the range of attempted SEQ. Stability reporting For longitudinal studies where stability over time is an important part of the analyses, a standard SEQ stability graph is required. This graph shows the preoperative SEQ and the postoperative SEQ at up to 5 time points for each group. For example, the user can provide the 1-, 3-, 6-, 12-, and 24-month SEQ data in the last 5 columns of the data file and mEYEstro will generate the graph (Panel F in Fig. ) from the provided data, automatically calculating the mean SEQ at each time point. mEYEstro will also calculate the percentage of eyes with a SEQ change greater than ± 0.50 D between two selected time points, for example, between 3 months and 24 months postop. Since not all research questions require a longitudinal analysis, the stability analyses are optional and the user may leave the last five columns in the data file blank. The number of eyes at each time point for each group is also displayed at the bottom of Panel F. Defocus equivalent accuracy reporting When the user does not include a stability graph, a DEQ accuracy graph will automatically be included instead. Defocus equivalent (DEQ) accuracy analyses include the postoperative DEQ histogram (Panel F in Figs. and ). By default, this graph shows the percentage of eyes with a DEQ within 0.25, 0.50, 0.75, 1.00, and 2.00 D. The average postoperative DEQ and the number of eyes per group are also displayed in Panel F. If two groups are analyzed, the p-value and the effect size between groups are also displayed. Astigmatism accuracy and vector reporting Standard astigmatism analyses include the postoperative refractive astigmatism graph (Panel G in Figs. , and ), the target-induced astigmatism (TIA) vector vs surgically-induced astigmatism (SIA) vector scattergram (Panel H in Figs. , and ), the Correction Index histogram (Panel I in Figs. , and ), and the Angle of Error histogram (Panel J in Figs. , and ). Panel G displays the percentage of eyes within 0.50, 0.75, and 1.00 D of plano postoperative astigmatism, as well as the average postop refractive astigmatism. Panel H also displays the linear regression equation, the R 2 , the average TIA and SIA, and the range of attempted SEQ. The number of eyes per group is also displayed in Panels G, H, I, and J. If two groups are analyzed, the p-value and the effect size between groups are also displayed. For more detailed formulas and calculations, the interested reader can consult previous literature .
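As a hedged illustration of the Panel H–J quantities, the sketch below computes TIA, SIA, the Correction Index, and the Angle of Error for one invented eye using the standard doubled-angle representation of astigmatism; it is our own sketch, and the Alpins literature cited above remains the authoritative reference:

```python
# Sketch of Alpins-style vector quantities via doubled-angle vectors.
import math

def to_vector(mag: float, axis_deg: float) -> tuple[float, float]:
    """Represent an astigmatism magnitude/axis as a doubled-angle vector."""
    theta = math.radians(2.0 * axis_deg)
    return (mag * math.cos(theta), mag * math.sin(theta))

def magnitude(v: tuple[float, float]) -> float:
    return math.hypot(v[0], v[1])

def doubled_angle_deg(v: tuple[float, float]) -> float:
    return math.degrees(math.atan2(v[1], v[0]))

# Hypothetical eye: preop 2.00 D @ 90, target 0 D, achieved 0.50 D @ 100
preop  = to_vector(2.00, 90)
target = to_vector(0.00, 0)
postop = to_vector(0.50, 100)

# TIA: astigmatic change the treatment intended; SIA: change actually induced
TIA = (target[0] - preop[0], target[1] - preop[1])
SIA = (postop[0] - preop[0], postop[1] - preop[1])

correction_index = magnitude(SIA) / magnitude(TIA)  # CI < 1: undercorrection

# Angle of Error: half the doubled-angle difference, wrapped to (-90, 90]
d = doubled_angle_deg(SIA) - doubled_angle_deg(TIA)
angle_of_error = ((d + 180.0) % 360.0 - 180.0) / 2.0

print(f"TIA = {magnitude(TIA):.2f} D, SIA = {magnitude(SIA):.2f} D")
print(f"CI = {correction_index:.2f}, Angle of Error = {angle_of_error:.1f} deg")
```

For this invented eye the sketch reports a Correction Index of about 0.77 (undercorrection) and an Angle of Error near -3 degrees.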
For advanced standard vector graphs, we have described and provided the AstigMATIC tool, available at www.lasikmd.com/media/astigmatic . Statistical analyses reporting When comparing two groups, the mEYEstro software automatically selects and uses the appropriate statistical hypothesis tests. The Kolmogorov–Smirnov test is first used to test whether the preoperative and postoperative variables are normally distributed. Unpaired sample t-tests and non-parametric Mann–Whitney U-tests are then used where applicable to compare outcomes between two independent groups. Paired sample t-tests or non-parametric Wilcoxon signed-rank tests are used where applicable to compare two paired groups. Statistical significance is set at p < 0.05 and all data are reported as means ± standard deviations (SD). Effect size (ES), expressed as Cohen's d, is also automatically calculated to better quantify the differences between groups. The effect size is an important indicator of clinical significance. For interpretation, we recommend the user follow the Cohen criteria, where d < 0.20 is considered non-clinically relevant. For greater statistical validity, the user should include the outcome from one eye per patient in the input data file, such as the dominant eye or a randomly selected eye. mEYEstro currently has no feature for comparisons of 3 or more groups using ANOVA.
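The described test-selection behavior can be approximated in a few lines of SciPy; this is our sketch of the logic, not mEYEstro's source, and the Cohen's d convention (pooled SD) is an assumption:

```python
# Sketch of the test-selection logic described above: normality screen,
# then parametric or non-parametric comparison, plus Cohen's d.
import numpy as np
from scipy import stats

def compare_groups(a, b, paired: bool = False, alpha: float = 0.05):
    a, b = np.asarray(a, float), np.asarray(b, float)
    # Normality screen with Kolmogorov-Smirnov on z-scored data
    normal = all(stats.kstest(stats.zscore(x), "norm").pvalue > alpha
                 for x in (a, b))
    if paired:
        test = stats.ttest_rel(a, b) if normal else stats.wilcoxon(a, b)
    else:
        test = stats.ttest_ind(a, b) if normal else stats.mannwhitneyu(a, b)
    # Cohen's d with a pooled SD (one common convention)
    pooled = np.sqrt(((a.size - 1) * a.var(ddof=1)
                      + (b.size - 1) * b.var(ddof=1))
                     / (a.size + b.size - 2))
    d = (a.mean() - b.mean()) / pooled
    return test.pvalue, d

rng = np.random.default_rng(1)
g1 = rng.normal(-0.10, 0.25, 60)   # e.g., postop SEQ error, group A
g2 = rng.normal(-0.20, 0.25, 60)   # group B
p, d = compare_groups(g1, g2)
print(f"p = {p:.3f}, Cohen's d = {d:.2f}")
```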
A total of 3 simulated refractive surgery datasets were produced to test and demonstrate the capabilities and all features of mEYEstro. The first simulated trial dataset (Trial 1) includes two Excel files (Group A and Group B) and investigates the outcomes of a laser vision correction contralateral eye study comparing two excimer lasers in hyperopic eyes with astigmatism. The second dataset (Trial 2) comprised simulated data from a single group in order to investigate the outcomes of a toric phakic IOL (PIOL) in hyperopic eyes with moderate to high astigmatism. The third simulated dataset (Trial 3) included two files (Group A and Group B) in order to investigate the outcomes of two cataract surgery groups, comparing two biometers, in myopic-astigmatism eyes. In each case, mEYEstro was used to read the datasets (Excel files) and to automatically generate all the standard graphs from the provided data, as shown in Figs. , and . The interested reader can reproduce those graphs using mEYEstro on their own computer with the 3 provided trial datasets. A tutorial video is available at this link ( https://www.youtube.com/watch?v=NFlRRHx6ZaI ) and a tutorial guideline in Supplementary File . One limitation of mEYEstro is that the user cannot modify or fully customize mEYEstro graphs and features. We chose the executable (*.exe) format to prevent users from modifying or copying the source code, which could otherwise lead to a lack of standardization over time. We elected to fix the format of the mEYEstro graphs so that they would be in accordance with current journal standards , which will facilitate comparisons between studies. mEYEstro is currently limited to the 11 standard refractive surgery graphs discussed in this article. While these figures cover the main outcome measures for refractive surgery, supplementary vectorial astigmatism analyses are also recommended . For advanced vector analysis graphs, the user can download and use our free AstigMATIC tool (available at www.lasikmd.com/media/astigmatic ) . Future additions and improvements to the mEYEstro software mEYEstro will be updated annually or as needed to avoid obsolescence. Future updates to mEYEstro may include: 1) Advanced enhancement analyses. Note that mEYEstro can already be used to report enhancement outcomes using standard graphs, but it does not provide additional non-standardized enhancement analyses at present. Pre-enhancement data can be entered as preoperative refractions and post-enhancement data as postoperative refractions, and mEYEstro will display graphs of enhancement outcomes. 2) Advanced nomogram analyses. In the interim, there is currently a scattergram for attempted versus achieved SEQ correction (Panel E), and another for attempted versus achieved astigmatism correction (Panel H), both of which employ linear regression coefficients. Surgeons can use these to make a basic nomogram and improve their surgical outcomes. 3) Direct LogMAR data entry. In the meantime, our LogMAR to Snellen converter can be used to enter data in LogMAR. 4) Snellen data entry in metric format. For now, online tools and tables can help users convert visual acuity in any format, including Snellen in metric units. 5) Analyses of three or more groups with automated ANOVA statistics.
In the interim, when comparing 3 or more groups, the interested user can generate single-group outcome graphs individually for each group. They can then use their own calculated averages, standard deviations, and sample sizes to derive their own hypothesis tests, including ANOVAs. Users who have additional suggestions are encouraged to contact us. Significance of the mEYEstro software Refractive surgery analyses are extensive, and subtle nuances cannot be fully captured in a single graphical display . The mEYEstro automated outcomes software provides a simple approach whereby all graphs are used to answer distinct questions about the efficacy, safety, accuracy, and stability of the procedure. Such analysis enables the cause of an inaccurate surgical correction to be understood and the effectiveness of a treatment to be fully evaluated . Many authors, research presenters, and clinicians have been unable to perform accurate analyses in their studies because free specialized software for standardized refractive surgery graphs and statistical analyses has been unavailable. We therefore developed mEYEstro to meet their needs. It is fully automated, easy-to-use freeware designed to analyze outcomes of any refractive procedure, creating output as per the latest standards prescribed by JRS , JCRS , and Cornea . Note that there are alternative paid software options, including ASSORT, SurgiVision DataLink, Datagraph-med, and IBRA, which have nomograms and surgical planning tools. Future studies might compare those alternative outcomes reporting tools.
With mEYEstro, we provide a freely downloadable tool for automated detailed reporting of refractive surgery outcomes that can be used by clinicians, surgeons, and researchers to easily display standardized graphs for publication, presentation, or clinical knowledge. Additional files 1–3 accompany this article.
Outcomes of patient education in adult oncologic patients receiving oral anticancer agents: a systematic review protocol
055d402e-08b3-42cd-be26-059e8381851a
10120216
Patient Education as Topic[mh]
During the past several years, a large variety of antineoplastic agents for oral administration has become available and led to a shift in cancer treatment from the hospital to the home setting . In a survey of patients' preferences for oral anticancer agents (OAA) , the majority (89.32%, n = 90) declared that they preferred oral to intravenous chemotherapy. The primary reasons given were convenience, concerns about intravenous access, and perceived environmental control . It has also been reported that patients may have misconceptions about OAA with respect to their side effects and efficacy . In this evolving scenario, special care must be given to oncologic patients, who often undergo a multi-faceted and multi-professional treatment pathway that requires them to have timely and accurate information related to their needs. While at first glance OAA therapies appear to provide only benefits , a qualitative study found that patients expressed their need for educational interventions, raised safety issues related to OAA, and were concerned about both identifying and managing their side effects . An interplay between safety and patient education (PE) has been raised, and it is acknowledged that OAA are best prescribed in combination with structured educational efforts . PE, which is defined as "a process of assisting consumers of health care to learn how to incorporate health related behaviors into everyday life with the purpose of achieving the goal of optimal health" , has long been recognized as "an essential component of effective healthcare delivery" in which registered nurses (RNs) are seen as "patient teachers" with the support of patients who are recognized as "equal partners" . Hence, oncology RNs are required to find effective ways to educate patients about cancer diagnosis, treatment, and symptom management . Objectives Both patients and providers have reported the importance of PE for those taking OAA . O'Neill and colleagues also highlighted PE's crucial role, particularly when OAA are taken outside of the hospital in a community setting . Although PE has been documented as an important area of intervention that enhances adherence to OAA , to the best of our knowledge, there are no systematic reviews (SRs) of the literature to date that have summarized all of the current evidence about the outcomes of PE for patients receiving OAA. To fill this gap in the literature, the underlying research question, formulated following the Population (P), Intervention (I), Comparison (C), and Outcomes (O) framework, is: what are the documented outcomes (O) of patient education interventions (I) for adult patients with solid/oncohematological cancer who receive OAA (P) compared to no structured PE interventions (C)? The secondary objectives will be to describe systematically the (1) content, (2) methodologies, (3) setting, (4) timing/duration, and (5) healthcare professionals involved in documented PE interventions that target patients who take OAA.
Research design and methodology We will perform a systematic review of the literature (SR) that will be guided by the standards of reporting of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) 2020 . The SR protocol is being reported in accordance with the Preferred Reporting Items for systematic reviews and Meta-Analyses for Systematic Review Protocols (PRISMA-P) 2015 . The SR was also registered prospectively in the International Prospective Register of Systematic Reviews (PROSPERO) (Registration number CRD42022341797). The complete PRISMA-P Checklist is available as Additional file . Protocol development To define the SR protocol and perform the subsequent SR, a purpose-built research team was constituted and multiple stakeholders from an Italian National Cancer Institute were involved. A total of seven RNs, two physicians, and two pharmacists were consulted, including a clinical trial nurse with extensive clinical experience with patients who are receiving OAA and PE interventions (IS), a pharmacist with experience in PE programs' management, and a pharmacist from the pharmacy's clinical desk dedicated to the OAA's distribution, counseling, and pharmacovigilance. A senior PhD researcher (LC) with experience in both quantitative/qualitative studies and SRs in the field of educational research was appointed supervisor of the research team. As shown in Fig. , the research team first developed the research question and established the SR's primary and secondary objectives in detail using the PICO framework and then defined the eligibility criteria and search strategy accordingly. The processes of retrieving the search results, screening the title/abstract and full-text, extracting and synthesizing the data, and appraising their quality have also been planned and are described in detail in the SR protocol. Eligibility criteria Study types, time restrictions, and language Randomized controlled trials (RCTs), clinical trials (CT), prospective and retrospective cohort studies, case-control studies, and cross-sectional studies will be included, while qualitative studies, editorials, letters, reports, commentaries, books, dissertations, conference papers, and proceedings will be excluded. No time restrictions will be applied to summarize the entire existing literature and provide a comprehensive review . Only studies written in English will be included. Population and setting Adult patients (≥ 18 years old) of both genders from any country in hospital, outpatient, and home settings who are diagnosed with solid or oncohematological cancer and prescribed OAA will be included, while studies of pediatric populations (< 18 years old) will be excluded.
Intervention and comparison All types of PE interventions that provide contents related to OAA, including, but not limited to, basic information about the drug, the therapeutic schedule, managing collateral effects, administration methods, reporting adverse events, and monitoring, will constitute the intervention group, and will be compared to those with no structured PE intervention who receive only standard information. Outcomes The research team (FF, IS, LC) and the stakeholders held multiple meetings to define the expected outcomes at this stage. It is hypothesized that PE outcomes may include the following: number of reports of adverse events, toxicity related to OAA, hospital/emergency care admissions, the number of nurse/physician/pharmacist consults, adherence to follow-ups, quality of life (QoL), use of alternative pharmacological and non-pharmacological therapeutic approaches, caregiver burden, and adherence to the medication prescribed. Other outcomes that may emerge during the literature analysis will be considered as well. A complete overview of the eligibility criteria and their rationale is provided in Table . Information sources A systematic search of the literature available will be conducted in the following electronic databases: PubMed/Medical Literature Analysis and Retrieval System Online (MEDLINE), Cumulative Index to Nursing and Allied Health Literature (CINAHL), Excerpta Medica Database (Embase), and Scopus. In addition, gray literature, such as non-indexed journals and professional associations' websites, will be searched. The reference lists of key publications will be taken into consideration, and two researchers (FF, LC) will screen them in-depth throughout to identify additional relevant studies not retrieved in the online searches . Search strategy After consultations among the team members (FF, IS, LC) and stakeholders, relevant keywords and medical subject headings (MeSH) terms were identified. Each keyword/MeSH term was combined using the Boolean operators AND/OR/NOT. LC and a senior librarian who was involved in both the conceptualization and preparation of the search strategy supervised the process. The complete search strategy for each database is described in Additional file . Screening The search results will be screened in two phases: (1) title/abstract screening and (2) full-text screening. This part of the SR will attempt to include only relevant studies and exclude irrelevant articles . FF will import the search results into the Rayyan AI platform , and after duplicate results have been removed, title/abstract screening will be conducted. Subsequently, full-text screening of studies that meet the inclusion criteria will be performed. Two researchers (FF, LC) will conduct the screening independently and blindly, and a Cohen's kappa score of >0.6 will be considered acceptable . In case of disagreements about article eligibility, a third researcher will be consulted (IS), and the final decision will be made during discussion until consensus is reached . Data extraction An electronic data extraction form will be implemented and piloted with at least two of the articles selected to ensure its usefulness, appropriateness, and feasibility . The entire team will discuss the results of the pilot and the data extraction form will be adjusted accordingly. Two data extractors (SC, IS) with knowledge of PE, OAA, and research methods will extract the data cooperatively .
In addition, the data extractors will be provided with training to ensure that they are familiar with the tool and to enhance the results' trustworthiness . A third researcher (FF) will be involved to resolve disagreements in the data extraction, and the final decision will be made upon discussion until consensus is reached . The following data will be extracted: (1) author(s), year of publication, and country; (2) study design and objectives; (3) setting and population characteristics (including oncologic disease and OAA drug prescribed); (4) content(s) of the PE intervention; (5) PE intervention methodologies adopted/studied (i.e., written, verbal, etc.); (6) timing and duration of the PE intervention; (7) healthcare professionals involved in the PE process; (8) comparison; and (9) outcomes explored, with measurement tools/measurement of effects. Quality appraisal Two researchers (FF, LC) will independently appraise the quality of the studies included using the Critical Appraisal Tools from The Joanna Briggs Institute , which provide thirteen checklists containing from a minimum of six to a maximum of thirteen items, respectively. For each item, "yes," "no," "unclear," or "not applicable" answers are available. No studies will be excluded during the quality appraisal phase. The quality assessment will be considered an advantage to better understand the meaning and weight of the study results. A Cohen's kappa score of >0.6 will be considered acceptable in quality appraisal as well . Data synthesis The completed electronic data extraction form will be shared among all of the research team members. Initially, the data will be aggregated by study design, setting, and main PE outcomes. Patterns and relations in the results will be discussed in multiple meetings until consensus is reached. The results will be analyzed, grouped, and assigned to categories according to the studies' similarities and differences, with particular emphasis on outcomes and (1) content, (2) methodologies, (3) setting, (4) timing/duration, and (5) healthcare professionals involved. Because of the considerable heterogeneity expected given the broad eligibility criteria in the study design, population, setting, PE interventions, and outcomes, the emerging evidence will be synthesized narratively , and the results will be reported according to the Synthesis without meta-analysis (SWiM) guidelines . A textual description of the findings, as well as tables, the vote counting technique, and figures, will be adopted when appropriate .
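As a minimal sketch of the planned agreement check at the screening and quality-appraisal stages (the include/exclude decisions below are invented, and scikit-learn's implementation is assumed merely for convenience):

```python
# Sketch: Cohen's kappa for two independent screeners; the protocol treats
# kappa > 0.6 as acceptable agreement.
from sklearn.metrics import cohen_kappa_score

reviewer_1 = ["include", "exclude", "exclude", "include", "exclude", "include"]
reviewer_2 = ["include", "exclude", "include", "include", "exclude", "include"]

kappa = cohen_kappa_score(reviewer_1, reviewer_2)
print(f"Cohen's kappa = {kappa:.2f}")  # 0.67 here, above the 0.6 threshold
```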
A total of seven RNs, two physicians, and two pharmacists were consulted, including a clinical trial nurse with extensive clinical experience with patients receiving OAA and PE interventions (IS), a pharmacist with experience in the management of PE programs, and a pharmacist from the pharmacy's clinical desk dedicated to OAA distribution, counseling, and pharmacovigilance. A senior PhD researcher (LC) with experience in both quantitative/qualitative studies and SRs in the field of educational research was appointed supervisor of the research team. As shown in Fig. , the research team first developed the research question and established the SR's primary and secondary objectives in detail using the PICO framework , and then defined the eligibility criteria and search strategy accordingly. The processes of retrieving the search results, screening titles/abstracts and full texts, extracting and synthesizing the data, and appraising their quality have also been planned and are described in detail in the SR protocol.

Study types, time restrictions, and language
Randomized controlled trials (RCTs), clinical trials (CTs), prospective and retrospective cohort studies, case-control studies, and cross-sectional studies will be included, while qualitative studies, editorials, letters, reports, commentaries, books, dissertations, conference papers, and proceedings will be excluded. No time restrictions will be applied, in order to summarize the entire existing literature and provide a comprehensive review . Only studies written in English will be included.

Population and setting
Adult patients (≥ 18 years old) of both genders from any country in hospital, outpatient, and home settings who are diagnosed with solid or oncohematological cancer and prescribed OAA will be included, while studies of pediatric populations (< 18 years old) will be excluded.
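The eligibility criteria above translate naturally into a programmatic first-pass filter over retrieved records; a minimal sketch follows (record fields and values are hypothetical, and such a filter would support, not replace, the independent human screening described in the protocol).

```python
# Minimal sketch: first-pass eligibility filter mirroring the criteria
# above (adults >= 18, included study designs, English language).
INCLUDED_DESIGNS = {
    "randomized controlled trial", "clinical trial",
    "prospective cohort", "retrospective cohort",
    "case-control", "cross-sectional",
}

def passes_first_pass(record: dict) -> bool:
    """True if a record meets the basic inclusion criteria."""
    return (
        record.get("language") == "English"
        and record.get("study_design", "").lower() in INCLUDED_DESIGNS
        and record.get("min_age", 0) >= 18
    )

records = [
    {"id": 1, "language": "English", "study_design": "Clinical trial", "min_age": 18},
    {"id": 2, "language": "Italian", "study_design": "Cross-sectional", "min_age": 18},
    {"id": 3, "language": "English", "study_design": "Qualitative", "min_age": 18},
]
kept = [r["id"] for r in records if passes_first_pass(r)]
print("Records advancing to title/abstract screening:", kept)  # [1]
```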
Relevance of the systematic review
The SR will address the PE outcomes documented for adult oncologic patients receiving OAA, as well as describe the PE interventions' contents, methodologies, settings, timing/duration, and the healthcare professionals involved. The results will therefore be relevant to policymakers and management, as they will inform policies and provide guidance in structuring PE programs for OAA patients at an institutional level. For clinical practice in an evidence-based paradigm , they will advance knowledge and awareness of PE outcomes for patients receiving OAA and potentially lead to changes in practice that ensure quality improvement with respect to safety and patient satisfaction [ , , ].

Limitations and strategies
The SR has certain limitations. Only articles written in English will be included, as the review is intended to seek evidence that targets an international audience; this could potentially lead to selection bias. No individual search within the most relevant journals in the field is planned, although they are expected to be indexed in the broad databases that will be searched. Finally, negative or neutral results of PE interventions may not have been published, which could introduce publication bias into the SR. As a strategy to improve the SR's transparency, all important amendments to the protocol in the final SR will be reported and described in depth in an appendix that provides the rationale and a full description of the change(s).

Additional file 1. "PRISMA-P 2015 Checklist". Microsoft Word document.
Additional file 2.
“Search strategy”. Microsoft Word document.
Microbiota-mediated nitrogen fixation and microhabitat homeostasis in aerial root-mucilage
d53ef5a9-b8ac-4b6f-8221-87c874391cf7
10120241
Microbiology[mh]
Plant-associated microbes play an important role in host nutrient utilization, stress tolerance, plant health, and adaptation. Among these microbes, symbiotic and associative diazotrophs, which can fix nitrogen and directly enhance the nitrogen supply to host plants, have attracted much attention. To date, beyond the underground root/rhizosphere diazotrophs, special niches such as the rhizome, xylem, and aerial roots have attracted most of the attention [ – ]. Recently, a pioneering study showed that aerial roots of a Mexican maize cultivar exude a large amount of carbohydrate-rich mucilage, which contributes 29–82% of the plant nitrogen from atmospheric nitrogen . Analysis of the mucilage microbiota indicated that it was enriched in diazotrophic bacterial genera such as Burkholderia , Herbaspirillum , and Azospirillum , suggesting an essential role of the mucilage-microbiota system in fulfilling the nitrogen demand of the plant . While the underlying mechanisms of mucilage secretion and nitrogen fixation remain unclear, the carbohydrate-rich mucilage is recognized as a vital zone mediating maize-diazotrophic microbiota interactions . Strikingly, this nutrient-rich spot specifically enriches nitrogen-fixing bacteria rather than pathogens and environmental microbes , suggesting an underlying micro-homeostasis. However, the mechanisms underpinning this process are still unknown. It is increasingly recognized that plant specialized metabolites in root exudates play important roles in defending against pathogens and maintaining microbial homeostasis in the rhizosphere [ – ]. For instance, root-secreted coumarin can inhibit pathogens and influence the assembly of the rhizosphere microbiota . Besides host plant-derived regulators, interactions within the microbiota contribute to the establishment, stability, and resilience of host-associated microbial communities . For instance, root bacterial Variovorax and Pseudomonas species can maintain root fungal homeostasis, thereby promoting the survival of their host plant Arabidopsis thaliana . Within the root, enriched endophytic bacteria such as Chitinophagaceae and Flavobacteriaceae can form a protective layer for the host plant, functioning as a second line of defense . These "friendly" microbes play an important role in maintaining microbial homeostasis and defending against pathogens in the microenvironment. In addition, host-microbe and microbe-microbe interactions can act synergistically to maintain microbial homeostasis in plant roots; for example, Arabidopsis root bacterial communities and the plant metabolite tryptophan jointly control the soil fungal pathogen Plectosphaerella cucumerina to promote plant growth and health . In summary, the current hypothesis is that plant-associated microbial homeostasis results from both host plant-microbiota and microbiota-microbiota interactions. Identifying the key mucilage compounds and friendly microbes, as well as their protective association with the diazotrophic microbiota, is of great value for understanding the microbial homeostasis of the mucilage-microbiota system. Pink lady ( Heterotis rotundifolia ) is a fast-growing, perennial, dicotyledonous shrub. It is a high-risk invasive plant listed in the Global Compendium of Weeds . Its aerial roots vary from 0.1 to 16 cm in length and exude a large amount of mucilage before reaching the ground (Fig. a).
Efficient nitrogen uptake and utilization seem to contribute to its successful invasion, a process accompanied by early reproduction, numerous offspring, and high growth rates . Given the roles of aerial roots in nitrogen uptake stated above, the aerial root-mucilage-microbiota system may contribute nitrogen fixation that supports the plant's growth and spread. In this study, we use this dicotyledonous plant as a model to answer the following two scientific questions: (1) What are the biological function and underlying mechanisms of the aerial root-mucilage system in dicotyledonous plants? (2) As a "natural medium," how does the mucilage microhabitat maintain its function and homeostasis in the fluctuating open air?

Aerial root mucilage and underground root exudate of H. rotundifolia have distinct biochemical compositions
The creeping H. rotundifolia plants can grow over 2 m long and form an aerial root at each stem node (Fig. a). These mucilage-producing aerial roots vary from barely visible to the naked eye to over 16 cm in length. We compared the biochemical composition of the aerial root mucilage (ARM or mucilage) and underground root exudates (URE) by collecting widely targeted metabolomics data from these samples (Fig. b) . Principal-component analysis (PCA) demonstrated a clear differentiation of the metabolic profiles of ARM and URE samples (Fig. c). Indeed, 531 of the 1033 putatively annotated metabolites differed significantly between the two sample types ( P < 0.01, Table S and Fig. S ). Whereas URE tended to contain higher levels of lipids and alkaloids, ARM was richer in amino acid derivatives, nucleotide derivatives, flavonoids, and carbohydrates (Fig. d). Further carbohydrate measurement with a targeted approach confirmed that the ARM was rich in glucose, fructose, and sucrose, which were barely detectable in URE (< 0.01 mg·g −1 ; Fig. e).

Aerial root mucilage microbiota of H. rotundifolia is enriched in diazotrophic bacteria
As underscored by their biochemical differences, ARM and URE likely represent distinct ecological niches suitable for hosting different microbiota. To test this hypothesis, we collected aerial root mucilage, rhizospheric samples, and environmental soil at five separate sites in Xishuangbanna Tropical Botanical Garden (Xishuangbanna, China) and examined their prokaryotic and eukaryotic microbiota through 16S rRNA and ITS gene sequencing (Table S and Fig. S ). In support of our hypothesis, unconstrained principal coordinate analysis (PCoA) based on the Bray–Curtis metric revealed that both the bacterial and fungal composition of mucilage were distinct from those of rhizospheric and bulk soil, which were similar to each other (Fig. a, bacteria, and S a, fungi). Notably, the prokaryotic taxonomic diversity of ARM was lower than that of rhizospheric and bulk soil, indicative of a specialized bacterial community in ARM (Fig. b). For fungal communities, there was a significant difference in alpha diversity between the mucilage and bulk soil niches (Fig. S b, P < 0.01). A total of 233 differentially enriched bacterial genera and 65 fungal genera were identified between mucilage and rhizosphere soil (Wilcoxon rank-sum test, P < 0.05, Table S ). Specifically, mucilage contained a higher load of Burkholderia-Caballeronia-Paraburkholderia , Herbaspirillum , and Novosphingobium , whereas the relative abundances of Bacillus , Gaiella , and Nocardioides were higher in the underground samples (Fig. c and S a-d).
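A per-genus two-group comparison of the kind reported above (Wilcoxon rank-sum, P < 0.05) can be sketched as follows; the abundance values and replicate counts are hypothetical, and a real analysis would additionally correct for multiple testing (e.g., Benjamini-Hochberg FDR).

```python
# Minimal sketch: per-genus Wilcoxon rank-sum test between mucilage and
# rhizosphere samples. Relative abundances below are hypothetical.
import numpy as np
from scipy.stats import mannwhitneyu  # Wilcoxon rank-sum for two samples

abundance = {
    # genus: (mucilage replicates, rhizosphere replicates)
    "Burkholderia": ([0.21, 0.25, 0.19, 0.23, 0.22], [0.02, 0.03, 0.01, 0.02, 0.04]),
    "Herbaspirillum": ([0.12, 0.10, 0.14, 0.11, 0.13], [0.01, 0.02, 0.01, 0.01, 0.02]),
    "Bacillus": ([0.01, 0.02, 0.01, 0.02, 0.01], [0.09, 0.08, 0.11, 0.10, 0.12]),
}

for genus, (muc, rhi) in abundance.items():
    stat, p = mannwhitneyu(muc, rhi, alternative="two-sided")
    direction = "mucilage-enriched" if np.mean(muc) > np.mean(rhi) else "soil-enriched"
    flag = "*" if p < 0.05 else " "
    print(f"{genus:16s} U={stat:5.1f} P={p:.3f} {flag} ({direction})")
```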
The top two ARM-enriched bacterial genera, Burkholderia-Caballeronia-Paraburkholderia and Herbaspirillum , include numerous diazotrophic species that have been widely used as model systems of plant-associated nitrogen-fixing microbes . Some of the single-strain cultures we obtained, identified as Klebsiella , Sphingobacterium , Cupriavidus , Acinetobacter , and Pantoea species, were able to grow on nitrogen-free medium. From these species, we were able to clone different alleles of the key nitrogen fixation gene, nifH (Table S and Fig. S a). Most of these species further demonstrated nitrogen fixation activity in 15N2-labeled experiments (Fig. S b). In addition, bacterial function annotation showed that aerial mucilage has a higher nitrogen fixation capacity than rhizosphere soil (Wilcoxon rank-sum test, P < 0.05, Fig. S e, f). These data led us to hypothesize that nitrogen fixation may be a key function of the bacterial community associated with H. rotundifolia aerial root mucilage.

Aerial root mucilage of H. rotundifolia fixes atmospheric nitrogen to support plant growth
To test whether nitrogen fixation could occur in the intact ARM microbiota, we performed 15N-labeled nitrogen gas (purity ≥ 98%, diluted to 5%, 10%, and 20% 15N2 in the bottle, V·V −1 ) feeding experiments with wild segments of H. rotundifolia stems bearing mucilage-covered aerial roots, aerial roots with the mucilage artificially removed, or with the aerial roots removed entirely (Fig. a). When fed with 20% 15N2, plants with mucilage-bearing aerial roots contained a significantly higher relative abundance of 15N in all tissues measured compared to those with aerial roots only or with no aerial roots ( P < 0.05, Fig. b-e and Table S ). To further verify that the fixed nitrogen could be utilized by the host plant, we detected enrichment of 15N in plant chlorophyll (converted to pheophytin for analysis) in mucilage-producing plants, but not in the negative controls (Fig. S ). Moreover, the contribution of atmospheric nitrogen fixation to mucilage-producing plants can be quantified from the natural 15N abundance values of nitrogen-fixing plants and non-fixing (reference) plants. Soil δ15N is more abundant than that of air; thus, nitrogen-fixing plants will exhibit reduced δ15N levels compared to reference non-fixing plants. Samples of mucilage-producing plants and reference plants growing near each other were collected. In both years, the natural 15N values of leaves and stems of the mucilage-producing samples were lower than those of plants without aerial roots (Fig. f and Table S ), indicating that the aerial root-mucilage system was able to derive a significant part of plant tissue nitrogen from atmospheric nitrogen. The percentage of nitrogen derived from the atmosphere (%Ndfa) calculated from the 15N natural abundance values ranged from 37.04 to 54.85% (Fig. f and Table S ). These results support our hypothesis that the intact aerial root-mucilage-microbiota system can facilitate the incorporation of atmospheric nitrogen into the plant. To further test whether the aerial root-mucilage system could fulfill plant nitrogen requirements and promote growth, and whether ARM-dependent nitrogen fixation is ecologically relevant for the fitness of H. rotundifolia in the greenhouse and the wild, we compared the growth of plants with aerial roots artificially excised versus sympatric intact controls in a field experiment.
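The %Ndfa values reported above are conventionally derived with the 15N natural abundance method; a minimal sketch of one common formulation follows (the B value, i.e., the δ15N of a plant fully dependent on atmospheric N2, and all delta values below are hypothetical placeholders, and the exact formulation used in this study is not given in the text).

```python
# Minimal sketch of the 15N natural abundance method for %Ndfa:
#   %Ndfa = 100 * (d15N_reference - d15N_fixing) / (d15N_reference - B)
# where B is the d15N of a plant fully dependent on atmospheric N2.

def percent_ndfa(d15n_reference: float, d15n_fixing: float, b_value: float) -> float:
    """Percent of plant nitrogen derived from the atmosphere."""
    return 100.0 * (d15n_reference - d15n_fixing) / (d15n_reference - b_value)

# hypothetical leaf values: reference (non-fixing) plant vs. mucilage plant
print(f"%Ndfa = {percent_ndfa(d15n_reference=4.0, d15n_fixing=2.0, b_value=-1.0):.1f}%")
# -> %Ndfa = 40.0%
```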
Four months after root excision (with no aerial roots of any plant allowed to enter the soil), plants with aerial roots and mucilage had over 40% more total nitrogen content and 10% greater dry mass than those without, suggesting that the aerial roots, along with the microbiota hosted in their associated mucilage, play non-trivial roles in fixing nitrogen and supporting the growth of H. rotundifolia (Table S ).

Tissue-specific gene expression in mucilage-secreting aerial roots could facilitate efficient carbon–nitrogen exchange with the mucilage microbiota
Nitrogen fixation is a complex symbiotic relationship that requires extensive collaboration between the host plant and the diazotrophs at the molecular level. To investigate the molecular interaction between H. rotundifolia and its ARM microbiota, we first sequenced the H. rotundifolia genome with a combination of 61.86 Gb of PacBio CLR reads and 20.1 Gb of Hi-C reads to obtain a chromosome-scale reference genome assembly, which was subsequently used to map short-read RNA-seq data for transcriptome-wide gene expression profiling. The reference assembly was estimated to be 171.44 Mb, similar to our previous estimate (180 Mb, Fig. S a). We annotated 29,574 putative gene models and 2434 noncoding RNAs (Table S ). We then obtained RNA-seq data from aerial roots with or without mucilage and from underground roots to identify transcriptome-wide gene expression differences among these tissues (Fig. a). Functional enrichment analysis based on GO and KEGG annotations revealed that genes related to photosynthesis and phenylpropanoid metabolism were preferentially expressed in aerial roots, and these differences in gene expression were more pronounced when comparing mucilage-bearing aerial roots with underground roots (Table S ; Fig. S d-e). Stable plant-diazotroph symbiosis is built upon extensive carbon flow from the host plant to the microbes, accompanied by nitrogen flow in reverse . Hence, we specifically examined the expression of carbohydrate and accessible-nitrogen transporter genes in the four tissues. As a result, expression of both putative sugar transporters (e.g., STP, SWEET, INV ) and nitrate/nitrite transporters and assimilation genes (e.g., AMT , NRT, and GS ) was significantly higher in mucilage-bearing aerial roots than in those without mucilage, indicative of more active carbon–nitrogen flow when the mucilage is present (Fig. b). Interestingly, the putative nitrate and nitrite transporters highly expressed in mucilage-bearing aerial roots were also highly expressed in underground roots (with the notable exception of NRT2.1 ), whereas the putative carbohydrate transporters were expressed at higher levels in mucilage-bearing aerial roots than in underground roots. As the first sequenced genome of this genus and of a mucilage-producing dicotyledon, the H. rotundifolia genome provides a molecular basis for further studies of the mechanisms of mucilage exudation and nitrogen fixation.
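A comparison like the transporter expression analysis above can be sketched with a small FPKM table; the gene identifiers and expression values here are hypothetical placeholders, not the study's data.

```python
# Minimal sketch: log2 fold-change of candidate transporter genes between
# mucilage-bearing aerial roots (ARM+) and aerial roots without mucilage
# (ARM-). FPKM values and gene IDs are hypothetical.
import math

fpkm = {
    # gene: (mean FPKM in ARM+ roots, mean FPKM in ARM- roots)
    "SWEET-like": (85.0, 6.0),
    "STP-like": (44.0, 5.5),
    "AMT-like": (31.0, 4.0),
    "NRT2.1-like": (27.0, 3.0),
}

EPS = 1e-6  # guard against division by zero for unexpressed genes
for gene, (arm_plus, arm_minus) in fpkm.items():
    log2fc = math.log2((arm_plus + EPS) / (arm_minus + EPS))
    print(f"{gene:12s} ARM+={arm_plus:6.1f} ARM-={arm_minus:6.1f} log2FC={log2fc:+.2f}")
```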
Mucilage microhabitat-dwelling Chaetomella raphigera selectively inhibits environmental but not sympatric diazotrophic microbes
Though the carbohydrate-rich mucilage of H. rotundifolia creates an ideal niche and "natural medium" for diazotrophic bacteria, it is also potentially prone to disturbance by environmental microbes that are commensal or even pathogenic to host plants. Though there is a rich literature on the microbiota-structuring function of exuded plant specialized metabolites [ , , , ], we were not able to identify any specific compound with obvious antibiotic function when screening the twenty most abundant metabolites in the mucilage at physiologically relevant concentrations (Fig. S and Table S ). An alternative explanation for the homeostatic diazotrophic microbiota is that certain members of the community selectively allow the growth of the diazotrophs but inhibit unwelcome environmental microbes. To identify potential friendly microbes in the mucilage microbiota, we collected 56 single-spore/single-colony bacterial and fungal isolates from mucilage samples and screened their broad-spectrum antibiotic activity by culturing on rich media exposed to open air. On 1% PDA medium supplemented with mucilage, we found that one fungal culture, F-XTBG8, remained uncontaminated after 5 days of exposure to airborne microbes; it was identified as Chaetomella raphigera by ITS sequencing (Figs. a, S and Table S ). Subsequent antibiotic tests demonstrated that this isolate of C. raphigera could inhibit the growth of over 100 common phytopathogens and environmental fungi, including Fusarium oxysporum, Fusarium solani, and Magnaporthe oryzae (Figs. b and S ). To characterize the interactions between F-XTBG8 and other microorganisms, we further co-cultured F-XTBG8 and its metabolites with mucilage strains and other environmental microorganisms in a medium simulating the mucilage microhabitat. Interestingly, the liquid culture of this C. raphigera strain demonstrated a clear inhibitory effect against airborne and soil bacterial communities , but this effect significantly diminished when tested against rhizospheric bacteria of H. rotundifolia , and was completely abolished when tested against mucilage bacteria (Figs. c and S ). This may be because the sampled rhizospheric roots had only recently entered the soil, having converted from aerial roots into ordinary underground roots. Consistently, liquid culture of C. raphigera suppressed the growth of the generic bacterium Escherichia coli DH-5α but showed no inhibitory effect against any of the six diazotrophic bacterial strains isolated from H. rotundifolia mucilage (Figs. d and S ). Across two years of microbial amplicon sequencing of aerial root-mucilage samples, we found that this fungal genus was present in mucilage at a higher relative abundance than in the rhizosphere, and its sequencing reads in soil samples indicated that the fungus may originate from the planting soil or the environment (Fig. S d).

The Chaetomella raphigera genome and its antifungal effect
To confirm that the friendly, "companion" fungus C. raphigera is selected by the H. rotundifolia plant, we sequenced the genome of a C. raphigera monoculture and generated RNA-seq data from 15 different types of plant tissue (Figs. S a and a). Expression of 1710 of the 8842 genes in the fungal genome was detected (FPKM > 10) across the plant tissues (Table S ). Interestingly, the fungal 28S rRNA genes (ribosomal genes 1581 and 1584) were widely found in the transcripts of different samples, suggesting that the fungus may inhabit the leaves, stems, aerial roots, and underground roots of the host plant, with or without aerial roots and mucilage (Fig. a). In addition, fungal genes were found in both underground and aboveground tissues, implying that the fungus could be recruited by the host either from the plant itself ("vertical transmission") or from the environment (Table S ).
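The tissue-level detection call above (FPKM > 10) amounts to a simple presence filter over a gene-by-tissue expression matrix; a minimal sketch follows, with hypothetical gene IDs and values.

```python
# Minimal sketch: flag fungal genes as "detected" in a plant tissue when
# FPKM > 10, then count detected genes per tissue. Values are hypothetical.
DETECTION_THRESHOLD = 10.0

fpkm_by_gene = {
    # gene: {tissue: FPKM}
    "gene_1581_28S": {"leaf": 120.4, "stem": 88.9, "aerial_root": 240.1, "underground_root": 75.2},
    "gene_0042": {"leaf": 2.1, "stem": 0.0, "aerial_root": 35.6, "underground_root": 12.3},
    "gene_0913": {"leaf": 0.4, "stem": 1.2, "aerial_root": 8.7, "underground_root": 0.0},
}

for tissue in ["leaf", "stem", "aerial_root", "underground_root"]:
    detected = [g for g, vals in fpkm_by_gene.items() if vals[tissue] > DETECTION_THRESHOLD]
    print(f"{tissue:17s} {len(detected)} detected: {detected}")
```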
To further reveal how the friendly fungus could inhibit a broad range of environmental fungi, the common agricultural pathogens Fusarium and Magnaporthe were chosen for coculture, and transcriptional and metabolic analyses were performed (Fig. S b). A total of 8080 C. raphigera transcripts (91.38% of genes) and 4723 metabolites were detected (Table S ). Moreover, relative to the monocultures of C. raphigera and the pathogenic fungi, a broad accumulation of distinct metabolites was detected during cocultivation by LC–MS analysis (Fig. c and Table S ), whereas the cocultivation RNA-seq data showed no significant pathway differences for C. raphigera (Fig. b–c and Table S ). Consistent with the earlier plate coculture experiments, the friendly fungus and the pathogenic fungi formed distinct inhibition zones, and we hypothesized that the fungus inhibits other pathogens by releasing specialized metabolites; the absence of widespread, coincident gene expression/pathway changes in the C. raphigera coculture transcriptome is consistent with this hypothesis. Statistical analysis showed that 51 metabolites were significantly increased in coculture compared with the four pathogen monocultures ( P < 0.01, Fig. S b); these metabolites were not detected, or were present at very low levels, in the four pathogen monocultures (Figs. d, S b-c and Table S ). Among them, 12 metabolites (subcluster 3), such as cicerin, eugenol, 4-isopropylbenzoic acid, and cinnamaldehyde, have previously been shown to possess a wide range of antifungal activities (Figs. d and S c). These results may explain the broad inhibitory effect of C. raphigera specialized metabolites on environmental fungi.
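The coculture-enrichment criterion above (significant increase versus every monoculture, P < 0.01) can be sketched as follows; the intensities and metabolite names are hypothetical, and since the text does not specify the statistical test, the t-test here is an assumption.

```python
# Minimal sketch: call a metabolite "coculture-enriched" only if it is
# significantly higher (P < 0.01) in coculture than in EVERY pathogen
# monoculture. Intensities are hypothetical; a real analysis would use
# normalized LC-MS peak areas and multiple-testing correction.
from scipy.stats import ttest_ind

coculture = {"eugenol": [9.1, 8.7, 9.4], "metabolite_X": [1.1, 0.9, 1.0]}
monocultures = {
    "Fusarium oxysporum": {"eugenol": [0.2, 0.1, 0.3], "metabolite_X": [1.0, 1.2, 0.9]},
    "Magnaporthe oryzae": {"eugenol": [0.1, 0.2, 0.1], "metabolite_X": [0.8, 1.1, 1.0]},
}

for metabolite, cc_values in coculture.items():
    cc_mean = sum(cc_values) / len(cc_values)
    enriched = all(
        ttest_ind(cc_values, mono[metabolite]).pvalue < 0.01
        and cc_mean > sum(mono[metabolite]) / len(mono[metabolite])
        for mono in monocultures.values()
    )
    print(f"{metabolite}: {'coculture-enriched' if enriched else 'not enriched'}")
```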
Discussion
Aerial roots are known to facilitate gas exchange in submergence-tolerant plants, provide additional mechanical support for climbing vines, or function as the primary site of nutrient uptake in marginal habitats . Mucilage has been proposed to maintain a moisturized interface and provide lubrication and protection for aerial roots . Microbiome studies in the last decade have revealed significant functions for the diverse microbial communities associated with the plant rhizosphere and have inspired similar investigations into the biochemical and microbial diversity of aerial root mucilage [ , , , , ]. In a mucilage-secreting tropical maize landrace, microbes isolated from the carbohydrate-rich mucilage have been demonstrated to fix atmospheric nitrogen to promote plant growth .
In this study on H. rotundifolia , we confirmed that aerial root mucilage (ARM) and underground root exudate (URE) have distinct biochemical compositions, in terms of both primary and specialized metabolites (Fig. ). This biochemical differentiation correlated with the distinctive prokaryotic communities associated with aerial and underground roots (Fig. ). Since the environmental microbial composition has a strong impact on the structure of plant-associated microbiota, the inherently different airborne and soilborne microbial communities are likely a predominant factor in determining the microbial diversity of ARM and URE. Yet, the biochemical differences between ARM and URE likely play a role in structuring their associated microbiota as well. From a nutritional perspective, ARM was rich in amino acids, nucleotides, and carbohydrates, which provided a different niche from the lipid-rich URE (Fig. d) [ , , , ]. Furthermore, the terpenoid- and flavonoid-rich ARM could impose a distinct selective pressure on microbes compared to URE, which contained higher levels of alkaloids and phenolic acids (Fig. d). Comparison of microbial community composition between URE and ARM revealed that several known diazotrophic bacterial genera were enriched in the ARM (Fig. c). Based on bacterial function prediction and the nitrogen-labeling experiments, we conclude that the aerial root-mucilage microhabitat harbors diazotrophic microbiota such as Klebsiella , Pantoea , Sphingobacterium , Herbaspirillum , and Burkholderia . These bacteria are widely used model systems for studying associative nitrogen fixation and are already well utilized in agricultural production [ , , – ]. Unfortunately, we were not able to obtain any single-strain culture from the functional genera Herbaspirillum and Burkholderia under any accessible culturing condition, including anaerobic incubation and supplementation with sterilized H. rotundifolia mucilage. Future culturing efforts should better simulate the hypoxia of the mucilage microhabitat and employ selective media for these microbes . Consistently, the stable isotope tracking experiments showed that atmospheric nitrogen could be incorporated into plant chlorophyll, nitrogen, and biomass in an aerial root- and mucilage-dependent manner (Fig. a–e). We further demonstrated that removal of aerial roots significantly reduced plant nitrogen content and biomass in a 4-month-long field experiment (Table S ). In addition, based on the natural abundance of 15N, we found that the contribution of the aerial root-mucilage system to %Ndfa was up to 54.85% (Fig. f). These results partially support the ecological relevance of ARM-mediated nitrogen fixation for plant fitness in situ, though the removal of aerial roots could have complex influences on plant growth independent of the mucilage microbiota. In addition, aerial roots without mucilage also showed some residual nitrogen fixation ability, which may be caused by incompletely removed mucilage or by endophytic nitrogen-fixing bacteria, as in maize xylem . Since nitrogen fixation involves dynamic nitrogen-carbon exchange between the symbiotic partners, we hypothesized that the aerial roots of H. rotundifolia need to be highly active in nitrogen and carbon transport. To test this hypothesis, we generated a chromosome-scale genome assembly of H. rotundifolia
with PacBio CLR and Hi-C technologies, together with RNA sequencing data, which were then used for comparative transcriptomic analyses (Fig. ). In support of our hypothesis, a number of sugar transporters were exclusively expressed in mucilage-bearing aerial roots when compared to aerial roots without mucilage and underground roots (Fig. e). Interestingly, three of the five nitrogenous compound transporters highly expressed in underground roots were also expressed in aerial roots in a mucilage-dependent fashion (Fig. e) . This expression pattern strongly suggests that the presence of ARM could induce the expression of nitrogenous compound transporters. This is consistent with previous observations in seagrass : the nitrogen-fixing symbionts live within the seagrass root tissue, where they supply amino acids and ammonia to the host in exchange for sugars . In addition to transcriptional regulation that could maintain the diazotrophic mucilage microbiome, we hypothesize that this functional association may also be facilitated by adaptive evolution of the H. rotundifolia genome. Such potential genomic differentiation could be revealed in the future by comparative genomics of H. rotundifolia and sister species that cannot host nitrogen-fixing symbionts. The plant innate immune system and specialized metabolites play important roles in shaping the host-associated microbiota [ – ]. Recent research has found that host plant factors can shape the plant-associated microbiota by recruiting specific microbes while generating molecules that are toxic to others . Such functions have been demonstrated for a large variety of plant specialized metabolites, including triterpenes, coumarins, flavonoids, and benzoxazinoids, which are key compounds modulating plant microbiota composition [ , – ]. It is reasonable to speculate that specialized compounds in aerial root mucilage recruit a specific microbiota by serving as nutrients, while some proteins and compounds could serve as antibiotics that defend against pathogenic and environmental microbes and maintain the homeostasis of the mucilage-microbiota system. A hint at the answer may be found among the candidate mucilage compounds (Table S ): gallic acid, epigallocatechin gallate, phthalic anhydride, and others have some inhibitory effect on pathogens, but these compounds occur at low concentrations in mucilage, and the relationship between metabolites and microbial homeostasis needs further exploration. Determining whether plant specialized metabolites play an important regulatory role in the aerial root-mucilage microenvironment will require considering an expanded list of candidate metabolites. In addition, previous studies have shown that rice and maize roots secrete flavones that enrich rhizosphere diazotrophic bacteria , thereby enhancing nitrogen acquisition and promoting growth in nitrogen-poor soils . Consistent with these studies, which inspired our research, we also found that aerial root mucilage has a higher flavonoid content. How, and by which metabolites, the functional diazotrophic microbiota is shaped in the aerial root mucilage of Heterotis rotundifolia remains unknown and is worth further study combining plant transcriptomics (and metatranscriptomics), metabolomics, and metagenomics. In addition, we found that mucilage production consistently follows high environmental humidity (Table S ), and future studies could reveal the causal relationship between mucilage exudation dynamics and environmental factors.
The current hypothesis is that microbial homeostasis in plant roots is maintained by both microbiota-microbiota and host plant-microbiota interactions, whereas little is known about their distinct contributions to maintaining homeostasis between the plant and its root microbiota . Our discovery of Chaetomella raphigera (F-XTBG8), a friendly fungus that helps the mucilage and its diazotrophic bacteria withstand pathogenic and environmental microbes, establishes the existence of such beneficial partnerships in the aerial root-mucilage microhabitat. Previous studies showed that root microbiota homeostasis plays an important regulatory role in nitrogen-fixing symbioses and in the adaptation of plants to different environments. For instance, the rhizosphere microbiota Bacillaceae group promotes the nodulation of the rhizobium Sinorhizobium and soybean growth under saline–alkali conditions . Likewise, a recent study found that the facultative biotrophic fungus Phomopsis liquidambaris facilitates the migration of rhizobia from the soil to the peanut rhizosphere, thus triggering peanut-rhizobia nodulation . Such relationships may be common in plant-microbiota interactions, and these findings may have fundamental practical implications . For rhizosphere microecology, this study implies that microbial interactions should also be considered when studying functional microbes, for example by searching for "partner" or friendly microbes of plant growth-promoting bacteria and by using synthetic communities (SynComs) in plant-microbe interaction studies and agro-ecosystem applications. Moreover, F-XTBG8 was not only nutritionally competitive with environmental and pathogenic microbes, but its metabolites also inhibited those microbes while having no inhibitory effect on the diazotrophic bacteria in the mucilage. This suggests that the friendly fungus F-XTBG8 may produce selective antibacterial metabolites that inhibit the growth of environmental and pathogenic microbes rather than diazotrophic bacteria. The mechanisms need further investigation; a recent study found that the diazotrophic bacterium Klebsiella degrades various types of antibiotics and that its genome exhibits adaptations to toxins, which will help us further understand and study the relationship between friendly fungi and diazotrophic bacteria . Further work could address microbial cross-feeding between friendly and functional microbes, which refers to interactions in which molecules metabolized by one microbe are further utilized by another . Extensive cultivation of these nitrogen-fixing microbes, and further exploration of the molecular basis of their friendly coexistence with the fungus at the level of genes, transporters, and metabolic gene clusters, are necessary. Our study emphasizes the key role the "friendly microbe" plays in controlling the aerial root-mucilage-diazotroph microenvironment and the biochemical conversation that dominates it. Further investigation of the fungal antimicrobial metabolites and the broad-spectrum antimicrobial mechanisms of friendly fungi will help decipher the specific mucilage-microbiota interactions and the underlying mechanism of microbial homeostasis. The predicted terpenoid and polyketide metabolic gene clusters in its genome will help us target potential antibacterial compounds. Moreover, whether this fungus and the diazotrophic bacteria originate from the plant ("vertical transmission") or from the environment remains unresolved.
Collectively, our findings provide a study paradigm for understanding how plants may engage functional microbiota while restricting pathogens and environmental microbes. In summary, our study revealed a novel role for the aerial root-mucilage microhabitat in nitrogen uptake and extended the concept of the rhizosphere by showing that aerial roots can perform the same biological functions as underground roots. More importantly, the discovery of a friendly fungus in the mucilage further advances our understanding of how homeostasis of specific functional microbiota is maintained in this microenvironment (Fig. ). This further confirms that plants actively mediate plant-microbiota interactions and maintain microbial homeostasis, which could have an important impact on the nitrogen use efficiency, rapid growth, and invasiveness of H. rotundifolia . The friendly-microbe insight provides an important framework for diverse problems concerning plant microbiota assembly and environmental microbes. We hope that the aerial root-mucilage-functional microbiota system established in this study will enable basic biological insights into such interactions.

Plant samples

H. rotundifolia maintained by Xishuangbanna Tropical Botanical Garden (Xishuangbanna, China) was sampled between June 2019 and December 2021. Aerial roots were removed in the wild and greenhouse experiments at Xishuangbanna Tropical Botanical Garden. A string was used as a barrier to keep the aerial roots from entering the ground or soil during plant growth. After 4 months, the stem length was recorded, and the dry weight and nitrogen content were analyzed after drying.

Metabolite profiling

Aerial root mucilage and underground root exudate collection

In May 2020, we chose creeping H. rotundifolia plants that were mature enough to have roots at various stages and sampled the aerial root mucilage and underground root exudate. Sterile forceps were used to load the aerial root mucilage into a 50-mL centrifuge tube. For underground root exudate collection, roots were repeatedly washed and then shaken in 200 mL of deionized water for 2 h at 60 rpm, and the exudate was collected promptly after washing. All mucilage and exudate samples were immediately frozen at −80 °C, freeze-dried, and then ground (MM400, Retsch) for 1.5 min at 30 Hz. After mixing, a 40-mL sample was transferred into a 50-mL centrifuge tube and immersed in liquid nitrogen. Powder (100 mg) was weighed and extracted overnight at 4 °C with 1 mL of 70% methanol, followed by centrifugation for 10 min at 12,000 g. The supernatants were collected and combined, then filtered through a 0.22-µm pore-size membrane. For metabolome analysis, mucilage and exudate samples were analyzed with a widely targeted metabolome method based on ultra-high-performance liquid chromatography (SHIMADZU Nexera X2) coupled to tandem mass spectrometry (Applied Biosystems 4500 QTRAP) (UPLC-MS/MS) by Wuhan Metware Biotechnology Co., Ltd. (Wuhan, China) ( http://www.metware.cn/ ). Carbohydrate contents were detected on an Agilent 7890B gas chromatograph coupled to a 7000D mass spectrometer (GC–MS) by Metware. Concentrations of candidate compounds (gallic acid, epigallocatechin gallate, phthalic anhydride, and others) in mucilage were determined using high-performance liquid chromatography and ultra-performance liquid chromatography-mass spectrometry (see Supplementary Methods for details).
Mucilage and soil DNA extraction, 16S rRNA, and ITS gene sequencing

In our study, the rhizosphere soil was defined and collected as in previous research . DNA from aerial root mucilage, underground rhizosphere soil, and bulk soil samples was extracted using the FastDNA SPIN Kit for Soil (MP Biomedicals, USA) according to the manufacturer's protocols. The fungal ITS1 region (ITS1F/ITS2R) and the bacterial 16S rRNA gene (V3-V4, 338F/806R) were amplified. Amplicon libraries were sequenced on the Illumina MiSeq PE300 platform by Majorbio Bio-Pharm Technology Co., Ltd. (Shanghai, China). The 16S rRNA and ITS sequencing reads were demultiplexed and quality-controlled with fastp (version 0.20.0) and merged with FLASH (version 1.2.7); operational taxonomic units (OTUs) at 97% similarity were then clustered using UPARSE (version 7.1) .

Bacteria and fungi isolation in mucilage and rhizosphere

Fungi isolated from the mucilage and rhizosphere were placed onto potato dextrose agar (PDA, Hopebio, Qingdao) and incubated for 7 days in the dark at 28 °C. Five grams of rhizosphere soil was weighed into a 50-mL centrifuge tube, and 10−3 and 10−4 dilutions were spread onto tryptic soy agar (TSA, Hopebio, Qingdao), LB agar, and nitrogen-free agar (Ashby, Rhizobium, Associations and Nitrogen-free Culture-medium, Hopebio, Qingdao) to isolate diazotrophic bacteria. To simulate the native habitat and increase the diversity of cultivatable microbes, we also added sterilized ARM of H. rotundifolia to commercial media (final concentration, 10−4 dilution) and incubated the cultures under both aerobic and anaerobic conditions. Pure isolates were obtained by plate streaking or single-spore isolation. ITS and 16S rRNA genes of the isolated strains were amplified with the fungal primers ITS1F (5′-CTTGGTCATTTAGAGGAAGTAA-3′) and ITS4R (5′-TCCTCCGCTTATTGATATGC-3′) and the bacterial universal primers 27F (5′-GAGAGTTTGATCCTGGCTCAG-3′) and 1492R (5′-ACGGATACCTTGTTACGACT-3′). ITS and 16S sequences were aligned against the NCBI ITS (fungi) and 16S rRNA (bacteria) databases by Nucleotide BLAST ( https://blast.ncbi.nlm.nih.gov/Blast.cgi ) to determine approximate phylogenetic affiliations. The core nitrogen fixation ( nif ) genes from known diazotrophs and from the maize mucilage microbiota, as previously published, were used as references.

15N2 gas-enrichment experiments and pheophytin analysis

Three types of cuttings, with no aerial roots, with aerial roots but no mucilage, and with aerial roots and mucilage (none with underground roots; about 18 cm in length), were collected from healthy H. rotundifolia plants. For the isotope ratio mass spectrometry (IRMS) study, we constructed an experimental device for 15 N 2 -labeling experiments: a clear 565-mL plastic bottle equipped with an intake valve and a holder for fixing the plants (Fig. a). After briefly washing the sampled plant material with sterile water, sterile filter paper was placed on the bottom of the bottle and 2 mL of sterile water was added for moisture. Then, 15 N 2 gas (99.9% purity; Wuhan Isotope Technology Co., Ltd., Wuhan, China) was pumped into the bottles at 0% (negative control), 5%, 10%, or 20% (v/v), and the samples were kept in a phytotron at 25 °C for 72 h.
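As a small worked example of the gas mixing above, the injected 15N2 volume for each labeling level follows directly from the 565-mL bottle volume; a minimal R sketch (straightforward arithmetic, not the study's own script):

```r
bottle_ml <- 565                      # bottle volume (mL)
frac      <- c(0, 0.05, 0.10, 0.20)   # 15N2 fractions (v/v)
setNames(frac * bottle_ml, paste0(100 * frac, "%"))
#>     0%     5%    10%    20%
#>   0.00  28.25  56.50 113.00
```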
For the nitrogen-fixing bacterial strains, bacterial culture at logarithmic growth stage (2 weeks of age) was added to a 125-mL conical flask, which was sealed; 20% of the gas volume in the flask was then replaced with 99.9% 15 N 2 using a syringe. The culture was continued at 28 °C for 72 h, and cells were collected on Whatman glass microfiber filters (GF/F, GE Healthcare). After drying, grinding, and weighing, the 15 N content of individual aerial roots, young and old leaves, and stems from each plant, as well as of the bacterial strains, was measured using an EA-HT elemental analyzer (Thermo Fisher Scientific, Inc., Bremen, Germany) coupled to an isotope ratio mass spectrometer (DELTA V Advantage, Thermo Finnigan). The isotopic composition analysis was performed at the Tsinghua University Stable Isotope Facility (Shenzhen, China) and the Central Laboratory of Xishuangbanna Tropical Botanical Garden (Xishuangbanna, China). For the pheophytin analysis, chlorophyll was extracted from leaves and stems following previously reported methods [ , , ]. 14 N 2 -treated plants with and without mucilage were used as negative controls . Chlorophyll was converted to pheophytin by acid treatment as previously described. Pheophytin isotope abundances (m/z 871.57–875.57) were analyzed by quantitative time-of-flight (qTOF) mass spectrometry on a liquid chromatograph-mass spectrometer (LC–MS, Agilent 6545), similarly to previously reported methods .

15N natural abundance

The proportion of nitrogen derived from biological nitrogen fixation (%Ndfa) was analyzed using the 15 N natural abundance (δ, ‰) of the mucilage-producing (fixing) plants (δ15N, fixing plant-ARM) and of non-aerial-root reference plants (δ15N, reference plants). In 2020 and 2021, nearly 12 individual mucilage-producing samples and eight non-aerial-root reference plant samples were analyzed. The percentage of nitrogen derived from nitrogen fixation (%Ndfa) was calculated as follows:

$$\%\mathrm{Ndfa} = \frac{\delta^{15}\mathrm{N}_{\text{reference}} - \delta^{15}\mathrm{N}_{\text{fixing plant-ARM}}}{\delta^{15}\mathrm{N}_{\text{reference}} - B} \times 100\%$$

where δ15N is the stable nitrogen isotope abundance, "reference" denotes the non-aerial-root reference plants, "fixing plant-ARM" denotes the mucilage-producing (fixing) plants, and B is the 15 N abundance in air, assumed to be 0.0‰.
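As a numeric illustration of the %Ndfa formula above, the following minimal R sketch computes the fraction of plant nitrogen derived from fixation; the δ15N values are hypothetical placeholders, not the study's measurements:

```r
# %Ndfa from 15N natural abundance (delta values in per mil).
# B is the 15N abundance of air, assumed 0.0 per mil as in the text.
ndfa <- function(d15N_reference, d15N_fixing_plant_ARM, B = 0) {
  (d15N_reference - d15N_fixing_plant_ARM) / (d15N_reference - B) * 100
}

# Hypothetical example: reference plants at 5.0 per mil,
# mucilage-producing (fixing) plants at 2.6 per mil.
ndfa(d15N_reference = 5.0, d15N_fixing_plant_ARM = 2.6)
#> [1] 48   (i.e., ~48% of plant N derived from fixation)
```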
Microbial antagonism experiment

To identify mucilage compounds and microbes with broad-spectrum resistance, the compounds and isolates were purified and transferred to fresh medium plates 24 h in advance (12 h for bacteria) and then placed outdoors for 5 days with the plate covers open (exposed to airborne microbial contamination). Plates on which no other microbes grew were considered to exhibit broad-spectrum resistance. To simulate the mucilage environment, plates consisted of sterile mucilage and agar. For mucilage sterilization, the mucilage was kept at −80 °C for 14 days, melted at room temperature, and then added to autoclaved agar. The negative control was melted mucilage cultured on PDA and TSA medium. The efficacy of the fungal strain F-XTBG8 was tested against environmental and pathogenic fungi on potato dextrose agar (PDA) plates. Agar discs (6 mm) of environmental and pathogenic fungi were placed in a square around a central F-XTBG8 disc at a distance of 1.8 cm and incubated at 30 °C until mycelial growth had filled the plates. The Oxford cup method was used for the antagonism experiments with mucilage compounds, fungal metabolites, and bacteria (cup height, 10 mm; inner diameter, 6 mm; outer diameter, 8 mm). Then, 100 µL of each bacterial suspension (OD600 = 1) was spread evenly on plates (mucilage compounds: 1% PDA and 1% TSA; fungal metabolites: sterile mucilage with agar), three sterilized Oxford cups were placed on each plate, and the cups were inoculated separately with 200 µL of the antibacterials streptomycin and tetracycline hydrochloride (mixed; positive control, CK+), fungal metabolite in PDB (F-XTBG8), or bulk PDB (negative control, CK−). The inoculated plates were cultured at 30 °C for 12 h. The antagonistic zones (the bacterial growth inhibition zone minus the cup outer diameter) were measured to evaluate the antibacterial effects of F-XTBG8 on different bacteria.

H. rotundifolia genome sequencing, assembly, and annotation

Genomic DNA was extracted using the QIAGEN DNeasy Plant Mini Kit according to the manufacturer's protocols. The extracted DNA was sequenced on the PacBio Sequel platform (Pacific Biosciences of California, Menlo Park, CA, USA). The CLR reads were assembled using MECAT2 (version 20190226) with default parameters. The draft genome was polished with Arrow and Pilon. Polished contigs were anchored to chromosomes with Hi-C reads. First, Hi-C reads were mapped to the polished H. rotundifolia genome using BWA (bwa-0.7.17) and Lachesis with default parameters . Read pairs whose mates mapped to different contigs were used for Hi-C-based scaffolding. Lachesis was then applied to cluster, order, and orient the contigs. Two methods, homology-based and de novo prediction, were combined to identify the repeat content of the H. rotundifolia genome, and the repeats found by the two methods were merged with RepeatMasker. Protein-coding genes of the H. rotundifolia genome were predicted by three approaches: ab initio, homology-based, and RNA-Seq-aided gene prediction (see Supplementary Methods for genome sequencing, assembly, and annotation details).

H. rotundifolia RNA isolation and transcriptome analysis

RNA samples were extracted using Trizol reagent following the manufacturer's recommendations (Invitrogen, CA, USA) and sequenced on the MGI-SEQ 2000 platform (see Supplementary Methods for library construction and sequencing details). Low-quality reads were filtered out with SOAPnuke , and clean reads were mapped to the H. rotundifolia genome using bowtie2 . Gene expression levels were estimated as FPKM values (fragments per kilobase per million fragments mapped) with RSEM . DESeq2 was used to identify differentially expressed genes between root samples; genes with log2 fold change > 1 or ≤ −1 and FDR < 0.05 were considered differentially expressed.

Statistical analysis and data normalization

All data analysis was conducted in R and visualized using the ggplot2 and igraph packages. Corrected P values were used as the significance threshold for differentially expressed genes. Microbial alpha diversity, including the Shannon, Chao1, and Simpson indices, was determined using Mothur v. 1.34.4. Principal coordinate analysis (PCoA) was performed to examine similarities and dissimilarities among the different groups.
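To make the ordination step concrete, here is a minimal R sketch of a Bray-Curtis PCoA on an OTU abundance table. It uses the vegan package and base R's cmdscale rather than the online platform used in the study, and the OTU table and group labels are hypothetical placeholders:

```r
library(vegan)  # provides vegdist() for ecological dissimilarities

# Hypothetical OTU table: rows = samples, columns = OTU counts.
set.seed(1)
otu <- matrix(rpois(6 * 20, lambda = 10), nrow = 6,
              dimnames = list(paste0("sample", 1:6), paste0("OTU", 1:20)))
group <- rep(c("mucilage", "rhizosphere"), each = 3)

# Bray-Curtis dissimilarity, then classical MDS (equivalent to PCoA).
d    <- vegdist(otu, method = "bray")
pcoa <- cmdscale(d, k = 2, eig = TRUE)
pct  <- round(100 * pcoa$eig[1:2] / sum(pmax(pcoa$eig, 0)), 1)

plot(pcoa$points, col = factor(group), pch = 19,
     xlab = paste0("PCoA1 (", pct[1], "%)"),
     ylab = paste0("PCoA2 (", pct[2], "%)"))
```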
Some analyses and functional predictions, such as Spearman's correlations, PICRUSt, and FUNGuild, were performed on the free online Majorbio Cloud Platform ( www.majorbio.com ). The base R package "stats" (v. 3.4.1) was used to perform the two-tailed Wilcoxon rank-sum test (wilcox.test function). ANOVA with mean separation by least significant difference ( P = 0.05) was performed for each location. Student's t-tests and two-way ANOVA were considered significant at P = 0.05.
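For the group comparisons above, a minimal R sketch of the two-tailed Wilcoxon rank-sum test; the Shannon diversity values for the two habitats are hypothetical, not the study's data:

```r
# Hypothetical alpha-diversity (Shannon index) values per sample.
mucilage    <- c(3.1, 2.9, 3.3, 3.0, 3.2)
rhizosphere <- c(4.0, 4.2, 3.8, 4.1, 3.9)

# Two-tailed Wilcoxon rank-sum test via base R's stats package.
wilcox.test(mucilage, rhizosphere, alternative = "two.sided")
```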
Additional file 1:
Figure S1. Aerial root mucilage (ARM) and underground root exudate (URE) compounds of H. rotundifolia.
Figure S2. Fungal diversity and community of aerial root mucilage (mucilage) and underground rhizosphere soil (rhizosphere).
Figure S3. Differential analysis, function, and phenotypic prediction of mucilage and rhizosphere bacteria.
Figure S4. Cultured bacteria and their nitrogen-fixing capacity.
Figure S5. Cultured bacteria and their nitrogen-fixing capacity.
Figure S6. Estimated genome size of H. rotundifolia by flow cytometry.
Figure S7. Resistance of mucilage compounds to environmental microbes.
Figure S8. Resistance of F-XTBG8 to pathogenic and environmental fungi.
Figure S9. A candidate in the mucilage microhabitat and its defense against environmental microbes but not mucilage bacteria.
Figure S10. Genome, transcriptome, and metabolome analysis of C. raphigera (Cr).

Additional file 2:
Table S1. Relative content of different compounds and carbohydrates of aerial root mucilage (ARM) and underground root exudate (URE).
Table S2. Fungal and bacterial OTUs of different samples.
Table S3. Differentially abundant bacterial and fungal genera of rhizosphere and mucilage.
Table S4. Cultured bacteria and their nitrogen-fixing capacity.
Table S5. Plant nitrogen enrichment, natural abundance, nitrogen content, length, and dry weight analysis.
Table S6. Results of plant genome sequencing, assembly, and annotation.
Table S7. GO and KEGG enrichment of differential samples.
Table S8. Candidates for follow-up experiments in mucilage compounds.
Table S9. The mucilage compounds tested for anti-microbe activity.
Table S10. Antagonism of F-XTBG8 against different fungi and bacteria.
Table S11. Fungal genes detected and expressed (reads FPKM > 10) in different tissues of the plant.
Table S12. Differential metabolite analysis for monocultivation of C. raphigera (Cr) and cocultivation with environmental fungi.
Table S13. Record of mucilage production and meteorological factors.

Additional file 3:
Method S1. Carbohydrate and widely targeted metabolite profiling.
Method S2. Plant genome and transcriptome analysis.
The senescence difference between the central and peripheral cornea induced by sutures
The cornea, in the anterior part of the eye, is transparent and structurally intact, features that are essential for clear vision. Keratocytes, the main cells of the corneal stroma, usually remain quiescent between the collagen layers and secrete the necessary extracellular matrix (ECM) proteins [ – ]. When eye injuries (such as trauma, infection, and inflammation) occur, corneal cells may transform into fibroblasts and myofibroblasts, which rapidly secrete excessive ECM proteins and cause corneal scar formation during wound healing . Along with keratocyte activation, new blood vessels gradually extend from the peripheral cornea toward the central cornea . Our previous study reported that senescent fibroblasts can promote alkali-induced corneal neovascularization (CNV) . In clinical practice, scarring is usually disk-shaped in the central cornea and more severe there than in the periphery. The recovery of corneal transparency after an injury is important for vision restoration . Cellular senescence refers to the irreversible cell cycle arrest caused by cellular stresses such as DNA damage, oncogene activation, and oxidative stress . Senescent cells show a large, flattened morphology with positive senescence-associated β-galactosidase (SA-β-gal) staining. In addition to growth arrest, senescent cells upregulate many secreted proteins, including cytokines and growth factors, collectively called the senescence-associated secretory phenotype (SASP) or senescence-messaging secretome (SMS) [ – ]. Aside from suppressing tumorigenesis, the senescence response participates in a variety of histopathological processes through the SASP/SMS . Recent reports have shown that senescent cells inhibit fibrosis of the liver, skin, and heart [ – ], indicating that cellular senescence is an effective limiter of fibrosis, although this has not yet been studied in the cornea. Based on these studies, we hypothesized that the senescence of corneal fibroblasts might contribute to the difference in scarring between the central and peripheral cornea. In this study, we examined cellular senescence in the central and peripheral cornea and found that senescent corneal fibroblasts were more frequently present in the peripheral cornea of mice, in line with the results of the in vitro experiments. Senescent corneal fibroblasts obtained from the peripheral cornea exhibited higher levels of the senescence-related genes p21, p27, and p53.

Suture-induced mouse model of corneal senescence

Balb/c mice (male, 6–8 weeks; Beijing Pharmacology Institute, Beijing, China) were used for the study. All animal experiments were conducted in accordance with the ARVO Statement for the Use of Animals in Ophthalmic and Vision Research and were approved by the Ethics Committee of Shandong Institute of Ophthalmology. All animal experiments were conducted in accordance with the ARRIVE guidelines. After systemic and topical anesthesia, five interrupted stitches of 11-0 polypropylene suture (Mani, Tochigi, Japan) were placed in the central, superior, inferior, nasal, and temporal cornea, respectively. Ofloxacin eye ointment was applied to the ocular surface immediately after the injury and once a day for one week to avoid infection. Only one eye was operated on in each mouse for all experiments. The suture was removed on the 14th day.
At 12 h and 3, 5, 7, 14, and 21 days after the operation, corneal edema, neovascularization, and scarring were observed under a slit lamp microscope with a digital camera.

Cell culture and senescence induction

New Zealand white rabbits (male, 2–4 months; Kangda, Qingdao, Shandong, China) were used for corneal fibroblast culture. The peripheral and central corneal tissues were isolated using trephines of 8-mm and 5-mm diameter, respectively. The corneal epithelium and endothelium were digested with 2.4 U/ml dispase II (Roche, Basel, Switzerland) overnight at 4 °C. The corneal stroma was cut into pieces and incubated with 2 mg/ml collagenase I (Invitrogen, Carlsbad, CA) for 2–4 h at 37 °C. The cells were then collected and cultured in DMEM/F-12 medium supplemented with 10% FBS. A hydrogen peroxide (H2O2)-induced in vitro model of senescent corneal fibroblasts was established as previously described. Briefly, the cells were exposed to 200 µM H2O2 for 1 h once daily on three successive days. After each treatment, the cells were washed with PBS and incubated in complete medium for 24 h.

Senescent cell assay

The mouse eyeballs were embedded in Tissue-Tek optimum cutting temperature compound, and 8-µm sections were created (n = 5). The cryosections were fixed with ice-cold methanol at −20 °C for 10 min. The rabbit keratocytes were fixed with 4% paraformaldehyde at room temperature for 10 min. Cellular senescence was identified by SA-β-gal staining (Beyotime, Haimen, China). In brief, the samples were washed with PBS, fixed at room temperature for 15 min, washed 2–3 times, incubated in SA-β-gal staining solution (pH 6.0) overnight at 37 °C, and then observed under a Nikon microscope.

Real-time quantitative PCR

Total RNA was extracted from the H2O2-treated peripheral and central rabbit corneal fibroblasts using Nucleospin RNA kits (BD Biosciences, Palo Alto, CA). cDNA was synthesized from the total RNA using a first-strand cDNA synthesis kit (TaKaRa, Dalian, China). Real-time PCR analysis was performed with SYBR Green PCR reagent (Invitrogen, Carlsbad, CA) on an Applied Biosystems 7500 real-time PCR system (Applied Biosystems, Foster City, CA). The cycling conditions were an initial denaturation at 95 °C for 10 s, followed by 45 cycles at 95 °C for 15 s and 60 °C for 1 min. The results were analyzed by the comparative threshold method (2−ΔΔCt). The nucleotide sequences of the primers used in this assay are listed in Table . GAPDH was used as the endogenous control gene.

Statistical analysis

The data in this study are representative of more than three independent experiments and are presented as mean ± SEM. Images were analyzed using ImageJ software. Differences between the control and treated groups were compared with Student's t-test; P < 0.05 was considered significant.
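As an illustration of the comparative threshold (2−ΔΔCt) method described above, a minimal R sketch with hypothetical Ct values (GAPDH as the endogenous control); the numbers are placeholders, not the study's data:

```r
# Hypothetical Ct values for one senescence-related gene and GAPDH.
ct <- data.frame(
  group    = rep(c("central", "peripheral"), each = 3),
  ct_gene  = c(26.1, 26.4, 26.0, 24.8, 24.5, 24.9),
  ct_gapdh = c(18.2, 18.4, 18.1, 18.3, 18.2, 18.4)
)

ct$dct  <- ct$ct_gene - ct$ct_gapdh             # delta Ct: normalize to GAPDH
calib   <- mean(ct$dct[ct$group == "central"])  # calibrator: central fibroblasts
ct$fold <- 2^-(ct$dct - calib)                  # relative expression, 2^-(ddCt)

# Compare the two groups with Student's t-test, as in the statistical analysis.
t.test(fold ~ group, data = ct)
```

With these placeholder values the peripheral group shows higher relative expression, mirroring the direction of the reported result.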
Senescent cell accumulation in the corneal suture model In the mouse model of senescence induced by localized corneal sutures, SA-β-gal staining revealed the senescent cell distribution on the whole corneal mount. No positive SA-β-gal staining was observed in the cornea 12 h after the operation. Mild staining of SA-β-gal activity was detected in the cornea on the third day, and strong positive staining was consistently observed on the fifth and seventh days, peaking on the 14th day. After suture removal on the 14th day, the SA-β-gal staining weakened rapidly until the senescent cells disappeared completely on the 21st day (Fig. A). The SA-β-gal staining of the corneal sections showed that the senescent cells were mainly localized in the corneal stroma, especially around the sutures, so the possibility of corneal epithelial or endothelial cell senescence was excluded (Fig. B). Senescence difference between the central and peripheral cornea To compare the relative percentage of senescent cells in the central and peripheral cornea, we placed sutures at five different regions (Fig. A). The whole flat mount that contained tissues from both the central and peripheral suture regions was prepared for SA-β-gal staining. The staining was graded by optical density measured through the Image J software. On the fifth day after the operation, positive senescence expression was found in all five locations. The density of SA-β-gal staining positivity was higher in the peripheral area than in the central area (Fig. A). The findings were consistent at different time points, and the most significant difference appeared on the 14th day (Fig. ). Induction of senescent corneal fibroblasts in vitro To further understand the difference in the senescence ability between the central and peripheral corneal fibroblasts, rabbit corneal fibroblasts were obtained and induced to senescence in vitro with H 2 O 2 . SA-β-gal staining was used to identify cell senescence. The senescent fibroblasts were found to be enlarged and flattened with accumulated SA-β-gal staining. Similar to the in vivo experimental results of the mouse corneal model, under the same experimental conditions, the density of the senescent cells in the peripheral cornea was much higher than that in the central cornea (Fig. A). Real-time RT-PCR results showed that the levels of senescence-related genes p21, p27, and p53 in the peripheral corneal fibroblasts were higher than those in the central fibroblasts, which also proved that peripheral corneal fibroblasts were more prone to senescent (Fig. B). Corneal neovascularization was detected in the suture-induced senescence mouse model. The new vessels grew gradually from the limbal region to the central cornea after sutures were placed on the cornea. They emerged on the third day, became dense and rough on the seventh day, and faded to the surrounding sutures on the 14th day. After the sutures were removed on the 14th day, the new vessels regressed rapidly and could not be observed on the third and 21st days (Fig. ). In the mouse model of senescence induced by localized corneal sutures, SA-β-gal staining revealed the senescent cell distribution on the whole corneal mount. No positive SA-β-gal staining was observed in the cornea 12 h after the operation. Mild staining of SA-β-gal activity was detected in the cornea on the third day, and strong positive staining was consistently observed on the fifth and seventh days, peaking on the 14th day. 
In the present study, we observed that in the mouse model of cellular senescence induced by corneal sutures, senescent corneal fibroblasts were found more frequently in the peripheral cornea than in the central cornea. In vitro, rabbit corneal fibroblasts from the peripheral cornea were more likely to show senescence-associated β-galactosidase staining, and the levels of senescence-related genes were higher in peripheral corneal fibroblasts. Since senescent fibroblasts can act as important regulators of tissue fibrogenesis , we believe this senescence difference is one of the reasons for corneal scar formation after trauma, infection, or inflammation. Ocular trauma, such as sutures on the cornea, may cause serious complications during wound healing, such as corneal matrix opacity and new blood vessel formation [ – ]. Our previous studies have shown that senescent cells derived from activated corneal fibroblasts can promote alkali-induced corneal neovascularization (CNV) . In this study, senescent fibroblasts were also found to participate in CNV, although they appeared later than the development of CNV. During wound healing, quiescent keratocytes are activated by inflammation-induced transforming growth factor β and transform into fibroblasts and myofibroblasts, which rapidly synthesize and secrete excess extracellular matrix (ECM) proteins to repair the wound . After wound repair is complete, myofibroblasts become senescent or transform into scar keratocytes .
Recent studies have shown that one role of senescent fibroblasts in tissue repair is to limit fibrosis, which is usually observed in chronic wounds. The hallmark of cellular senescence is cell cycle arrest, and the enforced cell cycle arrest of senescent fibroblasts in vivo slows the fibrogenic response to damage by limiting the expansion of the cell type responsible for producing the fibrotic scar . A number of matrix metalloproteinases (MMPs), including MMP2, MMP3, and MMP9, are part of the senescence-associated secretory phenotype (SASP), which can degrade excess collagen and maintain tissue homeostasis during wound healing . The stable cell cycle arrest of senescent fibroblasts and the SASP may thus also be a mechanism restraining corneal scarring. The central cornea in keratoconus is thinner than the peripheral cornea and more prone to scar formation, underscoring a difference between central and peripheral corneal cells . Our results showed that the levels of the senescence-related genes p21, p27, and p53 were higher in the peripheral cells. The expression of p21 and p27 is strictly controlled by the tumor suppressor protein p53, which plays a role in apoptosis, senescence, genomic stability, and inhibition of angiogenesis . p21 arrests the cell cycle at the G1/S transition and halts growth. The higher expression of these genes may partially explain peripheral keratocyte senescence. Our study indicates that the senescence difference between the central and peripheral cornea may contribute to the disparity in scarring, as senescent fibroblasts can limit tissue fibrosis.
Acute Deterioration of Patient with Sudden Onset of Shock Caused by Group G Streptococcus Infection after Revision Total Knee Arthroplasty: A Case Report
745e6793-d648-4f61-a0ad-ef552e038c46
10120603
Debridement[mh]
Periprosthetic joint infection (PJI) is one of the most serious complications after total knee arthroplasty (TKA). Rates of PJI range between 0.5% and 1.9% in primary TKA and between 8% and 10% in revision TKA [ – ]. The incidence of PJI among patients with rheumatoid arthritis (RA) is 1.6 times greater than that among patients with osteoarthritis . PJI is a challenging complication that usually progresses slowly and is managed with an appropriate combination of surgical and antibiotic treatments, without life-threatening complications. PJIs with rapid deterioration and sudden onset of septic shock, however, are rare. In those rare instances, streptococci, including group G strains, have been the causative organisms . Necrotizing fasciitis (NF) is a rare bacterial infection that spreads quickly throughout the body and can cause death. NF is commonly caused by group A Streptococcus. NF after TKA is extremely rare; the only reported case required a lifesaving above-knee amputation . Here, we present a rare case of septic shock due to infection caused by group G Streptococcus after revision TKA. Successful treatment saved the infected limb as well as the life of a 61-year-old woman with RA treated with biological disease-modifying antirheumatic drugs (bDMARDs). A 61-year-old woman presented to our hospital with pain in her right knee. The patient had received a diagnosis of PJI after revision TKA and had been given intravenous antibiotic treatment at the previous hospital. The next day, she was referred to our hospital for treatment of septic shock. She had undergone primary TKA (Sigma RP-F, Depuy, Warsaw, IN, USA) for rheumatoid arthritis 10 years previously and revision TKA (P. F. C Sigma TC3, Depuy, Warsaw, IN, USA) for aseptic loosening 6 months previously. The patient had been receiving oral prednisone (4 mg daily), tacrolimus (1.5 mg daily), and etanercept (50 mg subcutaneous injections weekly). Pain, swelling, and a burning sensation were observed in the right knee, and redness and local heat extending to the right lower leg were noted on physical examination. Laboratory examination showed a white blood cell count of 8.1 × 10^9/L and a C-reactive protein level of 19.9 mg/dL. We performed aspiration of the right knee joint, and Gram-positive streptococci were detected by staining; these were later identified as Streptococcus dysgalactiae (group G Streptococcus). She had a temperature of 38.2°C, a heart rate of 128 beats/min, and a blood pressure of 56/32 mmHg. Laboratory investigations yielded the following values: hemoglobin, 9.5 g/dL; serum glucose, 99 mg/dL; serum creatinine, 2.93 mg/dL; and sodium, 139 mEq/L. Radiography of the right knee showed no osteolytic lesions and no signs of periprosthetic loosening ( ). A computed tomography scan might have helped with the diagnosis but was not performed because it was too dangerous to move the patient, who was in a state of shock, to the examination room. We diagnosed septic shock due to PJI after revision TKA and rapidly performed irrigation, debridement, polyethylene liner exchange, and antibiotic therapy, because there were no findings suggesting loosening of the prosthesis and because of the short time since symptom onset. During the surgery, a lack of tissue resistance to blunt finger dissection was observed. The patient was admitted to the Intensive Care Unit for mechanical ventilation immediately after the operation.
She was immediately placed empirically on meropenem (1.0 g every 12 h), vancomycin (0.5 g every 12 h), and clindamycin (600 mg every 8 h). We administered continuous hemodiafiltration and started noradrenaline via a peripheral intravenous catheter. Blood cultures were performed several times, but the results were all negative. The patient's clinical condition gradually stabilized, and noradrenaline was stopped on day 5 of admission. However, on hospital day 8, swelling and a burning sensation were again observed in the right knee, and approximately 35 mL of exudate was aspirated. Analysis of the joint aspirate revealed a white blood cell count of 60,100/μL and a positive alpha-defensin test. We judged that eradication of the infection had failed and performed irrigation, debridement, implant removal, and placement of an antibiotic spacer (1 g of vancomycin per 40 g of cement) ( ). The patient's postoperative period was uneventful. Six weeks after surgery, the inflammatory markers had normalized (C-reactive protein, 0.05 mg/dL) with intravenous antibiotic therapy (ampicillin 2 g every 6 h and clindamycin 600 mg every 8 h for 2 weeks, then cefazolin 2 g every 8 h for 4 weeks). Ten weeks after surgery, the patient displayed no clinical signs of infection and was discharged from the hospital walking with 2 canes. On the suggestion of the Department of Rheumatology and Clinical Immunology, she was started on bucillamine (100 mg daily) 1 month after surgery, and the dose was gradually increased to 300 mg daily. At the time of discharge, she was taking bucillamine and prednisolone (4 mg daily). We did not resume bDMARDs because her RA was well controlled. She was administered oral cefalexin 500 mg 3 times a day for 2 months. Approximately 1.5 years after surgery, she underwent re-revision TKA (NexGen Rotating Hinge Knee, Zimmer-Biomet, Warsaw, IN, USA; ). Throughout the follow-up period, inflammatory markers were not elevated and no clinical signs of infection were observed. One year after the final surgery, the patient walked with a cane and had no symptoms of infection. No pain, swelling, or burning sensation was observed in the right knee, and right knee motion ranged from 0° of extension to 90° of flexion with no extension lag. Significant advances in RA treatment have resulted in the introduction of bDMARDs, such as tumor necrosis factor inhibitors (TNFi). Nevertheless, patients with RA still progress to end-stage arthritis and require arthroplasty . Rates of critical outcomes, such as infection and dislocation, have been reported to be higher in patients with RA than in those with osteoarthritis. Lee et al reported that the deep infection rate was significantly higher in RA patients than in osteoarthritis patients (3.0% vs 0.9%, P<0.001) . Rene et al reported that patients with RA had an increased risk of PJI (HR: 1.46; 95% CI: 1.13–1.88) and death (HR: 1.25; 95% CI: 1.01–1.55) . Hayashi et al reported that TNFi therapy was significantly associated with the development of late infection after total hip arthroplasty (OR: 11.7; 95% CI: 1.2–109) . In the present case, RA treated with bDMARDs led to an immunocompromised condition, which might be the reason for this severe infection.
Beta-hemolytic streptococci can cause PJI; however, Cunningham et al reported that Staphylococcus species such as coagulase-negative staphylococci, methicillin-sensitive Staphylococcus aureus , and methicillin-resistant Staphylococcus aureus represented the largest proportion of infecting organisms, whereas gram-negative organisms and fungi were relatively less prevalent . It has been reported that 7% to 12% of PJIs are caused by Streptococcus spp. . Group C and G streptococcal bacteremias are linked to diabetes mellitus, cardiovascular disease, malignancy, immunosuppression, and breakdown of the skin . Patients infected with group C and G streptococci presented with septic arthritis more often than those infected with group B, whereas patients with group A infections had abscesses involving sites deeper than the skin more often than patients with group C and G infections . These reports suggest that immunosuppressed patients with RA, as in the present case, are more susceptible to group C and G streptococci and are at higher risk of PJI. NF is an uncommon, life-threatening, and aggressive soft-tissue infection, and NF occurring simultaneously with PJI is rare. Hanno et al reported the only such case, a 65-year-old woman with NF following TKA who was infected with Staphylococcus epidermidis and underwent above-knee amputation . Establishing a diagnosis of NF is challenging, and the Laboratory Risk Indicator for Necrotizing Fasciitis (LRINEC) score is one diagnostic tool. In the present case, the LRINEC score was 8 points, and a score ≥8 is considered strongly predictive of NF . However, the original study showed that a score of 8 corresponds to a probability of necrotizing infection of approximately 75%, and Neeki et al reported a false-positive rate of 10.7% for the LRINEC score . We therefore need to understand the limitations of the LRINEC score. Because no obvious necrotic tissue was observed intraoperatively, debridement of the lower leg was not performed. In addition, when signs of infection were observed again, we were able to respond rapidly, which saved the patient's life and preserved her lower leg. We encountered a rare case of PJI in an immunosuppressed patient with RA whose clinical course rapidly worsened, with sudden onset of septic shock caused by group G Streptococcus. Appropriate and prompt surgical treatment and intensive care made it possible to save the affected knee joint as well as the patient's life.
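To show how the LRINEC score referenced above is assembled, the following is a minimal sketch using the cut-offs commonly cited for the score; these thresholds are quoted from general knowledge of the original publication and should be verified against it before any use, and the example inputs approximate this case's admission laboratory values (CRP converted from mg/dL to mg/L).

```python
def lrinec_score(crp_mg_l, wbc_k_per_ul, hb_g_dl, na_mmol_l,
                 creat_mg_dl, glucose_mg_dl):
    """Laboratory Risk Indicator for Necrotizing Fasciitis (LRINEC).

    Cut-offs follow the commonly cited version of the score; confirm
    against the original publication before any clinical use."""
    score = 0
    score += 4 if crp_mg_l >= 150 else 0
    score += 2 if wbc_k_per_ul > 25 else (1 if wbc_k_per_ul >= 15 else 0)
    score += 2 if hb_g_dl < 11 else (1 if hb_g_dl <= 13.5 else 0)
    score += 2 if na_mmol_l < 135 else 0
    score += 2 if creat_mg_dl > 1.6 else 0
    score += 1 if glucose_mg_dl > 180 else 0
    return score

# Admission values from this case (CRP 19.9 mg/dL expressed as 199 mg/L).
score = lrinec_score(crp_mg_l=199, wbc_k_per_ul=8.1, hb_g_dl=9.5,
                     na_mmol_l=139, creat_mg_dl=2.93, glucose_mg_dl=99)
print(score)  # 8 -> a score of 8 or more is strongly predictive of NF
```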
Biofertilizer effect of some zinc dissolving bacteria free and encapsulated on
bb1e5025-fa3e-463a-826a-1da15c24cb89
10121707
Microbiology[mh]
Soil zinc deficiency affects millions of hectares of cropland worldwide and is particularly prevalent in developing countries. In plants, zinc is involved in carbohydrate metabolism (Alloway ). Zinc is imperative for both human development and crop production. Egypt's soils are among those that tend to be deficient in zinc (Khafagy et al. ); soils in areas where wheat is grown often have extremely low levels of plant-available P and Zn, causing widespread P and Zn deficiency in this crop (Kotb ). Zn deficiency can affect the development of plants and animals, since Zn is a regulatory co-factor and structural constituent of proteins and enzymes involved in many biochemical pathways (Cakmak et al. ). These enzymes are implicated in carbohydrate, starch, and sugar metabolism, photosynthesis, and glucose metabolism. Zn is also vital for protein metabolism, auxin metabolism, pollen formation, maintenance of the integrity of biological membranes, and resistance to pathogens (Rashid ). Zn also acts as a significant antioxidant. Zinc deficiency in plants can lead to retarded shoot growth, chlorosis, reduced leaf size, and susceptibility to heat, light, and fungal infections, as well as affecting grain yield, pollen formation, root development, and water uptake (Tavallali et al. ). Almost half of the world's cereal crops are grown on zinc-deficient soils; as a result, zinc deficiency in humans is a widespread problem. Soil Zn content is determined by the geochemical composition and weathering of the parent rock. Insoluble Zn cannot be assimilated by crops, resulting in Zn deficiency, and more Zn fertilizers are applied to crops to combat this deficiency. However, this practice is costly and can be harmful to both human health and the natural environment. Thus, eco-friendly and cost-effective agro-technologies are needed to increase crop yield and reduce Zn deficiency (Khanghahi et al. ). Deficiency symptoms become visible when the concentration of plant-available Zn in soil falls below 0.5 mg Zn/kg dry soil (Alloway ). Although most soils have a total Zn content of around 50–100 mg kg −1 , soil solutions usually have an extremely low Zn concentration (0.002–0.196 mg l −1 ) (Srivastava and Gupta ). Fertilization appears to be a quick solution to rectify the nutrient deficiency, but the cost of micronutrient fertilizers is high. The continuous use of inorganic fertilizers may damage the physical, chemical, and biological properties of soil, which can lead to a decline in soil fertility. Various zinc fertilizers have been used, including zinc sulfate (White and Broadly ) and Zn-EDTA (Karak et al. ). However, their use is neither economically feasible nor environmentally friendly, and they are transformed into insoluble complex forms within 7 days of fertilizer application (Rattan and Shukla ). The use of plant growth-promoting microorganisms is a novel approach in this respect (Alavi et al. ). As bacteria release chelating metabolites in the rhizosphere of plants, siderophores are considered important reserves of micronutrients, like Zn and Fe, that are easily available to plants (Ahemad and Kibert ). Microbial siderophores form complexes with Zn and increase plant uptake (Madsen et al. ). There have been previous reports of the soil microbiome transforming insoluble Zn into plant-accessible, soluble forms.
Bacterial genera such as Pseudomonas, Bacillus, Acinetobacter, Azotobacter, Azospirillum, Gluconacetobacter, Burkholderia , and Thiobacillus have shown the ability to solubilize Zn (Bhakat et al. ). Zinc-solubilizing bacteria (ZSB) can improve crop quality by producing various phytohormones and soluble nutrients (e.g., P and K), synthesizing exopolysaccharides and siderophores, and reducing environmental stresses (Gupta et al. ). Using a pot experiment as a model, this study investigated the selection of powerful Zn-solubilizing bacteria that could infiltrate the edible parts of the crop and improve its Zn accumulation. The chosen bacteria were also formulated in sodium alginate beads, and their viability was tested over 3 months of storage. Sample collection and isolation of zinc solubilizing bacteria on solid media Five soil samples were collected from different fields (El Monofia and Giza) in Egypt. Twenty bacterial isolates of different morphologies were selected, after appropriate serial dilution of each soil sample, on modified Bunt and Rovira medium containing, per litre: 0.4 g KH2PO4, 0.5 g (NH4)2SO4, 0.5 g MgSO4·7H2O, 0.1 g MgCl2, 0.1 g FeCl3, 0.1 g CaCl2, 1.0 g peptone, 1.0 g yeast extract, 5.0 g glucose, 250.0 ml soil extract, 20.0 g agar, and 750.0 ml tap water, pH 7.0, as well as 0.1% insoluble ZnO and ZnCO3, as described by Saravanan et al. ( ). Bacterial isolates with strong growth and clear halo zones were selected, purified, and preserved in 40% glycerol at −20 ℃ (Omara et al. ). Determination of zinc solubilization activity by isolated bacteria The zinc-solubilizing potential of the twenty selected isolates was evaluated using two insoluble zinc sources, zinc oxide (ZnO) and zinc carbonate (ZnCO3), as well as a combination of the two. The bacterial isolates were spotted on the media in triplicate and incubated at 28 ℃ for 5 days to check for clear halo zones. From the colony and halo-zone diameters, hydrolysis capacity (HC) was calculated as diameter of clear zone/diameter of colony (Omara et al. ). Determination of zinc tolerance by zinc-solubilizing bacterial isolates At low concentrations, zinc is a nutrient, but at high concentrations it is toxic. The ability of the selected bacterial isolates to tolerate solubilized zinc was determined under in vitro conditions in nutrient broth containing different concentrations of soluble zinc (ZnSO4). The nutrient broth was prepared and ZnSO4 was incorporated into the broth such that the final zinc concentrations were 20, 40, 50, 60, 80, 100, 150, 200, 300, 400, and 500 mg kg −1 . These solutions were dispensed in 10 ml quantities into test tubes, sterilized, and inoculated with 0.1 ml of each tested isolate. An un-inoculated control was also maintained. The total zinc-solubilizing bacterial population was assessed by plating on nutrient agar medium. The highest concentration at which growth, even if poor, was still observed was taken as the tolerance level (Nandal and Solanki ). Molecular characterization of bacteria The most efficient and tolerant isolates were chosen for molecular identification at Sigma Scientific Services Co., Giza, Egypt. DNA of the test bacterial isolates grown in nutrient broth was extracted with the GeneJet Bacterial Genomic DNA Extraction Kit (Fermentas). The 16S rRNA gene of each isolate was amplified using universal forward and reverse primers (F: 5-AGA GTT TGA TCC TGG CTC AG-3; R: 5-GGT TAC CTT GTT ACG ACT T-3) to obtain a PCR product of ∼1.5 kb.
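As a small illustration of the hydrolysis-capacity calculation above, the sketch below computes HC for replicate plate measurements and summarizes them as mean ± SEM, matching the form in which the paper reports HC values; the diameters are hypothetical.

```python
import math
import statistics

def hydrolysis_capacity(clear_zone_mm, colony_mm):
    """HC = diameter of clear halo zone / diameter of colony."""
    return clear_zone_mm / colony_mm

# Hypothetical triplicate measurements (clear zone, colony) in mm
# for one isolate spotted on ZnO + ZnCO3 medium.
replicates = [(21.0, 3.0), (20.5, 3.1), (22.0, 3.2)]
hc_values = [hydrolysis_capacity(z, c) for z, c in replicates]

mean_hc = statistics.mean(hc_values)
sem = statistics.stdev(hc_values) / math.sqrt(len(hc_values))
print(f"HC = {mean_hc:.2f} ± {sem:.2f} (mean ± SEM, n = {len(hc_values)})")
```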
Each sample was placed in a thermocycler with Maxima Hot Start PCR Master Mix (Fermentas) and initially denatured (enzyme activation) at 95 ℃ for 10 min for one cycle, followed by denaturation for 30 s at 95 ℃, annealing for 1 min at 65 ℃, and extension for 1 min at 72 ℃, with a final elongation step of 10 min at 72 ℃. The PCR products were analyzed on 1% ( w/v ) agarose gels and sent to GATC (Germany) for sequencing on an ABI 3730xl DNA sequencer. Sequence data were imported into the BioEdit version 5.0.9 sequence editor; base-calling was examined, and a contiguous sequence was obtained. Sequences used in the phylogenetic analysis were obtained from the RDP and GenBank databases. A phylogenetic tree was constructed using the neighbour-joining method (Omara et al. ). Storage experiment Preparation of inoculums A loopful of each tested zinc-solubilizing isolate grown on a nutrient agar slant was inoculated individually into 250 ml of nutrient broth in a 500 ml Erlenmeyer flask and incubated at 28 ℃ for 24 h in a shaking incubator at 150 rpm. Formulation of selected zinc-solubilizing bacteria in alginate beads Alginate beads were prepared according to Bashan ( ) and Draget et al. ( ). Sodium alginate solution (6% w/v ) was autoclaved at 121 ℃ for 20 min, cooled, and mixed slowly with the culture of each strain at a ratio of 1:1 ( v/v ). The alginate-cell mixture was stirred gently for 30 min at 100 rpm until homogeneous. To prepare the beads, the mixture was added dropwise, with the aid of a micropipette, to 0.1 M CaCl2 solution. The CaCl2 solution was then removed, and the beads were washed twice with sterile distilled water. The beads were stored on sterile plates at room temperature and in the fridge. Bacterial viability was tested every month for 3 months: 1 g samples of alginate beads were rehydrated with NaCl solution (0.8% w/v , pH 7) while shaking, based on Ivanova et al. ( ), and serial dilutions of each rehydrated bead sample were then counted by the plate method at the 10 −7 dilution (Mohamed et al. ). Plant experiment Pot experiment to test the effect of biofertilizer on plant growth The experiment was conducted in sterile soil in the greenhouse of the Microbiology Department at Ain Shams University. The temperature during the experimental period (September and October) ranged from 30 ℃ to 37 ℃, with 10–12 h of daylight and 60–65% relative humidity. According to the experimental plan, there were eight treatments (Table ). The experiment was conducted in 18 cm pots filled with 2 kg of soil inoculated with 3 ml of each bacterial inoculant (B3, B5 and C6) or with 120 beads, each 25 mm, of the corresponding formulation (B3 beads, B5 beads and C6 beads). The experiment was performed in duplicate and arranged in a random pattern. Maize ( Zea mays ) grains were surface-sterilized in a 3.5% ( w/v ) solution of calcium hypochlorite for 10 min and then rinsed twice in distilled water, after which each pot was seeded with one grain (Javed et al. ). A soil sample was sent to the Research Laboratories Complex at Cairo University, Egypt for physical and chemical analysis (Vaid et al. ). After 2 months, plants were harvested and dried in an electric oven at 70 ℃ for 72 h, and dry weight was measured. Zinc content assay of plant materials Samples were sent to the Research Laboratories Complex at Cairo University, Egypt, for Zn content analysis using the atomic absorption technique (Thermo Scientific iCE 3300, Germany) according to Christian and Feldman ( ).
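A minimal sketch of the viable-count arithmetic behind the plate-count method described above: the colony count, dilution factor, and plated volume are hypothetical, and the result is expressed as log10 CFU/ml in the form used in the storage experiment.

```python
import math

def cfu_per_ml(colonies, dilution_factor, volume_plated_ml):
    """Back-calculate viable count from a spread plate.

    dilution_factor is the fold-dilution of the plated sample
    (e.g., 1e7 for the 10^-7 dilution)."""
    return colonies * dilution_factor / volume_plated_ml

# Hypothetical plate: 63 colonies from the 10^-7 dilution, 0.1 ml plated.
count = cfu_per_ml(colonies=63, dilution_factor=1e7, volume_plated_ml=0.1)
print(f"{count:.2e} CFU/ml = {math.log10(count):.1f} log10 CFU/ml")
# -> 6.30e+09 CFU/ml = 9.8 log10 CFU/ml
```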
Isolation and screening of zinc-solubilizing bacteria on solid medium Twenty isolates obtained and purified from the soil samples were screened for hydrolysis capacity (HC), based on the diameters of the clear zone and the colony, on modified Bunt and Rovira solid medium containing insoluble ZnO, ZnCO3, or a combination of both. The results showed that most isolates were able to solubilize zinc to some degree, but the most potent Zn-solubilizing isolates were A1, A5, B1, B3, B5, C1, C6 and D4, so these were selected for further studies (Fig. ). Determination of zinc solubilization activity by isolated bacteria The twenty isolates were screened according to hydrolysis capacity (HC), using the diameters of the clear zone and the colony on modified Bunt and Rovira solid medium containing insoluble ZnO, ZnCO3, or a combination of both as the insoluble Zn source, to choose the most effective zinc-solubilizing bacteria (Fig. and Tables , , ). The maximum hydrolysis capacity (HC: 7.00 ± 0.34) was observed for isolate B5 supplemented with the combination of ZnO and ZnCO3 as the insoluble Zn source, followed by isolate B3 (HC: 6.17 ± 0.17) on the same combination. Isolate B3 also showed the highest hydrolysis capacity with ZnO as the insoluble Zn source (HC: 5.50 ± 0.17), and isolate C6 had a high hydrolysis capacity with ZnO (HC: 5.13 ± 0.13). Isolate B5 had the highest hydrolysis capacity among isolates supplemented with ZnCO3 as the insoluble Zn source (HC: 4.60 ± 0.40), followed by isolates A5 and C6. Eight bacterial isolates (A1, A5, B1, B3, B5, C1, C6, and D4) showed the greatest potential and were selected for further studies. Determination of Zn tolerance for the selected isolates The solubilization of zinc might limit bacterial growth at higher concentrations.
In vitro testing was performed on the selected isolates using nutrient broth containing different concentrations of soluble zinc (ZnSO4) in order to determine their ability to tolerate solubilized zinc. The results in Table show that most of the isolates were tolerant, growing at up to 400 mg kg −1 ZnSO4, whereas only B3, B5 and C6 were able to grow at 500 mg kg −1 ZnSO4. Characterization of isolates Bacterial isolates B3, B5 and C6 were isolated from mature compost. Colony morphology and Gram staining showed that B3 forms white, flat, entire colonies with a dry surface and is a Gram-negative coccobacillus; B5 forms white, raised, entire colonies with a smooth surface and is a Gram-positive spore-former; and C6 forms transparent, raised, entire colonies with a glossy surface and is a Gram-negative rod. The selected isolates were also characterized by 16S rRNA gene sequencing. Phylogenetic analysis based on the 16S rRNA gene sequences was performed by the neighbour-joining method. Comparison of the obtained sequences with the GenBank database showed similarity to Acinetobacter calcoaceticus (100%), Bacillus proteolyticus (99.84%), and Stenotrophomonas pavanii (99.40%), respectively. Sequence data were deposited in GenBank under the accession numbers listed in Table , and the phylogenetic trees are shown in Figs. , and . Storage experiment Since it is imperative that the biofertilizer agent remain viable during storage, the viability of the selected efficient isolates formulated in sodium alginate beads (Fig. ) was determined at intervals during storage at room temperature and in cold storage for 3 months (Table ). The viable count of isolate B3 was 8.8 log10 CFU/ml at the start, remained 8.8 log10 CFU/ml in the 1st month, and was still 8.8 log10 CFU/ml after the 3rd month of storage at room temperature; in the fridge it showed stable viability before decreasing slightly (by 0.1 log10 CFU/ml) after the 3rd month. The count of B5 started at 9.2 log10 CFU/ml and was maintained until the last month of storage at room temperature, while it increased by 0.1 log10 CFU/ml after the 3rd month in the fridge. Similarly, the count of C6 started at 8.9 log10 CFU/ml and reached 9.15 log10 CFU/ml in the 1st month; it then decreased to 8.9 log10 CFU/ml in the last month at room temperature, while it increased to 9.1 log10 CFU/ml after the 3rd month in the fridge. Effect of biofertilizers on plant growth Pot experiment The plant experiment was carried out using soil that was analyzed to determine its physical and chemical characteristics (Table ). Figure shows the pots at different growth stages. The results showed a considerable increase in shoot and root lengths and in fresh and dry weights in all treatments over the negative control, to which no zinc or bacteria was added. As shown in Table , the maximum fresh weight was 35.50 g for B3, followed by 32.25 g for B5 beads and 31.95 g for C6 beads. In addition, B3 had the highest dry weight (16.19 g), followed by C6 beads (15.46 g) and then C6 and B3 beads. Furthermore, B3 had the highest shoot length (32.20 cm), followed by C6 (30.75 cm) and C6 beads (30.35 cm). On the other hand, B5 had the highest root length (14.43 cm), followed by C6 beads (13.25 cm) and B5 beads (12.75 cm).
Zinc content assay of plant materials The results showed that negative-control plants, to which no zinc or bacteria was added, had far lower zinc levels than positive-control plants (Fig. ). On the other hand, all treatments increased the zinc content of the analyzed plants. Plants grown in pots with the free B3 isolate as a biofertilizer had the highest zinc content (370.20 mg kg −1 ), while plants with B3 in beads had only 212.61 mg kg −1 ; likewise, plants with the free B5 inoculum showed higher zinc concentrations than those grown with beads formulated with the same isolate. By contrast, the zinc content with beads loaded with isolate C6 was 358.28 mg kg −1 , while that with the free C6 inoculum was 299.13 mg kg −1 . Overall, both the free selected isolates and the isolates embedded in sodium alginate beads improved the zinc content and growth of the plants.
Globally, zinc deficiency in soil is a major problem because plants' Zn requirements are rarely met (Sillanpaa ). There is an estimated 50% shortage of plant-available zinc in soils used for cereal production around the world (Graham and Welch ). Almost all crops and pastures worldwide suffer from zinc deficiency, leading to severe yield losses and nutritional deficiencies; this is particularly the case in cereal-growing areas (Alloway ). Fertilizers are not always effective in correcting Zn deficiency due to economic and agronomic factors. Several factors contribute to reduced Zn availability in developing countries, including topsoil drying, soil constraints, disease interactions, and fertilizer costs (Graham and Rengel ). Bacteria play a vital role in environmental cycling processes such as the solubilization of metals into forms more suitable for uptake by plants. This study focused on the isolation and identification of zinc-dissolving bacterial strains and their potential use as biofertilizers based on their ability to solubilize zinc. Twenty zinc-solubilizing bacteria were isolated from soil collected from different farms in Egypt. Bhatt and Maheshwari ( ) previously isolated zinc-solubilizing bacteria from cow dung using serial dilution protocols on Bunt and Rovira media containing 0.1% zinc sources (ZnO and ZnCO3), incubated for 1 week at 30 ℃. In contrast, Javed et al. ( ) stated that the plate method is limited and used both qualitative and quantitative methods to isolate zinc-solubilizing bacteria. This study showed that most isolates tolerated soluble zinc in the form of ZnSO4 up to 400 mg kg −1 ; only B3, B5 and C6 were able to survive 500 mg kg −1 . Metal resistance mechanisms depend strongly on metal interactions with cells, and the most likely cause of such high resistance is either bioaccumulation or biosorption.
Atomic absorption spectroscopy studies have revealed zinc bioaccumulation by zinc-tolerant bacteria (ZTB). ZTB strains were found to produce a significant amount of exopolysaccharide (EPS) under Zn stress, and EPS-mediated Zn biosorption occurs mainly through the interaction between positively charged Zn ions and negatively charged EPS on the cell surfaces (Gupta and Diwan ). This is supported by Pramanik et al. ( ), who isolated three bacterial strains selected for their ability to grow in media containing 500 mg kg −1 of soluble zinc (ZnSO4). Similarly, the results of Saravanan et al. ( ) showed that both of their test isolates were able to survive 500 mg kg −1 , although one was inhibited after 8 days of incubation. By contrast, Nandal and Solanki ( ) showed that most of their isolates were able to grow only up to 100 mg kg −1 of ZnSO4, and only one isolate tolerated 200 mg kg −1 . Solubilization of Zn compounds by bacteria depends on the production of organic acids, especially 2-ketogluconic acid, and of H + , as well as other metabolic products, siderophores, and CO2 (Nautiyal et al. ). Sodium alginate was chosen as a carrier and tested for its ability to keep the biofertilizer agent viable during storage. Viability was compared during storage at room temperature and in the fridge. Viable counts showed that the biofertilizer agents remained viable under both storage conditions during the 3 months; only the B3 fridge sample showed a slight decrease, from 8.8 log10 CFU/ml to 8.7 log10 CFU/ml, by the end of the storage period, while all other samples showed stable counts. These results agree with those of Mohamed et al. ( ), who recommended sodium alginate beads as the best carrier because they are cheap and easily used and stored. In that study, encapsulation in alginate maintained high, stable cell densities up to the end of a 6-month storage period at 5, 20 and 30 ℃; likewise, this study showed that sodium alginate beads maintained bacterial viability for 3 months at room temperature and in the fridge. The results also agree with those of Bashan ( ), who noted that bacteria can survive in alginate beads for long periods. Moreover, Ivanova et al. ( ) found that the total number of bacteria decreased quickly in the first 7 days of storage, but the total number of viable bacteria then remained stable for 6 months of storage. In addition, the plant growth-promoting bacteria Bacillus subtilis and Pseudomonas corrugata , immobilized in a sodium alginate-based formulation, were evaluated for survival, viability and plant growth-promoting ability after 3 years of storage at 4 ℃ (Trivedi and Pandey ). Populations of both bacterial isolates recovered from the immobilized sodium alginate beads were in the order of 10 8 cfu/g. The plant-based bioassay indicated that the plant growth promotion ability of both bacterial isolates was equal to that of fresh broth-based formulations, and the isolates retained root colonization, antifungal activity, and enzyme activities in the alginate-based formulation during storage (Trivedi and Pandey ). In this study, three bacteria were selected based on their ability to solubilize zinc in different forms and to tolerate high amounts of zinc, and were identified as B3 ( Acinetobacter calcoaceticus ), B5 ( Bacillus proteolyticus ), and C6 ( Stenotrophomonas pavanii ).
The plant experiment was very helpful for analyzing the effects of the microbial strains on various plant growth parameters and on zinc content, and for evaluating their efficacy under simulated field conditions. When maize seeds were planted in soil inoculated with these strains, the plants showed enhanced growth as well as increased zinc content relative to both the negative control, to which no zinc or bacteria was added, and the positive control, to which only ZnCO3 was added without bacteria. The results varied with inoculation of the bacterial isolate alone or as beads with ZnCO3. In some cases, inoculation with the free bacterial isolate gave better results than the addition of beads, as with B3 and B5: the free B3 isolate produced the highest zinc content, 370.20 mg kg −1 . For C6, the beads gave better results, with encapsulated C6 producing the second highest zinc content, 358.28 mg kg −1 . Similarly, Goteti et al. ( ) demonstrated that inoculation with plant growth-promoting rhizobacteria significantly enhanced the growth of maize in all dimensions, while Omara et al. ( ) indicated that inoculation with E. cloacae , alone or with different zinc applications, variably increased the zinc percentage of Zea mays , with the highest Zn content in plants reported for inoculation with E. cloacae combined with ZnO. In a similar approach, Zaheer et al. ( ) showed that inoculation with Pseudomonas sp. strain AZ5 and Bacillus sp. strain AZ17 considerably improved the number and dry weight of nodules, the grain and straw weight, and the P and Zn uptake of the chickpea cultivar compared with the non-inoculated control. A similar finding was made by Kamran et al. ( ) using the zinc-solubilizing Pseudomonas fragi , E. cloacae , and Rhizobium sp.; wheat plants inoculated with these strains showed enhanced shoot and root length and weight as well as zinc content. The findings of Yasmin et al. ( ) showed that, in the treatment with the Zn-solubilizing bacterium P. protegens RY2 and ZnO as the zinc source, zinc content in both root and shoot was significantly (P < 0.05) higher than in the control treatment. Plant growth and development are enhanced by zinc-solubilizing bacteria that colonize the rhizosphere and increase zinc bioavailability by solubilizing complex zinc compounds. Enhancing the soil microbiome has the potential to increase crop production (Yuan et al. ) and reduce chemical inputs (Thijs et al. ), resulting in more sustainable agricultural practices. In conclusion, the findings of the current study contribute to sustainable agriculture by introducing cheap biofertilizers of three selected bacteria formulated in a stable commercial form. The sodium alginate-encapsulated biofertilizers remained viable at both room and fridge temperatures during 3 months of storage, and they enhanced the zinc content and growth parameters of the plants in a greenhouse pot experiment. Hence, they can be used for biofortification of Zea mays , which in turn improves human and animal health. Below is the link to the electronic supplementary material. Supplementary file1 (TXT 1 KB) Supplementary file2 (TXT 1 KB) Supplementary file3 (TXT 1 KB) Supplementary file4 (TXT 1 KB) Supplementary file5 (TXT 2 KB) Supplementary file6 (TXT 2 KB)
Introduction of radiation therapist‐led adaptive treatments on a 1.5 T
524353f6-57ce-4caa-ad14-13ec06bce91b
10122921
Internal Medicine[mh]
Magnetic resonance (MR) imaging has been shown to provide valuable additional information for many tumour sites and associated normal tissues due to its excellent soft tissue discrimination and functional information. Hybrid technologies combining linear accelerators (linacs) and MR scanners (MR‐Linacs) mark the beginning of a new era. MR‐guided adaptive radiotherapy (MRgART) enables daily adaptive radiotherapy, allowing the treatment plan to be personalised to the patient's anatomy at the time of treatment. This ensures precise localisation and real‐time tracking of the tumour and organs at risk (OAR), allowing further potential for dose escalation. The traditional radiotherapy treatment planning process may span days to weeks, with the exception of 'plan and treat' cases where the planning workflow may be reduced to hours. By contrast, the process of MRgART condenses the planning workflow into minutes at the treatment console. As a result, the process requires input at the treatment console from multidisciplinary radiation oncology professionals: radiation therapists (RTs), radiation oncology medical physicists (ROMPs) and radiation oncologists (ROs). This is resource‐intensive and logistically challenging, especially for services that span multiple locations. If MRgART can be delivered similarly to standard image‐guided radiotherapy (IGRT), with a traditional RT‐led service at the treatment console, these processes can be simplified and made more efficient. This work aimed to provide an overview of the processes involved in the development of an RT‐led approach to MRgART on the 1.5 T MR‐Linac. The concept of MRgART allows treatment plan adaptation for every fraction, with multiple adaptation methods available. There are two workflows available when treating on the Unity MR‐Linac (Elekta AB, Stockholm, Sweden): adapt‐to‐shape (ATS) and adapt‐to‐position (ATP), using the Monaco treatment planning system (Elekta AB). ATS is a complex approach that involves full plan adaptation with contour propagation, (re)contouring, and plan approval completed by the treating radiation oncologist. By contrast, the ATP approach involves a less complex virtual plan isocentre shift, which only requires adapting the multi‐leaf collimator (MLC) leaves according to translational corrections, with no further intervention to the contours. In our centre, the 'optimise shapes' adaptation method is used for ATP, where the default optimisation parameters are applied to change the segment weights and the optimiser is allowed to adjust the MLC leaf positions of the adapted plan to better match the reference plan dose distribution. This provides an opportunity for the plan to meet target and OAR dose without the full adaptive workflow involved in ATS. Therefore, ATP was adopted for the RT‐led workflow. Multidisciplinary team meetings involving RTs, ROs and ROMPs were held to identify the appropriate tumour site and cases suitable for the RT‐led workflow. It was decided that intact prostate cancer patients receiving more than five fractions would benefit from the RT‐led workflow, as prostate cancer is the most common clinical indication for treatment on the MR‐Linac in our centre. Figure shows the roles of each of the craft groups, comparing the ATS and RT‐led ATP workflows. The development process was divided into three main components: framework, tolerance and action levels, and staff credentialing. Framework ATS workflow The ATS workflow is used for the first five fractions of every patient.
Framework
ATS workflow
The ATS workflow is used for the first five fractions of every patient. This ultimately creates a library of patient‐specific data sets and plans that can be utilised for the RT‐led ATP workflow. The steps involved in the ATS workflow are highlighted in Figure and briefly described here. The patient is positioned on the MR‐Linac according to the simulation instructions, and a T2‐weighted MR scan is obtained. For the first fraction, the planning computed tomography (CT) is registered to the online MR scan and, using a combination of rigid and deformable image registration methods, the contours from the reference plan are propagated onto the online MR image. From fraction 2 onwards, the online MR scans are registered to a previous MR scan with visually similar OAR sizes and characteristics to enable superior image fusion and deformable registration. The selection of the most suitable previous MR scan is based on the 'Unity handover document', in which the volumes and doses delivered to all structures are recorded daily. The document serves as a tool to support decision‐making for subsequent fractions. Once the contours are propagated, the RO may edit any contours where needed, based on the anatomy of the day. Following this, the treatment plan is re‐optimised using the 'optimise shapes' adaptation method and approved by the RO. The motion‐monitoring 2D cineMR images are reviewed prior to beam‐on and display the planning target volume (PTV) location relative to the patient's anatomy. Treatment is initiated while monitoring the PTV using 2D cineMRI.

RT‐led ATP workflow
Once the library of datasets and plans has been created from the first five fractions, the RT‐led ATP workflow can be used from fraction 6 without the RO present. The steps involved in the RT‐led workflow are displayed in Figure and briefly described here. Following patient set‐up, a T2‐weighted MR scan is obtained. A previous MR image is selected from the library of plans, based on the Unity handover document and a visual qualitative comparison by the RT of the bladder and rectum sizes, to be fused and adapted to. The RT performs an automatic fusion and contour propagation, and the PTV, rectum and bladder volumes are assessed in Monaco. The MR fusion is adjusted to provide a superior match of the volumes for that fraction. The bladder and rectum volumes are also assessed, ensuring the contours visually cover the anatomy. Finally, the patient contour is reviewed by at least two credentialed RTs, ensuring that the variation is less than 5 mm in any direction for regions within 2 cm superior and inferior to the maximum extent of the PTV. If all metrics are met and the volumes are within tolerance, the plan is adapted using the ATP 'optimise shapes' method and treatment delivery proceeds. In the event of dose constraint violations for any of the structures, the RO is called to provide the ATS workflow for plan review and approval.

Tolerance and action levels
Respiratory and organ motion contribute to geometrical uncertainty in radiotherapy, and longer time on the radiotherapy couch may be associated with larger uncertainties. Therefore, due to the adaptive nature of each fraction, it is essential that critical online decisions are made swiftly and consistently. This required close multidisciplinary teamwork to define and optimise treatment workflows, thresholds for decision‐making and action levels. Strict tolerance and action levels were developed for the RT‐led workflow (Table ). A decision tree (Fig. ) was established to determine whether an RT‐led workflow was acceptable for that fraction; a minimal coded rendering of this gating logic is sketched below. If a patient meets all of the guidelines and is treated using the RT‐led ATP workflow, the treating RTs document the fraction as an 'RT‐led treatment session' in the Unity handover document. This record serves as a clinical handover between RTs and provides evidence for future audit and process improvement.
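The following is a minimal sketch of that decision logic, not the clinical software: the 5 mm contour tolerance within 2 cm of the PTV extent and the two‐RT review come from the text, while the field names and the aggregated dose‐constraint flag are illustrative assumptions standing in for the tolerances in the Table.

```python
from dataclasses import dataclass

@dataclass
class FractionAssessment:
    # Max contour deviation (mm) within 2 cm superior/inferior of the PTV extent.
    max_contour_variation_mm: float
    reviewed_by_two_rts: bool     # independent review by two credentialed RTs
    dose_constraints_met: bool    # stand-in for the tabulated dose tolerances

def select_workflow(a: FractionAssessment) -> str:
    """Gate a fraction into RT-led ATP or escalate to the RO for ATS."""
    if not a.reviewed_by_two_rts:
        return "hold: review by two credentialed RTs required"
    if a.max_contour_variation_mm >= 5.0 or not a.dose_constraints_met:
        return "escalate to RO: ATS (full adaptive workflow)"
    return "proceed: RT-led ATP using 'optimise shapes'"

# Example: contours within tolerance and constraints met -> RT-led ATP proceeds.
print(select_workflow(FractionAssessment(3.2, True, True)))
```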
Credentialing
A credentialing framework was developed for this workflow using the experience of another Elekta Unity site (The Royal Marsden), obtained through personal communication, as well as input from internal ROs and RTs. This credentialing programme enabled the RTs working on the MR‐Linac to be trained and competent in the RT‐led workflow in an online setting. The credentialing consists of both offline and online components to be completed by RTs. In the offline component, the RTs are required to prepare 20 MRgART plans using the RT‐led workflow; these are then reviewed by a qualified RO. Once these 20 plans are signed off and deemed clinically acceptable, the RTs are required to complete 10 plans online using the RT‐led workflow under the supervision of the prescribing RO. The credentialing framework covers appropriate reference plan choice, image registration, identification of the anterior wall of the rectum, bladder volume assessment, and understanding of the decision tree and when to engage the RO.
The RT‐led adaptive workflow for prostate cancer using the MR‐Linac has been successfully implemented at St Vincent's Hospital, Sydney, GenesisCare. Since the initiation of this workflow, eight RTs have been successfully credentialed. The RT‐led workflow has broadened the responsibilities of RTs and allowed for role expansion. We found that the implementation of this workflow reduced overall treatment times by 10 to 15 min and relieved ROs from being at the console for all fractions. The time saved could be attributed to the seamless workflow performed by RTs alone, as opposed to having multiple work groups performing various tasks. Online tasks required as part of MRgART increase the workload of key radiotherapy staff. With an increasing number of patients being treated on MR‐Linac systems, the transition of tasks from ROs to RTs is being widely investigated. A recent study reported that the general consensus from focus group interviews of RTs, ROs and ROMPs was to move towards RT‐led workflows. Another study reported results of a physician‐free workflow for the MRIdian MR‐Linac, concluding that the process had a success rate of 97.5%, with plans correctly adapted to the gold standard.
Similarly, another study reported that early evaluation of an RT‐led framework after treatment of 10 patients required minimal online clinician input (1.5% of 200 fractions delivered). More recently, an investigation of RT‐led daily online contouring for prostate cancer treatment reported that the contours were acceptable for clinical use in 94.2% of fractions. These findings suggest that the RT‐led workflow is feasible and can be implemented in a routine clinical setting. However, comprehensive training and credentialing are required for safe and successful RT‐led treatment. We anticipate the current working model of MR‐Linac systems will continue to evolve as RT‐led treatment becomes more prevalent in routine practice and new anatomical sites or clinical indications are introduced. Artificial intelligence may also be integrated into this process as it becomes applicable to the workflow. Moving forward, roles and responsibilities may cross traditional boundaries to enable prompt and less resource‐intensive MRgART.

The implementation of an RT‐led workflow for prostate cancer using the MR‐Linac has presented a number of benefits, including efficiency gained through a seamless treatment delivery process, the broadening of RTs' skillsets and role expansion. The development of the RT‐led workflow required close collaboration between RTs, ROMPs and ROs. This workflow has provided a framework for expanding the process to other tumour sites.

The author declares no conflict of interest.
The current and future role of the
34455291-0737-4442-b1a3-8ad936e30afd
10122922
Internal Medicine[mh]
Improvements in radiation therapy treatment planning (RTP) and delivery rely on accurately visualising and localising the tumour as well as normal tissue structures. Traditionally, computed tomography (CT) imaging has been used for treatment planning, primarily due to its high spatial resolution, geometric accuracy and the electron density information needed for beam attenuation and dose calculation. However, with an increase in the number of people surviving cancer, greater emphasis is now placed on reducing the side effects of radiation therapy (RT) treatment. Magnetic resonance imaging (MRI), with its superior soft tissue contrast, is increasingly being incorporated into the RT workflow to improve lesion detection, definition and extent. The improvements in target delineation and contouring compared with CT alone allow for more conformal radiation delivery and fewer side effects through reduced exposure of normal tissue. The emergence of MRI‐guided linear accelerators (MR Linacs) has improved visualisation of the tumour and its response, providing the potential to transform RT treatment paradigms by enabling personalised adaptive treatment workflows. As a result, they have highlighted the many benefits MRI can add to the radiotherapy workflow, leading to increased interest and rapid adoption of this imaging modality within radiation oncology departments.

Despite the many advantages of MRI, whether for use in MR Linacs or standalone MR simulators, there are a number of challenges in incorporating this modality within radiation oncology, particularly with regard to the absence of standards for staff training, education and safety. Two recent Institute of Physics and Engineering in Medicine (IPEM) topical reports highlighted the problem associated with the lack of available guidance from both national and international professional bodies regarding the use of MRI in radiotherapy, including staff training requirements and minimum competencies. Both of these reports stressed the importance of close collaboration between the RT and radiology disciplines when establishing an integrated MRI service. To address this issue locally, the Australia and New Zealand Magnetic Resonance (ANZMR) Sim Working Party was formed, comprising members across disciplines from Australia and New Zealand who are either currently working on dedicated MR simulators or plan to acquire one in the future. The Working Party aims to establish consensus on standard imaging protocols, MR safety, competency profiles, staff training and quality assurance guidelines. This commentary draws on the collective experiences of members of the Working Party situated within three tertiary teaching facilities. We detail the ongoing challenges of establishing MRI simulators for RTP and present possible solutions to support this emerging field.

There are previous examples where MRI has been introduced into 'hybrid' environments, including MR Positron Emission Tomography (MR PET), intra‐operative MRI scanners and MR‐guided focussed ultrasound systems. However, these systems were installed in, and/or managed by, diagnostic radiology departments within established institutions using existing MR protocols and procedures. In contrast, MRI installations in many radiation oncology settings have been conducted outside of traditional radiology departments and have required comprehensive safety and training strategies developed for a workforce unfamiliar with MRI environments.
Previous reviews of the experience of MRI simulation in tertiary teaching facilities, both in Australia and globally, have found that MRI radiographers are an important resource for establishing imaging protocols and procedures and for guiding radiation therapists through the fundamentals of MRI scanning. MRI radiographers have also provided valuable advice on MR safety and established safety recommendations and documentation. The IPEM survey by Speight et al in 2021 indicated that in centres in Europe, North America, Australia and New Zealand, MRI radiographers were the staff members most commonly involved in setting up patients for external beam radiation therapy (EBRT) MRI simulation, followed by radiation therapists. The survey found that training and education for both MRI radiographers and radiation therapists were undertaken using a combination of departmental training programs, self‐directed reading and vendor training. This finding is consistent with previously documented experiences in both the United Kingdom and Australia. The IPEM proposed two potential staffing models for MR image acquisition for EBRT: either (1) two radiation therapists with pre‐treatment imaging experience for patient set‐up and a diagnostic radiographer to oversee image acquisition, or (2) a diagnostic radiographer and a radiation therapist with suitable cross‐training in each discipline.

In the context of MR‐guided radiation therapy (MRgRT), the role of the MRI radiographer is less clear. The ESTRO‐ACROP recommendations on the clinical implementation of hybrid MR‐linac systems in radiation oncology indicate that the core staff involved in MRgRT would usually comprise radiation therapists, radiation oncologists and medical physicists, with MRI radiographers not explicitly mentioned. While the need to appoint a dedicated MRI radiographer for independent MRI simulators is typically well recognised, within Australia MR radiographers are not currently employed as permanent members of staff for MRgRT. Instead, radiographers working in radiology departments may be requested to provide consultation on an ad hoc basis, although this limited support is typically only available in larger tertiary hospitals. Day‐to‐day implementation and management of MRI safety policies for staff and patients can be particularly challenging in the absence of an experienced MRI radiographer. As advanced functional imaging techniques are introduced to support biological image‐guided adaptive radiotherapy techniques (BIGART), greater support from experienced MR radiographers will be critical.

Current pathways and recommendations for radiographers in radiotherapy
Few resources are currently available for the radiographer entering the specialised field of radiotherapy. Understanding the specific needs of MRI for RT requires radiographers to spend time observing RT treatment and planning. Experience in CT simulation gives insight into patient treatment positioning and RT immobilisation equipment, as well as the limitations and requirements of radiotherapy planning systems and software (e.g. transverse acquisition only). This experience is vital in adapting MRI sim scans to aid with treatment planning and image fusion.
Previous extensive experience in a diverse clinical diagnostic setting is vital for gaining the skillset needed for problem solving in areas such as integrating RT immobilisation devices, novel use of coils, non‐standard patient presentation, protocol development, image interpretation, reducing image artefacts and distortion, and the safe and appropriate administration of contrast media.

Current pathways and recommendations for radiation therapists in MRI simulation
Radiation therapists in Australia similarly have limited training opportunities in MRI prior to operating an MRI scanner. Prior learning consists largely of self‐directed reading, brief online courses and vendor applications training, often focussed on diagnostic applications and not providing adequate theory or practice in MRgRT. Hales et al. suggested that in this modality, radiation therapists require considerable additional MRI knowledge that is outside the scope of their usual training, including basic MR physics, MR safety (including screening), MR image acquisition, MR image interpretation and MR anatomy. However, in the UK, radiation therapists using MRgRT typically acquire MRI knowledge in an ad hoc manner, for example via vendor training, external courses, in‐house training, tutorials, workshops, self‐directed learning or by providing in‐services to teach other members of staff. The absence of comprehensive, consistent training that adequately addresses the additional skills required for MRgRT is problematic.

Our experience across three radiation oncology departments highlights that adequate cross‐training of radiation therapy professionals in MRI simulation requires foresight and ongoing commitment from management and rostering staff within radiation oncology. Factors such as staff rotations, leave, continuous professional training, the case workload of the department and staff turnover must be considered when staffing MRI simulators. Training in a new modality requires time and sufficient exposure to cases of varying complexity, and is currently heavily dependent on the trainees themselves due to the self‐directed nature of much of the learning. Development of individual credentialling documents is recommended, to ensure both the learner and trainer have a clear understanding of the learning objectives and skills required to progress, and have a suitable level of exposure to a variety of examinations. In our sites, an initial uninterrupted three‐month (FTE) rotation through MRI sim with an experienced MRI radiographer is necessary to introduce trainees to MRI safety, screening, patient positioning and basic image acquisition. It is important to ensure staff/trainees maintain consistent application of learnt concepts across a variety of clinical site groups and patient presentations through regular rostered rotations. Radiation therapists can progress from learner to intermediate level, where they may undertake independent practice working alongside an MR radiographer or an advanced‐level MR‐RT. Advanced‐level MR‐RTs are senior radiation therapists who have gained additional MR training, skills and experience working in MRI simulation and/or treatment. However, while this experiential learning is helpful, it is not a substitute for formal training, and there remains a gap in available standardised training programs.
Legislation, governance and training guidelines for MRI education and staffing
Currently there is no agreement on how best to train and educate MRI practitioners, whether MR radiographers or radiation therapists. This is further complicated in countries like Australia, the United Kingdom and Ireland, where no national MRI competence profile exists. Australian undergraduate radiography programs cover basic MRI physics and limited clinical training in MRI; however, MRI is not currently well integrated into the radiation therapy undergraduate curriculum. The Australian Society of Medical Imaging and Radiation Therapy (ASMIRT) recommends MRI practitioners undergo additional specific training and supervised clinical experience. An international survey of MRI qualification and certification frameworks concluded that MRI certification should be mandatory, managed by a regulatory body and supported by formal registration. While ASMIRT offers two levels of post‐qualification certification (basic and advanced) and maintains a register of certified radiographers, this is voluntary and not a requirement to practise clinically. In addition, the Australian Health Practitioner Regulation Agency (AHPRA) does not recognise a protected title for an MRI practitioner or provide minimum practice standards that radiographers or radiation therapists must meet in order to work in MRI.

In most parts of the world, MRI practitioners traditionally qualify as general radiographers before entering MRI and learn specialist skills experientially or through a combination of postgraduate study and clinical practice, with no set curriculum or uniform assessment. Several studies have highlighted a lack of consistency in the depth and retention of knowledge when MRI is learned in this way. Workplace learning of MRI varies widely and is dependent on the knowledge and skills of co‐workers, who have likely learnt MRI experientially themselves. This model is particularly problematic with increasing automation of MRI software and rapid advances in MRI technology, resulting in even experienced MRI operators lacking the sound knowledge and understanding of underlying theory required to integrate new techniques into clinical practice. Current MRI postgraduate programs offered in Australia are designed largely for practitioners who are already familiar with MRI and looking to progress their knowledge. Advanced certification courses and specialised undergraduate courses, such as those offered in the United States and Canada, which combine didactic learning with supervised clinical practice, have been shown to be significantly more effective in ensuring consistency in the understanding of concepts and retention of knowledge. The Australian Sonographers Association (ASA) similarly recognised that competency in practice requires specialist knowledge not adequately obtained through general radiography study. The ASA established competency profiles that required completion of an accredited course and clinical practice equivalent to 1 year of full‐time practice.

In the context of MR‐RT, ensuring adequately trained and credentialled staff are present for MR image acquisition and MR‐guided treatment is a challenge in the absence of recognised competency profiles. A recent unpublished Canadian review, using the consensus‐building Delphi technique, identified the core knowledge and requisite skills required by radiation therapists operating and utilising MRI in practice.
Many of these skills were novel and not recognised in the competency profiles of either radiation therapists or MR radiographers/technologists. Furthermore, the limited scope of MRI examinations and sequences used for radiotherapy planning reduces the opportunity to develop the breadth of experience necessary to practically scan difficult or complex cases. Cross‐training achieved through integrated collaborative working models that combine experienced diagnostic MRI technologists and radiation therapists can address many of these knowledge gaps. However, this model is reliant on the experience of the trainers and does not ensure the consistent or standardised MRI education essential to becoming safe and efficient operators, which can make it difficult to implement at scale in a rapidly evolving field.

The challenge of dealing with education, training and staffing of hybrid MRI technologies is not unique to radiotherapy. In 2013, recognising the need to collaborate in the emerging field of MR PET, Gilmore et al published a joint consensus paper making several recommendations, including the need for advanced‐level education for both radiographers and nuclear medicine technologists, and suggesting a new specialty certification be explored to demonstrate competency in this new and complex hybrid imaging system. Gilmore et al also recognised the American College of Radiology (ACR) recommendation that two staff members with an appropriate level of MRI safety knowledge be present at all times in the scanning area during patient examination. MRI sites that are not adequately staffed with suitably trained personnel to supervise staff, patients and equipment have been disproportionately represented in reports of adverse safety incidents, so ensuring a safe operating environment requires careful consideration. This is particularly relevant for MRI units located within the RT department, where MRI hazards are unfamiliar to staff and consequently poorly understood.
MRI Practice in Radiation Oncology
MRI is traditionally regarded as a safe imaging modality due to its lack of ionising radiation; however, there are a number of concerns regarding the impact of high static magnetic fields, time‐varying gradient magnetic fields and radiofrequency pulses on patients and objects in the environment, as well as the effects of contrast agents, claustrophobia and acoustic noise. In 2002 the ACR published a white paper on MRI safety in response to the death of a 6‐year‐old child in an MRI‐related safety incident. This document formed the basis of the ACR MRI safety guidelines, which have been adopted, or adapted, in many countries across the world, such as the Royal Australian and New Zealand College of Radiologists (RANZCR) guidelines in Australia and New Zealand, and the Medicines and Healthcare products Regulatory Agency (MHRA) guidance in the UK. Despite more than 20 years of universally accepted guidance, our industry still has no legislated MRI safety regulations or mandated reporting of MRI safety incidents, and MRI‐related adverse events remain a significant problem globally. Individual facilities are responsible for designing and implementing their own safety procedures; however, the lack of legislation has led to inconsistent implementation of the guidelines in individual settings. The risk of adverse events increases in hybrid environments due to the added complexity and the compromises required to incorporate multiple technologies and staff groups. This is further complicated in atypical settings outside of clinical radiology departments, such as radiation oncology, where no specific guidance currently exists. MRI equipment is often retrofitted into pre‐existing radiation bunkers or converted CT scan rooms, and many of the personnel have not undergone MR safety education as part of their conventional clinical training. In Australia and New Zealand, the current RANZCR MRI safety guidelines provide a framework for individual departmental policies by addressing topics such as equipment siting and zoning, staff and patient screening, education and training requirements, as well as the designation of specific roles and responsibilities. The guidelines recommend allocation of specific persons to designated MR safety roles; however, the guidance is tailored toward diagnostic imaging and the staff typically found in these environments.
While there may be many similarities between radiology and radiation oncology, the education, training and experience of staff vary significantly regarding exposure to MRI scanning and appreciation of the risks and safety requirements of working in these environments.

Safety Roles and How they Might be Filled in Radiation Oncology
The MRI Medical Director (MRMD) is responsible for all aspects of MRI safety, including the formulation and application of safety policies and procedures. This role is required to assess the balance of risk and benefit for unusual scanning situations and, therefore, should be held by a medical practitioner with substantial MRI experience. However, it can be a challenging role to fill in some radiology departments, and is even more difficult to adequately assign in a radiation oncology department without significant investment in training, plus the engagement of a dedicated radiation oncologist to provide appropriate support.

The MRI Safety Officer (MRSO) is responsible for the day‐to‐day implementation of the site's safety policies and for supervising all aspects of MRI safety for staff, visitors and patients. The MRSO role requires an individual with significant breadth and depth of knowledge covering MRI scanning principles and techniques, system specifications and spatial gradient field maps, as well as clinical experience such as manual handling techniques and basic life support, and awareness of MRI‐conditional equipment and implants and their use and limitations in the environment. Although this role is most often filled by a senior radiographer, the description of the role in current guidance documents is ambiguous, for example suggesting it could be held by professionals of varying educational backgrounds, qualifications or certifications. These definitions fail to recognise the core knowledge, skills and competence required to hold the MRSO role, particularly the specialist knowledge required for operating imaging in non‐conventional MR environments. Therefore, careful consideration is needed when deciding whom to appoint to the MRSO role in the radiation oncology setting, and what supports or training they will require.

The final designated role is that of the MRI Safety Advisor or Expert (MRSA/E). This role can be filled in‐house or shared across sites to advise on all technical and engineering aspects of MRI safety and the bio‐effects of electromagnetic fields. The MRI Safety Advisor may be invited to review existing policy documents, conduct external audits of procedures and provide advice regarding scanning specific implants. It is expected this role will be held by a physicist with specific MRI experience; however, the RANZCR guidelines suggest a suitably credentialled MRMD or MRSO from another site may hold this position. In 2020, the Australasian College of Physical Scientists & Engineers in Medicine (ACPSEM), recognising the increasing integration of MRI in radiation therapy and the urgent need to address the general lack of MRI knowledge among medical physicists, introduced an MRSE certification course. The course comprises a series of modules, assignments and examinations structured over 12 months, specifically designed to provide a depth of understanding of MRI principles for the candidate to gain the experience required to advise on safe MRI practice in the diagnostic, research and radiation oncology clinic. The latest RANZCR MRI safety guidelines require this certification, or equivalent, to hold the position of MRI Safety Advisor.
Limiting access to this course to ACPSEM members strengthens the role definition within medical physicists' scope of practice. We believe a similar opportunity exists to provide a pathway for radiographers and radiation therapists to attain recognised subspecialty certification and define the role of MRI Safety Officer within our advanced scope of practice. It is clear that, in order to satisfy the requirements of the current MRI safety guidelines, urgent attention needs to be directed toward staff education and training, as well as the development of suitable accreditation programs and support for staff accepting these roles and responsibilities outside the typical radiology setting.

In addition to these formal roles, all staff require core competency in MRI safety. Safety practices, such as checking pockets prior to entering the scan room, which are routine and habitual for radiographers, may not be well known and practised regularly by those new to MRI, and safety competency cannot be assumed in an atypical environment. In our experience, the burden of responsibility around safety often sits with the MR radiographer. However, failure to adequately train all staff who work in an MR setting in safety practices represents a substantial risk. Radiographers and radiation therapists will need to work together to identify the core requisite knowledge and skills and establish clear roles for the management of MRI safety in this space. In order to do this effectively, we need the support of professional and regulatory bodies to recognise, endorse and advocate for radiographer and radiation therapist advanced scope of practice in MR‐RT and in MR safety more generally. At the current rate of uptake of MR‐RT hybrid systems, the opportunity to provide specialist input into the development and implementation of policies and procedures for safe operation and staffing is rapidly closing.

This paper has discussed several key issues regarding the safe and effective use of MRI in radiation oncology, including recommendations for supervision, staffing, education and training for radiographers and radiation therapists to safely navigate this niche field. We have also highlighted a number of opportunities to progress the role of radiographers in Australia.

The authors declare no conflict of interest.
Radiation therapist perceptions on how artificial intelligence may affect their role and practice
9b84a9e1-b2d7-403a-ad55-557e3b6add5d
10122926
Internal Medicine[mh]
Chronic illnesses are a challenging burden in an ageing population. There were an estimated 151,000 cancer diagnoses in Australia in 2021, with 49,000 deaths. Radiation therapy provides effective treatment for cancer, with 74,200 courses delivered in 2018–19 within Australia alone, highlighting that improvements to the radiation therapy service would be beneficial. Artificial intelligence (AI) and machine learning (ML) algorithms can enable enhanced personalised treatments through automation, easing medical radiation professionals' (MRPs') workload. AI refers to computerised systems performing tasks ordinarily carried out by humans, and falls into three subcategories: ML, representation learning (RL) and deep learning (DL). The terms AI and automation are often used interchangeably, with automation referring to computers performing a task without human intervention using data fed into a system. AI takes the use of complex algorithms to a new level, allowing computers to learn without explicit programming through the automatic extraction and analysis of complex data.

AI is increasingly utilised in medical radiation science (MRS) with advanced computing and modelling, supporting advancements in treatment decision making, adaptive radiotherapy (ART), treatment workflows and quality assurance (QA). Artificial intelligence can improve clinical decision support through ML‐enhanced treatment outcome prediction models, providing evidence‐based, outcome‐orientated treatment pathways for patients. AI can improve safety and quality in CT simulation by predicting tumour motion and improving image registration via improved computational efficiency and treatment planning. Standardising treatment planning using ML algorithms enables plans to be produced based on the predicted attributes of historical plans, improving efficiency. Further treatment planning improvements include ML algorithms automating organ segmentation by analysing images and providing a likelihood segmentation map, which speeds up the segmentation process (a toy illustration is sketched at the end of this introduction). This reduces the bottleneck in the treatment workflow, increasing time for human interaction and efficient treatment delivery. AI is also used in multiple aspects of QA, where various combinations are being adopted, striving for 'machine‐creates and machine‐verifies' workflows. Artificial intelligence can improve consistency and quality, moving radiation therapists' (RTs') attention away from manual tasks towards developing and evaluating radiation treatments, which still require human intervention.

The introduction of AI has not come without challenges, with AI being considered a 'black box'. Anxiety exists around how AI may impact job roles, including AI displacing roles or 'dumbing down' the workforce, reducing RTs' ability to problem solve and impacting patient safety and quality treatment delivery. Concerns have been raised about job security, job satisfaction and loss of skills, as well as organisational disruption. Little evidence exists about AI and the perceptions of RTs, and the support needed for the safe use of these technologies. This must be explored to realise the full potential of AI and to develop strategies for implementation, education and training for the current and future RT workforce. The aim of this study was to survey Australian RTs' perceptions of how AI may affect their role and the service they deliver to patients.
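As a toy illustration of the likelihood segmentation map mentioned above, the following sketch thresholds a synthetic per‐pixel likelihood into a proposed binary contour. The data, threshold and array sizes are made up for illustration only; clinical auto‐segmentation models and their operating points are far more sophisticated and are validated before use.

```python
import numpy as np

# Synthetic 2D "likelihood segmentation map": per-pixel probability that the
# pixel belongs to the organ, as a trained model might output. Data are random.
rng = np.random.default_rng(0)
likelihood = rng.random((64, 64))

threshold = 0.5                     # assumed operating point, not a clinical value
mask = likelihood >= threshold      # proposed binary contour for RT/RO review

print(f"Auto-contoured pixels: {mask.sum()} of {mask.size}")
```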
Study design
A pragmatic decision was made to use an online survey for all participants to maximise the population reach, reducing bias and increasing internal validity. Development of the questionnaire drew on a scoping literature review and was discussed with RT experts, ensuring the questions were appropriate; this is important in self‐generated research. The survey was piloted by three MRS experts, with feedback acted upon, providing a degree of face and content validity and increasing the opportunity for meaningful answers to expand the depth of the research. Carefully considered questions aided accurate encoding of the survey, guarding against miscommunication. A direct, structured questioning strategy was brief, relevant, unambiguous, specific and objective, aiming to gain quantifiable and generalisable answers. Closed‐ended and open‐ended questions collected qualitative data in sufficient quantity, adopting a methodological triangulation strategy. Concurrent method triangulation, combining overview and insight questions, enabled participants to expand and contextualise responses, gaining rich data and resulting in well‐validated, substantiated findings.

Study population and sampling
This study was approved by the Sheffield Hallam University Human Ethics Committee (SHU Ethics/MC/310321). Intrusions on privacy were minimised, with survey questions designed so participants were not identifiable through their responses, aiding anonymity and confidentiality. This is important as radiation therapy is a small profession. A participant information sheet (see supplementary file: Participant Information Sheet.docx) provided clear information on expectations prior to completing the questionnaire (see supplementary file: Questionnaire.docx). Consent was gained in question one of the questionnaire, and participants could close the questionnaire without submitting if they no longer wished to participate. Direct participant quotes are available to view in the supplementary file Direct quotes.docx for transparency.

The selection criteria for participants included Australian RTs who currently perform RT practice in clinical sites and consented to completing the survey. By targeting RT networks, the sample was homogeneous in terms of a shared profession, yet heterogeneous in demographics, skill level and perceptions of AI. Multiple methods of recruitment included an online survey link via the Medical Radiation Practice Board of Australia (MRPBA) and the Australian Society of Medical Imaging and Radiation Therapy (ASMIRT) newsletter. Participants were also recruited via professional networks and a snowball strategy through Medical Radiation Australia, the Australian Chief RTs' group and the Australian Radiation Therapy Clinical Educators group on social media. With a 95% confidence interval and 5% margin of error, the total sample size required for this study was 336, which would allow robust conclusions to be drawn (the underlying arithmetic is sketched at the end of this section). The sample was drawn from the total population of 2625 registered Australian RTs between April and June 2021. The study recorded 105 responses, which is lower than the estimated sample size, thus increasing the margin of error. The low response rate (31%) means that the results may not include all perspectives from practising RTs.

Data analysis
Quantitative data were analysed using Microsoft Excel v2109 (Microsoft Corp, Redmond, US) and expressed in percentages, charts and tables.
Qualitative open‐ended questions were coded in NVivo‐12 software (QSR International, Melbourne, Australia) and analysed using reflexive thematic analysis (TA). TA was conducted in six phases by a single member of the research team (JO) to identify main themes within the data that were relevant to the research question and represented a patterned response; this approach is useful when analysing perceptions, providing a broader understanding of the context. The six phases comprised familiarising oneself with the data, generating codes, searching for themes, reviewing themes, defining and naming themes, and producing a report. The TA was reviewed independently by a second member of the research team (MC), which improved the dependability and rigour of the coded data.
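The sample‐size and margin‐of‐error figures quoted above are consistent with Cochran's formula for a proportion plus a finite population correction. The following sketch reproduces them, assuming p = 0.5 and z = 1.96 (the conventional values for a 95% confidence level; the text does not state these explicitly):

```python
import math

def cochran_n(e: float, z: float = 1.96, p: float = 0.5) -> float:
    """Cochran's sample size for a proportion (infinite population)."""
    return z**2 * p * (1 - p) / e**2

def finite_population_n(n0: float, N: int) -> int:
    """Apply the finite population correction and round up."""
    return math.ceil(n0 / (1 + (n0 - 1) / N))

N = 2625                            # registered Australian RTs (from the text)
n0 = cochran_n(e=0.05)              # ~384.16
print(finite_population_n(n0, N))   # 336, matching the reported target

# Margin of error actually achieved with the 105 responses received,
# again assuming p = 0.5 and applying the finite population correction.
n = 105
moe = 1.96 * math.sqrt(0.25 / n) * math.sqrt((N - n) / (N - 1))
print(round(moe, 3))                # ~0.094, i.e. roughly a 9.4% margin of error
```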
Demographic information
All 105 RTs who participated completed the survey, with the majority (65%) having >11 years of clinical experience and proficiency in multiple RT tasks, as presented in Table . The data were organised into multiple codes, generating the key themes displayed in Figure .
Theme 1: AI implementation
Results demonstrated that automation is used in image reconstruction (59%) and fusion (61%), organs at risk (OAR) contouring (50%), plan set-up (27%), plan optimisation (49%), plan evaluation (25%), QA (20%) and image match analysis (50%), with target contouring remaining the most manual task. Participants were optimistic about AI use (68%), attributing this to increased safety, quality, efficiency and service improvement, with one participant stating:
Radiation therapy is becoming increasingly complex. Without AI we will not be able to continue to improve the service, without becoming unproductive. (82)
Few participants showed apprehension towards AI (7%), owing to fear of the unknown, ethical implications, increased expectations, safety issues, a lack of understanding and loss of skills; 25% of participants had neutral feelings.
The potential is great; however, it is dependent on its implementation, scope of use and how trustworthy the results will be, the old adage of 'shit in, shit out' comes to mind. (17)
Theme 2: AI knowledge and training
Knowledge
AI knowledge was limited amongst most participants (63%) (Fig. ), with 18% gaining this knowledge through university or work-based training (Fig. ). Some participants (27%) perceived sufficient training had been provided for safe and efficient AI use, and 96% desired to know more, with one participant stating:
I said "no" but I'd be willing to learn more if the training wasn't too complex avoiding IT-language. (50)
RTs would like to learn more about clinical applications (95%), ethical implications (60%) and the theory of AI (41%) (Fig. ), via online workshops, self-paced online material and face-to-face workshops.
A selection of the aspects of AI that RTs would like to learn more about includes:
Building AI tools
The limitations of AI; what it could one day achieve, and what it would never achieve. (24)
Regulators/ethics/safety
Ethics of allowing computer-based technology to adapt decision making techniques. (55)
Application of AI
I like to be hands-on with my learning, so the application of AI is more interesting. (4)
AI training
Participants commented that a lack of time and resources inhibits training opportunities, and that whilst training may have been given, this does not guarantee competency. Participants suggested training should include the underpinnings of AI, safe use of AI, maintenance of algorithms and troubleshooting skills, with associated competency packages.
Theme 3: Impact of AI on RT profession
Participants perceived that AI increases productivity more than quality in most radiation therapy tasks (Table ).
RT roles
Many participants (66%) perceived AI would affect their role, with 12% stating it would not and 22% unsure. Participants stated AI would increase their skillset, providing opportunities for staff, patients and the radiotherapy service through implementation of daily ART and advancement of the profession. RTs perceived AI would free up critical-thinking time, increasing job satisfaction whilst reducing mundane tasks; with good management, the time gained could be used for patient-facing tasks, research or continuing professional development.
I believe it will decrease repetitive tasks allowing more time spent on training and advancing practice. (105)
Others perceived RTs would lose their clinical reasoning skills, becoming 'button pushers' or QA computer operators unable to detect errors and use professional judgement. Some participants (23%) stated AI could reduce job satisfaction and job security and could devalue experienced staff.
The quality is not the same as human intervention. (99)
Neutral perceptions were that RT roles will adapt, attracting different people to the profession. AI has the potential, if implemented successfully, to improve care, highlighting the need for a strategic implementation process that ensures AI is not used to 'cut corners' and made inferior to current practices.
I believe implementation of AI will result in changes to current roles to adapt to new technology. (61)
Theme 4: Impact of AI on the patient
Some (67%) perceived AI would improve the treatment pathway for patients, as RTs could focus on patients or develop supportive roles. Others (5%) believed AI would have a negative impact on the patient, decreasing interaction with RTs; the patient service would become a people conveyor belt, with templated treatments and less personalised plans decreasing service quality. The remaining 28% had mixed feelings.
Theme 5: Impact of AI on service
Many participants (67%) stated AI would positively impact the service by improving outcomes (21%), increasing efficiency (40%), improving accuracy (11%) and improving quality (9%). Treatments are increasingly complex with technological advancements and, if carried out manually, availability would be limited. Several (60%) perceived AI could improve consistency, quality and efficiency while decreasing human error, positively impacting the patient, the RT and the service. The success of AI depends on its implementation, with participants stating that if AI is safely implemented, it can benefit patients and increase precision.
Research such as this will aid the implementation process, so that the perceptions of RTs can be heard and hopefully acted upon, ultimately improving the service to patients.
This study explored RT perceptions of how AI may affect their role. The 105 RTs who completed this research were 77% female, comparable to MRPBA registrant data (68.4%), demonstrating a representative gender ratio. Recruited participants had significant clinical experience, similar to the prospective cohort study by Batumalai et al., which examined radiation oncology professionals' perceptions of automation in radiotherapy planning; as in the current study, their participants highlighted the need for continued education to ensure knowledge is not lost with automation. Most responding RTs practising in Australia are enthusiastic about AI use. Support and training are desired to reduce apprehension as AI is increasingly utilised. Training would preferably be via online workshops and self-paced material, covering the underpinnings of AI, its safe use, maintenance of algorithms and how to effectively troubleshoot AI creations. RTs had mixed feelings on how AI may change their role and on its impact on the patient and the service. The perceptions of RTs in this study provide useful insight into the use of AI in radiation therapy and how it may affect their role, with many RTs using AI in practice alongside manual intervention. Most participants (63%) had limited AI knowledge, yet many were enthusiastic about learning more (96%), as they perceived AI would affect their role (66%). Confusion exists amongst participants between the definitions of AI and automation, with 63.5% of RTs having limited or no knowledge of AI despite using AI daily, potentially limiting its application to practice through a lack of understanding. This highlights a need for improved education on AI use and its application in radiation therapy.
These results are similar to Batumalai et al.'s findings, whereby 24% of their MRS participants felt training and education in their department was sufficient, with the remaining 76% undecided or requiring further training; this is comparable to the 63% of RT participants in this study with limited understanding. This further demonstrates a need for RTs to gain underpinning knowledge of AI and its application, so that advancement can develop faster. One potential explanation for the confusion amongst RTs is that RT departments lacked transparency when implementing new software 'behind the scenes', limiting RTs' knowledge to day-to-day use only, preventing depth of understanding and potentially limiting the scope of AI use in practice. Most participants (96%) would like to learn more about AI, specifically its clinical application, ethical implications and theory. This is supported by Chamunyonga et al. in their review of considerations for future radiotherapy curriculum enhancement, who found effective training must include ongoing maintenance of algorithms alongside multi-disciplinary care and research, complemented with competency packages. This is further supported in a review paper by Vanderwinckele et al., who examined AI applications in radiotherapy and suggested recommendations for practice. They stated a multi-disciplinary team must have basic knowledge of AI, knowing its strengths and limitations, to enable safe implementation of AI models, supporting the depth of knowledge that would be useful to the participants in this study. RTs would prefer AI training that avoids 'IT language', although collaborative training would be beneficial, including AI experts, researchers, software companies, radiation oncologists, RTs and radiation oncology medical physicists (ROMPs); this is supported further by Chamunyonga et al., who found RTs or ROMPs would be the preferred facilitators. French and Chen concur in their invited commentary on preparing for AI, stating that collaboration for training should occur across the computer science and data analytics domains. Considering the results of this and the other research discussed, an ideal training programme would incorporate a multi-disciplinary collaborative approach. Concepts raised in this and previous studies align with the MRPBA professional capabilities, which underpin the RT role. This highlights that ethical implications, clinical applications and the theory of AI would be imperative to include in a multi-disciplinary training programme. Improving AI knowledge amongst RTs is important as AI is increasingly utilised and may therefore affect the RT role. The MRPBA recognise this in their position statement on AI, acknowledging that AI will be a significant element of the future of the MRS professions. Many participants (66%) in this study believe AI will affect their role, as was found in the study by Batumalai et al. (83%). By using AI to its full potential, administrative tasks can be reduced and mundane processes automated, increasing efficiency and job satisfaction and potentially leading to a shift in roles, 'making fuller use of RTs' scope of practice'. Nevertheless, concerns were raised that AI will make the job repetitive and boring and that RTs will 'become button pushers', consistent with Batumalai et al.'s findings.
However, some participants in this study felt AI would not affect the equilibrium of the RT profession, as roles will adapt and RTs will embrace AI to improve care. This perception would be beneficial to all, as demand on healthcare grows with people living longer and treatments becoming more successful. The workforce must become more efficient to keep up with demand on the healthcare system while continuing to improve patients' experience and safety. Many perceived AI would affect the patient's radiotherapy service positively (67%), some held a contrasting view (5%), and 28% had mixed feelings. Participants believed AI should enable RTs to deliver the best care to patients more easily; however, some perceived AI could decrease treatment quality. With the improved efficiency from utilising AI applications further, specialised pathways could be implemented and supportive roles developed. This could improve patient care and potentially increase the overall quality of the service. Some perceived AI would have a neutral effect on the patient and service. If used appropriately, AI has many advantages in reducing the time from planning to treatment, underlining the quality and safety required of the implementation process; AI can potentially enhance the profession rather than replace it, though this requires improved AI knowledge and understanding, with further exploration needed. RTs, as 'communicators and collaborators', put patients' needs first, with their duties balanced between the technical and patient-contact aspects of the profession. Maintaining this balance is a hallmark of the MRS profession, with a need for the indispensable element of human support. The findings of this study support the need to maintain this balance whilst further improving knowledge and increasing the implementation of AI-driven tasks to improve the service. AI implementation and acceptance in practice could be improved through collaboration between RTs, industry experts and academics, so that the next breakthrough in RT advancement can be developed. It is imperative for the MRS profession to collaborate and improve its knowledge, so it can improve the quality of RT tasks at the same rate as their productivity. Change is upon us and adaptation is required; RTs who prepare for and accept this change may prosper career-wise, whereas those who do not could limit the opportunities AI could provide. Some limitations are apparent in this study, the main one being the sample size. A 95% confidence interval with a 9% margin of error was achieved with a sample size of 105. This low response rate (31% of the target sample) may limit generalisability; however, the data collected from those RTs had great depth and quality, which is not so easily measured with statistics. The response rate may have been improved had the study been conducted outside the COVID-19 pandemic, or had the survey completion timeframe been longer. Another limitation was a slight overuse of open-ended questions, which made responses harder to analyse due to the volume of responses and analysis required. Although time-consuming to analyse, this did expand the scope of the research, so labelling it a limitation could be contested. Some of the open-ended questions could have been changed to Likert-type responses, which would have been quicker to interpret, although they may not have yielded the same depth of understanding.
Enabling participants to provide their email addresses could have created greater opportunity for follow-up; not collecting correspondence information limited this opportunity and was another limitation of this study, which should be considered in future studies. In conclusion, RTs perceive that embracing AI in radiotherapy could advance the profession and improve the service to patients, changing the RT role rather than replacing it. They perceive these benefits will outweigh the negatives if AI is implemented with sufficient training to enable RTs to better understand its potential, and if management uses these benefits to improve patient care rather than replace RT roles, so that the quality of patient treatment improves at the same rate as productivity whilst maintaining job satisfaction and retention amongst RTs. This study can inform multiple follow-up research projects, including management perspectives on the AI implementation process and the training needs they perceive for employees. The study could also be replicated with other MRS professionals to broaden the scope of the research. Future work could also investigate AI training options for MRS professionals in Australia and worldwide, as there appears to be great demand for it, with 96% of RTs wanting to learn more. No financial support has been received for this research. The authors declare no conflict of interest. This study was approved by the Sheffield Hallam University Human Ethics Committee (SHU Ethics/MC/310321).
Supplementary file S1: Participant Information Sheet.
Supplementary file S2: Questionnaire.
Supplementary file S3: Direct quotes.
Magnetic resonance imaging organ at risk delineation for nasopharyngeal radiotherapy: Measuring the effectiveness of an educational intervention
e69760b8-318d-4bdf-b8ac-55520d8a2123
10122931
Internal Medicine[mh]
Radiotherapy (RT) for nasopharyngeal carcinoma is constrained by the proximity of organs at risk (OARs) to tumour volumes. Modern RT techniques allow high doses of radiation to be delivered to target volumes while sparing nearby OARs. This significantly improves local control whilst reducing treatment-related side effects, improving patients' quality of life. To ensure the dose delivered to the tumour volume is precise and the dose to OARs is within tolerance, accurate delineation of OARs is necessary. Inter-observer variability (IOV) contributes to delineation error in RT and impacts the dose delivered to OARs. IOV is a measure of the difference between contours completed by two or more observers examining the same material. Measures designed to minimise IOV include guidelines and atlases, multi-modality imaging, standard protocols and auto-contouring tools. The inclusion of magnetic resonance imaging (MRI) in nasopharyngeal RT has been shown to reduce IOV. Compared with computed tomography (CT), MRI provides superior soft-tissue visualisation, which improves target volume and OAR delineation. Because of the complexity of soft-tissue structures in the head and neck (H&N) region, MRI improves nasopharyngeal OAR delineation: these OARs display similar soft-tissue contrast to surrounding structures on CT data sets but are more discernible, and thus better delineated, on MRI scans. MRI is often combined with CT to optimise delineation of target volumes and OARs while maintaining accurate dose calculation. Currently, the existing CT-based treatment planning workflow relies on target and OAR definition on MRI and a transfer of contours to CT via image registration. MRI-CT co-registration requires two separate imaging sessions and has fundamental and logistical drawbacks, as dual-modality workflows may introduce misregistration and geometrical uncertainties. MRI-only workflows are feasible; however, radiation therapists (RTs) need to be educated in identifying and delineating OARs on MRI as well. Education courses conducted in person, online or as a combination of both have been shown to be effective teaching modalities. Davis et al. demonstrated that didactic plus hands-on interventions were more effective in facilitating change than didactic sessions alone: in their meta-analysis, studies with only didactic interventions had no statistically significant impact on participants' behaviour or healthcare outcomes, while studies combining didactic sessions with interactive interventions did. In a study by Awan et al., seven resident observers contoured 26 H&N OARs on a CT scan; after a teaching intervention, the observers contoured the same 26 OARs on another CT scan. The teaching intervention involved an atlas and real-time software-based feedback to help contour OARs in the H&N region. Mean Dice similarity coefficient (DSC) scores across all structures improved between phases, and each resident observer demonstrated statistically significant improvement in overall OAR contouring (P < 0.01). These findings indicate educational interventions can improve the contouring of H&N OARs. As a result of these studies, we explored an interactive approach, using a didactic lesson on radiological anatomy delivered in an interactive environment. The lack of published clinical findings on the effects of a teaching intervention for MRI-only nasopharyngeal OAR delineation emphasises the need for this research.
This study evaluated whether participation in an education workshop minimises IOV when delineating nasopharyngeal OARs on MRI only.
Context
The South Western Sydney Local Health District Human Research Ethics Committee approved this study (HREC/16/LPOOL/603), which utilised retrospective imaging data obtained during routine clinical radiotherapy planning. Retrospective MRI data sets of five patients diagnosed with nasopharyngeal carcinoma were used. Inclusion criteria were patients treated for nasopharyngeal carcinoma with curative intent, a prescribed dose ≥60 Gy, any tumour and nodal stage but no metastasis, and MRI scans covering the temporal lobe superiorly to the clavicles inferiorly. This was a single-centre comparison study of OAR IOV pre and post a contouring education workshop intervention. Eleven radiation therapist observers were asked to contour 14 H&N OARs on five patient MRI data sets pre and post the education workshop. The 14 OARs delineated were: base of tongue, brainstem, left and right lacrimal glands, larynx, optic chiasm, left and right optic nerves, left and right parotid glands, pharyngeal constrictors, spinal cord, and left and right temporal lobes. These OARs were selected as they display similar soft-tissue contrast to surrounding structures on a CT data set but are more discernible on MRI scans.
Imaging
Images were acquired on a 3 T MRI scanner (Magnetom Skyra; Siemens Healthcare, Erlangen, Germany) with two 18-channel receiver array coils. The surface coils were placed over the H&N region, abutting each other using coil bridges on top of a H&N mask to cover the area of interest. Images were acquired in the transverse plane using a T2_tse_DIXON sequence; Dixon in-phase images, which combine water and fat signal, were utilised for delineation in this study. The voxel size was 1 × 1 × 3 mm, slice thickness was 3 mm with a 0-slice gap, base resolution was 256, and the flip angle was 140°. The 250 mm field of view extended from the superior aspect of the orbital bone to the suprasternal notch. The repetition time was 1459.0 ms and the echo time 83.0 ms.
Pre-workshop delineation
Before the workshop, observers delineated 14 OARs using MIM version 6.9.5 (MIM Software Inc., Cleveland, Ohio). The observers were provided with H&N contouring work instructions based on consensus delineation guidelines for H&N OARs. The instructions described the borders of each OAR with an example image of the OAR contoured on a CT scan. The observers were blinded to each other's volumes. Observers were asked to indicate their experience level as no previous experience (never rostered for head/neck planning), some experience (≤3 months of consistently contouring a minimum of 5 plans a week) or experienced (≥6 months of consistently contouring a minimum of 5 plans a week). Observers were also asked to rank their confidence when contouring each OAR on a 6-point Likert scale from 0 to 5, indicating not confident, slightly confident, somewhat confident, fairly confident, confident and very confident.
Intervention
The education workshop was conducted after observers finalised their initial volumes. The workshop included discussion of basic concepts of T1- and T2-weighted MRI scans, OAR anatomy in relation to other structures and its typical appearance on the T2_DIXON in-phase acquisition, and how to recognise structures such as water, fat and bone on this acquisition.
The 14 OARs were discussed in detail regarding the appearance of each OAR on the T2_DIXON in-phase sequence, with images as aids. The education workshop was collaboratively developed by a senior MRI radiographer with more than 12 years of MRI experience employed in a radiotherapy department and a senior radiation therapist with expertise in H&N planning and MRI-guided radiotherapy. As COVID-19 restrictions prohibited in-person workshops, the education intervention was held virtually. Observers were encouraged to ask questions and received real-time feedback.
Post-workshop delineation
To minimise recall bias, all observers were given a four-week break between pre- and post-workshop delineations, data sets were re-labelled, and observers were blinded to their pre-workshop volumes. Observers were given the same instructions as pre-workshop, along with the new information from the education workshop. Observers were once again asked to rank their confidence for each OAR following delineation. In addition, observers completed a 5-point Likert scale survey indicating whether they strongly disagreed, disagreed, were neutral, agreed or strongly agreed with being comfortable with H&N OAR contouring pre- and post-workshop, and whether the education workshop was a worthwhile experience.
Reference volumes
Reference volumes were generated based on the consensus of two H&N radiation oncologists (ROs) with extensive experience in utilising MRI for H&N radiotherapy. Each RO has contoured between 40 and 50 cases over the last 5 years, and their contours have been routinely audited weekly by other H&N ROs and radiologists at the hospital. For this study, each RO contoured seven OARs and then audited the other seven. If a contour was not accepted by the auditing RO, edits were made after discussion and agreement. The ROs followed international consensus guidelines to aid contouring.
Analysis
Contour analysis was performed in MIM version 6.9.5 (MIM Software Inc., Cleveland, Ohio). The data obtained for each OAR included DSC, HD (mm), absolute volume (cc) and centroid X, Y and Z (cm). The relative volume and centroid difference (Δ) between each observer's OAR and the reference OAR were reported. DSC measures deviations of observers' contours from reference contours and evaluates their overlap; a DSC ≥0.7 indicated 'good' agreement between observer and reference contours. HD identifies the greatest distance from a point in one contour to the closest point in another contour; a high HD indicates greater dissimilarity between the observer's contour and the reference contour, while an HD of zero indicates identical contours. Relative volume is a volumetric comparison between observer and reference volumes:

Relative volume (%) = (observer absolute volume [cc] / reference absolute volume [cc]) × 100

Centroid ΔX, ΔY and ΔZ describe the difference in the geometric centre of a volume in three planes compared to the reference volume. The X-axis represents left/right, the Y-axis anterior/posterior, and the Z-axis superior/inferior:

Centroid ΔX = observer centroid X − reference centroid X

The same equation was followed to calculate centroid ΔY and ΔZ.
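These comparison metrics are straightforward to reproduce outside a planning system. The sketch below assumes the observer and reference contours are available as boolean masks on a shared voxel grid (for DSC, relative volume and centroid differences) and as point sets in millimetres (for HD). MIM's internal implementation is not published, so the function names and inputs here are illustrative assumptions rather than the study software:

    import numpy as np
    from scipy.spatial.distance import directed_hausdorff

    def dice(obs, ref):
        """DSC = 2|A ∩ B| / (|A| + |B|); >=0.7 was read as 'good' agreement."""
        inter = np.logical_and(obs, ref).sum()
        return 2.0 * inter / (obs.sum() + ref.sum())

    def hausdorff_mm(obs_pts, ref_pts):
        """Symmetric Hausdorff distance between (n, 3) contour point sets in mm."""
        return max(directed_hausdorff(obs_pts, ref_pts)[0],
                   directed_hausdorff(ref_pts, obs_pts)[0])

    def relative_volume(obs, ref):
        """Observer volume as a percentage of the reference volume.
        Voxel size cancels when both masks share the same grid."""
        return 100.0 * obs.sum() / ref.sum()

    def centroid_delta(obs, ref, spacing_cm):
        """Per-axis geometric-centre difference (observer - reference) in cm.
        Axis order follows the mask array (e.g., z, y, x)."""
        c_obs = np.argwhere(obs).mean(axis=0)  # mean voxel index of the mask
        c_ref = np.argwhere(ref).mean(axis=0)
        return (c_obs - c_ref) * np.asarray(spacing_cm)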
Quantitative analysis of pre- and post-workshop differences covered DSC, HD, relative volume, centroid ΔX, ΔY and ΔZ, and confidence levels for each OAR across all observers. Statistical analysis was performed using the Mann–Whitney U test in SPSS (IBM SPSS Statistics for Macintosh, Version 28.0, released 2021; IBM Corp, Armonk, NY); a P-value of <0.05 was considered significant.
Analysis of Education Program
The education workshop was deemed beneficial for any OAR if ≥50% of observer contours demonstrated improvement based on the IOV comparison metrics and if ≥50% of observer confidence improved.
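In code, the per-OAR significance test and the ≥50% benefit criterion might be combined as in the sketch below. The DSC values are invented placeholders (the study's real scores sit in its tables), observers are assumed to be listed in the same order pre and post, and scipy's Mann–Whitney U stands in for SPSS; only the test and the decision rule come from the text:

    import numpy as np
    from scipy.stats import mannwhitneyu

    # Placeholder per-observer DSC scores for one OAR (10 observers).
    pre  = np.array([0.55, 0.60, 0.48, 0.62, 0.58, 0.51, 0.64, 0.59, 0.50, 0.57])
    post = np.array([0.68, 0.72, 0.61, 0.70, 0.66, 0.65, 0.74, 0.69, 0.63, 0.71])

    stat, p = mannwhitneyu(pre, post, alternative="two-sided")
    improved = np.mean(post > pre)  # fraction of observers whose DSC rose

    print(f"U = {stat:.1f}, p = {p:.4f}")  # significant if p < 0.05
    print("Workshop beneficial for this OAR:", improved >= 0.5)

An analogous ≥50% check on the confidence ratings completes the benefit criterion described above.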
The study was conducted from May to August 2021. We began with 11 observers and five patient data sets; however, with a COVID-19 outbreak putting extra strain on staff, one observer dropped out and one patient data set was removed due to time constraints. COVID-19 restrictions also prohibited face-to-face workshops, so the education intervention had to be presented virtually. Observers' experience in H&N radiotherapy varied: four of the 10 observers had no previous experience, three had some experience and three were experienced. The four data sets used were of patients with the following staging: T4N2M0, T2N2M0, T1N2M0 and T3N3bM0. As shown in Table , all OARs except the right optic nerve, pharyngeal constrictors and spinal cord had statistically significant improvements in at least one of the metrics measuring inter-observer variation. Interestingly, the left optic nerve and right parotid gland demonstrated statistically significant improvement in HD across observers; however, their contralateral counterparts did not. All OARs had statistically significant improvements in observers' confidence levels when contouring each OAR. Mean DSC scores increased for ≥50% of observers on eight OARs (base of tongue, left and right lacrimal glands, larynx, optic chiasm, spinal cord, and left and right temporal lobes). However, only the base of tongue, larynx, spinal cord and right temporal lobe also achieved a mean DSC score of ≥0.7. The right and left temporal lobes each had an outlier observer with post-workshop DSC scores of 0.4 and 0.3, respectively (Fig. and Figure ). These outliers, however, had minimal impact on the mean post-workshop DSC scores: the mean DSC for the right temporal lobe went from 0.66 with the outlier included to 0.69 without it, and the mean DSC for the left temporal lobe from 0.62 to 0.66. Figure shows screenshots of an axial slice of observers' post-workshop optic chiasm, lacrimal gland and optic nerve contours from one of the patient data sets; these structures all had low mean DSC scores of ≤0.4 post-workshop. As shown in Figure , mean HD scores decreased for ≥50% of observers on 12 OARs (base of tongue, left and right lacrimal glands, larynx, optic chiasm, left and right optic nerves, left and right parotid glands, pharyngeal constrictors, and left and right temporal lobes). For mean relative volume, ≥50% of observers improved on 10 OARs (base of tongue, brainstem, left and right lacrimal glands, larynx, optic chiasm, left and right parotid glands, and left and right temporal lobes). For centroid ΔX, ΔY and ΔZ, all OARs had a post-workshop mean centroid difference of ≤0.3 cm, except the left temporal lobe (Fig. ) on the ΔX axis (left/right), the left and right temporal lobes (Fig. ) on the ΔY axis (anterior/posterior), and the brainstem (Fig. ) on the ΔZ axis (superior/inferior).
Mean ΔX for the left temporal lobe was 0.4 cm, mean ΔY for the left and right temporal lobes was −1.2 cm and −1.1 cm, respectively, and mean ΔZ for the brainstem was 0.6 cm. The percentage of OARs with improved mean DSC and relative volume scores was higher for observers without previous experience, whereas the percentage of organs with Hausdorff distance improvements was similar between the groups (Table ). An increase in confidence was seen for all 14 OARs post-workshop (Fig. ), and all observers reported improved comfort with MRI OAR delineation after the education workshop (Fig. ). Of the 10 observers, four agreed the education workshop was a worthwhile experience and six strongly agreed. Contouring variability is a well-known source of geometric error in radiotherapy planning, and the impact of contour deviation has been associated with poorer clinical outcomes. RTs are highly skilled in CT image interpretation, as CT imaging is routinely used both for training and clinically. With the increased use of MRI in radiotherapy, including MRI-only workflows, it is important that RTs are also skilled in identifying and delineating OARs on MRI. MRI guidelines for radiotherapy H&N contouring do not exist, and therefore no guidelines specify optimal sequences for OAR delineation; current international guidelines are CT-based and only include CT-MRI fused data sets, signifying a need for future guidelines to consider MRI-only data sets. MRI delineation skills are also valuable as automated workflows become more routine and RTs are required to make qualitative assessments of whether a generated delineation is correct. This is a novel study investigating the benefit of an educational workshop in reducing IOV when contouring the H&N region specifically on MRI. The results showed the education workshop was beneficial in reducing IOV and improving observer confidence when delineating nasopharyngeal OARs on MRI scans. There have been previous CT-based studies in this area, and our study supports their findings that teaching interventions can reduce IOV when contouring H&N OARs. In the Bekelman et al. study, 11 radiation oncology residents contoured three H&N clinical tumour volumes (CTVs) on CT scans before and after a teaching intervention; six observers had no prior experience in H&N contouring and five did. The teaching intervention consisted of a didactic session on identifying anatomic landmarks on cross-sectional images and a hands-on practical session on CTV target delineation. Observers' contours were rated as adequate or inadequate based on current radiotherapy protocols. For the group without prior experience, 60%, 0% and 20% of baseline contours were deemed adequate, compared with 100%, 40% and 80% of follow-up contours. Likewise, for the group with prior experience, 83%, 33% and 67% of baseline contours were deemed adequate, compared with 100%, 67% and 100% of follow-up contours. As in our study, these findings suggest a teaching intervention improves delineation for all participants, but especially for those with no prior experience. Our pilot study only assessed the short-term impact of the education; with sustained utilisation the clinical implications could differ.
While observer confidence is a simplistic measure of the impact of the workshop, it is still relevant to know observers' perceptions of the material delivered and whether they felt they could identify OARs on MRI. In our study, the education workshop improved observers' contouring confidence, with all observers' confidence increasing for all OARs post-workshop. This increase in confidence was associated with an improvement in contouring consistency, as all OARs also showed a decrease in IOV. Similarly, Jaswal et al. demonstrated an increase in confidence scores alongside improved contouring, with a 0.20 median improvement in students' average DSC score across all contoured H&N structures (P < 0.001). In contrast, Stanley et al. found observer confidence was not reflected in contouring consistency: the ratio of smallest to largest contour volumes for each brain metastasis contoured by eight physicians varied from 1.25 to 4.47, indicating a high degree of IOV, while average observer confidence was relatively high, with a mean score of 3.2 on a scale where 4 indicated very high confidence. In our study, observers' level of experience was related to improvements in DSC and relative volume scores: the percentage of OARs that improved post-workshop was higher for observers with no previous experience than for observers with previous experience. This is similar to other studies reporting that a teaching intervention improved contouring particularly for participants without previous experience. The level of experience did not seem to influence HD scores, as all observers improved at a similar rate regardless of experience. Our study also found the largest centroid ΔX and ΔY variations for the temporal lobes (Fig. ). Since an OAR's volume scales faster than its surface area, the larger centroid ΔX and ΔY variations may be a consequence of the temporal lobe's large volume. Both the right and left temporal lobes demonstrated outlier DSC scores of 0.4 and 0.3, respectively, post-workshop (Fig. and Figure ). A single observer was responsible for both outliers; this observer indicated no previous experience with head/neck planning, which may have contributed. The brainstem had the largest centroid ΔZ variation (Fig. ), predominantly due to variability in defining the brainstem-spinal cord boundary. Brainstem DSC (Fig. ) and HD scores were also better before the education workshop. Similarly, DSCs for the spinal cord, optic nerves, parotid glands and pharyngeal constrictors remained equivalent post-workshop (Fig. ). These structures are commonly delineated on CT by RTs, and this familiarity with CT-based delineation may have produced the better pre-workshop concordance; however, better concordance does not necessarily mean the contours are accurate. Despite this lack of improvement, the post-workshop mean DSCs for the brainstem, spinal cord and parotid glands were 0.7, which is considered 'good' agreement. However, the left and right optic nerves had low mean DSC scores of 0.4 post-workshop and mean HD values of 1 cm post-workshop (Fig. ). This HD value is clinically significant given the optic nerves' small size. Because of their small size and tubular geometry, optic nerves are difficult to delineate, and small absolute differences can lead to poor scores when the volume of an organ is small.
Our results also showed statistically significant HD improvements for only one side of the bilateral optic nerve and parotid gland structures. This was caused by the small sample size and outliers in the data; moreover, statistical significance does not indicate clinical significance. The pharyngeal constrictors had a post-workshop mean DSC score of 0.5. This low score may have been due to RTs' familiarity with CT: the pharyngeal constrictors are difficult to visualise on CT, so contouring them requires accurate interpretation of guidelines based on anatomical landmarks, and observers may have contoured the pharyngeal constrictors based on perceived CT boundaries, contributing to the higher degree of variation observed for MRI in this study. Interestingly, post-workshop DSC scores for the optic chiasm (Fig. ) improved but remained under the 0.7 threshold (Fig. ). This may have been due to the use of T2_DIXON affecting its visibility, or to learnt behaviour from CT delineation. Likewise, the lacrimal glands (Fig. ) improved across all metrics but still had low DSC scores (Fig. ). This may have been due to distortion at the edge of the MRI field of view and possible motion artefacts from eye movement; the orbital area is prone to distortion and signal loss on MRI scans because of interfaces among air, bone and soft tissue, resulting in an inhomogeneous magnetic field and susceptibility artefacts. This study had some limitations. The small number of observers and patient data sets may have limited the ability to test the effectiveness of the education workshop. COVID-19 restrictions required the education workshop to be delivered virtually, limiting hands-on and interactive experience, and a COVID-19 outbreak putting additional strain on staff meant one observer dropped out and one patient data set was removed from the study due to incomplete observer data. Observers may also have been influenced by CT recall bias, since they may have delineated organs based on perceived CT boundaries. In addition, the reference contours created by consensus of two ROs may themselves carry IOV; for future work, incorporating consensus volumes from ROs and radiologists may provide more accurate reference volumes. Contours were also only evaluated quantitatively, not qualitatively, so future studies should incorporate qualitative RO assessments of the clinical significance of volume changes. Further research is also needed to determine the most appropriate imaging sequence for each organ. This study demonstrated that educational workshops can potentially reduce inter-observer variability and improve observer confidence when delineating nasopharyngeal OARs on MRI scans. It provides important insights concerning the emerging trend of MRI-only radiotherapy planning workflows and is a pilot study adding to the body of evidence on the emerging role of MRI in radiotherapy. All observers found the teaching intervention a valuable experience and reported improved confidence post-workshop. This is consistent with other studies of teaching interventions for contouring in radiation oncology and calls for investment and additional research in this area. This work was conducted as part of an honours candidature through the University of Newcastle. The primary author was the recipient of a scholarship awarded by the South Western Sydney Local Health District (SWSLHD). SWSLHD has a research agreement with Siemens Healthineers AG.
However, no part of the design or execution of this study was conducted under this research agreement. This study was approved by the local human research ethics committee.
Dissemination of public health research to prevent non-communicable diseases: a scoping review
71a99edf-34c3-4bbe-b261-dfb4ff0e828e
10123991
Health Communication[mh]
Governments and non-government funders have invested substantially in a range of effective interventions to improve public health, demonstrated by significant improvements in preventive health behaviours when tested in empirical trials. However, knowledge produced in the course of public health research frequently fails to be adopted into routine practice, or takes an unacceptably long period of time to do so, with estimates of a gap of up to 17 years. Knowledge translation (KT) covers a continuum of activities that span knowledge synthesis, dissemination, exchange and application of knowledge, in this context to improve health. The activity of dissemination is defined as an "active approach of spreading evidence-based interventions or knowledge to the target audience via determined channels using planned strategies". Dissemination is primarily aimed at increasing end users' awareness and knowledge of evidence, influencing intentions to use evidence, and increasing the likelihood of evidence adoption. Dissemination science is therefore defined as a systematic approach to determining effective strategies to communicate evidence to target audiences, for the purpose of changing these dissemination outcomes. A number of reviews have described and synthesised various dissemination theories, models and frameworks that can be used to better support the dissemination of evidence to public health policy makers and practitioners. One model used in the field of public health and policy decision making is Brownson and colleagues' Model for Dissemination of Research. It is based on multiple theories, including communication theory and diffusion of innovations theory. The framework describes four key factors that may influence the impact of a dissemination strategy; namely, the source (who is disseminating the information), the message (the information being communicated), the channel (how the information is communicated, e.g., modality), and the audience (the intended users of the information) (see Fig., adapted from Wilson et al.). Strategies to disseminate evidence will vary depending on the target end user and the way they use research evidence. There are a variety of end users of research evidence. For example, the dissemination of a new school-based program to prevent adolescent uptake of e-cigarettes may be primarily targeted at policymakers in education and school principals; however, additional tailored strategies will be important to communicate the information to other potential end users such as adolescents, parents, and school teachers. Within a field as diverse as public health, potential end users could include, but are not limited to, the community, practitioners, researchers, funders, industry bodies, and policymakers. Policymakers and practitioners are frequently the target audiences of dissemination activities as they are usually responsible for setting public health priorities, financing and supporting the provision of public health services, and implementing the policies. Policymakers and practitioners value evidence, and consider it in their decision making. However, they also commonly report issues with timely access to evidence that is both relevant and useful to help inform decision making.
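To make the four components concrete, the sketch below represents a dissemination strategy as a simple Python record. This is an illustration only, not part of Brownson and colleagues' model, and all field values are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class DisseminationStrategy:
    """One dissemination strategy, organised by the four components of
    Brownson and colleagues' Model for Dissemination of Research."""
    source: str                                     # who is disseminating the information
    message: str                                    # the information being communicated
    channels: list = field(default_factory=list)    # how it is communicated
    audiences: list = field(default_factory=list)   # the intended users

# Hypothetical example: communicating a school-based e-cigarette prevention program
strategy = DisseminationStrategy(
    source="university research group",
    message="evidence summary for a school-based e-cigarette prevention program",
    channels=["policy brief", "workshop for school staff"],
    audiences=["education policymakers", "school principals"],
)
print(strategy)
```

Separating the components in this way also mirrors how studies can be coded against the model, a point returned to in the methods below.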
Therefore, there is increasing recognition in the scientific community that dissemination efforts must go beyond presenting research findings using traditional academic methods (such as peer-reviewed journals) to ensure they are tailored and presented according to the needs of different end users. The field of dissemination science is a relatively new field of study, and this is reflected in the debate in the literature regarding key terminology, the importance, consistency and validity of outcomes used in research studies, and the importance of better co-ordination of dissemination research to collectively progress the field. In the field of public health specifically, dissemination, scale up and implementation strategies are often conflated or not distinguished from knowledge translation more broadly, making it difficult to draw conclusions about the effects of specific dissemination strategies. It is also important to distinguish dissemination from the broader health communication and scale up literature. While the fields have some commonalities, health communication has been defined by the Centers for Disease Control and Prevention as "the study and use of communication strategies to inform and influence individual decisions that enhance health", which reflects that it more commonly targets the general public as the audience in an effort to motivate behaviour change for the purpose of improving health. This communication is distinct from that targeting decision makers and practitioners who are responsible for supporting others to use and apply evidence (e.g., through their actions in implementing evidence guidelines and programs, such as a cancer screening service, or physical activity guidelines in schools). It is this latter communication that we are classifying as dissemination. While recognising the very broad scope of these constructs of public health, end users and dissemination, this scoping review will focus on the dissemination of evidence related to the prevention of non-communicable diseases (NCDs). NCDs are typically chronic conditions arising from genetic, behavioural or environmental factors, as opposed to infectious factors, and include conditions such as cardiovascular disease, many forms of cancer and diabetes. They impose a substantial and growing burden of disease on the global population, and are responsible for 74% of deaths globally each year. A reduction in premature mortality due to NCDs has been identified as one of the 2030 Sustainable Development Goals by the United Nations, highlighting the importance of this issue. Further, consistent with our focus on dissemination, as opposed to health communication, this review will consider audiences responsible for adoption of this evidence at a community or population level. This will include policymakers, practitioners, researchers, public health administrators and other decision-makers, but will exclude the general public. Aim. In order for the field of dissemination science to progress, and to support policy makers, practitioners and other end-users to adopt evidence in a more timely manner, it is essential that current evidence regarding dissemination strategies is mapped to determine the focus of future empirical research. As such, the primary aim of this scoping review is to identify and describe the literature examining strategies to disseminate public health evidence related to the prevention of NCDs.
Secondary to this, we aimed to map studies against the components of Brownson et al.'s Model for Dissemination of Research and according to their research design and methods (i.e., qualitative, quantitative, interventions) in order to provide insight into the levels of evidence available for different dissemination components. Protocol and registration The methods of this scoping review were conducted in accordance with the guidance issued by JBI. The findings are reported in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-analyses extension for Scoping Reviews (PRISMA-ScR). The development of the scoping review protocol was overseen by a multidisciplinary advisory group consisting of national and international experts in knowledge translation and NCD prevention from various academic institutions, including The National Centre of Implementation Science ( https://ncois.org.au/ ) and the Collaboration for Enhanced Research Impact ( https://preventioncentre.org.au/resources/collaboration-for-enhanced-research-impact-ceri/ ). The protocol was prospectively deposited in the Open Science Framework at 10.17605/OSF.IO/YJTN5 on 24th May, 2021. Inclusion and exclusion criteria This scoping review targeted the dissemination of knowledge outputs related to the prevention of NCDs. Specifically, we were interested in strategies aiming to disseminate knowledge outputs, relating to public health research evidence and/or interventions, to stakeholders and policy and practitioner end users (i.e., end-of-project KT), and their potential influence on evidence use or adoption, and determinants thereof such as knowledge, motivation and awareness. Further details of the included populations of interest are described in Table . We included dissemination-related outcomes based on an adapted version of the outcomes in Leeman et al.'s (2017) framework. These include:
Reach: the number or proportion of individuals that information is disseminated to.
Awareness: awareness of the disseminated information.
Knowledge: familiarity and understanding of the disseminated information.
Attitudes: beliefs, feelings and behavioural tendencies about the disseminated information.
Preferences: indication of a hypothetical choice for particular dissemination strategies over others.
Intention to adopt: the probability of changing behaviour based on the disseminated information.
Research adoption or uptake: whether the disseminated information was used/implemented.
"Experiences" of dissemination: data where participants reported which dissemination strategies (or components of Brownson's model) they had previously used, either to disseminate or to access disseminated information. For example, a sample of researchers reporting which channels were most commonly used for dissemination was considered as reporting experiences of dissemination. Although related to preferences, data suggest that preferred methods for dissemination do not always align with actual experiences, hence we treated these as two separate constructs.
Measures could be objective (e.g., audit data of a particular public health practice following dissemination) or subjective (e.g., self-reported use of disseminated research, or intentions to use).
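As an illustration of how this outcome taxonomy could be operationalised when coding studies, the adapted outcomes can be encoded as a simple enumeration; the class and value wordings below are ours and do not constitute a published standard.

```python
from enum import Enum

class DisseminationOutcome(Enum):
    """Dissemination outcomes adapted from Leeman et al. (2017)."""
    REACH = "number/proportion of individuals the information is disseminated to"
    AWARENESS = "awareness of the disseminated information"
    KNOWLEDGE = "familiarity and understanding of the information"
    ATTITUDES = "beliefs, feelings and behavioural tendencies about the information"
    PREFERENCES = "hypothetical choice for particular strategies over others"
    INTENTION_TO_ADOPT = "probability of changing behaviour based on the information"
    ADOPTION = "whether the disseminated information was used/implemented"
    EXPERIENCES = "strategies previously used to disseminate or access information"

# e.g., coding a study that measured awareness and adoption
study_outcomes = {DisseminationOutcome.AWARENESS, DisseminationOutcome.ADOPTION}
```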
Context Given the breadth of public health as a field of research and the substantial burden of disease imposed by NCDs, we limited the context of studies included in this review to those examining the primary and/or secondary prevention of NCDs, defined as those on the Lancet's Global Burden of Disease cause and risk summaries, such as cancer, cardiovascular diseases and mental disorders. Studies discussing dissemination in communicable, maternal, neonatal and nutritional diseases, as well as injuries, were excluded as this review was primarily focused on NCDs. In addition, to ensure we comprehensively captured the evidence base in relation to the prevention of NCDs, studies examining more general perceptions and experiences of dissemination of public health evidence, in which NCD prevention was covered, were also included. Study designs Given the broad aim of this scoping review, studies were not restricted by design. All empirical work was considered for inclusion, including quantitative studies, which included cross-sectional, pre-post designs, controlled before-after studies, quasi-randomised controlled trials, and randomised controlled trials (RCTs), as well as qualitative designs and mixed methods approaches including case studies. We excluded papers that did not provide new data, such as commentaries, editorials, letters to the editor, studies describing conceptual models or frameworks, and studies describing measurement tools. Search strategy Given the well-documented challenges with searching the knowledge translation literature and the lack of consistency in terminology, we created a list of keywords and search terms used in previous reviews and used this to develop our search strategy in collaboration with an information specialist (see Supplementary Material 1). We searched Medline, PsycINFO, and EBSCO Search Ultimate (health, communications and business/marketing databases), up to 25th May, 2021. As a number of potentially relevant reports of dissemination studies were expected to be in the grey literature, we also searched Open Grey ( https://opengrey.eu/ ) and key government public health websites in Australia, New Zealand, the United Kingdom, Canada and the United States of America (USA). Consistent with Haddaway et al.'s recommendation, we searched the top 200 results in Google and Google Scholar using the search terms dissemination strategy and public health. We also searched the reference lists of relevant evidence reviews to find additional primary studies. Following advice in the Cochrane handbook, we hand-searched the Journal of Science Communication as it was not indexed in the databases searched. Our search was limited to studies published from January 2000 onwards, as a substantial increase in work in this area occurred following the Canadian Institutes of Health Research's definition of knowledge translation, dissemination and implementation in 2000. Due to resource limitations, we excluded studies in which the full text was not available in English. Evidence screening and selection Duplicate citations were removed in Endnote and an initial title screen was conducted (HT) to exclude studies that clearly did not meet the inclusion criteria (e.g., studies focusing on infectious disease). Remaining citations were uploaded into Covidence, where title and abstract screening was conducted independently and in duplicate by two members of the review team (HT, NS, SO'C, SN, EW, SMc, AR, CH, SY).
The full text of potentially relevant studies was sourced and evaluated independently and in duplicate against the inclusion/exclusion criteria by two members of the review team (MF, HT, AR, NS, SY). Conflicts were resolved by discussion or through consultation with a third reviewer if needed. Data extraction A data extraction template containing the data items was developed and piloted by members of the research team. This template was then used by two independent data extractors (HT, SN, ED, PL, EH, RS, SO'C). A third team member (SMc) was responsible for checking the extracted data. Any disagreements were resolved by the data extractors. The following data fields were extracted for studies deemed to meet inclusion criteria: citation details, study design, population group (policy makers, public health practitioners, community, etc.), sample size, country, setting (community, clinical or both), components related to dissemination (source, audience, message, channel), NCD or risk factor targeted (e.g., physical activity, skin cancer), and measures (awareness, knowledge, use, etc.). Departures from protocol Although we planned to include the general public as an end-user group, following our initial search we determined that empirical dissemination efforts aimed at the general public differ substantially from those aimed at the end users previously described, including policymakers, practitioners, researchers, and public health administrators, and often take the form of mass media campaigns or similar, which have been extensively reviewed. We also decided to exclude reviews due to significant overlap with primary studies already identified, but we searched their reference lists for additional eligible studies. In addition, we identified several studies in which dissemination was occurring to individuals or groups responsible for making decisions about program adoption in settings such as schools or community groups. Although these disseminated programs may have been aimed at individuals (e.g., a program to increase the physical activity level of children in schools), the dissemination activity was targeted at those who could make a decision regarding the program's adoption in their setting. We included these studies as an additional population group of interest and classified this group as "community decision makers". Examples included school principals and workplace health committees. Data analysis In order to describe the scope of the research base and identify gaps, and in accordance with JBI guidance for data analysis of scoping reviews, frequencies and percentages were calculated for year of publication, study design, study population, country of respondents, setting, NCD/risk factor focus, and outcomes assessed. We classified studies based on their design into three broad categories. Descriptive studies, which described the nature and determinants of dissemination, were further grouped into: (1) qualitative and mixed methods studies, and (2) quantitative, non-experimental studies including cross-sectional designs and case studies. The third category included experimental studies testing dissemination strategies. Only studies which explicitly aimed to directly compare different strategies within a component of dissemination (e.g., comparing two or more different channels, or comparing a strategy versus a control) were classified as experimental studies for the purpose of this scoping review.
Studies in which a single dissemination strategy was implemented were coded based on the nature of data collection (e.g., qualitative vs. quantitative). Unless otherwise noted, percentages reported in text refer to the percentage across all study types. With respect to the dissemination components, all studies were coded based on the availability of information about each component in each study. For experimental studies, information about all four components was typically present, but usually only one component was deliberately manipulated as part of the study design, and this has been described in text. Study findings were narratively described based on the component factors related to dissemination. We classified these by study design as we believed this classification would allow researchers to identify gaps in the types of evidence available to inform practice. Given the large variety of channels available for dissemination of evidence, data were coded into broad categories. These categories were loosely organised based on different communication mediums. It should also be noted that some types of channels could be delivered in multiple ways. For example, information booklets and pamphlets could be mailed, emailed or provided in person. In these cases, we did not distinguish between mode of delivery when synthesising the data.
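As a minimal sketch of the descriptive synthesis just described, the following Python example codes a handful of invented extraction records (using fields from the extraction template) and tabulates frequencies, percentages and a cross-tabulation with pandas; all rows are hypothetical.

```python
import pandas as pd

# Hypothetical extraction records; field names paraphrase the extraction template above
studies = pd.DataFrame([
    {"design": "qualitative/mixed", "audience": "policymakers",
     "channel": "policy brief", "ncd_focus": "obesity"},
    {"design": "quantitative", "audience": "practitioners",
     "channel": "workshop", "ncd_focus": "substance use"},
    {"design": "experimental", "audience": "practitioners",
     "channel": "mailed information", "ncd_focus": "skin cancer"},
    {"design": "qualitative/mixed", "audience": "community decision makers",
     "channel": "one-on-one meeting", "ncd_focus": "physical activity"},
])

# Frequencies and percentages by design category, as reported in the review
counts = studies["design"].value_counts()
summary = pd.DataFrame({"n": counts, "%": (100 * counts / len(studies)).round(1)})
print(summary)

# Cross-tabulation of audience against design, to surface gaps in the evidence base
print(pd.crosstab(studies["design"], studies["audience"]))
```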
Study selection Following removal of duplicates, 20,343 records were screened based on their title or title and abstract, with 643 records progressing to full text screening. An additional 16 full text records identified through a manual search were also screened. Over half of the excluded studies were excluded due to wrong population or wrong intervention (see full list of excluded studies in Supplementary File 2). After full text screening, 107 studies (plus 1 thesis with data reported in 2 included studies) were selected for inclusion in the scoping review, as shown in Fig. . The number of studies published has increased over time. The number of publications identified in the last 10 years [the period between 2011 and the time of our search (May 2021)] was 72, double that identified in the preceding period between 2000 and 2010 (n = 36). As seen in Table , descriptive study designs were utilised in the majority of included studies, namely qualitative or mixed methods (n = 52, 49%) or quantitative (n = 40, 37%) study designs. Only 15 (14%) studies reported the findings of an experimental study. Please see Supplementary Material 3 for a list of references included in each design category. The most frequent countries where studies were conducted were the United States (n = 39, 36%) and Canada (n = 33, 31%), while there were few studies conducted in low or middle income countries (n = 9, 8%), and none of these were experimental studies. The focus area or broad topic of dissemination was most commonly related to physical activity, diet or obesity (n = 44, 41%). Topics including cancer screening (n = 7, 7%), substance use (including smoking cessation; n = 23, 21%), and mental health (n = 11, 10%) were also covered. Topics included in the "other" category were air pollution, skin cancer prevention, and chronic disease prevention in general. Within studies using qualitative, quantitative and mixed methods designs, a number of studies (n = 24, 26%) did not cover a specific focus area, but referred to dissemination of public health information in general. An example of this was studies in which public health researchers reported which channels they used to disseminate their research findings. Source Source was conceptualised and reported in two main ways in the included studies. In many of the descriptive studies reporting preference data, the information about source was based on the types of information accessed or preferred by participants for obtaining evidence. For example, practitioners may have been asked to indicate which sources they most frequently contacted if evidence was required for a particular initiative.
For the remainder of the descriptive studies and in the experimental studies, a specific source of the information to be disseminated was identifiable. As those most commonly generating the evidence (through research), researchers and academic groups were the most frequently identified sources (n = 35, 54%), followed by government bodies (n = 13, 29%), which were usually health departments. In some studies, multiple sources were identified. For example, researchers may have partnered with local public health bodies to disseminate research findings to stakeholders. In one study, Ferdinands et al. described the involvement of an expert working group including researchers and practitioners, as well as partnership with local health services, to develop and disseminate a nutrition report card on food environments for children. It was also common for individuals to take on multiple roles dependent on the nature of the evidence to be disseminated. For example, guideline development committees may include practitioners, researchers, and policymakers. Only one experimental study manipulated the source as part of its design, in addition to manipulating the message and the channel. A small number of studies, for example those involving a survey of policymakers reporting only on channels accessed for dissemination, did not provide details or examples of the types of sources used, and therefore we were unable to categorise them in Table . Message Across qualitative/mixed methods studies, the message or information that was the subject of dissemination was most commonly new knowledge or evidence on a specific topic (n = 21, 40%), often in the form of a review. The message was only clearly identifiable in 58% (n = 30) of the qualitative/mixed methods studies, primarily because participant preferences or barriers to dissemination were the focus of many studies. In these studies, dissemination of a defined message was not a study aim. For the quantitative studies, the message was related to new knowledge or evidence in half of the studies (n = 20, 50%). In some studies, the topic of the information to be disseminated varied across respondents (typically researchers), whereas for studies reporting on the outcomes of a dissemination effort the topic of information was more clearly defined.
For example, in a study with US legislators, Dodson and colleagues found that, when receiving evidence to inform policy decisions, there was interest in receiving cost data, as well as information about existing policies and what was occurring in other states. In addition, three experimental studies manipulated aspects of the message as part of the design, such as the type or amount of detail provided in the dissemination materials. Channel For both the qualitative/mixed methods and the quantitative studies, the most commonly reported channels included academic mediums such as journal articles and conferences (n = 25, 48% and n = 14, 35%), policy briefs (n = 13, 25% and n = 2, 5%), websites/infographics (n = 17, 33% and n = 17, 43%), information pamphlets/brochures (n = 9, 17% and n = 14, 35%), training/workshops (n = 19, 37% and n = 23, 58%), and one-on-one meetings (n = 12, 23% and n = 14, 35%). Many studies reported the use of multiple channels for communicating information, and this was particularly the case for studies in which multiple audiences were targeted. For example, studies targeting other researchers, practitioners and the public may have reported disseminating evidence through journal articles and conferences (targeting other researchers), workshops (targeting practitioners), and websites and media (targeting the public). The "other" category included channels such as knowledge brokers (which can also be considered a source) and institutional repositories or clearinghouses, as well as channels targeted at the general public such as prenatal classes and telephone helplines. Studies also presented channel information in the context of preferences or data about access frequency, rather than for the dissemination of a specific message. Focusing specifically on the experimental studies, channel was the most commonly manipulated component out of Brownson's four dissemination components, with 13/15 (86%) studies comparing two or more dissemination channels (or dissemination vs. a control). Some studies compared different mediums of information presentation, for example, mailed information vs. a presentation. Others compared a basic dissemination strategy to an enhanced approach utilising multiple strategies. Audience Across all study designs, the most frequently identified audience was health practitioners (n = 61, 57%), such as GPs, nurses and allied health workers. For qualitative and mixed methods studies, policymakers (n = 24, 46%) and public health managers/administrators (n = 23, 44%) were targeted in almost as many studies as those targeting practitioners (n = 27, 52%). Many studies identified multiple audiences, which usually included at least one or more of these three groups, plus other relevant stakeholders such as public health managers and researchers. For studies which disseminated evidence relevant to children and adolescents, stakeholders often included teachers, school principals and early childhood educators, who were categorised as community decision makers. For the quantitative studies, over half of the studies (n = 24, 60%) identified practitioners as an audience, but only a third targeted policymakers (n = 13, 33%) and public health managers/administrators (n = 13, 33%).
Experimental studies followed a similar pattern, with two thirds (n = 10, 67%) of the experimental studies identifying practitioners as their intended audience. Two experimental studies compared the effectiveness of a dissemination strategy across multiple audience groups. As noted previously when considering channels, studies often tailored their dissemination channel to the relevant audiences. For example, Monnard and colleagues described how they disseminated the findings from a large public health survey to a range of stakeholders, including public health practitioners, academics and the broader community, through a variety of mediums, such as community forums for the general public, and presentations and peer-reviewed journal articles for researchers and public health professionals.
Most studies reported findings across a variety of outcomes. Qualitative and mixed methods studies typically reported determinants of dissemination such as preferences (n = 31, 60%) and previous experiences of dissemination (e.g., what sources/channels are commonly accessed to find evidence; n = 36, 69%). In contrast, quantitative and experimental studies had a higher proportion of studies that reported outcomes that could be measured as a consequence of dissemination, such as awareness (n = 17, 31%), reach (n = 19, 35%), and intentions to use or apply the disseminated evidence (n = 20, 36%). When considering studies that reported on the dissemination of a specific program or research evidence (i.e., all experimental studies, and 44% and 70% of the qualitative/mixed methods and quantitative studies respectively), outcomes were generally reported from the post-dissemination phase only. A few studies reported increases in outcomes such as knowledge or awareness, based on changes in these outcomes from pre to post dissemination. Adoption/uptake of the disseminated information was the most frequently reported outcome (n = 57, 53%), while knowledge was the least frequently reported outcome (n = 21, 20%). Standardised measures for outcomes were rarely used, with studies mostly using measures developed specifically for that study, prohibiting comparison of outcomes across studies.
In addition, many studies provided minimal detail as to how outcomes were measured, further compounding difficulties in study comparisons. Summary of key findings This scoping review aimed to describe and map the literature examining dissemination of public health evidence related to the prevention of NCDs. Given the lack of consistency in the literature describing dissemination studies, we intentionally used a broad approach to searching the literature. Our review used relatively extensive inclusion criteria, including multiple study designs, and covered two decades of published research, resulting in 107 studies being included in this review. Our findings are consistent with a recent scoping review of dissemination frameworks, which identified variability across studies, inconsistencies and challenges in defining dissemination, and few empirical studies applying dissemination-specific frameworks. There are opportunities to improve the "science" to determine 'what works' for dissemination While the number of studies published in this area has increased in recent years, the results show that the majority continue to be descriptive studies examining general preferences and experiences of dissemination, or providing case study examples of dissemination efforts. Well-controlled, experimental studies which compare dissemination strategies for communicating evidence are lacking, a finding echoed in several recent reviews. This is despite the extensive availability of rigorous evidence-based interventions that have demonstrated effectiveness (including cost-effectiveness) in the prevention of NCDs and reducing associated risk factors. The findings of this review suggest it is not a lack of available evidence to be disseminated, but rather a lack of evidence to guide dissemination efforts. Additionally, much of the literature has focused on the experience of dissemination rather than specific efforts to advance dissemination science. Another area of the literature which appears to have substantial scope for future research surrounds the outcomes of dissemination. While most studies reported multiple outcomes, measures were typically poorly described and frequently collected only post-dissemination, limiting the potential to explore the effectiveness of dissemination strategies. In addition, the most frequently reported measure was adoption/uptake. There is a significant opportunity for the science of dissemination outcome measurement to be improved, through more frequent measurement of key dissemination outcomes such as attitudes and knowledge, as well as the development of a measurement taxonomy, such as that developed for implementation research by Proctor et al. A recent review proposed a number of constructs, including knowledge utilisation, awareness, and changes in policy uptake, that are described in dissemination frameworks and that may be important outcomes to measure to assess the impact of dissemination strategies. The topics which received the greatest amount of attention by dissemination researchers are in the areas of diet, physical activity, and obesity, followed by substance use; however, the overall number of experimental research studies remains small across topics. This likely reflects the availability of high quality evidence in these areas for effective intervention approaches, as well as the prevalence of risk factors. The limited number of studies exploring the dissemination of evidence related to cancer screening programs may be due to how the interventions are disseminated.
Such interventions tend to be disseminated directly to individuals (i.e., members of the general public), such as through mass media campaigns encouraging cancer screening, and were therefore excluded from our scoping review due to the population of interest. Increased variety of producers (sources) and end-users (audiences) in dissemination practice: going beyond researchers, policymakers and practitioners Our scoping review revealed that researchers continue to be the primary disseminators (source) of evidence in this context. However, with an increased emphasis on co-production, and greater stakeholder involvement at all stages of the research cycle, there is evidence of groups such as health departments, practitioners and professional bodies taking on the role of key sources of disseminated information. This is likely to be beneficial for increasing the reach and impact of evidence due to their perceived credibility. A previous review by the research team has demonstrated the effect of using different messengers on improving implementation outcomes, particularly in clinical settings. Although there were data from several descriptive studies suggesting some sources of evidence are preferred by policymakers compared to others, there is a need for additional work to determine the effect of different sources on dissemination outcomes. Most of the included studies targeted practitioners and/or policymakers as the identified audience. However, the scoping review revealed that there is a broad range of other stakeholders who may benefit from targeted dissemination even if they are not the primary users of the evidence. For example, groups such as politicians, advocacy groups, and professional associations may hold significant influence over decision makers, and thus dissemination to these groups may prove fruitful in increasing eventual uptake of evidence. There is a need for greater evidence of the benefits of dissemination to other groups of stakeholders, as well as empirical data evaluating how manipulation of other components of dissemination (e.g., source and message) affects dissemination outcomes in different audiences. One group that emerged as a key audience for dissemination was 'community decision makers', such as principals and school teachers, who have a role in determining whether and how evidence is adopted in their setting. We argue this is an important group to consider when developing, delivering, and evaluating strategies to disseminate evidence surrounding prevention of NCDs, especially given the abundance of programs focusing on this within community environments such as schools. While groups such as public health officers, practitioners and policymakers can also influence evidence adoption in these settings (especially if acting as knowledge brokers), there may be advantages in undertaking targeted dissemination efforts to community decision makers who are embedded within the setting itself. Indeed, inclusion of all relevant end-users as part of the dissemination planning and roll-out process is a critical part of a co-creation approach to public health research and improving the impact of dissemination.
Most evidence focuses on the channel of dissemination, but clarifying the dissemination message can be difficult

Two broad aspects of the message were elicited through the scoping review: firstly, the type of evidence disseminated (e.g., a guideline, research synthesis, program/intervention), and secondly, the features of language and formatting included as part of that communication (e.g., dot points, lay person language, presentation of local/contextual data). As the health communication literature has extensively explored this latter aspect, we primarily focused on the types of evidence being disseminated. Evidence or research summaries were the most common types of evidence to be disseminated in qualitative and quantitative studies, whereas evidence-based interventions/programs were disseminated in just under half the experimental studies. Of the four dissemination components, message was the most difficult to classify, as some of the studies did not explicitly describe what the message was, particularly studies which used surveys or interviews examining broader experiences of dissemination. The message was much clearer to identify in studies in which a specific dissemination strategy had been enacted. Channel appeared to receive the most attention in the dissemination literature, and it dominated the experimental studies as the component most likely to be manipulated and compared in terms of effect on dissemination outcome. There is an extensive variety of available dissemination channels and mediums, and many of the studies included in this scoping review utilised multiple channels. Not surprisingly, methods targeted at fellow researchers, such as peer-reviewed articles and conferences, were among the most commonly cited channels, as were training/workshops/presentations, which is consistent with other studies that have explored dissemination strategies utilised by researchers. The decision of which channel(s) to use is informed by a number of factors, including cost, familiarity, access, and experience, as well as other components of dissemination such as the target audience and the source. This can make efforts to evaluate channel effectiveness complex, and while targeting of the channel is well acknowledged as essential in the literature, further practical guidance based on empirical evidence would be beneficial.

Strengths and limitations of the review

This scoping review has several strengths, including the use of systematic and robust methods: prospective registration of the protocol, an extensive and comprehensive literature search, and a dual independent screening process. We used an evidence-based framework to map the review findings, resulting in the first scoping review we are aware of that maps the evidence for dissemination in the prevention of NCDs in the field of public health. A common limitation within the dissemination literature is that dissemination as a field lacks clarity, with blurred boundaries between what constitutes dissemination and implementation or scale-up more generally. The terms used to describe dissemination studies in the literature are numerous, and selecting the most efficient yet inclusive search strategy remains challenging. Despite undertaking a systematic search, relevant studies may have been missed. There is also some level of overlap between dissemination as we have included in this review and the related disciplines of health communication, scale-up, and social marketing.
For example, much work has been done on message framing (e.g., gain vs. loss), but typically these studies have focussed on how health messages are communicated to patients and/or the public. Some studies on this topic may have been relevant; however, exploring this vast literature was beyond the scope of this review. Lastly, there are additional attributes that could have been extracted from included studies, such as whether the dissemination strategy used was informed by a specific theory. However, the level of detail in the reporting of many attributes was extremely variable, which may be related to the broad range of study designs included. As the aim of this review was to broadly map the dissemination literature covering the prevention of NCDs, we focused on those attributes that were most commonly reported and could most comprehensively describe the scope of the literature. Examining attributes such as the role of theory in the development of dissemination strategies could be a worthy focus of future systematic reviews. There is also opportunity to explore a number of topics in greater detail, such as the evolution of strategies over time and by sub-groups: for example, are some channels used more frequently for communication by particular sources compared to others?
In summary, this review has mapped the broad scope of the literature examining dissemination of evidence relevant to prevention of NCDs since 2000. It has identified a substantial base of qualitative and quantitative work, and opportunities for future experimental work.
While there is a solid foundation of evidence when it comes to “what works” for the prevention of NCDs, there is much still to be learnt in order to determine “what works” to disseminate this evidence most effectively. If we are to reduce the evidence-practice gap in this area of public health, we need greater understanding of how to disseminate most effectively with each relevant end-user group: the audience who needs to receive the evidence. In particular, there is a need to determine the source that should deliver the information, how the message can be framed, and what channels are most appropriate for communication of the message to the audience.
A novel image-based method for simultaneous counting of
Mixed cultures have long been a staple in food preparation and have been documented as early as 10 000 BC (Bourdichon et al., ; Prajapati & Nair, ). Before the advent of modern microbiology, craftsmen relied on spontaneous fermentation to produce tea, beer, cheese, and bread. Many fermentation processes utilize mixed cultures that contain two or more different microorganisms, which may include fungi (e.g., yeast and molds) or bacteria (e.g., lactic acid and acetic acid bacteria) (Hesseltine, , ). These varieties of mixed cultures are employed for their ability to create unique flavor profiles, health benefits, and food preservation capabilities (Smid & Lacroix, ). Yeasts are eukaryotic microorganisms and members of the fungi kingdom. They are found naturally in the environment and are common on fruit skins, on plant surfaces, and attached to some insects (Spencer & Spencer, ). Yeasts are well known for their ability to participate in fermentation, a process that occurs mainly in anaerobic or low-oxygen conditions. Yeast anaerobic fermentation yields ethanol and a myriad of other desirable and undesirable metabolic byproducts (Boulton & Quain, ; Swiegers et al., ). Examples of yeast-fermented products cover a broad range, including alcoholic beverages, kombucha, bread, and biofuels (Liszkowska & Berlowska, ; Rojas et al., ). Lactic acid bacteria (LABs) are a group of bacteria known for their probiotic properties and production of unique flavors in foods such as fermented yogurts, dairy products, and vegetables (Mathur et al., ). They can participate in homolactic fermentation or heterolactic fermentation. In the context of mixed culture, different LABs will grow at different time points during fermentation and can be affected by levels of tolerance to alcohol, pH, and the presence of various metabolites. Specifically, Lactiplantibacillus plantarum (formerly Lactobacillus plantarum) has been studied for its probiotic activity, demonstrating adhesion to gastrointestinal cells, enabling fermentation of silage, and producing antimicrobial substances such as plantaricins that can inactivate pathogens (Soundharrajan et al., ). In addition, L. plantarum can produce large amounts of β-galactosidase, which enables improved lactose digestion (Cebeci & Gürakan, ). Furthermore, L. plantarum can ferment fructooligosaccharides, which are indigestible sugars that cause dehydration (Cebeci & Gürakan, ). Craft breweries have grown significantly in the United States in the past decade (Brewers Association Releases Annual Craft Brewing Industry Production Report for 2020, ). According to the Master Brewers Association, craft brewing reached an astounding 22.2-billion-dollar market in 2020 with ∼9000 operating breweries, showing an annual increase of 4.5% in total operations (Brewers Association Releases Annual Craft Brewing Industry Production Report for 2020, ). The growth of demand also came with an appetite for new and unique flavors such as sour beers. Sour beers, like kettle sours, often utilize mixed cultures of yeast and bacteria to create a sweet and sour flavor profile (Hodgkin et al., ). Although less popular than modern light beers (pilsners) and a variety of craft beers, sour beers have a much longer history. Modern sour beers can be created with pitched mixed cultures, with stepwise fermentation with different microorganisms, or by wild fermentation. Sour beers have risen in popularity since the mid-1990s, when only a few craft brewers produced this style.
In 2002, the Great American Beer Festival introduced a “Sour Beer” category that had only 15 entries. By 2013, this category boasted 238 entries (Tonsmeire, ). Sour beers now comprise ∼11.0% of beer sales and enjoyed a 73% increase in sales growth in 2016 in the United States (Statista, ). With the growth of sour beers have come different beer style categories and subsets of those categories. There are approximately eight categories of sour beers, including American wild ale, Berliner Weisse, Flanders red ale, gose, lambic, and Oud Bruin (Tonsmeire, ), as well as countless variations within these established categories. Traditionally, yeast and bacteria can be counted separately using the colony formation assay, with the microorganisms streaked onto agar dishes, incubated for several days, and then counted to enumerate colony-forming units (CFUs; Sanders, ). However, this method can be time-consuming, requiring 24–72 hr for colony growth, and can have high operator-dependent variation. Furthermore, enumeration of a mixed culture would require the use of multiple types of culture media with different incubation times, further increasing complexity. Other rapid methods, such as flow cytometry, may be used, but involve substantial capital cost, can be quite expensive to maintain, require a dedicated operator, and may be time-consuming due to the need for labeling to distinguish the microorganisms (Thomas et al., ). In the last decade, image cytometry has been used in many craft breweries for production and quality control to directly count yeast and measure viability to ensure consistency in beverage products (Chan et al., , ; Saldi et al., ). Previous publications have also shown accurate direct counting of Brettanomyces yeast and Lactobacillus bacteria (Hodgkin et al., ; Martyniak et al., ). In this work, we demonstrate the use of the Cellometer X2 image cytometer to simultaneously count a mixed culture of Saccharomyces cerevisiae (S. cerevisiae) and Lactiplantibacillus plantarum (L. plantarum), which has not been shown previously. First, we demonstrate the ability to count titrations of yeast and bacteria monocultures with fluorescent staining. Second, we verify the use of the bacteria counting chambers for yeast counting. Third, we validate the direct counting of yeast and bacteria at various mixture ratios. Finally, we monitor the yeast and bacteria concentrations during a standard mixed culture fermentation of a Berliner Weisse-style beer. In addition, manual counting of CFU is conducted concurrently for direct concentration comparison. Direct cell counting of mixed cultures of yeast and bacteria using chamber-based image cytometric analysis has not been published previously. The utilization of two fluorescent dyes for optimizing staining and distinguishing yeast and bacteria cells, as well as the use of thin chamber slides for ensuring optimal focus, are highly innovative. The proposed novel image cytometry method for mixed cultures can rapidly characterize the concentration of yeast and bacteria microorganisms in beverage products during the course of fermentation, which may improve the consistency and quality of the end products.

Saccharomyces cerevisiae Preparation

Saccharomyces cerevisiae (Safale S-04) was purchased from Fermentis (Marcq-en-Barœul, France) in dry yeast packets. Dried yeast packets were rehydrated and grown overnight in 50 mL of potato dextrose broth (Difco, BD, Franklin Lakes, NJ).
The yeast culture was isolated with streak plating on acidified potato dextrose agar (APDA, Difco) plates in triplicate. Next, the isolated yeast colonies were aseptically transferred to 10 mL of acidified potato dextrose broth and incubated for 24 hr at 30°C. For each experiment, an isolated yeast colony was aseptically transferred from APDA plates to 50 mL of potato dextrose broth in triplicate and incubated in a water bath shaker for 24 hr at 30°C. Next, ∼10 mL aliquots were collected from each sample and centrifuged (Eppendorf 5430, Framingham, MA) for 5 min at 4000 RPM (1800 × g). Subsequently, the pellet was resuspended in 2 mL of 1X PBS (pH 7.4), yielding an approximate yeast concentration of 10⁷ cells/mL.

Lactiplantibacillus plantarum Preparation

Lactiplantibacillus plantarum (Lactobacillus plantarum ATCC 8014) was sourced from the American Type Culture Collection (ATCC, Manassas, VA). The strain was stored in 1 mL aliquots of de Man, Rogosa, and Sharpe (MRS) broth (Difco) mixed 50:50 with 80% glycerol at −80°C. After thawing, each aliquot was transferred to 9 mL of MRS broth and incubated for 24 hr at 30°C. Next, the 24-hr growth culture was streaked onto MRS plates and incubated for 1–2 days at 30°C. For each experiment, a single isolated colony from each MRS streak plate was aseptically transferred to a tube containing 9 mL of sterilized MRS broth. The inoculated samples were incubated for 24 hr at 30°C.

Fluorescent Stain Preparation and Staining Protocol

The acridine orange (AO) and propidium iodide (PI) fluorescent nuclear stains were provided by Nexcelom (Lawrence, MA) and used to stain S. cerevisiae (Chan et al., ; Saldi et al., ). The ViaStain™ AO/PI Staining Solution (CS2-0106-5mL) was used for the initial yeast monoculture staining. For yeast monoculture AO/PI staining, the samples were first diluted 1:1 with the yeast dilution buffer (Nexcelom) and then mixed 1:1 with the AO/PI dye for ∼2 min prior to image cytometric analysis. The SYTO BC fluorescent stain was purchased from Thermo Fisher Scientific (Carlsbad, CA). The working stock of SYTO BC was prepared by diluting 1:100 in deionized (DI) water. The working stock solution was mixed well and stored in the dark at ambient temperature until staining. Stock solutions were freshly prepared for each experiment. For bacteria monoculture staining, the samples were stained 1:1 with the SYTO BC working solution for ∼2 min prior to image cytometric analysis (Hodgkin et al., ). The ViaStain™ AO Staining Solution (CS1-0108-5mL) and SYTO BC were used for yeast and bacteria mixed culture staining. The SYTO BC working stock (1:100 in water) was mixed 2:1 with AO and vortexed. The yeast and bacteria mixed culture was stained 1:1 with the SYTO BC/AO mixed stain for ∼2 min. All the fluorescent stains were prepared fresh for each experiment and stored at ambient temperature for the duration of the experiment.

Cellometer X2 Image Cytometry Method

The Cellometer X2 image cytometer utilizes a bright field channel and two fluorescent imaging channels, green (VC-535-402) and red (VC-660-502), for cell count, concentration, and viability measurement (Hodgkin et al., ; Martyniak et al., ; Saldi et al., ). The instrument implements a 10X objective, producing a resolution of ∼0.5 μm²/pixel. For the initial yeast counting experiments, both fluorescent channels were used.
The VC-535-402 (excitation/emission: 470 nm/535 nm) was used to detect and enumerate AO-stained cells with exposure times between 500 and 1000 ms, while the VC-660-502 (excitation/emission: 540 nm/660 nm) was used to detect and enumerate PI-stained cells with exposure times between 2200 and 2700 ms. Following the AO/PI yeast staining protocol, 5 μL of the stained sample was pipetted into a Nexcelom counting chamber (CHT4-SD025) and inserted into the system. The chamber was immediately checked under the bright field for appropriate yeast morphology and potential contamination. After the chamber was reviewed and focused, the system acquired bright field and fluorescent images at four different areas in the counting chamber. The images were analyzed automatically in the software to generate cell count, concentration, and viability using the following counting parameters: fluorescence (FL) channel 1, cell diameter (2.0–30.0 μm), roundness (0.00), fluorescent threshold (25.0), decluster Th factor (0.90); fluorescence (FL) channel 2, cell diameter (2.0–30.0 μm), roundness (0.00), fluorescent threshold (20.0), decluster Th factor (0.90). For the bacteria counting experiments, only the VC-535-402 channel was used, with exposure times between 300 and 1500 ms. Following the SYTO BC staining protocol, 5 μL of the stained sample was pipetted into the CHT4-SD025 counting chamber. The inlet and outlet ports were quickly taped with Scotch tape to prevent evaporation and to slow cell movement within the chamber. The cells in the taped chamber were allowed to settle for ∼30 s to further minimize movement, and the chamber was then inserted into the system for image analysis using the following parameters: fluorescence (FL) channel 1, cell diameter (0.7–40.0 μm), roundness (0.00), fluorescent threshold (10.0), decluster Th factor (0.90). A similar procedure was performed for analyzing bacteria and yeast mixtures with two fluorescent channels, where both channels were set up with VC-535-402 to detect AO and SYTO BC fluorescence. Slightly stricter image analysis parameters were used to improve cell counting for yeast and bacteria stained with AO and SYTO BC, respectively, with the following parameters: fluorescence (FL) channel 1––yeast, cell diameter (6.0–50.0 μm), roundness (0.00), fluorescent threshold (8.0), decluster Th factor (0.90); fluorescence (FL) channel 2––bacteria, cell diameter (0.5–5.0 μm), roundness (0.00), fluorescent threshold (10.0), decluster Th factor (0.90).
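To make the size-gating logic of these counting parameters concrete, the sketch below classifies detected objects by diameter using the two windows quoted above (6.0–50.0 μm for AO-stained yeast, 0.5–5.0 μm for SYTO BC-stained bacteria). It is purely illustrative: the actual segmentation and gating are performed inside the Cellometer software, and the example diameters are invented.

def classify_by_diameter(diameters_um):
    """Split detected object diameters (in micrometers) into yeast,
    bacteria, and unclassified bins using the gating windows above."""
    yeast_window = (6.0, 50.0)     # FL channel 1: AO-stained yeast
    bacteria_window = (0.5, 5.0)   # FL channel 2: SYTO BC-stained bacteria
    yeast, bacteria, other = [], [], []
    for d in diameters_um:
        if yeast_window[0] <= d <= yeast_window[1]:
            yeast.append(d)
        elif bacteria_window[0] <= d <= bacteria_window[1]:
            bacteria.append(d)
        else:
            other.append(d)  # debris, or objects falling between the windows
    return yeast, bacteria, other

# Invented diameters from a hypothetical mixed-culture image
yeast, bacteria, other = classify_by_diameter([0.9, 1.4, 7.2, 5.5, 8.1, 1.1, 12.0])
print(len(yeast), len(bacteria), len(other))  # 3 3 1 (the 5.5 um object falls between windows)

Note that the two windows deliberately do not touch: an object of 5.5 μm is left uncounted rather than misassigned, mirroring how the non-overlapping diameter ranges in the two channels keep the yeast and bacteria counts independent.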
Comparison of SD100 and SD025 Cell Counting Chamber for S. cerevisiae

Previous publications have demonstrated the validation of yeast counting in the Cellometer X2 image cytometer using the CHT4-SD100 cell counting chambers. Since mixtures of S. cerevisiae and L. plantarum require the use of the CHT4-SD025, we performed a direct concentration comparison of yeast counting in both chambers. First, the stock yeast culture was diluted in deionized (DI) H₂O to 0.25, 0.5, 0.75, and 1 dilution fractions (n = 6/dilution). Each dilution was stained with AO/PI and immediately analyzed using the image cytometer for total concentration comparison.

Independent Measurement of L. plantarum and S. cerevisiae Titration

An initial titration experiment was performed for both L. plantarum and S. cerevisiae to demonstrate the counting capability of the image cytometer. After preparing the stock cultures for yeast and bacteria, the yeast sample was diluted with DI H₂O to 0.1, 0.3, 0.5, 0.7, 0.9, and 1 dilution fractions (n = 4/dilution), while the bacteria sample was diluted to 0.1, 0.25, 0.5, 0.75, and 1 dilution fractions (n = 6/dilution). The yeast samples were stained with AO/PI, the bacteria samples were stained with SYTO BC, and both were subsequently imaged and analyzed using the image cytometer to produce cell concentration results. The titration experiment was repeated two more times and validated against the CFU manual counting method (described below).

Lactiplantibacillus plantarum and S. cerevisiae Mixed Culture Detection Validation Experiment

To demonstrate the ability of the image cytometer to correctly identify and count yeast and bacteria in mixed cultures, we performed three separate experiments at various mixture ratios. First, the stock yeast culture was diluted to 0.001, 0.01, 0.1, 0.2, 0.5, and 1 dilution fractions and mixed with the stock bacteria culture at 1:1 (n = 6/dilution). Second, the stock bacteria culture was diluted to the same dilution fractions and mixed with the stock yeast culture at 1:1 (n = 6/dilution). Third, the stock yeast and bacteria cultures were mixed at different yeast/bacteria percentages: 0, 25, 50, 75, and 100% (n = 6/dilution). The mixtures from the three experiments were all stained with the AO/SYTO BC dye mixture and immediately analyzed using the image cytometer for total concentration measurement. The mixed culture detection experiment was repeated two more times and validated against the CFU manual counting method.

CFU Manual Counting Method

The measured cell concentrations of the 24-hr yeast cultures were compared between image cytometry and manual plate counting. The stock yeast solution was serially diluted in DI H₂O from 10⁻⁴ to 10⁻⁶ dilution factors in duplicate and then spread onto APDA plates in triplicate. The inoculated plates were incubated for 2–3 days at 30°C. The most countable dilution plates were used, and the rest were discarded. Similarly, the measured cell concentrations of the 24-hr L. plantarum cultures were compared between image cytometry and manual plate counting. The bacteria stock cultures for L. plantarum were serially diluted with peptone water from 10⁻⁵ to 10⁻⁷ dilution factors in triplicate and then spread onto MRS agar plates in duplicate. The inoculated plates were incubated for 1–2 days at 30°C. The most countable dilution plates were used, and the rest were discarded.

CFU and Image Cytometry Comparison Using ANOVA

Counting results from the image cytometry and plating methods were back-calculated to the starting concentrations of their respective 24-hr growth cultures. The colony counting (CFU/mL) and cell counting (cells/mL) results were first converted to log scale (base 10), and the average results from each experiment were compared between image cytometry and the traditional manual counting method using the ANOVA regression in JMP Pro 15.2.0 (466311). A p-value of <.05 was considered statistically significant.
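As a hedged sketch of the back-calculation and comparison just described, the snippet below averages replicate colony counts from the most countable dilution, scales them back to the stock concentration, log10-transforms both methods' results, and applies a one-way ANOVA. The plated volume (0.1 mL) is an assumption for illustration, since the methods report only the dilution ranges; the counts are invented, and scipy's f_oneway stands in for the ANOVA regression performed in JMP.

import math
from scipy import stats

def stock_cfu_per_ml(colony_counts, dilution_factor, plated_volume_ml=0.1):
    """Back-calculate the undiluted stock concentration (CFU/mL) from
    replicate colony counts at a single serial dilution."""
    mean_colonies = sum(colony_counts) / len(colony_counts)
    return mean_colonies / (dilution_factor * plated_volume_ml)

# Hypothetical triplicate plate counts at the 10^-5 dilution, one set per experiment
plating = [stock_cfu_per_ml(counts, 1e-5)
           for counts in ([142, 155, 149], [137, 150, 144], [160, 151, 158])]
cytometry = [1.52e8, 1.41e8, 1.60e8]  # hypothetical image cytometry results (cells/mL)

log_plating = [math.log10(x) for x in plating]
log_cytometry = [math.log10(x) for x in cytometry]

# One-way ANOVA on the log-transformed values; with two groups this is
# equivalent to an independent t-test (F = t^2).
f_stat, p_value = stats.f_oneway(log_cytometry, log_plating)
print(f"F = {f_stat:.3f}, p = {p_value:.3f}")  # p >= .05 reads as "methods comparable"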
Berliner Weisse Style Mixed Culture Fermentation Experiment

A sour beer fermentation experiment was designed and conducted with the mixed culture process to demonstrate the ability of the proposed image cytometry method to accurately count the concentrations of L. plantarum and S. cerevisiae simultaneously in wort media during active fermentation. All equipment was cleaned, rinsed, and sanitized before each trial. Five Star PBW (Five Star Chemical, Arvada, CO) was used for cleaning, and Five Star Saniclean Low Foam was used to sanitize. A Sabco Brew-Magic™ system (Toledo, OH) was utilized to mash, sparge, and boil the wort, and a Sabco Chill-Wizard plate chiller (Toledo, OH) was used to cool the wort. To prepare for the mixed culture fermentation, an isolated colony from the L. plantarum preparation was aseptically transferred to 100 mL of MRS broth and incubated for 24 hr at 30°C until a concentration of 10⁹ CFU/mL was achieved. Next, an isolated yeast colony from the S. cerevisiae preparation streak plates was aseptically transferred to 50 mL of potato dextrose broth and incubated for 24 hr at 30°C. The 24-hr yeast culture was added to a 1.040 specific gravity (SG) malt-dextrose solution and mixed using a stir bar at room temperature for up to 48 hr or until a concentration of at least 10⁸ CFU/mL was achieved. The Sabco Brew-Magic™ system was used to prepare ∼6 gallons (22.7 L) of wort to be divided into three fermentation vessels. The wort was prepared by mashing 4.33 lb (1.96 kg) of German pilsner malt, 4.33 lb (1.96 kg) of German wheat malt, and 0.66 lb (0.30 kg) of rice hulls into 3 gallons (11.4 L) of water at 71°C. The mash was stirred every 20 min for a total of 60 min during the mashing step. At the end of the 60 min, the mash was allowed to recirculate (Vorlauf) for 10 min or until the liquid was clear. The grain was then rinsed (sparged) with 4 gallons (15.1 L) of water at 85°C. During sparging, 6 gallons (22.7 L) of wort was transferred to a kettle and boiled (kettled) for 60 min. After boiling, a sample was collected to test the pH and SG. The initial pH averaged 5.66, and the average initial SG was 1.0336. The wort was chilled using a Sabco Chill-Wizard plate chiller/heat exchanger (Toledo, OH). Next, ∼0.75 gallons (2.8 L) of chilled wort was transferred to each of three one-gallon (3.8-L) glass fermentation vessels with lids and three-piece airlocks. Each fermentation vessel was incubated between 22 and 24°C. Finally, the wort was inoculated with an estimated 750 million cells/mL of S. cerevisiae and ∼10 million cells/mL of L. plantarum, which were typical industry standards for a mixed-culture beer. Three independent fermentation trials were performed and monitored. The wort was sampled at 0, 3, 6, 9, 12, 24, and 48 hr, and the concentrations of L. plantarum and S. cerevisiae were analyzed and monitored using the image cytometer. Similarly, plating and manual counting of CFU were performed on the APDA and MRS plates. For image cytometric analysis, 1.5 mL of sample was collected at each time point, diluted 1:10 in DI water, and then stained with the AO/SYTO BC dye mixture prior to image analysis (n = 4). Unstained samples were diluted and plated based on the image cytometry results. The S. cerevisiae and L. plantarum samples were plated using APDA and MRS plates in triplicate, respectively. The agar plates were incubated at 30°C for 48–72 hr and 24–48 hr for S. cerevisiae and L. plantarum, respectively, and the resulting manual counting concentrations were compared directly to image cytometry. The resulting concentrations (CFU/mL) of the plating cultures and image cytometer (cells/mL) were converted to a log scale (base 10) for comparison (CFU/mL and cells/mL were used interchangeably). The average log(CFU/mL) or log(cells/mL) values for each experiment were compared using ANOVA in JMP Pro 15.2.0 (466311). A p-value of <.05 was considered statistically significant. Finally, other parameters were collected during the fermentation, such as pH, SG, and aroma profile.
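The inoculation targets above also imply a simple starter-volume calculation, sketched below under stated assumptions: the ∼10 million cells/mL target for L. plantarum, the ∼2.8 L of wort per vessel, and the ∼10⁹ CFU/mL starter are taken from the text, while treating the starter concentration as exact and neglecting the volume it adds are simplifications for illustration.

def starter_volume_ml(target_cells_per_ml, wort_volume_ml, starter_cells_per_ml):
    """Volume of starter culture needed to reach a target pitch rate,
    ignoring the (small) volume contributed by the starter itself."""
    return target_cells_per_ml * wort_volume_ml / starter_cells_per_ml

# L. plantarum: ~1e7 cells/mL target in ~2.8 L of wort from a ~1e9 CFU/mL starter
vol = starter_volume_ml(target_cells_per_ml=1e7, wort_volume_ml=2800, starter_cells_per_ml=1e9)
print(f"pitch = {vol:.0f} mL of starter per vessel")  # = 28 mL

Across the three vessels this comes to roughly 84 mL, which is at least consistent with the 100 mL L. plantarum starter prepared above.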
Verification of SD025 Cell Counting Chamber for S. cerevisiae In this experiment, we verified that the SD025 bacteria counting chamber can be used for yeast counting. In previous publications, yeast has primarily been counted in SD100 chambers, thus it was critical to demonstrate that counting yeast in a thinner chamber has minimal effects. The yeast monoculture was first diluted to 0.25, 0.5, 0.75, and 1.00 dilution fractions and then counted with AO/PI staining. The results are shown in Fig. , which showed comparable concentration measurements between the two types of consumables. The percentage differences were 17.6, 5.6, 8.3, and 6.7%, respectively, for dilution fractions from 0.25 to 1.00. Overall, a two-sample t-test was calculated for each dilution and showed no statistically significant differences (p = .66, .44, and .34) between the two consumables except at the 0.25 dilution fraction (p = .001), which may be due to cell counting precision at low concentration. Verification of L. plantarum and S. cerevisiae Titration Measurement To validate the ability of the image cytometer to measure the titration of yeast and bacteria monocultures, we prepared a monoculture of S. cerevisiae at dilution fractions of 0.1, 0.3, 0.5, 0.7, 0.9, and 1.0, as well as a monoculture of L. plantarum at dilution fractions of 0.1, 0.25, 0.50, 0.75, and 1.00. After staining the samples with AO/PI and SYTO BC, respectively, they were counted with the image cytometer to generate the titration results. The counted fluorescent images and results for L. plantarum and S. cerevisiae are shown in Fig. (top), which demonstrated the direct counting of fluorescently stained yeast and bacteria. The titration results showed a highly linear response for both microorganisms, with R² values of 0.997 and 0.967 for L. plantarum and S. cerevisiae, respectively (bottom). This experiment replicated previously demonstrated cell counting methods using AO/PI and SYTO BC staining (Hodgkin et al., ; Saldi et al., ). The green outlines in the images show the individual yeast and bacteria counted. Furthermore, the ANOVA analysis showed that image cytometry and manual counting were statistically comparable when counting the monocultures of L. plantarum and S. cerevisiae for the three repeated experiments, with p-values of .49 and .85, respectively (Table ). Validation of L. plantarum and S. cerevisiae Mixture Measurement To demonstrate the ability of the image cytometer to simultaneously count yeast and bacteria in mixed culture, we prepared three different samples with various ratios of L. plantarum and S. cerevisiae in the mixture. The purpose of the first two experiments was to show the simultaneous counting of L. plantarum and S. cerevisiae when keeping one microorganism constant and diluting the other. The third experiment was to show simultaneous counting when both microorganisms were diluted. The bright-field and fluorescent images, as well as the results of the mixed culture enumeration by image cytometry, are shown in Fig. , where the acquisition of both fluorescent channels was set to green to detect AO and SYTO BC. The parameters were set up to count only the yeast and bacteria in channels 1 and 2, respectively. The first experiment (L. plantarum constant) showed R² values of 0.981 and 0.219 for yeast and bacteria, respectively. The second experiment (S. cerevisiae constant) showed R² values of 0.056 and 0.994 for yeast and bacteria, respectively.
The results indicated that the image cytometer was able to successfully measure the titration of yeast and bacteria, as well as measure the constant concentrations. For the third experiment, the R² values were 0.967 and 0.970 for yeast and bacteria, respectively, which showed that the image cytometer can measure titrations from both organisms simultaneously. The ANOVA analysis showed that image cytometry and manual counting were statistically comparable for all three different mixture experiments at n = 3 (Tables and ). First, keeping L. plantarum constant, the p-values were .58 and .50 for bacteria and yeast, respectively. Second, keeping S. cerevisiae constant, the p-values were .96 and .46, respectively. Finally, the bacteria and yeast ratio mixture experiment showed p-values of .18 and .08, respectively. It is important to note that fluorescent staining is critical for the proposed image cytometry method to distinguish between yeast and bacteria cells. Fluorescence-based image analysis can also minimize the background noise and debris seen in bright-field imaging. Verification of L. plantarum and S. cerevisiae Measurement in Fermentation The fermentation trials focused on producing a Berliner Weisse product, a low-alcohol German sour beer that dates back to the 16th century. It is traditionally fermented with S. cerevisiae or Brettanomyces in combination with LABs, using wort made from mixed extracted wheat and grain malt. In addition, Berliner Weisse is traditionally brewed with small quantities of hops or without hops, because the LABs are sensitive to antimicrobial compounds found in hops (Schurr et al., ) that can disrupt cell membranes. Therefore, hops were removed from the fermentation recipe. The simplicity of the fermentation process aligned closely with the mixed-culture cell counting method developed in this work. The validated image cytometry cell counting method was used to determine the concentrations of S. cerevisiae and L. plantarum during fermentation. The results showed ranges of 5.93–7.38 log and 8.12–8.88 log, respectively (Fig. ). The S. cerevisiae concentration increased in the first 9 hr of the fermentation and peaked at ∼1.82 × 10⁷ CFU/mL for both image cytometry and manual counting (APDA). The concentration decreased from 12 to 24 hr, resulting in an average concentration of 3.72 × 10⁶ CFU/mL for image cytometry and 2.29 × 10⁶ CFU/mL for manual counting. Finally, the yeast concentration tapered off at ∼3.02 × 10⁶ cells/mL. Saccharomyces cerevisiae experienced the highest growth in the first 9 hr, followed by a slight reduction at every time point throughout the 48 hr, which could be caused by the lower pH (∼5.1) inhibiting yeast replication or by the ending of primary fermentation (Bamforth, ). Some brewers define the end of primary fermentation as the point when the SG falls below 1.030. Even at a low pH, if fermentable sugars are present, yeast will continue to lower the SG and increase the alcohol concentration, as they tolerate low pH relatively well. This can result in some sour beer fermentations spanning many months, which is usually associated with the presence of Brettanomyces. Due to time constraints in this work, fermentation trials were monitored for 48 hr, when the SG fell below 1.030, indicating the completion of primary fermentation. On the other hand, the L. plantarum concentration increased in the first 12 hr and then plateaued from 24 to 48 hr.
L. plantarum started with a concentration of 1.48 × 10⁸ CFU/mL for image cytometry and 1.41 × 10⁸ CFU/mL for manual counting (MRS). The concentration increased to ∼3.55 × 10⁸ and 3.98 × 10⁸ cells/mL for image cytometry and manual counting, respectively. The final concentration after 48 hr was ∼3.31 × 10⁸ and 3.55 × 10⁸ CFU/mL for image cytometry and manual counting, respectively. Lactobacillus plantarum was in a growth phase in the first 12 hr. Subsequently, L. plantarum plateaued, and the increases were negligible through the 48-hr time point. This fermentation characteristic was observed in each of the three fermentation trials. Lactic acid bacteria are most active in the first 12 hr of fermentation, where a steep reduction in pH can be seen in the fermenting wort for sour beers. In general, the pH can drop from the mid-5 range to the low-3 range, where the LABs are inhibited and can no longer replicate. The plateauing of concentration corresponded with the reduction of LAB activity that occurred after 12 hr. The calculated ANOVA comparison between image cytometry and manual counting showed statistically comparable results between the two counting methods for both microorganisms during the fermentation (Table ). The p-values calculated for bacteria and yeast were .73 and .81, respectively, which indicates that there was no significant difference between the novel image cytometry method and traditional manual counting. The results from the mixed culture fermentation experiment demonstrated the ability of the novel image cytometry method to simultaneously count yeast and bacteria directly from fermentation samples while simulating standard industry practices. We found no significant differences between the counting methods. Therefore, the proposed image cytometry method may be utilized for commercial brewing applications.
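As a final illustration, the chamber-verification comparison reported earlier in the Results (percentage differences plus per-dilution two-sample t-tests between the SD025 and SD100 consumables) can be reproduced with a few lines of code. The replicate counts below are made up for demonstration, and scipy's independent-samples t-test stands in for the software the authors actually used.

```python
# Illustrative SD025 vs. SD100 chamber comparison (hypothetical replicate counts).
import numpy as np
from scipy.stats import ttest_ind

# Hypothetical concentrations (cells/mL) from four replicates per chamber type
# at a single dilution fraction.
sd025 = np.array([1.91e7, 2.05e7, 1.98e7, 2.10e7])
sd100 = np.array([2.02e7, 1.95e7, 2.08e7, 2.12e7])

# Percentage difference between chamber means.
pct_diff = abs(sd025.mean() - sd100.mean()) / sd100.mean() * 100
print(f"percentage difference: {pct_diff:.1f}%")

# Two-sample t-test; p >= .05 suggests the chambers give comparable counts.
stat, p = ttest_ind(sd025, sd100)
print(f"t = {stat:.2f}, p = {p:.2f}")
```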
In this work, we have demonstrated the capability of the Cellometer X2 image cytometer to automatically distinguish and count L. plantarum and S. cerevisiae in monoculture, at different mixture ratios, and in mixed culture fermentation. The novel method was validated by comparison with traditional CFU manual counting, which demonstrated highly comparable cell concentration results across all experiments. The proposed image cytometry method, in combination with fluorescent stains and size-exclusion image analysis algorithms, is the only cell counting technique that can count L. plantarum and S. cerevisiae simultaneously in a mixed culture. Therefore, it can provide an effective and efficient tool to characterize mixed cultures during fermentation, supporting more consistent and higher-quality beverage products. Further research will expand the method to additional bacterial species and Brettanomyces yeasts and assess its utility in other fermented beverages.
Whole person assessment for family medicine: a systematic review
5e0375c4-2053-438b-a7b6-e2e84885d016
10124221
Family Medicine[mh]
Whole person care (WPC) is foundational to general practice (GP), or family practice, and recognised best practice (note that ‘GP’ (UK term) is also used to include ‘family practice’/‘family medicine’ (US term) throughout this paper). WPC is a way of describing generalist clinical practice that integrates biology and biography within relationship. There is an urgent need for coherent approaches to assessment of the whole person as clinicians across disciplines seek to manage complex and often fragmented, undifferentiated or unexplained presentations of psychosocial distress and multimorbidity. International research has called for approaches that look beyond symptoms to the whole person. Generalism has been defined by consensus as the expertise of whole person medicine, and a craft that integrates broad scope, relational process, healing orientation, and integrative wisdom. WPC is a theoretically robust approach that integrates the science of how the body is impacted by life experience. General Practitioners describe a theoretical basis for WPC as a multidimensional approach that considers multiple aspects (or whole person domains) of the person and their context and synthesises these to develop a whole person approach (see ). WPC has length (multiple consultations over time), breadth (not excluding any groups or conditions) and depth (delving beyond the presenting complaint to explore underlying issues and preventive health). Relationship is its essential foundation. It includes both empirical and experiential knowledge, and is often delivered within a healthcare team. WPC overlaps with biopsychosocial (BPS) care, while addressing criticisms of the BPS approach through its stronger emphasis on synthesis between domains and on the clinical relationship. WPC also overlaps with patient-centred care, which additionally emphasises the specific process of joint decision-making; and with the perspectives approach to psychiatry that suggests attention to disease, dimensions, behaviour, and life story. WPC addresses deficiencies in the reductionist biomedical and biotechnical paradigm that has historically characterised modern medicine, providing a critical framework to address some of medicine’s most pressing problems. The biomedical paradigm is effective in treating well-defined diseases but does not account for the complexities of the person in context, or recognise the critical interactions between physical, psychological and other domains of health. Emerging fields such as psychoneuroimmunology are revealing fundamental interrelationships between these domains. This is particularly relevant when considering patients with complex presentations that do not fit neatly into the biomedical paradigm. Reliance on single disease-specific guidelines can result in polypharmacy and harmful medication interactions when applied to patients with multiple chronic conditions. Similarly, failure to appreciate the interaction of psychosocial and broader contextual factors with physical symptomatology in patients experiencing medically unexplained symptoms can result in expensive overinvestigation and multiple specialist consultations, to no avail. Doctors often find these patients challenging to treat, which can result in both doctor and patient frustration. A more nuanced approach is required to meet these patients’ needs, and to promote health as ‘a state of complete physical, mental and social well-being, and not merely the absence of disease or infirmity.’ The WPC paradigm addresses this need.
Some have argued that General Practitioners’ commitment to WPC is more rhetorical than practical. The practice of WPC is based on sophisticated generalist clinical skills that prioritise breadth of scope, relational process and integrative approaches to both experiential and empirical evidence. General Practitioners identify that better translation of whole person approaches from theory into practical assessment is needed. A multilayered approach is required that spans individual, practice and health systems approaches, however, at the level of the individual clinician–patient interaction, General Practitioners have identified that a whole person assessment (WPA) framework is needed. It is unclear whether an assessment approach exists that accurately encompasses WPC, is suitable for the GP context, and is theoretically and empirically robust. This project aims to identify existing clinical approaches to WPA that are translatable to GP, with a secondary aim of comparing these to each other, and to theoretical models of WPC to determine whether WPAs suitable for GP exist. Examination of the characteristics and strengths of identified assessments will also provide a basis to develop such approaches. Study registration We registered the systematic review protocol on the International Prospective Register of Systematic Reviews (registration number CRD42020164417). Through discussion among the research team, we refined inclusion/exclusion criteria to better reflect a generic WPA tool during literature screening. Inclusion criteria Original research published in English from the fields of medicine, allied health, nursing, mental health and pastoral care that described clinical approaches or tools used to perform WPA and were relevant to GP were included. To be considered a WPA, approaches were required to assess biological/physical, emotional/psychological and at least one other aspect of the person or their context, and to involve direct patient–clinician interaction. Exclusion criteria We excluded literature that was non-English, described non-English assessment tools, focused on cross-cultural or single disease validation of tools, used outdated classification systems, was designed for outcome rather than clinical assessment or was from fields not listed above. We excluded tools unsuitable for adaptation to GP due to length (more than 1 hour to complete in a single session or as determined by reviewer judgement), limited applicability (designed for single disease/diagnosis/symptom), unsuitability for an outpatient setting, essential requirement for multidisciplinary team (MDT) involvement or requirement for special training beyond the scope of GP. We excluded patient self-rating scales, except when these were part of a broader approach involving patient–clinician interaction. Where multiple WPAs were developed from a single theoretical nursing framework, only the original WPA or one specifically developed for GP was included. Where an updated version of an included book chapter or tool was identified at the time of full text retrieval/data extraction, the updated version was included rather than the original. Search strategy HRT searched MEDLINE, CINAHL, PsycINFO and ATLA Religion databases to 9 March 2020. We developed the search strategy iteratively, then performed a preplanned search. 
The final strategy searched for synonyms of ‘whole person’ in proximity to synonyms of ‘assessment/tool’, and for health-related quality of life assessments that included both physical and emotional/psychological factors. The search strategies are shown in . Following reviewer feedback, similar searches were rerun to include the terms ‘whole health’ and ‘whole person health’ in February 2023, and articles suggested by reviewers were also screened (however, only results published on or before the original search date were eligible for inclusion). We handsearched reference lists of included studies. For studies that described a relevant WPA but did not include the assessment itself, we searched the reference list and/or contacted the authors requesting a copy of the assessment. We searched the Scopus database for studies that cited included literature and provided validation or evaluation data for the WPAs. All citations were uploaded into Endnote V.X9 and duplicates removed. Study selection HRT and DC independently screened over 50% of titles/abstracts and achieved consensus through discussion. Remaining title/abstract screening was divided between HRT and DC, who assessed full texts where necessary and discussed studies considered borderline for inclusion. Disagreements were resolved through discussion and consensus with the research team. Quality appraisal We assessed the quality of included literature using the Joanna Briggs Institute’s checklists. There was no Joanna Briggs Institute checklist available for validation studies, so Terwee’s criteria were used for these. Two reviewers (HRT and either JL, DK, MB or DC) independently appraised quality for an initial 20% of studies, achieved consensus and divided the remaining studies between them. No studies were excluded based on quality. Data extraction Fields of data extracted from included studies are shown in . We extracted information describing each included study and assessment, along with information to evaluate each assessment’s alignment with theoretical models of WPC (ie, broad scope/multidimensionality, relational process, approach to information synthesis, team-based care), theoretical robustness and practicality for GP. Data were extracted independently by two authors (HRT and DC, MB, JL or DK) and consensus achieved for approximately 20% of studies, and by HRT for remaining studies. Extracted data are available on reasonable request to the authors. Data synthesis We performed framework synthesis using NVivo Pro V.12 for data management. Extracted data were tabulated and the content of assessments was grouped into broad domains (biological, psychological, social, spiritual and administrative) following coding. We compared WPA data and evaluated WPAs’ alignment with theoretical models of WPC, appropriateness/feasibility for adaptation to GP, outcome data and robustness, and achieved consensus through discussion. Patient and public involvement There was no patient and public involvement in this research.
Study selection and characteristics Searches retrieved 7535 non-duplicate studies; 59 were included after screening, and these described 42 tools/approaches for WPA. Included literature comprised 44 journal articles, 13 books/book chapters, 1 clinical training module, 1 patient brochure and 1 government research report. Most literature was text/opinion; others included quantitative (analytical cross-sectional, case–control, cluster RCT), mixed-methods, qualitative or validation studies and clinical guidelines. Most was from the USA or UK, with one assessment from each of Germany, Canada, Norway and Switzerland. Characteristics of included WPA approaches/tools A description of the 42 included approaches/tools and their strengths and weaknesses is provided in . The majority of WPAs originated from the field of mental health, with others from nursing, primary care, general medicine, palliative care, geriatrics and allied health. Some were from multiple fields. The assessments were developed for various purposes, the most common being to guide holistic patient assessment by identifying key assessment domains and their relevant content. Others were designed to guide psychiatric formulation, needs assessment, case complexity assessment, clinical reasoning or medical documentation, or to elicit patient perspectives to inform the consultation. Accordingly, the assessments took various forms, including lists of questions/domains to assess, tools numerically scoring multiple assessment domains, interview guides coupled with visual tools to facilitate patient engagement, patient questionnaires which informed patient–clinician discussion, visual clinical reasoning tools, clinical guidelines or merely suggested approaches and care pathways. Some directly linked assessment to care planning. Theoretical alignment with WPC Identified WPAs varied in their alignment with WPC as previously defined. Breadth of scope/multidimensionality was required for inclusion in the review. However, the specific domains varied between WPAs. For example, inclusion of spirituality/religion differed notably between WPAs. The level of detail in which each domain was assessed also varied markedly. Notably, some approaches explicitly assessed patient strengths as well as difficulties.
Most approaches omitted some characteristics of WPC. Most assessments did not describe how to synthesise assessment domains to conceptualise how they interacted and their combined influence on the person. A few, however, did give specific frameworks for synthesising information, including diagrammatic representations of multiple factors’ influence on the patient’s problem, psychiatric formulation guides or numerical case complexity scores. Depth of relational context of the assessment was often not described. While all included assessments required patient–clinician interaction, many did not describe the context of patient–clinician relationship and it was often unclear whether there was an expectation of pre-existing or ongoing therapeutic relationship. While length (multiple consultations over time) is a feature of WPC in GP, few WPAs were longitudinal. Most tools developed for primary care, however, assumed both pre-existing and ongoing patient–clinician relationship. Where the nature of relationship was described, prominent themes included patient/provider collaboration; specific communication skills; and the process of assessment as an opportunity to express empathy and to strengthen relationship. Depth of assessment was difficult to assess, perhaps because this is more a function of time and quality of relationship than a characteristic of the WPA itself. A team-based approach was included in some tools, though as stated above, WPAs where MDT assessment was essential were excluded for practical reasons. Theoretical robustness of assessment The theoretical robustness of identified WPAs, reflected by their theoretical basis, process of development, validation and evaluation of outcomes, varied and was often difficult to assess due to limited information. In many cases, the assessment’s theoretical basis was assumed, rather than described. Where described, the most common bases included a BPS framework, person-centred care and various systems models. Some approaches focused on being ‘scientific’, while others explicitly aimed to combine the art and science of medicine. Assessments’ development was variably described. Several were developed from authors’ experience and/or literature, and some were adaptations of other tools. Some involved stakeholder input in development: most of these consulted experts, with only two describing patient input. Few assessments were validated. Of the 18 tools amenable to validation, data were available for 7. The tool with the most robust validation data was the 36-item short form survey (SF-36), however, the method proposed by Wetzler (combining the SF-36 with health status graph assessment) was not validated. The Patient Perspective Survey (PPS) was quite robust, with content and construct validity, but variable internal consistency. Most assessments had not been evaluated. Acceptability, where reported, was generally high among clinicians and patients. However, this was often based on informal feedback, with a minority of studies including focus groups, interviews, written or survey responses. Where reported, most WPAs increased comprehensiveness of assessment (especially regarding non-biomedical factors, with one study reporting a shift from biomedical to psychosocial referral patterns), though one assessment reported an ongoing biomedical focus and another noted that no new information was elicited in long-standing relationships.
While some assessments reported improvements in therapeutic relationship and treatment satisfaction, others reported no change in these measures, with authors hypothesising that this was due to a ceiling effect. Several studies reported improved outcomes, including quality of life, self-efficacy/empowerment, psychopathological symptoms, social outcomes, weight loss and self-assessed unmet BPS needs, while two reported no change in mental well-being. One reported reduced healthcare costs. Feasibility for GP Limited information about feasibility was available for many WPAs. The authors judged all included WPAs to be a reasonable length for GP implementation, however, few specified times for completion. Where stated, the time requirement varied from under an hour to over an hour (across multiple consultations). Several tools were flexible in content and length depending on patient needs. Training requirements were often unspecified. Where stated, some assessments required no formal training while others varied between hours and days. All assessments could be completed by a single provider, however, MDT involvement was preferable for some. This may be an advantage or disadvantage depending on the health system context. Preferred WPAs for GP We did not identify any validated and evaluated assessment that addressed the diverse reasons for performing a WPA and was suitable for direct implementation in GP. However, some identified WPAs may be suitable for specific GP purposes. To elicit patient perspectives and facilitate care planning (particularly for complex patients), the health status graph assessment, Personalised Health Planning and the PPS were considered most suitable, each with different advantages. These assessments are designed for the primary care context; they are broad and flexible, and they assume longitudinal care. They each involve the patient completing a survey (SF-36 (for health status graph assessment), Personal Health Inventory (for Personalised Health Planning) or PPS), which is used to inform the content of the consultation. However, they provide limited guidance for information synthesis. The health status graph assessment and the PPS have the advantages of requiring minimal training and the availability of some validation data; development of the PPS also involved both clinician and patient input. However, they are somewhat limited in scope (eg, neither addresses spiritual or religious aspects), and there is a cost to use the SF-36 for the health status graph assessment. Personalised Health Planning has the advantage of breadth (includes spiritual aspects) and some promising evaluation data. There are also studies evaluating the use of Personalised Health Planning as one component of broader models of care (not included in this review, as they were not specifically focused on evaluation of the assessment component). Other reasonable options for eliciting patient perspectives and stimulating discussion included the Pizzi Health and Wellness Assessment and approaches designed for mental health (DIALOG/DIALOG+ tool, Life Map). To assess patient complexity to inform care planning, the Minnesota Complexity Assessment Method (MCAM) was considered most suitable. The MCAM has a clear theoretical basis (BPS), is designed for primary care, assumes ongoing clinician–patient relationship, is flexible and has a clear link between assessment and action planning. However, it is specific to complexity assessment; diagnostic assessment and clinical reasoning are a separate process.
In addition, it is unvalidated (though the related Minnesota-Edinburgh Complexity Assessment Method (MECAM) has validation data); it does not address spiritual/religious aspects, and MDT involvement is ideal for full assessment. The MCAM is similar to the MECAM and Patient-Centred Assessment Method, however, these are designed to be conducted by nurses, whereas the MCAM is designed for a General Practitioner/nurse team and includes assessment of diagnostic challenge. Some assessments included visual tools to synthesise whole person information, conceptualise how different domains may affect the patient’s problem and inform clinical reasoning. These included Matthew’s Model of Clinical Reasoning (designed for occupational therapy assessments) and the Rehabilitation Problem-Solving form. While these are not directly translatable to the GP context, they could be useful following adaptation.
This study sought to look for any tool available in the literature that was theoretically robust, broad and multidimensional in scope, relational in process, had an effective approach to information synthesis, and was feasible for GP implementation. Studies identified in this systematic review varied from this ideal. We identified multiple WPAs from diverse disciplines, with several purposes and formats. However, we did not identify any single theoretically robust approach that encompassed the diverse reasons for performing a WPA, was well validated and evaluated, and was suitable for direct implementation in GP. Nonetheless, from the WPAs identified, some were considered most suitable for specific purposes such as elicitation of patient perspectives, complexity assessment and facilitated clinical reasoning. WPC is considered fundamental to GP and our results demonstrate that the concept is also of interest to multiple other fields. Despite this, there are substantial theoretical and practical gaps in existing WPAs. All assessments included in this review had a degree of alignment with WPC frameworks, however, most did not fully encompass this concept. Due to our inclusion criteria, all WPAs included some breadth, or multidimensionality, of assessment. However, there was no consistent language used to describe included aspects of the person. For example, experiences which some assessments labelled psychological were considered by others as biological or spiritual. This makes understanding and comparing WPAs challenging. In addition, the depth in which each domain was assessed varied markedly between WPAs. Regarding information synthesis, very few tools gave a specific method to synthesise information into a whole person conceptualisation of the patient’s problem; none of these were designed for the GP context. While all assessments involved patient–clinician interaction, most gave limited information about the relational context of assessment, which is the basis for WPC. Included assessments originating from primary care were generally more robust in this relational respect. In addition, the theoretical robustness of many WPAs was often insufficient. While some assessments detailed their theoretical basis, many did not. Only two assessments described patient involvement in development, departing from best practice. Finally, practical implementation information (eg, time to complete, resources, training requirement), validation and evaluation data for assessments was often unpublished. The reason for limited evaluation and validation is unclear; many tools were developed by clinicians and this may reflect lack of academic interest or expertise. Together, this suggests that there is need for further work to promote WPA and care. This study provides a foundation for such work.
We have identified several assessments that could be further adapted and evaluated for implementation for specific purposes in GP, as discussed above. We also identified strengths of individual WPAs, which could be incorporated into existing assessments, or combined to design a novel WPA for GP. These include: (1) Prioritising active patient involvement and aiming to use assessment to strengthen the therapeutic relationship, which are key aspects of patient-centred care; (2) Breadth of assessment, including all domains of the person (see ); (3) Assessment of patient strengths as well as difficulties, reflecting strengths-based approaches previously shown to improve outcomes; (4) Flexibility, through providing a broad framework from which the clinician selects relevant aspects to accommodate varying patient needs and time constraints; (5) A longitudinal perspective, through guiding initial assessment while acknowledging that full detail can emerge over time to align with long-term GP care; (6) Facilitated information synthesis and clinical reasoning; and (7) A direct link between assessment and care planning. Assessments should also have a clear theoretical basis, such as that described previously, be developed with patient and clinician input, and be appropriately validated/evaluated. One approach incorporating these strengths may include a clinician framework outlining suggested domains and questions to guide WPA (supporting multidimensionality/breadth); a corresponding patient survey (supporting the relational process); and a visual tool to assist whole-person clinical reasoning and care planning (supporting information synthesis), with changes assessed over time. It is essential that this process of assessment remains relational, rather than becoming mechanical and systematised. While there is evidence that frameworks improve assessment of otherwise often neglected aspects of the person, there is also a risk that such processes can be ineffective or harmful if their predetermined structure results in missed cues, insensitive questioning styles and controlling interactions. This emphasises the need for a flexible and adaptable framework that is founded on the therapeutic relationship. While many WPA approaches exist, no unifying approach was identified. Most of the identified assessments were developed by independent clinicians. This supports their local relevance, but also results in heterogeneity and may impede evaluation. This situation may reflect a previous failure of academic GP to give sufficient attention to whole person practice. One barrier to such attention may include lack of a shared language and unifying philosophical framework between the traditionally distinct disciplines of biomedicine and social science, which are both relevant to this discussion; this has been addressed in recent work. Academic work may provide an underlying framework to inform medical education, and from which clinicians could adapt locally relevant approaches to WPA. Strengths of this study are its extensive search strategy and inclusion of literature from a range of disciplines to capture a breadth of assessments considered to be ‘whole person’. Specific criteria were developed to assess the usefulness of identified assessments from a GP perspective. The research team comprised experienced generalists from several disciplines (GP, palliative care/ethics, primary care research), providing breadth of insight.
Reliability was strengthened by the involvement of multiple team members in data extraction, quality appraisal and data analysis. Limitations include that some relevant studies may have been missed given the breadth of the topic; reference lists of included studies were searched to help address this. Further relevant studies may have been published since the original search date (March 2020); it was not feasible to update the search prior to publication because of its breadth and size. It is notable that the vast majority of included WPAs originated from the USA or the UK. It is difficult to know whether this reflects a particular interest in the topic in these locations, publication bias or limitations of the review (eg, the English-language restriction). Inclusion of physical, emotional and at least one other domain, together with patient–clinician interaction, is an imprecise way to identify WPAs; however, it was used in initial screening because of the need for clear inclusion criteria. More nuanced assessment of included studies was performed during data analysis, as described above. The language used to describe domains in the papers was taken as stated; limited descriptive information in many papers made a more specific understanding of these terms difficult, so similar aspects of assessments may have been grouped under different domains in our findings. Excluding assessments designed for specific patient groups (eg, disease-specific assessments, narrowly focused discipline-specific assessments, classification tools) may have omitted some assessments that could have contributed to the discussion; however, this was considered necessary to identify the approaches most likely to be clinically useful in GP. Notably, no included assessments detailed Indigenous perspectives. A view of the whole is embedded in Indigenous ways of viewing the world, and it would be valuable to explore these perspectives in future research to add depth to this review's findings. In summary, this research highlights a substantial need for ongoing work to translate the theoretical basis of WPC into a clinical WPA approach for GP. It provides a firm basis for doing so, detailing the strengths and deficiencies of existing approaches to inform the future development of a robust and flexible clinical WPA for GP. Such an approach is urgently needed to address the practical and ethical shortcomings of fragmented, reductionistic approaches to care; it will assist in transforming primary healthcare and meeting the needs of patients with conditions such as multimorbidity and medically unexplained symptoms, supporting General Practitioners to provide optimal WPC.
Verbal autopsy analysis of maternal mortality in Bong County, Liberia: a retrospective mixed methods study
6dd6e1b3-743f-4db8-8cbf-29c07a61f7c7
10124242
Forensic Medicine[mh]
While the medical contributors to maternal mortality in sub-Saharan Africa are well known, the contextual contributors are less well known and understudied. A retrospective mixed methods study using verbal autopsy forms found that limited resources, inadequate skills and ineffective communication were the main contextual causes of the recently increased maternal deaths in Bong County, Liberia. Availability of resources and transportation, expansion and equipping of the healthcare workforce, and improved communication with women and their families, and within and between healthcare facilities, need to be prioritised to prevent similar maternal deaths in future. Maternal mortality is a global health problem, with the majority of maternal deaths occurring in low-income and middle-income countries (LMICs). Liberia has one of the highest maternal mortality ratios (MMR) among LMICs, with 742 deaths per 100 000 live births. While Liberia was making significant progress towards reducing maternal mortality, a 14-year civil war (1989–2003) and the Ebola epidemic (2014–2016) severely challenged the country's infrastructure for maternal care. Prior to the civil wars, Liberia had 293 functioning public health facilities; by the end of the civil unrest in 2003, this number had dropped to 51. Many healthcare workers also fled the country, leaving only 30 physicians to serve a population of 3 million. After the devastating effects of the civil war on the health system, Liberia's government, with assistance from donors and international non-governmental organisations, launched the national health plan in 2007 to improve healthcare services. Maternal, newborn and child health, as well as reproductive and adolescent health, were key components of the plan. Furthermore, the National Health Policy and Plan 2011–2021 aimed to increase the number of high-performing facilities and institutes and to strengthen the workforce to be people-centred, gender-sensitive and service-oriented. With such prioritised efforts, the number of health facilities grew from 51 to 727 between 2003 and 2016. As Liberia was beginning to recover from the civil wars, its health system was overwhelmed by the 2014–2016 Ebola outbreak. Many health facilities were closed due to limited supplies and resources, and utilisation of maternal health services declined significantly. For example, approximately 16 000 antenatal care visits were recorded in January 2012, a figure that had dropped to 4000 by October 2014. Furthermore, facility-based deliveries decreased by 30% or more. Experts cited people's fear of contracting Ebola at health facilities and a lack of trust in the health system as explanations for this underutilisation of services. As an alternative, many women sought services from traditional birth attendants or received no services at all, which increased pregnancy- and childbirth-related deaths and injuries. Compounded by socioeconomic and cultural factors, maternal mortality is highest for unsupervised deliveries occurring in rural and poor communities, and deaths occurring outside health facilities are often not properly examined or recorded. Without proper records, the causes of and contributing factors to death cannot be captured to prevent future deaths and injuries. Bong County is the third most populous county in Liberia and has seen increased maternal deaths in recent years. In 2014 and 2015, there were fewer than 10 maternal deaths each year; between 2016 and 2019, however, maternal deaths increased to approximately 35 each year.
To better understand the various contributing factors and the sequence of events leading to these maternal deaths, Bong County's Community Health Teams (CHTs) conducted verbal autopsies (VAs) on all maternal deaths that occurred in 2019. VA is a method widely used in settings where deaths occur without medical supervision. To understand the factors contributing to the high maternal mortality in Bong County, Liberia, a retrospective mixed methods analysis was conducted using the VA reports. This descriptive retrospective mixed methods analysis examined VAs of 35 maternal deaths that occurred in Bong County, Liberia in 2019. Setting Liberia is a country located in western Africa with a population of 5.19 million. With an MMR of 742 deaths per 100 000 live births, Liberia is approximately 8.2 years behind its national target of 496 deaths per 100 000 by 2021. In 2016, there were 727 health facilities, including hospitals, clinics and health centres, in Liberia. The country has fewer than 1.5 skilled birth attendants per 1000 people, below the minimum threshold of 2.3 skilled providers per 1000 required to ensure that 80% of the population has access to a skilled provider at birth. Bong County is in the north-central portion of Liberia and consists of 12 districts. Of the 15 counties in Liberia, Bong County is the third most populated, with a population of 328 919. There are currently 3 hospitals and 45 rural health facilities in Bong County. Instrument The VA reports were collected using the Maternal Death Investigation and Reporting Form (MDIRF). In Liberia, Maternal and Newborn Death Verbal Audits were not routinely collected until 2016, and there was high variability between CHTs in the quality and content of audit reports prior to this time. In 2016, at the Liberia Maternal and Newborn Health Conference, the Ministry of Health decided to implement mandatory countrywide audit reporting forms for maternal and newborn deaths occurring in any setting, including both health facility and community-based deaths. To this end, the MDIRF was created. When a death is reported, a team of 1–2 members of the reproductive health division of the CHT travels to the location of death or, in the case of a facility death, to the home community of the deceased woman. The MDIRF consists of both closed-ended and open-ended questions in six main sections: (1) reporting information (eg, date of death and investigation), (2) the deceased woman's demographic information (eg, age, marital status, education level), (3) reproductive health history (eg, gravida/parity, antenatal care visits), (4) maternal death circumstances (eg, stage of pregnancy at the time of death, delivery location, referral status, mode of transportation), (5) reported cause of death (primary and secondary causes of death recorded by the healthcare provider present at death) and (6) an overall interpretation of the collected data and recommendations. Sections 1–5 consist of closed-ended questions and are collected from the deceased woman's family members and/or healthcare provider(s). Section 6 uses open-ended questions and is completed by the CHT member who collected the data. All data are first collected on paper and then transferred to an Excel sheet or Word document. These tools aim to generate information on maternal and newborn deaths to inform future decision-making and interventions to reduce preventable mortality.
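For readers who prefer a concrete rendering of the form's structure, the sketch below models the six MDIRF sections as a simple Python record type. This is an illustrative assumption only: the field names are ours, not the official MDIRF labels, and the real form captures many more items per section.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class MDIRFRecord:
    # Section 1: reporting information
    date_of_death: str
    date_of_investigation: str
    # Section 2: demographics of the deceased woman
    age: Optional[int] = None
    marital_status: Optional[str] = None
    education_level: Optional[str] = None
    # Section 3: reproductive health history
    gravida: Optional[int] = None
    parity: Optional[int] = None
    antenatal_visits: Optional[int] = None
    # Section 4: maternal death circumstances
    stage_of_pregnancy: Optional[str] = None
    delivery_location: Optional[str] = None
    referred: Optional[bool] = None
    transport_mode: Optional[str] = None
    # Section 5: reported cause of death
    primary_cause: Optional[str] = None
    secondary_cause: Optional[str] = None
    # Section 6: open-ended interpretation and recommendations
    interpretation: str = ""
    recommendations: List[str] = field(default_factory=list)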
Supplementary data: 10.1136/bmjoq-2022-002147.supp1 Data analysis An interdisciplinary death audit team, consisting of researchers from the University of Michigan and representatives of the CHT, analysed the data together, bringing together investigators proficient in health sciences, systems thinking and the local context. All identifiable information about the deceased women was removed before analysis. The quantitative data from the first five sections of the VA were analysed using Stata 17 (StataCorp). The primary causes of death were coded by three independent, medically trained researchers after reviewing the first five sections. The researchers then compared their codes and discussed them until unanimous agreement was reached. The causes of death were recoded by the researchers for data quality and consistency after an in-depth review of the VA data. Two of the three researchers also reviewed the entire VA dataset, including the qualitative sections, paying special attention to the sequence of maternal death circumstances from the moment a woman decided to seek help until death. The coding was inductive: each investigator independently analysed and developed codes from the VA forms, and the codes were then reviewed, compared, overlapped and categorised into causal factors. Together, the researchers developed a codebook, which was then used to hand-code the data independently. The coded data were compared and discussed until all codes reached consensus among the study investigators. After all the quantitative and qualitative data were analysed, the death audit team collectively developed a list of recommendations on how to address the contextual causes (main themes) and contributing factors (subthemes) to prevent similar deaths.
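The tabulations described above were performed in Stata; as a language-neutral illustration, the short Python sketch below shows one way such percentages can be computed with missing observations kept in the denominator, which is one reading of the Results statement that all percentages took the missing observations into consideration. The category counts here are invented for illustration and are not the study's raw data.

from collections import Counter

# Hypothetical coded records; None marks a missing observation.
causes = ["haemorrhage", "sepsis", "haemorrhage", "hypertension",
          "haemorrhage", None, "sepsis"]

n_total = len(causes)  # denominator includes the missing record
observed = Counter(c for c in causes if c is not None)
for cause, n in observed.most_common():
    print(f"{cause}: {n}/{n_total} = {100 * n / n_total:.1f}%")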
Demographics A total of 35 VA reports were collected from the deceased women's families and the healthcare providers who cared for the women in Bong County, Liberia. The accompanying table presents the demographic data of the deceased women. Twelve (34.2%) of the 35 deceased women were between 31 and 35 years old and 10 (28.5%) were between 16 and 20 years old. The majority of the women were married or cohabiting (71.4%) and half were farmers (51.4%). Sixteen (45.7%) women had more than 5 pregnancies and 11 (31.4%) had more than 5 live births. Among the risk factors, infectious diseases (eg, malaria/fever/syphilis/hepatitis) were most common (51.4%).
Regarding obstetric history, 71.4% of the women had a history of stillbirth, 31.4% had a previous abortion and 14.2% had a previous caesarean section. More than half of the women (60.0%) received antenatal care for the current pregnancy. All percentages for the tables took the missing observations into consideration. Maternal death circumstances All of the women (n=35) travelled to a rural health facility, staffed by a nurse or a midwife, to seek care. The accompanying table presents the maternal death circumstances leading up to death. When travelling from home to a health facility, 14 (40.0%) women took a commercial vehicle, 8 (22.8%) took an ambulance and 6 (17.1%) walked. Travel time to the nearest health facility was 30 min or less for 31.4% of the women. Twenty-one (60.0%) women were referred from a rural health facility to a district hospital capable of providing more comprehensive emergency obstetric care, such as caesarean section and blood transfusion. One-third of these women took an ambulance and another one-third used a commercial vehicle to get to the hospital. Among the women referred from a rural health facility, travel to the hospital took less than 30 min for 5 (23.8%), between 31 min and 1 hour for 3 (14.3%) and more than an hour for 8 (38.1%). Eleven (31.4%) women died before delivery, 2 (5.7%) died during delivery, 14 (40.0%) died within 24 hours post partum and 5 (14.2%) died within 42 days post partum. Among the women who died during delivery or in the postpartum period, 37.5% had a spontaneous vaginal delivery, 33.3% had a caesarean section, 12.5% had an induced vaginal delivery and 4.1% had an assisted vaginal delivery. Only about half of the women (48.5%) delivered at a health facility, but 71% had a skilled provider present during delivery; this indicates that 22.5% of the women had a skilled provider present outside a health facility. The majority of the women died at a rural health facility or a hospital (85.7%). More women died during the rainy season (62.8%), which runs from May to October. One-third of the babies were alive and healthy, two were alive but in critical condition and another seven were stillborn. The primary medical causes of maternal death were haemorrhage (50.0%), sepsis (32.3%), indirect causes (11.7%), hypertension (2.9%) and embolism (2.9%). Contextual causes and contributing factors of maternal deaths The accompanying table shows the tabulated contextual causes and contributing factors of the maternal deaths. Three contextual causes were identified: (1) limited resources, (2) inadequate skills and knowledge and (3) ineffective communication. Limited resources Materials, transportation, facilities and staff were identified as contributing factors under limited resources. Limited materials (28.5%) were the most frequently identified factor, closely followed by transportation (20.0%). Limited material resources included oxygen, medication, intravenous fluids, blood products and equipment, including partographs, at the facilities. In one instance, relatives were asked by providers to prepare the necessary materials for delivery, including donating their own blood because of a hospital shortage. When the relatives refused to donate blood and were not able to prepare the requested materials on time, the woman died. Limited means of transportation, including ambulances, were also significant contributors to maternal deaths. One of the deceased women's providers stated, ‘The patient started bleeding and the ambulance was called at 10:50am.
We were told that the ambulance was down and another ambulance at a close by hospital did not have fuel. At 12:10pm patient was taken in a commercial vehicle but died on route.’ Similarly, there was often only one ambulance available, causing delays for the next woman in need of an emergency transfer. Family members would then try to find commercial transportation, which often took a significant amount of time and family resources. Limited facilities, such as operating rooms and maternity waiting homes (lodging facilities where women who live far from health facilities and/or have high-risk pregnancies can await delivery), contributed to delays in receiving care. One of the deceased women had to wait more than 12 hours to receive a caesarean section because no operating room was available. In other instances, there were no maternity waiting homes where women living far from a health facility could stay and await delivery; they were forced to travel long distances while in labour in order to access services. The lack of adequate numbers of healthcare providers, as well as of the types of providers available, was also mentioned. There were insufficient laboratory technicians to perform the timely and necessary tests for prompt diagnosis and care. Already overburdened providers struggled to provide care quickly and effectively for all patients or to properly supervise and mentor newer providers. Inadequate skills and knowledge Inadequate skills and knowledge included staff education and training (51.4%), patient education (54.2%) and community and family education (25.7%). Multiple VA reports showed that healthcare providers misdiagnosed, or simply did not identify, the complications that led to the adverse outcomes. A deceased woman's sister commented, ‘she stayed in the ER for two days before she was sent to the OB ward. The doctor ordered oxygen but there was no oxygen for her.’ Such incidents reflect both limited obstetric triage and inadequate staff training. One of the providers mentioned, ‘the death could have been prevented if the complication was identified earlier… the intrauterine fetal death was not identified, and the patient was kept in the maternity waiting home for two weeks. The fetus was macerated, and the patient became septic when she was referred to the hospital.’ In another instance, a friend of a deceased woman said that the patient visited the clinic twice but was sent home, where she remained for two days. Limited patient, community and family education was also frequently mentioned (25.7%). The VA analysis identified insufficient education at the community level around the importance of early and frequent antenatal care, the services available, recognition of danger signs and debunking of traditional medicines that are harmful to patients. Many of the deceased women had danger signs such as vaginal bleeding, convulsions, fever, abdominal pain and fast breathing that were not identified by the patients or their families until the symptoms became severe. While these women can and should receive education on the importance of facility delivery and danger signs during antenatal care visits, the family and community members who are often in close proximity to the women also need training to recognise danger signs and to bring women to a healthcare facility, or to support them when they need emergency care. One instance detailed a severely ill woman who required transfer from the rural health facility to the hospital.
When the parents and the relatives were informed, they refused, and she later went home with the family. Her symptoms worsened and she died en route while being transferred to the hospital. In another instance, a family member reported that they realised something was wrong but did not know how to properly seek help. Both deaths could have been prevented if the family and the community members had also been aware of the danger signs and had known the appropriate next steps for seeking care. Ineffective communication Ineffective communication between providers, between health facilities and hospitals, and between providers and patients/families was also identified as a contributor to maternal deaths. Between providers, information and updates regarding patient care were not communicated effectively. In one instance, a patient spent 23 hours in the emergency room (ER) before the ER nurse informed the on-call doctor. There were significant delays in communicating patient histories from the rural health facility to the hospital and between the nurses and the doctors. In many cases, the doctor would place an order and the nurse would deliver the care, but the nurse would not update the doctor on the patient's status afterwards, nor would the physician follow up and tell the nurse whether he/she had checked on the patient. Patients and patient families did not receive sufficient updates regarding patient status to make collective decisions about patient care. In cases where patients and family members were informed, insufficient explanations of the potential consequences were often given. Hence, there were multiple instances where the family was told to transfer the patient to a hospital without further information on what to tell the healthcare providers once they arrived at the facility. In other instances, patients and family members would not disclose full information to the providers. One of the providers commented, ‘the woman initially denied any form of induced abortion… about two days after admission, she admitted that she had two herbal concoction a week prior to coming to the hospital’. As such, failed communication at multiple levels contributed to maternal deaths. The accompanying table presents the contextual causes, the contributing factors and the death audit team's recommended means of addressing each contributing factor.
While the top causes of maternal mortality worldwide are most often listed as major medical complications, including haemorrhage, sepsis, hypertension and unsafe abortion, this study enhances our understanding of the contextual causes that contribute to maternal outcomes for women presenting with these complications. This analysis of 35 cases of maternal death confirmed that limited resources, inadequate skills and limited/ineffective communication were the main contextual, non-medical causes of maternal mortality. It presents a comprehensive understanding of the contributing organisational, sociotechnical and structural factors that affect obstetric outcomes. Similarly, it provides data on how health system structures, providers and community factors contribute to the overall quality of care. Limited materials at the healthcare facility, including oxygen, intravenous fluids, medications and equipment, were most often cited under limited resources. Adequate supplies are fundamental to providing quality care to obstetric patients, and supply chain planning and increased accountability in the healthcare system to assure the availability of resources are crucial to improving maternal outcomes. Similar to our findings, another study, of 30 maternal deaths in Indonesia, found that lack of equipment and supplies contributed to 23% of maternal deaths. Limited transportation and limited availability of maternity waiting homes for women who live far from a health facility were also identified as contributing factors. More women took commercial vehicles than ambulances when travelling from home to a rural health facility.
One-third took commercial vehicles and another one-third took ambulances from the rural health facility to the hospital, indicating the scarcity of ambulances and the burden placed on patients and their families to secure transportation in an emergency. Furthermore, 22.5% of the women delivered outside a health facility with a healthcare provider present, indicating that these patients could not reach the facility in time and that providers had to travel to them to provide care. Stronger emphasis on birth planning, together with community engagement to identify potential sources of transportation, or short-term loans to help pay for transportation should a woman from a rural area need to be transferred, could help address this issue. In a study conducted in Botswana, a barrier to accessing health services was identified in half of 82 maternal deaths. Inadequate skills and knowledge were pervasive from the community level to hospital staff. Rural health facility and hospital staff need continuing education in assessment and triage skills. Our data showed that haemorrhage and sepsis together accounted for more than 80% of the medical causes of maternal death, both of which could have been addressed through clear protocols, competent skills and timely provision of care. Hence, regular quality improvement processes, including team drills, can contribute to problem-solving and keep staff updated on evidence-based practice. Inclusion of husbands or partners in antenatal care has shown a positive association with women receiving antenatal care from a skilled provider, delivering at a health facility and seeking care for obstetric complications from a healthcare provider. Men often control the resources needed for transportation to a higher-level facility and need to be better involved in the pregnancy and delivery process. Including men and family members in knowledge acquisition and in planning for a birth, using culturally appropriate methods and materials, could fill this gap. How women and families obtain and use health information to make decisions is complex and is influenced by the social determinants of health. Finally, effective communication must be strengthened at all levels, including between providers at the same facility, between rural health facilities and district hospitals, and between providers and patients/families. There are numerous possibilities for electronic communication. Low-cost mobile devices have the potential to revolutionise access to seamless healthcare across the continuum in low-resource settings. Mobile instant messaging applications such as WhatsApp have proliferated globally over the past several years; there are currently 1.5 billion users of the free WhatsApp platform in 180 countries. Electronic communication via messaging applications can not only enhance communication between providers and health facilities but also simplify and clarify the documentation process. Clear and complete documentation is necessary to keep track of patient progress and the plan of care; hence, the use of messaging applications can be an innovative way to keep all providers on the same page regarding each patient's status. Follow-up on all maternal deaths must be ensured so that recommended actions can be developed to prevent future deaths. Limitations This study has several limitations. First, it is subject to recall bias since it is a retrospective analysis; some interviews with family members, community members and healthcare providers took place months after the incident.
Second, while the CHT interviewed the relevant stakeholders for each death, they may have missed important stakeholders who could have provided additional information. Third, because this is a case series specific to Bong County, Liberia, generalisability may be limited. Despite these limitations, this study provides critical insights into the contextual causes contributing to maternal deaths. While the medical causes of maternal mortality are well known, the contextual causes are less understood and understudied. This retrospective mixed methods study analysed 35 maternal deaths using VA forms. It found that limited resources, inadequate skills and ineffective communication were the main contextual causes of the recently increased maternal deaths in Bong County, Liberia. Improved supply chains, healthcare system accountability and community engagement are critical to assure the availability of the resources and transportation needed to address reproductive emergencies. Liberia must also prioritise expanding the healthcare workforce and strengthening healthcare providers' skills, knowledge, and interfacility and intrafacility communication through continuing education and team drills. Widespread education and inclusion of the women's husbands, families and communities are also critical. Finally, innovative means such as electronic messaging applications also need to be incorporated for effective communication and documentation. Patient and public involvement When and how were patients/public first involved in the research? The Bong County Health Team (BCHT) first brought up the idea of conducting verbal autopsies on the deceased women to better understand the contextual causes of maternal deaths. How were the research question(s) developed and informed by their priorities, experience and preferences? The overall research question was raised by the CHT, which asked academic partners to assist with data analysis and interpretation. The data collection process heavily involved the CHT, and the entire study was conducted collaboratively. How were patients/public involved in design, choice of outcome measures and recruitment for the study? The design, outcome measures, tools used and recruitment (of healthcare providers and the deceased women's families) were pre-established by the Liberian government and the CHT, as described in the Methods section of the manuscript. How were (or will) patients/public be involved in choosing the methods and agreeing plans for dissemination of the study results to participants and linked communities? We have discussed and agreed on how to disseminate the findings in academic settings (journals; academic partners in charge) and in local settings (local communities and administrative leadership; BCHT in charge).
Creation of 3D-Printable Cardiac Embryological Models for Teaching Anatomy and Embryology
0cabe20f-d530-4521-92fc-c0b3e80866e7
10124573
Anatomy[mh]
Medical students have great difficulty visualizing embryological structures and understanding morphological development. Studies report that students commonly regard Embryology as a difficult subject and do not feel confident in the knowledge they acquire. Traditional embryology learning involves reading textbooks and interpreting flat images, which hinders spatial perception and understanding of the process of embryological formation. Three-dimensional (3D) modeling technology commonly serves engineering, architecture, game development, and film. Developing 3D models consists of creating and connecting vertices to form polygonal meshes. These meshes allow visualization in perspective and can be colored, textured, and animated by the designer, and subsequently printed. Creating 3D models can be highly advantageous in Embryology, as studies have shown 3D technology to be a helpful tool for teaching anatomy and for planning complex surgeries. Considering the complexity of heart development and the difficulty most students face in learning cardiac embryology, this work reports the development of 3D models to facilitate medical learning, seeking to demonstrate cardiac looping and atrial and ventricular septation, critical points in the development of the heart. This is a descriptive, observational study. We report the results of creating 3D models for teaching cardiac embryology, considering the evidence in the literature on the benefits of 3D technology for understanding cardiac embryology. The work began with a literature review on cardiac embryology to guide model creation, drawing on images from textbooks in the field, medical teaching handouts, and scientific articles. Using Blender®, an open-source 3D modeling software package, meshes were created following the references obtained, reproducing cardiac embryological models. These models were created, textured, and animated on a standard PC with an ordinary graphics card. Subsequently, nine of the 15 models created were printed on an AnyCubic Kobra printer using white 1.75 mm PLA filament; printing took approximately 2.5 hours per model. The files are freely available for download at: https://github.com/daviyahiro/cardiac-embryological-models. Fifteen models were created, showing: the joining of the cardiac tubes, cardiac looping, formation of the endocardial cushions, atrial septation, the foramen primum, the foramen ovale, and ventricular septation. In addition, these models made it possible to create two animations demonstrating, step by step, cardiac looping and atrial septation, similar to the images in teaching materials but with depth. The animations, saved as .mp4 files, can be found at the same address as the models. The models were subsequently printed to enhance the teaching experience by allowing concrete interaction with the object. The corresponding figure shows the folding of the heart, from the joining of the tubes, through the formation of the C-shaped curvature, to the final loop; the models can also be manipulated in suitable software to change the viewing perspective.
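As a minimal sketch of the modeling-to-printing pipeline described above, the snippet below runs inside Blender 3.x (whose scripting interface is the Python module bpy, with the bundled STL exporter enabled): it builds a toy polygonal mesh from vertices and faces and exports it to STL. The geometry is a placeholder pyramid, not one of the published embryological models, and the output path is hypothetical.

import bpy

# Define a tiny closed mesh: a square base plus four triangular sides.
verts = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0), (0.5, 0.5, 1)]
faces = [(0, 1, 2, 3), (0, 1, 4), (1, 2, 4), (2, 3, 4), (3, 0, 4)]

mesh = bpy.data.meshes.new("embryo_stage_demo")
mesh.from_pydata(verts, [], faces)  # (vertices, edges, faces)
mesh.update()

obj = bpy.data.objects.new("embryo_stage_demo", mesh)
bpy.context.collection.objects.link(obj)

# Export the scene to STL, the format used for slicing and 3D printing.
bpy.ops.export_mesh.stl(filepath="/tmp/embryo_stage_demo.stl")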
Another figure presents the formation of the atrial septum in its successive stages, showing the septum primum, septum secundum, foramen primum, foramen secundum, and foramen ovale. Although the images show the same angle, the models can be rotated according to the user's needs. The nine models, at different stages, were printed for use in teaching and are shown in the corresponding figures. A correct understanding of cardiac development is a fundamental step in identifying and managing the various congenital malformations of the circulatory system. 3D models provide a perspective and depth that are not possible in textbooks or flat images. These models are easy to access and view, since the files are saved in STL format and can be manipulated on a mobile phone, in free applications such as ViewSTL®, or on a computer through online viewers; they can even be imported into Virtual Reality technology for a richer experience. Moreover, the printed models are low cost, since PLA or ABS plastic filament is used for printing. This allows detailed visualization, aiding undergraduate teaching and communication with patients and families about cardiac malformations. The literature reports improved teaching with models made of cold porcelain (biscuit) or modeling clay, but 3D prints can be reproduced in greater quantities and with less wear. In addition, 3D prints offer a possible solution to the difficulty of obtaining anatomical specimens, which constrains some teaching institutions. The models are also useful for creating animations and videos that show the formation of the atrial septum in perspective, providing a better understanding of the sequence of embryonic development. Studies show that visual materials complement teaching and help engage students in Embryology, especially in cardiac embryology. 3D models offer advantages in reproducibility and can be made available online for use across multiple institutions. The technique is highly versatile, with potential use in animations and videos that support learning. Creating models of other embryological structures, or even models of congenital diseases, could contribute further to medical education. We hope that the 3D models created will improve cardiac embryology education through the visual and tactile experience they provide.
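Because the published files are plain STL, as noted above, they can also be inspected programmatically. The sketch below assumes the third-party numpy-stl package (pip install numpy-stl) and a locally downloaded model; the filename is hypothetical.

from stl import mesh  # provided by the numpy-stl package

m = mesh.Mesh.from_file("cardiac_looping_stage1.stl")  # hypothetical filename
print(f"triangles: {len(m.vectors)}")   # number of triangular facets
print(f"bounds: {m.min_} to {m.max_}")  # axis-aligned bounding box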
Rustrela Virus as Putative Cause of Nonsuppurative Meningoencephalitis in Lions
d1a8f543-0d63-48a1-a72f-b9a91efe48ac
10124629
Anatomy[mh]
We retrospectively investigated 3 lions for the presence of rustrela virus (RusV). The lions were identified in 2 zoos in northern and western Germany; they exhibited neurologic signs and nonsuppurative meningoencephalitis. Lion 1 died in 1980 in a zoo in Lower Saxony, whereas lions 2 and 3 were submitted for pathological examination in 1989 by a zoo in North Rhine-Westphalia. All 3 lions displayed a mild, multifocal, lymphohistiocytic meningoencephalitis and vasculitis (panel A) and occasional glial nodules. Inflammatory infiltrates were most prominent in the cerebral gray matter and less prominent in cerebral white matter, cerebellum, and meninges. The spinal cord was not available for analysis. We tested archived formalin-fixed, paraffin-embedded (FFPE) tissues for the presence of RusV RNA and antigen by quantitative reverse transcription PCR (qRT-PCR), in situ hybridization (ISH), and immunohistochemistry (IHC). We included FFPE tissues originating from 8 lions without nonsuppurative meningoencephalitis (lions 4–11) as controls. FFPE brain samples from lions 1–3 tested positive for RusV RNA by the broadly reactive qRT-PCR assay panRusV-2. Cycle quantification (Cq) values were 29–38. We detected no RusV RNA in central nervous system (CNS) samples from any of the 8 control animals (Table 1). We determined a partial RusV genome sequence 409 bp long for all 3 RusV-positive animals by Sanger sequencing of overlapping RT-PCR products. The sequences shared 97.8% nucleotide identity; phylogenetic analysis revealed all 3 sequences to form a single clade together with the sequence from a domestic cat in Hannover, Lower Saxony, in 2017. Of note, this subclade was more closely related to sequences from cats with staggering disease in Austria than to sequences from zoo animals, domestic cats, and wild rodents in northeastern Germany. IHC investigation for the presence of RusV capsid antigen using monoclonal antibody 2H11B1 revealed multifocal, cytoplasmic, granular reactions, predominantly in cerebral cortical perikarya and their axons, in a few astrocytes, and in Purkinje cells of all 3 PCR-positive lions (panel B). Likewise, we detected RusV-specific RNA using a newly designed ISH probe in the brains of lions 2 and 3, but not of lion 1. We found viral RNA as a cytoplasmic granular signal in cortical perikarya (panel C). We observed RusV-specific capsid antigen and RNA in cerebral cortical neurons adjacent to perivascular infiltrates and also in neurons in more distant areas not associated with inflammatory changes. Neither IHC nor ISH revealed positive signals in any of the examined peripheral organs of the 3 RusV-positive animals or of RusV-negative lion 7 (Table 1), or in the CNS of control animals. IHC staining for dsRNA using the dsRNA antibodies K1 and J2 provided positive results in the CNS of all tested animals (Table 1). Immunolabeling with anti-dsRNA antibody 9D5 remained negative for all 3 RusV-positive animals, whereas the RusV-negative lions 7 and 9 tested positive (Table 1). The results of this study strongly indicate RusV as the potential cause of fatal lymphohistiocytic meningoencephalitis in lions from Germany in the 1980s. The animals were reported to have had neurologic disorders characterized by fever, depression, ataxia, and prolapse of the tongue. These clinical and histopathological findings are similar to those described previously for RusV-infected zoo animals and domestic cats; they also resemble RuV-induced encephalitis in humans.
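As an aside on the identity figure reported above, pairwise nucleotide identity is simply the share of matching positions across aligned, ungapped columns. A minimal sketch with placeholder sequences (not the actual 409-bp reads):

def pairwise_identity(a: str, b: str) -> float:
    # Percent identity over aligned columns where neither sequence has a gap.
    assert len(a) == len(b), "inputs must come from the same alignment"
    pairs = [(x, y) for x, y in zip(a, b) if x != "-" and y != "-"]
    return 100.0 * sum(x == y for x, y in pairs) / len(pairs)

print(f"{pairwise_identity('ACGTACGTAC', 'ACGTACGAAC'):.1f}% identity")  # 90.0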
A partial colocalization of RusV antigen and RNA detection with histopathologic lesions has been observed previously. Although the pathogenesis of RusV infection has not been elucidated, a virally triggered immune response that remains present even after focal virus clearance may provide an explanation for this phenomenon. In addition, vasculitis caused by a type III hypersensitivity reaction should be considered. The lack of viral antigen and RNA in organs other than the CNS of the infected lions is consistent with previous findings in RusV-infected zoo animals of other species, in which RusV RNA was predominantly detected in the CNS and only sporadically in other organs. These results indicate a strong neurotropism of RusV in lions as well. In this study, we consistently detected RusV RNA and antigen in the affected animals using 3 independent methods (qRT-PCR, ISH, and IHC). The lack of viral RNA detection by ISH in the brain of lion 1, which was positive by qRT-PCR, could be a result of the lower sensitivity of ISH or of a higher degree of RNA degradation in this >40-year-old sample. Furthermore, the crosslinking of proteins caused by formalin has been shown to influence the quality and accessibility of DNA or RNA in FFPE material. Immunohistochemical investigation for dsRNA revealed positive results in the brains of all investigated lions, regardless of their RusV infection status. Although it is possible that the RusV-negative lions were infected by other neurotropic RNA viruses, this scenario appears unlikely because no CNS lesions were observed in control animals 4–11. Thus, this method appears unsuitable for reliable detection of viral dsRNA in the brains of lions and perhaps other animals. In summary, our study reveals that RusV was present in northern and western Germany in the 1980s. Detecting RusV in lions indicates an even broader host range of RusV, encompassing a variety of different species, and suggests that other wild and captive felids may be susceptible to RusV infection. As described previously, fulfilling Henle-Koch's postulates by experimental reproduction of the disease has not been possible because of the lack of RusV isolates. Nevertheless, the association between RusV detection and disease demonstrated in this study, combined with previous studies on RusV infections in zoo animals and domestic cats, strongly suggests RusV as a causative agent of meningoencephalitis in lions.
Presence of
8ec8d818-2fcc-4cde-8e1d-2cd8d6861bf0
10124662
Microbiology[mh]
Telehealth Policy, Practice, and Education: a Position Statement of the Society of General Internal Medicine
96806c51-2d3e-402b-900e-1621347a5a4b
10124932
Internal Medicine[mh]
Telehealth services expanded tremendously during the COVID-19 pandemic, when the Centers for Medicare and Medicaid Services (CMS) approved temporary waivers and flexibilities, enabling delivery of telemedicine (synchronous audio-video, or audio-only encounters) to patients in their homes, and outside of rural and geographic professional shortage areas. These waivers, initially tied to the Public Health Emergency (PHE), also allowed audio-only reimbursement and full payment parity for audio-video visits. The Congress has passed legislation to temporarily extend some flexibilities through 2024. However, permanent telehealth policy has not been established. The impact of expanded telemedicine can be analyzed using the quintuple aim (patient experience, health outcomes, cost, clinician well-being, and equity). Most studies suggest telemedicine has improved care experiences. Providers report high satisfaction with telemedicine, and patients regardless of sociodemographic traits also have high satisfaction with telemedicine. Early systematic reviews for specific scenarios found that telemedicine resulted in similar outcomes and cost to in-person care. Multiple studies have reported higher appointment adherence with telemedicine, suggesting potential reduced costs associated with waste. However, long-term data on clinical outcomes, costs, and equity remain unclear. These outcomes will depend on reimbursement policies and implementation strategies that influence how, when, and to whom telemedicine is delivered. If properly supported, telemedicine can be a powerful tool that advances the quintuple aim. Without thoughtful policies, implementation strategies, and training, telemedicine can inadvertently facilitate less equitable, low-quality care. The PHE has provided an environment to identify policies that should be extended and areas where guidelines and infrastructure are needed. The Society of General Internal Medicine (SGIM) is a member-based professional association of more than 3000 general internists committed to expanding access to healthcare, advancing equity, improving medical training, and creating a just system of care where all people can achieve optimal health. Our members practice general internal medicine and medical specialties in various settings, including the Veterans Health Administration (VA), where telemedicine has a longstanding footprint; urban safety-net clinics, where telemedicine was first introduced during the PHE; and academic clinics, where medical students and residents learn to deliver virtual care. Telemedicine utilization during the pandemic was highest among cognitive specialties including internal medicine, pediatrics, family medicine, and psychiatry. For any field in which longitudinal relationships, counseling, shared decision-making, and chronic disease management are paramount, telemedicine is a critical tool that must be preserved. In this position statement, we share case studies of patient-clinician telemedicine interactions in the ambulatory setting, identifying relevant literature to date and highlighting where key telehealth policies are needed. We provide recommendations on behalf of SGIM in three domains: policy, clinical practice, and education. This position statement was developed by the SGIM Clinical Practice Committee, Health Policy Committee, and Education Committee, and was approved by SGIM Council on January 6th, 2023. Case 1: Access Across the Spectrum of Care
Mr. T is a 55-year-old man with diabetes, hypertension, and prior stroke who has missed three in-person appointments. His primary care physician (PCP) offers an audio-video visit, at which point Mr. T reports that he has had multiple falls and is having trouble getting to his diabetes follow-up appointments. The PCP conducts a detailed medication reconciliation as Mr. T's wife displays bottles and discards multiple duplicates. During subsequent visits, the clinic's pharmacist reviews Mr. T's home blood sugar readings over the phone and titrates his insulin. An occupational therapist then conducts a virtual home safety assessment and recommends mobility aids for safe ambulation. Finally, the clinic's social worker arranges transportation so that Mr. T can attend a face-to-face appointment with the PCP, including a diabetic foot exam and immunizations. This case highlights how telemedicine can benefit patients with complicated medication regimens, uncontrolled chronic diseases, and mobility or transportation challenges. Telemedicine facilitates management of chronic diseases (e.g., hypertension, diabetes, heart failure) which require frequent visits and team-based care. Not all visits need in-person evaluation, especially if the patient conducts home monitoring (e.g., blood pressure, blood glucose, weight), which is a guideline-recommended component of high-quality care. Studies demonstrate improved quality of care for patients with diabetes who had at least one telemedicine visit compared to those with in-person only. Additionally, telemedicine visits are well-suited for detailed "brown bag" medication reconciliation. Audio-video visits allow clinicians to visualize medications and how patients manage, store, and organize them in their home environment. Transitions of care are particularly risky; studies estimate that more than half of patients have a medication error at hospital discharge and nearly 1 in 5 experience an adverse drug event. One study found that post-discharge telemedicine visits have higher completion rates than in-person, and reduce inequities in attendance between Black and White patients. Finally, in-person visits can be difficult for patients with mobility or socioeconomic challenges who lack reliable transportation. While telemedicine was available for rural patients prior to COVID-19, mobility and transportation challenges exist for patients outside of rural areas, who also benefit from telemedicine access. Case 2: Telemedicine for Behavioral Health Ms. R is a 32-year-old woman with an opioid use disorder. After brief incarceration for drug possession charges, she seeks treatment at a nearby recovery center. A counselor recommends medication therapy, but no physicians within the geographical area provide the necessary services. Furthermore, the recovery center has connected her to a vocational program requiring substantial time, so she is unable to travel to another area for treatment. She schedules an audio-video visit with a telemedicine-based opioid therapy (TBOT) physician, who prescribes home buprenorphine induction and emails written instructions. The program follows her closely via regular audio-video visits when she is at her transitional housing center, and phone visits when she is at her vocational training program and unable to access the internet. Treatment for substance use disorders (SUDs), with challenges related to inadequate access to knowledgeable clinicians or need for frequent monitoring, has benefited immensely from the expansion of telemedicine.
The opioid epidemic worsened during COVID-19. Telemedicine is increasingly recognized as a vital component of the response and is associated with improved outcomes, including higher retention in care and lower likelihood of overdose. Prior to the pandemic, TBOT for SUD was used in rural areas. During the PHE, this model was introduced to other populations who benefit from low-barrier access, including those in transitional housing and those recently incarcerated. Initial treatment with opiate agonist therapy requires frequent visits, which, if required to be in-person, can inhibit engagement in job and recovery-related activities that facilitate improved outcomes. Telemedicine enables engagement in these critical activities and evidence-based medical treatment. Similarly, individuals with behavioral health conditions such as depression and anxiety disorders often are unable to access care from a trained clinician, and benefit from frequent counseling/monitoring, for which face-to-face visits may be impractical or prohibitive. Telemedicine has substantially reduced barriers to access. The Congress has permanently expanded telemedicine for behavioral health beyond the PHE, including audio-only care when appropriate. Separately, the Department of Health and Human Services has proposed to permanently expand telemedicine (including audio-only) for buprenorphine treatment from Outpatient Treatment Programs. However, this does not apply for patients receiving methadone (which was never authorized during the PHE). As Black patients are more likely to receive methadone, these partial expansions exacerbate disparities stemming from practices and policies that are biased and racist in nature. Furthermore, a substantial portion of behavioral health and SUD treatment comes from primary care, including initial diagnosis and treatment, and ongoing management. Yet the expanded billing codes enabling permanent behavioral health and SUD telemedicine payment are not applicable for primary care services, which provide integrated, whole-person care concomitantly addressing SUD, behavioral health, other chronic diseases, and their substantial interplay. This distinction of covered services cripples the intended effect of these policies, as does the distinction between behavioral and physical health conditions. Chronic medical conditions such as hypertension and diabetes share similar features with depression and SUD treatment, i.e., frequent interactions, discussing patient-reported symptoms and readings, counseling, and medication titration, and are well suited for telemedicine care. Case 3a: Audio-only—a Critical Tool to Improve Equity Ms. R is a 47-year-old female with diabetes and multiple hospital admissions for diabetic ketoacidosis due to running out of medications, who is currently experiencing homelessness. Her PCP offers her a virtual visit. She does not have a working phone but is able to borrow her sister's. She is residing in a public outdoor setting without reliable internet access, so she opts for an audio-only appointment. During the call, the patient and PCP formulate a new treatment plan including mailing prescriptions to her sister's home and having a community health worker call her sister to coordinate connection to housing services. Six months later, she had moved into an independent home, was taking her medications more regularly, and had no further hospital admissions.
Telemedicine uptake has been highest in neighborhoods with the most social deprivation, and is a critical tool to reduce disparities. While synchronous audio-video visits can provide a richer clinical experience than audio-only, numerous barriers exist. As practicing internists in a variety of primary care settings, we have encountered countless situations where telephone was the only option to reach patients. Studies have demonstrated that patients without access to mobile devices or adequate broadband service, older patients, non-English speakers, and those without a private location (such as in smaller, multi-tenant dwellings) faced significant obstacles to the usage of audio-video visits. Digital redlining has been well described, with lower broadband availability in rural areas, low-income neighborhoods, and Black and other racial and ethnic minority communities. These patients are more likely to engage in audio-only appointments, while White patients living in higher-income areas are more likely to attend audio-video visits. These digital inequities highlight the critical role that audio-only and simple technological solutions play in preserving access to care, especially for those already facing structural inequities. A 2021 survey found that in many cases audio-only visits were as likely as audio-video to resolve patients' issues. Another study found that pharmacist-led audio-only medication reconciliation post-discharge reduced 30-day readmissions by 70%. Phone visits also enable frequent collection of home data (e.g., blood pressure or glucose) and medication adjustments for chronic disease management. The provision of audio-only services to populations affected by digital redlining and other broadband inequities can significantly improve access to care, positively influence upstream social drivers of health, and help address longstanding health disparities that were amplified by the COVID-19 pandemic. Case 3b: Pitfalls—Bias and Double Standards Ms. D is a 78-year-old Black veteran who recently relocated. She does not have access to public transportation for an in-person intake and is offered a telephone visit. During the phone call, her new physician reviews her medical history and performs a medication reconciliation, but Ms. D does not feel that she has made a personal connection with her PCP. After the visit, she sends a patient portal message asking if the clinic offers audio-video visits. Upon review, an intake audio-video visit was not offered. While audio-only access is critical for improving equity, audio-video visits allow for visual cues that enhance relationship-building, communication, and patient understanding; they facilitate more accurate clinical assessments and remain the standard of care. While many studies have focused on patient-level barriers, one study found that practices and clinicians respectively contribute to 38% and 26% of variation in audio-video use, in comparison to patient-level factors (9%). This reinforces the importance of considering the multiple levels (individual, family, community, systems) at which barriers to digital health equity exist, including implicit bias by clinicians and ancillary staff (e.g., ageism and racism). It also demonstrates the need for standardized decision support tools that ensure offering and provision of the appropriate service (audio-video, audio-only, or in-person visits). Case 4: Inappropriate Use—the Need for Clinical Guidelines Mr. A is a 19-year-old male who develops abdominal discomfort.
He starts taking over-the-counter ibuprofen and calcium carbonate. Subsequently, the pain worsens and he utilizes the urgent care video visit service offered by his insurance plan. The clinician prescribes him pantoprazole for possible dyspepsia and advises him to stop ibuprofen. That night, he wakes up with excruciating abdominal pain and calls 911. In the ED he is diagnosed with a perforated gastric ulcer. Telemedicine is not clinically appropriate for all scenarios. Some require a physical exam and rapid diagnostic testing (e.g., new-onset chest pain, acute shortness of breath, or worsening abdominal pain). Health systems have begun to develop workflows to triage when patients should not be scheduled for telemedicine visits. High-profile deaths after inappropriate virtual care have highlighted the need to expedite the creation of clear standards. Expanding education for trainees and staff clinicians to provide safe care that acknowledges increased diagnostic uncertainty is important to advancing high-quality telemedicine. Faculty development is critical, as most of today's educators have not received formal instruction on teaching "webside manner" or telemedicine-specific assessment tools, although these resources have since been developed.
These cases highlight situations where telemedicine may facilitate achievement of the quintuple aim, particularly regarding improved care experience, equity, and better health from improved access. They also highlight ongoing challenges and uncertainty which must be studied and addressed. SGIM provides recommendations in three domains: policy and payment, implementation and clinical practice, and medical education. Policy and Payment Prior to the COVID-19 pandemic, payors largely viewed telemedicine as a substitute for in-person care in rural areas. Per CMS, "statute requires that telehealth services be so analogous to in-person care such that the telehealth service is essentially a substitute for a face-to-face encounter." The PHE has demonstrated that telemedicine can do more than simply replace in-person encounters due to geographic barriers, including increasing equitable access and extending longitudinal primary care. SGIM strongly urges telemedicine payment and policy changes to preserve safe, high-quality care.
End Geographic, Originating Site, and Distant Site Requirements Pre-pandemic requirements limited who could receive telemedicine services based on where patients resided (rural, health professional shortage areas), where telemedicine services were received (not in the patient's home), and who provided the services (community health centers were excluded). Permanently ending these requirements would allow telemedicine services to be provided to anyone who needs them. We urge the Congress to pass legislation to permanently effect these changes. Audio-Only Evaluation and Management (E/M) Services Must Be Allowed Until Structural Barriers to Video Access Are Eliminated Until all Americans have access to high-speed Internet, appropriate devices, and digital health literacy, audio-only services will remain critical to equitable delivery of healthcare. Even then, patient circumstances and preferences will favor audio-only at times. Many individuals who cannot currently access audio-video services belong to underserved communities that have a high burden of chronic diseases and have been historically discriminated against. Ending provisions for audio-only care will subject them to additional digital health discrimination. While the Congress and CMS have moved to preserve audio-only care for SUD and behavioral health, current provisions do not allow for comprehensive, whole-person care, including management of concomitant chronic conditions, to be addressed via audio-only in primary care settings. We urge the Congress to pass legislation to permanently expand the definition of telemedicine to include audio-only services. No modality of telemedicine should fully replace in-person care, but all forms of telemedicine should be an option for all patients. Accurate Assessments of Whether Telemedicine Services Are Substitutive or Additive Must Be Employed When Estimating Utilization and Cost Presently, the Congressional Budget Office scores telemedicine legislation as additive, stating the cost of reimbursing telemedicine would be in addition to all other services. The evidence base and our experience do not support this. In some circumstances, such as post-discharge visits, telemedicine is fully substitutive. Virginia Medicaid data also demonstrated overall telemedicine care was substitutive, with no increase in costs. In other circumstances, telemedicine leads to improved access and may result in appropriate additional visits for previously foregone care, such as for improved chronic disease management and as a result of improved visit adherence rates. Estimates of cost must reflect the nuanced use of telemedicine that the evidence suggests. Additionally, as evidence emerges about the impact of telemedicine (e.g., whether telemedicine can reduce emergency department visits and hospitalizations), these factors should be considered. Finally, beyond the direct impact on federal healthcare spending, a broader societal view must be considered, including reduced fuel and transportation costs and greenhouse gas emissions with use of telemedicine. New Telemedicine Service Codes Must Be Developed to Allow Fair and Evidence-Based Reimbursement Telemedicine is predominantly used in cognitive specialties through E/M services, which have been inadequately reimbursed for decades in Medicare's Physician Fee Schedule. Qualified professionals should be allowed to demonstrate the intensity of their medical decision-making within a robust set of Medicare E/M service codes that are appropriately reimbursed.
We urge CMS to define new telemedicine service codes, including specific codes for audio-only services, which accurately reflect workload. Even as payment shifts to value-based programs and prospective payments, accurate valuation of services remains foundational. Telemedicine is a key component of advanced delivery models, allowing patients greater access to clinician and ancillary support for robust disease management. The National Academy of Medicine report on Implementing High Quality Primary Care highlights the importance of paying for teams to provide care, not just physicians to provide services. As such, virtual care provided by all team members, such as clinical pharmacists and diabetes educators, must be supported by telemedicine payment policy. Broadband Internet Must Be Available for All Policymakers must address structural barriers so that audio-video visits are an option for everyone. High-speed Internet is not currently accessible to all Americans. Policies outside of healthcare must be implemented to accelerate access, end digital redlining, and ensure that broadband, a "super" determinant of health, is available to all. Implementation and Clinical Practice SGIM makes the following recommendations for patient-centered clinical use at the individual patient level, and equitable telemedicine implementation at the health system level. Appropriate Clinical Use Audio-Only, Audio-Video, or In-Person Care Should Be Determined by Shared Decision-Making, Incorporating Patient Preferences and Clinical Appropriateness Some care, such as behavioral health counseling, titration of medications, or evaluation of simple acute conditions such as uncomplicated urinary tract infections, may be equally served by all modalities. Other care, such as device or insulin training, may only be appropriate for audio-video or in-person visits. Some cases will require in-person care, such as evaluation of new symptoms that require a physical exam. In other circumstances, such as exacerbation of chronic conditions, the appropriate modality of care will depend on the extent of the relationship and the resources available. For example, heart failure exacerbations may require physical examination and advanced diagnostics—or for patients with a home scale, pulse oximeter, and significant family support, virtual care to titrate diuretics and close follow-up may provide higher patient satisfaction and equal quality of care. Clinical guidelines must be developed and implemented, while still allowing for flexibility based on individual factors. Telemedicine Should Be Used Within the Context of Longitudinal, Trusted Relationships and Should Extend This Relationship, Not Fragment Care In general, telemedicine visits should be used in conjunction with in-person visits, and not as the sole manner of care. However, this must be contextualized for specific patients. In some cases, such as transportation barriers, a harm-reduction approach may dictate more virtual care than otherwise considered optimal. Equitable System-Level Implementation Measure Equity Healthcare systems should build data reporting infrastructure to monitor equitable implementation, including access to care and clinical outcomes of telemedicine services. Design for Equity Clinics and healthcare systems should design their virtual care offerings in partnership with patients and communities that experience disparities, thereby increasing the likelihood that virtual care programs equitably meet the needs of the community.
Implement for Equity By measuring equity and forming community partnerships, clinics and healthcare systems should be well-positioned to identify potential pitfalls and implement strategies to mitigate barriers. The VA "Digital Divide" Consult service is an example of how to address patient-level barriers; this program provides a device and/or training to veterans and establishes agreements with Internet service providers to defer charges for low-income veterans. Additionally, telemedicine services and modalities should be offered to all patients using standard processes to minimize implicit bias. Telemedicine Education Telemedicine education must encompass two distinct domains: providing appropriate resources and support to faculty/educators who themselves never received telemedicine-specific training, and adopting educational strategies/competencies for current trainees. Implement Interactive Faculty Training on Telemedicine Skills Many medical educators are novices in digital health and must first develop mastery of their own skills in order to teach and evaluate. To mitigate well-known barriers to faculty engagement (e.g., time, costs), institutions must provide time and/or compensation for faculty development and adapt external educational telemedicine resources to the individual institution. Faculty development curricula can utilize interactive educational activities for active learning, including skills-targeted workshops and case-specific simulation. The Telehealth Mini-Residency for Providers in development at the VA National Simulation Center employs multiple interactive strategies for faculty participants to incorporate diagnostic reasoning principles into the triage mindset, visit type selection, and adaptation of the virtual physical exam. Develop Telemedicine-Specific Educational Strategies for Trainees that Align with the Association of American Medical Colleges Telehealth Competencies and Accreditation Council for Graduate Medical Education Milestones 2.0 We recommend using the telehealth competencies developed by the Association of American Medical Colleges to guide educational strategies for medical students, residents, and faculty. These include six domains related to patient safety, access and equity, communication, data collection, technology, and ethical practices. In addition, the updated 2021 Accreditation Council for Graduate Medical Education Milestones 2.0 includes a road map of how trainees should be implementing telemedicine visits into their practice.
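As one concrete reading of the "Measure Equity" recommendation above, the sketch below tabulates the share of visits delivered by each modality within each patient group, the kind of report that can surface gaps in audio-video access. It assumes the pandas library and a hypothetical visit extract with race_ethnicity and modality columns; neither the file nor the schema is standard.

import pandas as pd

visits = pd.read_csv("telemedicine_visits.csv")  # hypothetical extract

# Within each patient group, compute the share of visits by modality
# (e.g., audio_only vs. audio_video vs. in_person).
report = (
    visits.groupby(["race_ethnicity", "modality"])
          .size()
          .unstack(fill_value=0)
          .pipe(lambda counts: counts.div(counts.sum(axis=1), axis=0))
          .round(3)
)
print(report)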
To mitigate well-known barriers to faculty engagement (e.g., time, costs), institutions must provide time and/or compensation for faculty development and adapt external educational telemedicine resources to the individual institution. Faculty development curricula can utilize interactive educational activities for active learning, including skills-targeted workshops and case-specific simulation. The Telehealth Mini-Residency for Providers in development at the VA National Simulation Center employs multiple interactive strategies for faculty participants to incorporate diagnostic reasoning principles into the triage mindset, visit type selection, and adaptation of the virtual physical exam. Develop Telemedicine-Specific Educational Strategies for Trainees that Align with the Association of American Medical Colleges Telehealth Competencies and Accreditation Council for Graduate Medical Education Milestones 2.0 We recommend using the telehealth competencies developed by the Association of American Medical Colleges to guide educational strategies for medical students, residents, and faculty. These include six domains related to patient safety, access and equity, communication, data collection, technology, and ethical practices. In addition, the updated 2021 Accreditation Council for Graduate Medical Education Milestones 2.0 includes a road map of how trainees should incorporate telemedicine visits into their practice. The opportunity for patients to receive care in the virtual setting has been paradigm-shifting. Telemedicine has considerably increased access to care, and early evidence supports its ability to advance the goals of the quintuple aim, including improving patient satisfaction, clinical outcomes, and health equity. Policy steps must be taken at many levels to support telemedicine, including Congressional legislation to permanently end geographic and site restrictions and to expand the definition of telemedicine to include audio-only services. Additionally, CMS should create appropriate telemedicine E/M service codes for reimbursement.
Policy outside of healthcare must also be developed to end digital redlining and ensure broadband internet availability for all. At the implementation level, clinical guidelines and decision support tools must be adopted to promote the most appropriate and equitable use of telemedicine. Telemedicine should not be used in all scenarios, nor as the sole modality. However, within the context of longitudinal, trusted relationships, telemedicine can extend and enhance comprehensive care. Additionally, health systems must develop data-reporting infrastructure and community partnerships to measure, design, and implement telemedicine strategies that are meaningful and equitable for the communities they serve. Telemedicine educational strategies, competencies, and curricula must also be developed for training programs and continuing medical education to ensure that future generations of physicians, as well as current physicians trained prior to the COVID-19 pandemic, can deliver high-quality, patient-centered telemedicine care that advances the goals of the quintuple aim.
Glycolytic Genes Predict Immune Status and Prognosis in Non-Small-Cell Lung Cancer Patients with Radiotherapy and Chemotherapy
7676ed82-f82d-4aa7-bf7e-2dd7d127b4a9
10125743
Internal Medicine[mh]
Lung cancer is one of the most common malignancies worldwide and the leading cause of cancer-related deaths. In 2021, there were approximately 2.2 million new lung cancer cases and 1.6 million lung cancer deaths worldwide, making the disease a major public health problem. Non-small-cell lung cancer (NSCLC) is the main pathological type of lung cancer, accounting for 80%-85% of cases. Although immunotherapy and targeted therapy have made significant progress in recent years, chemotherapy and radiotherapy remain the main treatments for patients with advanced NSCLC. At present, the 5-year overall survival rate of NSCLC patients is only 20%, and the survival benefits delivered by past treatment advances have been limited and unsatisfactory. Therefore, more prognostic biomarkers are needed to assess patient prognosis and to establish corresponding individualized treatment. Tumor metabolic reprogramming is one of the main characteristics of tumorigenesis and progression and includes aerobic glycolysis (the Warburg effect), lipid and protein synthesis, and enhanced glutamine metabolism. Altered glycolysis is the principal feature: tumor cells maintain high glycolytic activity even when oxygen is abundant and produce large amounts of lactate. This change promotes tumor proliferation, invasion, and metastasis. Previous studies have established that aerobic glycolysis is closely related to tumor resistance to radiotherapy and chemotherapy. For example, inhibiting PKM2-mediated aerobic glycolysis can reverse 5-FU resistance in colon cancer, and the long noncoding RNA urothelial carcinoma-associated 1 regulates radiation resistance through the hexokinase 2/glycolysis pathway in cervical cancer. These studies suggest that understanding the mechanism of glycolysis may help identify potential prognostic markers. The tumor immune microenvironment (TIME) also plays an important role in chemotherapy and radiotherapy resistance and significantly affects the prognosis of tumor patients. In addition, the tumor immune microenvironment is significantly related to glycolysis: tumor cells use the energy produced by glycolysis to remodel their immune microenvironment and to inhibit immune cell activation and antitumor responses, leading to tumor immune escape. Metabolic competition between tumor cells and immune cells further promotes tumor immunosuppression. Therefore, combined analysis of glycolysis and immune status will help us better understand the prognosis of NSCLC patients treated with radiotherapy or chemotherapy. In this study, we analyzed glycolysis-related genes (GRGs) and studied the effect of glycolysis on the survival and immune status of NSCLC patients undergoing radiotherapy or chemotherapy. In addition, we built a risk scoring model based on GRGs. The model showed good prognostic ability in NSCLC patients and in different clusters. This study may help clarify the potential mechanisms underlying the poor prognosis of NSCLC patients treated with radiotherapy or chemotherapy and provide new ideas for personalized treatment. 2.1. Data Collection We downloaded the LUAD and LUSC datasets from the TCGA ( https://portal.gdc.cancer.gov/ ) and GEO ( https://www.ncbi.nlm.nih.gov/geo/ ) databases as the NSCLC patient data, including the mRNA expression profiles of sequenced samples and clinical prognosis information.
The inclusion criteria were as follows: (1) complete clinical information and gene expression matrix; (2) non-duplicated tumor samples; and (3) receipt of radiotherapy or chemotherapy. On this basis, 116 samples from the TCGA database served as the training cohort, and 49 samples from the GSE42127 dataset in the GEO database served as the validation cohort. In total, 289 GRGs were identified from the "HALLMARK GLYCOLYSIS", "REACTOME GLYCOLYSIS", and "KEGG GLYCOLYSIS GLUCONEOGENESIS" gene sets in the MSigDB database. 2.2. Cluster Analysis First, we used the R package "survival" to integrate gene expression data, survival time, and status, and applied univariate Cox regression to obtain 11 prognosis-related GRGs. Then, we used the R package "ConsensusClusterPlus" to perform cluster analysis based on these 11 genes. The optimal number of clusters (K = 2) was determined from the cumulative distribution function curves. 2.3. Differential Genes and Functional Analyses Differential genes between the two clusters were identified using the t.test function in R, and differential genes with adjusted p value <0.05 and |logFC| > 1.5 were selected. Next, we used the KEGG pathway annotation obtained through the KEGG REST API and the GO annotation of genes in the R package "org.Hs.eg.db" and performed enrichment analysis on the differential genes with the R package "clusterProfiler" to analyze functional differences between clusters. 2.4. Immune Analyses Using the R package IOBR, the ESTIMATE method was selected to calculate the stromal, immune, and ESTIMATE scores of each group of samples, and the TIMER and quanTIseq methods were selected to calculate the immune infiltrating cell scores of each group of samples. 2.5. Risk Score Model Using the R package "glmnet", the survival time, survival status, and expression data of the 11 GRGs were integrated, and the Lasso-Cox method was used for regression analysis. We chose the minimum lambda value of 0.0824 to obtain the optimal model. On this basis, we obtained 7 genes (ACSS1, ERO1A, GPC4, MERTK, PKP2, TXN, and ZNF292) and established a risk scoring model using their expression: risk score = −0.0308 × ACSS1 + 0.1205 × ERO1A − 0.2470 × GPC4 − 0.0063 × MERTK + 0.0497 × PKP2 + 0.0584 × TXN − 0.0491 × ZNF292. The samples were divided into high- and low-risk groups according to the obtained risk scores. Kaplan-Meier survival analysis was used to analyze the difference in overall survival between the two groups, and time-dependent ROC curve analysis was used to evaluate the prognostic predictive value of the risk model. Finally, a multivariate survival regression nomogram was constructed to evaluate the prognostic significance of the risk model, tumor stage, and other characteristics in these samples. 2.6. Correlation Pathway Analysis The enrichment score of each sample for each gene set was calculated using the R package GSVA (Gene Set Variation Analysis), and the hallmark gene sets were downloaded from the Molecular Signatures Database to evaluate the relevant pathways and molecular mechanisms. 2.7. Statistical Analysis Statistical analyses were performed using R software (version 4.0.5) and GraphPad Prism (version 8.0.1). Survival analysis was performed using the Kaplan-Meier method. Differences between two groups were determined by Student's two-tailed t-test. p < 0.05 was considered statistically significant.
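For readers who wish to retrace these steps, the following R sketch outlines the core of the workflow described above: univariate Cox screening, Lasso-Cox selection with glmnet, scoring with the published coefficients, and a log-rank comparison of the resulting risk groups. It is a minimal illustration rather than the authors' code; the objects expr (a gene-by-sample expression matrix) and clin (a data frame of survival time and status) are assumed inputs.

library(survival)
library(glmnet)

# Illustrative inputs (not from the original study code):
#   expr: numeric matrix of normalized expression, genes x samples
#   clin: data.frame with columns `time` (days) and `status` (1 = death)

# (1) Univariate Cox screening of glycolysis-related genes
cox_p <- vapply(rownames(expr), function(g) {
  fit <- coxph(Surv(clin$time, clin$status) ~ expr[g, ])
  summary(fit)$coefficients[, "Pr(>|z|)"]
}, numeric(1))
prognostic_grgs <- names(cox_p)[cox_p < 0.05]

# (2) Lasso-Cox selection; lambda chosen by cross-validation
x <- t(expr[prognostic_grgs, ])
cvfit <- cv.glmnet(x, Surv(clin$time, clin$status), family = "cox", alpha = 1)
coef(cvfit, s = "lambda.min")   # nonzero coefficients define the model

# (3) Risk score with the coefficients reported in Section 2.5
coefs <- c(ACSS1 = -0.0308, ERO1A = 0.1205, GPC4 = -0.2470, MERTK = -0.0063,
           PKP2 = 0.0497, TXN = 0.0584, ZNF292 = -0.0491)
risk  <- as.numeric(t(expr[names(coefs), ]) %*% coefs)
group <- ifelse(risk > median(risk), "high", "low")

# (4) Log-rank comparison of overall survival between risk groups
survdiff(Surv(clin$time, clin$status) ~ group)

The median split used here is one common convention; the paper does not state the exact cutoff used to dichotomize the risk score.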
3.1. Two Clusters Identified Based on GRGs In this study, we selected the NSCLC dataset of TCGA and identified a total of 116 patients who had received radiotherapy or chemotherapy. Based on this dataset, univariate Cox regression was performed, yielding 4709 genes closely associated with prognosis. The top 10 prognosis-associated genes are presented (ranked by p value). The Venn diagram indicated that 11 prognostic glycolysis genes were identified among these prognosis genes (ACSS1, ERO1A, GPC4, PKP2, TXN, MERTK, ZNF292, ALDH3B2, PAM, and RRAGD). The patients in the dataset were divided into two groups using consensus cluster analysis: 59 patients were clustered into cluster 1, and 57 patients were clustered into cluster 2. Cluster 1 (C1) represents the cluster with low expression of GRGs, while cluster 2 (C2) represents the opposite. Patients in cluster 2 had significantly worse overall survival than those in cluster 1 (p < 0.001). These results suggest that glycolytic genes segregate NSCLC patients who have received chemotherapy or radiation into two clusters with different overall survival. 3.2. Differential Gene and Functional Analyses of Two Clusters To further probe the underlying mechanism of the difference in survival between these two clusters, we identified their differential genes and performed functional analysis. There were 3669 significantly differential genes, of which 1766 were upregulated in cluster 2 compared to cluster 1 and 1903 were downregulated (Figures and ). KEGG enrichment analysis showed that the differential genes were enriched in biological functions such as glucose metabolism, immunity, and nicotine addiction (Figures and ). GO enrichment analysis also showed that the differential genes were enriched in biological processes such as immunity and glucose metabolism (Figures and ). These results indicate that the expression of glycolytic genes is closely related to immune biological function. The abnormal immune function caused by these genes may contribute to the poor prognosis of patients with non-small-cell lung cancer after radiotherapy and chemotherapy. 3.3. Immune Analyses Next, we performed an immune analysis of patients in both molecular clusters to explore immune differences between them. The ESTIMATE algorithm showed that patients in cluster 1 had significantly higher stromal (p < 0.001), immune (p < 0.001), and ESTIMATE (p < 0.001) scores than those in cluster 2. The TIMER algorithm indicated that B cells (p < 0.001), CD4 T cells (p < 0.001), macrophages (p < 0.001), and DC cells (p < 0.001) were significantly higher in cluster 1 than in cluster 2. In addition, the quanTIseq algorithm showed that B cells (p < 0.001), M1 macrophages (p < 0.001), M2 macrophages (p < 0.001), neutrophils (p < 0.001), NK cells (p < 0.001), CD8 T cells (p = 0.05), Tregs (p < 0.001), and DC cells (p < 0.001) were more abundant in cluster 1. These results suggest that there are significant immune differences between the two subtypes. 3.4. Establishment of the Risk Score Model Based on GRGs To build a more accurate prognostic model, we used Lasso regression analysis to screen the prognostic glycolysis genes and selected 7 genes (ACSS1, ERO1A, GPC4, PKP2, TXN, MERTK, and ZNF292) with λ = 0.09 as candidate genes (Figures and ). Based on these results, the 7 genes were used to construct a risk model.
The patients were divided into high- and low-risk groups by this risk model, and it was observed that as the risk score increased, patient survival decreased significantly. Kaplan-Meier survival analysis showed that the overall survival of patients in the low-risk group was significantly better than that of patients in the high-risk group (p < 0.001). ROC curve analysis showed that the AUC values for 1-year, 3-year, and 5-year survival were 0.82, 0.75, and 0.72, respectively, indicating that the constructed risk model exhibited accurate predictive power over a 5-year period. 3.5. Prognostic Value of Risk Score Models in Different Clinical Clusters To further evaluate the clinical applicability of the risk model, we analyzed its prognostic value for patients with different clinical characteristics (age, smoking, T stage, N stage, and clinical stage). The prognosis of the high-risk score group was consistently worse than that of the low-risk score group (Figures – ). In conclusion, the prognostic value of this risk model for NSCLC patients treated with chemotherapy or radiotherapy was not perturbed by other clinical characteristics. We next used univariate and multivariate Cox regression to analyze the association between the risk score, other clinical characteristics, and prognosis in NSCLC patients treated with chemotherapy or radiotherapy. Both N stage (HR = 2.294, p = 0.001) and tumor stage (HR = 2.232, p = 0.004) were significantly associated with prognosis. Multivariate Cox analysis showed that the risk score (HR = 4.191, p < 0.001) was the strongest independent prognostic factor in NSCLC patients treated with radiotherapy or chemotherapy. Taken together, these results suggest that this risk model has good prognostic value in NSCLC patients treated with chemotherapy or radiotherapy. 3.6. Risk Score Correlates with Activity of Chemotherapy and Radiotherapy Resistance-Related Pathways Thereafter, the relationship between the risk score and pathways related to chemotherapy and radiotherapy resistance was assessed. Using the ssGSEA algorithm, we found that the higher the risk score, the more active the DNA repair (p < 0.001, R = 0.32), G2M checkpoint (p < 0.001, R = 0.42), mitotic spindle (p = 0.03, R = 0.20), and glycolysis (p < 0.001, R = 0.25) pathways (Figures – ). These results showed that with increasing risk score, the activity of pathways associated with chemotherapy and radiotherapy resistance also increased, suggesting a poor prognosis. 3.7. Construction of a Nomogram Thereafter, we constructed a nomogram integrating the risk model and clinical characteristics to provide a quantitative method for predicting 3- and 5-year OS probabilities in NSCLC patients treated with chemotherapy or radiotherapy, which can then be used in clinical practice. Based on the multivariate Cox regression analysis, the nomogram integrated clinicopathological features and the risk score. The c-index of the nomogram was 0.721, and the 3- and 5-year calibration curves were in good agreement with the standard curve, indicating that the model provided effective predictive performance. Therefore, this risk score-based nomogram can be used in clinical practice to predict the prognosis of NSCLC patients treated with chemotherapy or radiotherapy.
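The paper does not state which software was used to draw the nomogram; the sketch below shows one common way to build an equivalent figure with the rms package in R. The data frame d, with columns time, status, risk_score, and stage, is an assumed input.

library(rms)
library(survival)

# d: assumed data.frame with time (days), status (1 = death), risk_score, stage
dd <- datadist(d); options(datadist = "dd")

# Cox model combining the risk score with clinicopathological features;
# time.inc is set to 3 years so the model can be calibrated at that horizon
fit <- cph(Surv(time, status) ~ risk_score + stage, data = d,
           x = TRUE, y = TRUE, surv = TRUE, time.inc = 3 * 365)

# Nomogram mapping predictors to 3- and 5-year overall survival probabilities
surv_fun <- Survival(fit)
nom <- nomogram(fit,
                fun = list(function(lp) surv_fun(3 * 365, lp),
                           function(lp) surv_fun(5 * 365, lp)),
                funlabel = c("3-year OS", "5-year OS"))
plot(nom)

# Discrimination (c-index) and 3-year calibration
rcorr.cens(-predict(fit), Surv(d$time, d$status))["C Index"]
plot(calibrate(fit, u = 3 * 365, B = 200))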
3.8. Clinical Prognostic Value of Risk Scoring Models in a Validation Cohort Finally, we used the patients receiving chemotherapy in the GSE42127 dataset as an independent validation set for the risk model. The expression of the seven genes is shown in a heat map. Kaplan-Meier survival analysis again showed that patients with higher risk scores had a worse prognosis (p = 0.05). ROC curve analysis showed that the risk score had the best prediction effect at 5 years. The constructed nomogram also demonstrated that the risk score model has considerable value in clinical prognostic work (Figures and ).
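The time-dependent ROC analysis of the validation cohort can be reproduced along the following lines; the timeROC package is one common choice, though the authors do not specify their tool. The vectors time, status, and risk for the GSE42127 samples are assumed inputs.

library(timeROC)
library(survival)

# Assumed inputs for the validation cohort: follow-up time (days),
# event status (1 = death), and the risk score from the 7-gene model
roc <- timeROC(T = time, delta = status, marker = risk,
               cause = 1, weighting = "marginal",
               times = c(1, 3, 5) * 365, iid = TRUE)
roc$AUC                   # AUC at 1, 3, and 5 years
plot(roc, time = 5 * 365) # ROC curve at the 5-year horizon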
Non-small-cell lung cancer is one of the most common and deadly cancers worldwide, with a markedly poor prognosis. Radiotherapy and chemotherapy are the main treatments for patients with advanced NSCLC, but these treatments often fail to achieve satisfactory results. Therefore, there is an urgent need to find potential prognostic markers for NSCLC patients treated with radiotherapy or chemotherapy. Although there have been many studies on NSCLC prognostic markers, most have not focused on patients receiving radiotherapy or chemotherapy and have not addressed radiotherapy and chemotherapy resistance. High glycolytic activity and an abnormal immune microenvironment are two important hallmarks of cancer and are closely related to radioresistance and chemoresistance. In this study, we screened out the prognostic GRGs and divided the patients into two clusters according to their expression levels. The two clusters differed in clinical prognosis and immune scores. These immune scores include T cell, B cell, neutrophil, and macrophage activity scores, which represent the state of the tumor immune microenvironment. They can also serve as a reference for immunotherapy and for evaluating disease progression and the prognosis of comprehensive treatment in tumor patients. In addition, we constructed a risk score model based on GRGs that can accurately predict the prognosis of NSCLC patients undergoing radiotherapy or chemotherapy. Our findings may provide new ideas for the development of treatment regimens for NSCLC patients. First, we selected NSCLC patients who had received radiotherapy or chemotherapy in the TCGA and GEO databases as research subjects and screened out 10 GRGs (ACSS1, ERO1A, GPC4, PKP2, TXN, MERTK, ZNF292, ALDH3B2, PAM, RRAGD), and two clusters were identified based on their expression levels, which differed significantly in overall survival. We then performed differential gene and functional enrichment analyses on these two clusters to explore the underlying mechanisms of this survival difference. KEGG and GO enrichment analyses showed that differences in immune and metabolic functions may mediate the effect of GRGs on the prognosis of patients with NSCLC after radiotherapy or chemotherapy. Therefore, we used ESTIMATE, TIMER, and quanTIseq scores to evaluate the immune infiltration of the two clusters; immune infiltration has been shown to be closely related to the efficacy of radiotherapy or chemotherapy in NSCLC. The results showed that the cluster with higher expression of GRGs had lower immune scores, which may be closely related to poor prognosis. Based on the above results, we used 7 genes (ACSS1, ERO1A, GPC4, PKP2, TXN, MERTK, and ZNF292) to construct a risk model to predict the prognosis of NSCLC patients treated with radiotherapy or chemotherapy.
ACSS1 and ERO1A are highly expressed in tumors and can promote tumor progression and the metabolic changes associated with cancer cell survival. PKP2, TXN, and ZNF292 are also abnormally expressed in tumors and induce radioresistance of tumor cells. GPC4 can activate the Wnt/β-catenin pathway and its downstream targets to increase 5-fluorouracil (5-FU) resistance and cell stemness in pancreatic cancer. MERTK can inhibit the body's immune response against tumors through the inflammatory pathway and the PD-1 signaling axis, as well as by regulating the functions of various immune cells. MERTK inhibitors have been shown to achieve better efficacy in combination with radiotherapy or chemotherapy in glioma, NSCLC, head and neck squamous cell carcinoma, and other tumors. In addition, ERO1A, PKP2, and MERTK have been shown to promote the progression and drug resistance of NSCLC by enhancing the activation of the PI3K, EGFR, and other signaling pathways. In this study, we demonstrated the good prognostic performance of this risk model and confirmed it using the chemotherapy patients in the GSE42127 dataset as a validation cohort. This may aid the clinical management of NSCLC patients treated with chemotherapy or radiotherapy and provide potential targets for individualized treatment. Although our study provides a risk model constructed from GRGs with good predictive performance for the prognosis of NSCLC patients treated with chemotherapy or radiotherapy, it still has several limitations. First, all data in our study come from publicly available retrospective samples, and prospective samples are needed to confirm our results. Second, we focused only on clinical prognosis and did not investigate the specific molecular mechanisms. Third, our research is a bioinformatics study and lacks basic experimental validation. In conclusion, this study clustered NSCLC patients treated with chemotherapy or radiotherapy into two clusters based on GRGs. Functional analysis and immune scores showed that high glycolytic activity can lead to a suppressed immune status and poor prognosis. We also established a corresponding risk scoring model, which we hope will provide new ideas and theoretical support for clinical treatment.
2296f762-3a09-4d0f-85dd-ffcf3172e626
10125782
Anatomy[mh]
Thymic epithelial tumors, including thymoma and thymic carcinoma, are rare, with an incidence of 0.15 cases per 100 000 people per year. The World Health Organization (WHO) classification pathologically classifies thymoma into types A, AB, B1, B2, and B3 based on the morphology of the tumor cells and the relative quantity of immature T lymphocytes; these classifications also reflect the invasive nature and prognosis of thymoma. Thymic carcinoma is known to have a remarkably worse prognosis than thymoma. Type B3 thymoma, the most malignant type of thymoma, is associated with cytological atypia, making its differentiation from thymic carcinoma difficult; however, this differentiation is essential because the treatments of these two diseases are distinct. Patients with advanced-stage or recurrent thymic epithelial tumors are treated with chemotherapy. The National Comprehensive Cancer Network Guidelines version 2.2022 recommend anthracycline-containing regimens, such as cisplatin, doxorubicin, and cyclophosphamide (CAP), for thymoma and regimens containing carboplatin/paclitaxel for thymic carcinoma. Recent reports have indicated the efficacy of treatment using anti-PD-1 antibodies for patients with thymic carcinoma. A phase II clinical trial using the anti-PD-1 antibody pembrolizumab reported more frequent grade 3 or higher immune-related adverse events in patients with thymoma than in those with thymic carcinoma; hence, it is presumed that the clinical application of anti-PD-1 antibodies should be recommended for thymic carcinoma only. CD5, c-kit, and GLUT-1 have been used as markers for differentiating thymic carcinoma from thymoma. The sensitivity of CD5 as a marker for thymic carcinoma ranges from 30% to 70%, whereas that of c-kit is 70%–80%. However, c-kit immunohistochemistry (IHC) is positive in 5%–15% of thymoma cases. The sensitivity of GLUT-1 is 70%–100%, and its specificity is 50%–100%; hence, better markers are required to improve diagnostic accuracy. The diverse functions of cells are determined by different combinations of the numerous RNA transcripts produced from genomic DNA. RNA is translated into various proteins, which are responsible for various cellular functions. Therefore, understanding the amount and type of RNA is essential for inferring cell functions in different diseases. Cap analysis of gene expression (CAGE) is a genome-wide protocol for profiling gene expression at the promoter level by high-throughput sequencing of the capped 5′ ends of mRNAs and long noncoding RNAs. It was used in the FANTOM5 project to analyze gene expression in more than 1800 human samples, including human primary cells, tissues, and cancer cell lines. Moreover, CAGE has been used to characterize various cancer cells, including identifying estrogen-regulated genes in breast cancer cells and resolving androgen receptor signaling in prostate cancer cells. Furthermore, it has been used to find biomarkers that differentiate lung adenocarcinoma from squamous cell lung carcinoma. Pathologically, thymoma and thymic carcinoma have been regarded as part of a continuum of diseases; however, Radovich et al. indicated that these diseases are distinct biological entities with completely different gene expression patterns, suggesting that it may be possible to base the search for biomarkers on differences in gene expression.
To identify candidate differential markers for diagnosing thymic carcinoma and type B3 thymoma, we initially performed CAGE on RNA extracts obtained from a limited number of clinical samples of thymic carcinoma and type B3 thymoma and subsequently performed IHC on an extended cohort for validation. We further examined the function of the candidate marker using a thymic carcinoma cell line to understand its contribution to cell proliferation and anticancer drug sensitivity. Case selection of CAGE and IHC CAGE was performed on available frozen samples from thymic carcinoma ( n = 4) and type B3 thymoma ( n = 3), collected at Juntendo University Hospital between March 2010 and October 2012 (Table ). These seven tumor tissue specimens were collected following a protocol approved by the Institutional Review Board of Juntendo University, and the tissue donors provided written informed consent. In the operating room, 3–5 mm³ cubes of fresh tumor tissue were dissected and immediately placed in 1.0 ml of RNA stabilization reagent (Qiagen GmbH) for 24–48 h at 4°C. Thereafter, the specimens were stored at −80°C until RNA extraction. Total RNA was extracted from frozen tissue sections according to the standard protocol. IHC was performed on specimens from 64 cases (thymic carcinoma, n = 26; type B3 thymoma, n = 38) resected at Juntendo University Hospital between May 1986 and November 2017, including the specimens subjected to CAGE sequencing. The 26 thymic carcinoma cases were classified histologically as squamous cell carcinoma. IHC was also performed on specimens from 22 cases of lung squamous cell carcinoma resected at Juntendo University Hospital between January 2010 and January 2011. Histological diagnoses in the present study were made in accordance with the fifth edition of the WHO classification of thymic epithelial tumors. Two pathologists (TH and KH), blinded to the clinical data, reviewed all stained sections independently. When discrepancies arose, the slides were reviewed using a multiheaded microscope to reach a consensus. CAGE assay CAGE libraries were prepared following the previously described protocol. In brief, the total RNA extracts were subjected to a reverse transcription reaction with SuperScript III (Life Technologies). After purification using RNAclean XP (Beckman Coulter), double-stranded RNA/cDNA hybrids were oxidized with sodium periodate to generate aldehydes from the diols of the ribose at the cap structure and 3′ end, and these were biotinylated with biotin hydrazide (Vector Laboratories). The remaining single-stranded RNA was digested with RNase I (Promega) before the biotinylated cap structure was captured with magnetic streptavidin beads (Dynal Streptavidin M-270; Life Technologies). Single-stranded cDNA was recovered by heat denaturation and subsequently ligated with 3′-end and 5′-end adaptors specific to each sample. Double-stranded cDNAs were prepared using a primer and DeepVent (exo−) DNA polymerase (New England Biolabs) and were mixed so that sequencing with one lane could produce data from eight samples. Three nanograms of the mixed samples were used to prepare 120 μl of loading sample, which was loaded on a c-Bot and sequenced on an Illumina HiSeq2500 sequencer (Illumina). Computational analysis of CAGE data to identify candidate markers The original samples from which individual reads were obtained were identified with the ligated adaptor sequences.
After discarding reads containing a base "N" or matching a ribosomal RNA sequence (U13369.1) with rRNAdust, the reads were aligned to the reference genome (hg19) using BWA (version 0.7.10), and poorly aligned reads (mapping quality <20) were discarded using SAMtools (version 0.1.19). Only libraries with more than 2 million mapped reads were used for further analyses. The robust peak set was used as a reference set for transcription start site (TSS) regions, and mapped reads starting from these regions were used as raw signals for the promoter activities. Differential analyses were conducted using DESeq2 in the Galaxy software ecosystem ( https://usegalaxy.org ). Immunohistochemistry (IHC) IHC was performed on representative formalin-fixed paraffin-embedded (FFPE) tissues. The sections (thickness: 4 μm) were deparaffinized and hydrated. Immunohistochemical examinations were performed using antibodies against CALML5 (A-3; Santa Cruz Biotechnology, 1:50 dilution); CD5 (4C7; Leica Biosystems, 1:100 dilution); c-kit (polyclonal; Dako Cytomation, 1:100 dilution); GLUT-1 (18901; Immuno-Biological Laboratories Co., Ltd, 1:300 dilution); and terminal deoxynucleotidyl transferase (TdT) (EP266; Agilent Technologies, prediluted), following the manufacturers' recommendations. Immunohistochemical staining was assessed by two independent pathologists (K.H. and T.H.) without prior knowledge of the clinicopathological data. A case was recorded as positive when more than 90% of the tumor cells stained positive for CALML5, when more than 10% of the tumor cell membranes stained positive for CD5, c-kit, or GLUT-1, and when more than 10% of the lymphocyte nuclei stained positive for TdT. Cell culture The human thymic carcinoma cell line ThyL-6 was established at the University of Fukui (Fukui, Japan), as previously described, and maintained under 5% CO₂ at 37°C in RPMI-1640 medium (Wako) supplemented with 10% fetal bovine serum, penicillin (100 U/ml), and streptomycin (100 μg/ml). A431 cells, MDA-MB-468 cells, and Lenti-X 293T cells were obtained from the American Type Culture Collection (Manassas) and maintained under 5% CO₂ at 37°C in Dulbecco's Modified Eagle's Medium (DMEM; Sigma-Aldrich) supplemented with 10% fetal bovine serum, penicillin (100 U/ml), and streptomycin (100 μg/ml). Transfection and construction of the recombinant lentiviral vector The CALML5 vector was purchased from the DNASU plasmid repository (Plasmid ID HsCD00506164). The CALML5 vector and a green fluorescent protein (GFP) control construct were subcloned into the pLX307 lentiviral expression vector (Addgene) under the control of an EF-1α promoter. The recombinant gene was transfected into the Lenti-X 293T cell line with psPAX2 and pMD2.G to produce a virus supernatant. The virus supernatant was harvested at 48 h and concentrated with Lenti-X Concentrator (Takara Bio Inc.). Viral fluid and polybrene were added to ThyL-6 cells and replaced with medium after 24 h. Puromycin (2 μg/ml) was added after a further 24 h, and the medium and puromycin were replaced after 72–96 h.
Next-generation sequencing libraries were constructed according to the manufacturer's protocol (NEBNext Ultra RNA Library Prep Kit for Illumina, New England Biolabs). Poly(A) mRNA isolation was performed using the NEBNext Poly(A) mRNA Magnetic Isolation Module (New England Biolabs) or the Ribo-Zero rRNA Removal Kit (Illumina). mRNA fragmentation and priming were performed using NEBNext First Strand Synthesis Reaction Buffer and NEBNext Random Primers (New England Biolabs). First-strand cDNA was synthesized using ProtoScript II Reverse Transcriptase (New England Biolabs), and second-strand cDNA was synthesized using Second Strand Synthesis Enzyme Mix (New England Biolabs). The double-stranded cDNA, purified by AxyPrep Mag PCR Clean-up (Axygen), was then treated with End Prep Enzyme Mix (New England Biolabs) to repair both ends and add a dA-tail in one reaction, and finally T-A ligated to add adaptors to both ends. Size selection of the adaptor-ligated DNA was then performed using AxyPrep Mag PCR Clean-up (Axygen), and fragments of ~360 bp (with an approximate insert size of 300 bp) were recovered. Each sample was then amplified by PCR for 11 cycles using P5 and P7 primers, with both primers carrying sequences that can anneal with the flow cell primer to perform bridge PCR and the P7 primer carrying a six-base index allowing for multiplexing. The PCR products were cleaned up using AxyPrep Mag PCR Clean-up (Axygen), validated using an Agilent 2100 Bioanalyzer (Agilent Technologies), and quantified using a Qubit 2.0 Fluorometer (Invitrogen). Libraries with different indices were then multiplexed and loaded on an Illumina NovaSeq instrument according to the manufacturer's instructions (Illumina). Sequencing was carried out using a 2 × 150 paired-end (PE) configuration; image analysis and base calling were conducted by the HiSeq Control Software (HCS) + OLB + GAPipeline-1.6 (Illumina) on the NovaSeq instrument. The sequences were processed and analyzed by GENEWIZ. We analyzed the obtained FASTQ files using the Galaxy software ecosystem ( https://usegalaxy.org ). Quality checks were conducted using FastQ Quality Control, and Trimmomatic was used to remove low-quality reads and adapter sequences. Mapping was conducted and count data were acquired with htseq-count. Multidimensional scaling and differential analyses were conducted using the DESeq2 package. Quantitative real-time PCR Total RNA was extracted from the cells using the RNeasy Mini Kit (Qiagen). cDNA was generated from 1 μg of RNA using the ReverTra cDNA synthesis kit (Toyobo Life Science), according to the manufacturer's protocol. Quantitative real-time PCR (qPCR) was performed using the Fast SYBR Green Master Mix (Applied Biosystems) under the following thermal cycling conditions: denaturation at 95°C for 20 s and 40 amplification cycles (denaturation at 95°C for 3 s, annealing and extension at 60°C for 30 s), with concurrent melt-curve analysis. Actin was used as an internal control. The sequences of primers used for the analyses were as follows: CALML5 Forward: 5′-GGTTGACACGGATGGAAACG-3′ Reverse: 5′-ACTCCTGGAAGCTGATTTCGC-3′ Actin Forward: 5′-CTCTTCCAGCCTTCCTTCCT-3′ Reverse: 5′-AGCACTGTGTTGGCGTACAG-3′ Western blot analysis Cells were washed with ice-cold PBS and lysed with 2% SDS buffer (50 mM Tris–HCl, pH 6.8, 2% SDS, and 10% glycerol) supplemented with protease and phosphatase inhibitors (Roche). The protein concentration was measured using the DC protein assay (Bio-Rad).
Equal amounts of whole-cell lysates (10–20 μg) were loaded onto 4%–20% Mini-PROTEAN TGX Precast Gels (Bio-Rad). After blocking with polyvinylidene difluoride (PVDF) blocking reagent for Can Get Signal (Toyobo Life Science), the blots were incubated overnight with the indicated primary antibodies: anti-CALML5 antibody (ab154631) at 1:1000 dilution and anti-actin antibody (A5316) at 1:5000 dilution. The membranes were then incubated with the appropriate horseradish peroxidase-conjugated secondary antibody (diluted 1:3000) (GE Healthcare), followed by detection with enhanced chemiluminescence (ECL; GE Healthcare). All dilutions were made in Can Get Signal Immunoreaction Enhancer Solution (Toyobo Life Science). Images of the western blot signals were acquired with ChemiDoc and ChemiDoc MP imaging systems using Image Lab Touch Software (Bio-Rad). Statistical analysis Statistical analyses were performed using GraphPad Prism 7.0 (GraphPad Software). The two-tailed Student's t-test and ANOVA were used to compare values. Differences were considered statistically significant at p < 0.05.
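The differential promoter analysis described above was run with DESeq2 inside Galaxy; an equivalent stand-alone R session might look like the sketch below. The count matrix cage_counts (CAGE tags per robust TSS peak, with the four carcinoma libraries followed by the three thymoma libraries as columns) is an assumed object, not part of the published workflow.

library(DESeq2)

# cage_counts: assumed integer matrix of CAGE tags per robust TSS peak (rows)
# for the seven libraries (columns); the study itself used Galaxy
coldata <- data.frame(
  group = factor(c(rep("carcinoma", 4), rep("B3_thymoma", 3)),
                 levels = c("B3_thymoma", "carcinoma")),
  row.names = colnames(cage_counts)
)

dds <- DESeqDataSetFromMatrix(countData = cage_counts,
                              colData   = coldata,
                              design    = ~ group)
dds <- DESeq(dds)

# log2 fold changes are carcinoma vs. type B3 thymoma, as in the text
res <- results(dds, contrast = c("group", "carcinoma", "B3_thymoma"))
head(res[order(res$padj), ])   # top differentially active promoters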
The sequences of primers used for the analyses were as follows: CALML5 forward, 5′-GGTTGACACGGATGGAAACG-3′; CALML5 reverse, 5′-ACTCCTGGAAGCTGATTTCGC-3′; actin forward, 5′-CTCTTCCAGCCTTCCTTCCT-3′; actin reverse, 5′-AGCACTGTGTTGGCGTACAG-3′. Cells were washed with ice-cold PBS and lysed with 2% SDS buffer (50 mM Tris-HCl, pH 6.8, 2% SDS, and 10% glycerol) supplemented with protease and phosphatase inhibitors (Roche). The protein concentration was measured using the DC protein assay (Bio-Rad). Equal amounts of whole cell lysates (10-20 μg) were loaded onto 4%-20% Mini-PROTEAN TGX Precast Gels (Bio-Rad). After blocking with polyvinylidene difluoride (PVDF) blocking reagent for Can Get Signal (Toyobo Life Science), the blots were incubated overnight with the indicated primary antibodies: anti-CALML5 antibody (ab154631) at 1:1000 dilution and anti-actin antibody (A5316) at 1:5000 dilution. The membranes were incubated with the appropriate horseradish peroxidase-conjugated secondary antibody (diluted 1:3000) (GE Healthcare), followed by detection with enhanced chemiluminescence (ECL; GE Healthcare). All dilutions were made in Can Get Signal Immunoreaction Enhancer Solution (Toyobo Life Science). Images of the western blot signals were acquired by ChemiDoc and ChemiDoc MP imaging systems with Image Lab Touch Software (Bio-Rad). Statistical analyses were performed using GraphPad Prism 7.0 (GraphPad Software). The two-tailed Student's t-test and ANOVA were used to compare values. Differences between means were considered statistically significant at p < 0.05. Identification of genome-wide differential biomarkers for thymic carcinoma and type B3 thymoma using CAGE We assessed promoter activity levels in thymic carcinoma (n = 4) and type B3 thymoma (n = 3) tissues using the CAGE protocol with a next-generation sequencer (HiSeq2500). Radovich et al. reported that thymic carcinoma and type B3 thymoma, despite sharing the same origin, have different expression patterns. This suggests that focusing on the differences in gene expression would lead to the discovery of an appropriate biomarker. The results of our CAGE data analysis confirmed the existence of appropriate biomarkers. Figure shows a volcano plot of our CAGE data showing the difference in gene expression patterns between thymic squamous cell carcinoma and type B3 thymoma. In thymic squamous cell carcinoma compared with type B3 thymoma, the TMPRSS4, CALML5, HEPACAM2, and POU2F3 genes were the top four differentially expressed genes (log2 fold change >8.17) in our samples (Table ), and the CD5, KIT, and SLC2A1 genes were the already reported differentially expressed genes (log2 fold change >1.47) (Table ). The log2 fold change and adjusted p-value for CD5, KIT, and SLC2A1 are shown in Table . Only KIT had an adjusted p-value of less than 0.01, and it showed the highest log2 fold change. The promoter activities of TMPRSS4, CALML5, CD5, KIT, and SLC2A1 are shown in scatter plots of CAGE data (Figure ). Expression of the novel candidates (TMPRSS4 and CALML5) and the already reported markers (KIT and SLC2A1) was significantly greater in thymic squamous cell carcinoma than in type B3 thymoma. However, no significant difference was observed in CD5 expression. The mRNA expression levels of the novel (TMPRSS4 and CALML5) and known (CD5, KIT, and SLC2A1) markers obtained by analyzing The Cancer Genome Atlas (TCGA) data are represented in bar graphs (as mean with standard error of the mean) (Figure ).
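The qPCR protocol above normalizes CALML5 signals to actin as an internal control, but the text does not state the quantification formula. A standard, assumed choice for such a design is the 2^-ΔΔCt (Livak) method; this minimal Python sketch shows the arithmetic with purely illustrative Ct values.

```python
# 2^-ΔΔCt relative quantification (Livak method). This is an assumed,
# common approach; the paper does not specify its quantification formula.
def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Fold change of the target gene vs. a calibrator sample,
    normalized to a reference gene (here, actin)."""
    d_ct_sample = ct_target - ct_ref              # delta-Ct, sample of interest
    d_ct_control = ct_target_ctrl - ct_ref_ctrl   # delta-Ct, calibrator
    dd_ct = d_ct_sample - d_ct_control            # delta-delta-Ct
    return 2 ** (-dd_ct)

# Hypothetical example: CALML5 Ct 22.0 / actin Ct 18.0 in an overexpressing
# line versus CALML5 Ct 28.0 / actin Ct 18.0 in an EGFP control line.
print(relative_expression(22.0, 18.0, 28.0, 18.0))  # -> 64-fold increase
```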
Thymic squamous cell carcinoma was diagnosed in three cases and type B3 thymoma in 11 cases. There was a significant difference in CALML5, CD5, and KIT mRNA expression, but not in TMPRSS4 and SLC2A1 mRNA expression. CALML5, in contrast to existing markers, effectively differentiates between thymic carcinoma and type B3 thymoma We identified CALML5 as a diagnostic marker that is able to distinguish thymic carcinoma from type B3 thymoma. CALML5 mRNA was significantly more highly expressed in thymic squamous cell carcinoma than in type B3 thymoma according to both our CAGE and TCGA data analyses. CALML5, CD5, c-kit, and GLUT-1 were examined at the protein level with IHC to confirm their clinical utility for differentiating thymic carcinoma from type B3 thymoma (Figure ). The patient characteristics are shown in Table . The proportion of patients with stage 4 disease tended to be higher in the thymic carcinoma group than in the type B3 thymoma group, which is consistent with the difference in patient characteristics between the two entities in daily clinical practice. While CALML5 was expressed in the cytoplasm and nuclei of thymic carcinoma, it was not expressed in type B3 thymoma. The samples of thymic carcinoma used for CAGE were positive for CALML5 in three of four cases. Only the thymic carcinoma with the lowest CPM of the four cases was negative for CALML5. All three cases of type B3 thymoma were also negative. CALML5 as a diagnostic marker had a sensitivity of 73.1% (19/26 cases) and specificity of 94.7% (36/38 cases) for thymic carcinoma (Table ). The sensitivity of CALML5 was higher than that of CD5, which was 69.2% (18/26 cases), and the specificity of CALML5 was higher than that of GLUT-1, which was 60.5% (23/38 cases), and the same as that of c-kit, which was 94.7% (36/38 cases). Previous studies have reported that CD5 and c-kit had sensitivities of 30%-70% and 70%-80% for thymic carcinoma, respectively, whereas c-kit showed a specificity of 85%-95%. Therefore, the results indicated that CALML5 has a higher sensitivity than CD5 and a specificity equal to or higher than that of c-kit. , , , Furthermore, the tumor cells stained diffusely, which makes it easy to confirm the presence of CALML5 expression even with a smaller number of tumor cells. There was also a single case of CD5− c-kit− CALML5+ thymic carcinoma (Figure ). When used in combination, CALML5, CD5, c-kit, and GLUT-1 had a sensitivity of 100% (26/26 cases) and specificity of 100% (38/38 cases). IHC for CALML5 was also performed in four additional specimens (one thymic adenocarcinoma and three thymic carcinoids). All four cases were negative for CALML5. Thymic squamous cell carcinoma can invade the lungs, and lung squamous cell carcinoma can invade the mediastinum, making it difficult to distinguish between the two tumors. IHC was therefore also performed in 22 cases of lung squamous cell carcinoma using antibodies against CALML5 (Figure ). The sensitivity was 4.5% (1/22 cases). CALML5 may thus be useful in differentiating between thymic squamous cell carcinoma and lung squamous cell carcinoma. CALML5 is involved in cell proliferation and increases cisplatin sensitivity To study the functional relevance of CALML5 to thymic carcinoma progression, we established CALML5-overexpressing thymic carcinoma cell lines (Figure ). RNA sequencing was performed using CALML5-overexpressing ThyL-6 cells and enhanced green fluorescent protein (EGFP)-expressing ThyL-6 cells (as control).
When comparing them in gene set enrichment analysis (GSEA), the most upregulated of the Hallmark gene sets in CALML5-overexpressing ThyL-6 cells was the E2F gene set (Figure ). Since the results of the RNA sequencing suggested that CALML5 may be involved in the cell cycle, we compared cell proliferation between CALML5-overexpressing ThyL-6 cells and EGFP-expressing ThyL-6 cells. Cell proliferation was significantly faster (Figure ) and the sensitivity to cisplatin was significantly higher (Figure ) in CALML5-overexpressing ThyL-6 cells.
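The diagnostic performance figures reported above follow directly from the positive/negative counts in the two tumor groups (26 thymic carcinomas, 38 type B3 thymomas). The Python sketch below reproduces that arithmetic; note that the paper does not spell out the rule used to combine the four markers, so treating a case as panel-positive when any marker is positive is an assumption consistent with the reported 100%/100%.

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# CALML5 alone: 19/26 carcinomas positive, 36/38 thymomas negative.
print(sens_spec(tp=19, fn=7, tn=36, fp=2))   # -> (0.731, 0.947)

# Assumed "positive if any marker is positive" panel of CALML5, CD5,
# c-kit, and GLUT-1, matching the reported 100% sensitivity/specificity.
print(sens_spec(tp=26, fn=0, tn=38, fp=0))   # -> (1.0, 1.0)
```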
We analyzed RNA expression data using CAGE and selected candidate biomarkers for differentiating type B3 thymoma from thymic carcinoma. Thereafter, we identified the protein with IHC and showed that higher CALML5 expression is consistent with the CAGE data and useful for differentiating thymic carcinoma from type B3 thymoma. Our results demonstrated that CALML5 is a more sensitive biomarker than CD5, and comparisons with previous reports on CD5 and c-kit, which are already used to differentiate between thymoma and thymic carcinoma, showed that CALML5 is as specific as or more specific than c-kit.
, , , Moreover, CD5, c-kit, and GLUT-1 are expressed only on the cell membrane; thus, the staining area is small, lowering the sensitivity of detection in small biopsy samples, which are frequently used for diagnosing thymic carcinoma. However, the diffuse distribution of CALML5 in the cytoplasm enlarges the staining area, making evaluation easier than with CD5, c-kit, and GLUT-1. No single marker has 100% sensitivity and 100% specificity for differentiating thymic carcinoma from thymoma. However, when used in combination, CALML5, CD5, c-kit, and GLUT-1 increased the sensitivity to 100% (26/26 cases) and the specificity to 100% (38/38 cases). CALML5 is therefore presumed to improve diagnostic accuracy when combined with CD5, c-kit, and/or GLUT-1 IHC. Because thymic squamous cell carcinoma can invade the lungs, and lung squamous cell carcinoma can invade the mediastinum, distinguishing between the two tumors is difficult. The sensitivity of CALML5 expression as a biomarker of lung squamous cell carcinoma was 4.5%. Therefore, CALML5 may help distinguish between thymic squamous cell carcinoma and lung squamous cell carcinoma. According to the deduced amino acid sequence, CALML5 has 52% homology with calmodulin, the major calcium-binding protein; because it is expressed in the epidermis, it is also called calmodulin-like skin protein (CLSP). CALML5 is a ZNF750- and TINCR-induced protein that binds stratifin to regulate epidermal differentiation. We assumed that CALML5 is involved not only in epidermal differentiation but also in the differentiation of thymic epithelial cells, and that CALML5 IHC may therefore stain thymic carcinoma. The IHC results showed protein expression of CALML5 in thymic squamous cell carcinoma but almost no protein expression of CALML5 in lung squamous cell carcinoma, thymic adenocarcinoma, or thymic carcinoid. Therefore, based on the IHC results, CALML5 is useful for differentiating between thymoma and thymic carcinoma rather than between low-grade tumors and carcinoma. CALML5 is a poor prognostic factor for HPV-associated oropharyngeal cancer and lung adenocarcinoma, and K63-linked ubiquitination of CALML5 is found in breast cancer tissue but not in the surrounding healthy tissue. , , We created a CALML5-overexpressing ThyL-6 cell line to investigate the role of CALML5 in thymic carcinoma. RNA sequencing was performed, and it was found that CALML5 may be involved in cell proliferation. Our results suggest that CALML5 may be involved in the proliferation of thymic carcinoma cells and enhance cisplatin sensitivity, and that it may be a therapeutic target for thymic carcinoma. Further investigation is warranted in the future. The present study had certain limitations. Because of the rarity of thymic carcinoma and thymoma, the amount of CAGE data was small owing to the small number of cases, meaning that other useful biomarkers for differentiation may have been overlooked. As we could not find thymic squamous cell carcinoma cell lines, we used ThyL-6, a thymic undifferentiated carcinoma cell line. Also, we could not find thymic carcinoma cell lines with high endogenous CALML5 expression, and therefore we could not confirm that suppressing CALML5 expression in such lines reduces cell proliferation and cisplatin sensitivity. In conclusion, in the present study, we discovered that CALML5 is a potential biomarker for differentiating type B3 thymoma from thymic carcinoma, using CAGE and IHC.
Further studies are warranted to validate our results, and we expect that clinical use of CALML5 will improve the accuracy of diagnosis in the future. Koichiro Kanamori and Kentaro Suina provided formal analysis, investigation, methodology, and writing—original draft preparation; Takehito Shukuya performed study conceptualization, data curation, methodology, project administration, and writing—original draft preparation; Takuo Hayashi, Yoichiro Mitsuishi, Shoko Sonobe Shimamura, Wira Winardi, Masayoshi Itoh, and Hideya Kawaji provided data curation, methodology, investigation, and writing—review & editing; Ken Tajima, Ryo Ko, Tetsuhiko Asao, Fumiyuki Takahashi, Kazuya Takamochi, and Kenji Suzuki provided resources, supervision, and writing—review and editing. Kazuya Takamochi provided funding acquisition, supervision, and writing—review and editing. All authors read and approved the final manuscript. The authors have no conflicts of interest directly relevant to the content of this article. FIGURE S1. Thymic squamous cell carcinoma shown with H&E staining and IHC for CALML5, CD5, c-kit, and GLUT-1. The tumor cells are diffusely positive for CALML5 and negative for CD5 and c-kit. GLUT-1 is expressed in the membrane. Scale bar, 100 μm. TSQCC, thymic squamous cell carcinoma; H&E, hematoxylin and eosin. FIGURE S2. Lung squamous cell carcinoma shown with H&E staining and IHC for CALML5. The tumor cells are negative for CALML5. Scale bar, 100 μm. LSQCC, lung squamous cell carcinoma; H&E, hematoxylin and eosin; IHC, immunohistochemistry. FIGURE S3. CALML5 overexpression experiments with ThyL-6. (A) mRNA expression levels of CALML5 in ThyL-6 cells. The expression levels of CALML5 are lower in ThyL-6 than in A431 and MDA-MB-468. (B) Western blot analysis of CALML5 overexpression in ThyL-6. (C) Using the Hallmark gene sets in gene set enrichment analysis (GSEA), target genes of E2F show the highest positive correlation with CALML5 overexpression in ThyL-6 cells. (D) Effects of CALML5 overexpression on ThyL-6 cell proliferation. The cell number on day 5 relative to day 0 is shown. CALML5 overexpression significantly promotes cell proliferation. **p < 0.005. (E) The effect of CALML5 overexpression on the chemosensitivity of ThyL-6 cells to cisplatin (CDDP). Results are presented as the average of three independent experiments ± standard deviation. NES, normalized enrichment score; EGFP, enhanced green fluorescent protein.
Hepatocellular Carcinoma Rupture after Introducing Lenvatinib: An Autopsy Case Report
eaf9e685-592b-4be1-b8d1-fcf7b695f29f
10125806
Forensic Medicine[mh]
Molecular-targeted therapy has been recommended for patients with hepatocellular carcinoma (HCC) and a well-preserved liver function (Child-Pugh A), including patients with advanced HCC of Barcelona Clinic Liver Cancer (BCLC) stage C ( ) as well as earlier-stage tumors progressing upon or unsuitable for locoregional therapies ( ). Lenvatinib is a multityrosine kinase receptor inhibitor used as one of the agents in molecular-targeted therapy. In a Phase III study (REFLECT) of patients with untreated unresectable HCC, the overall survival of those treated with lenvatinib was noninferior to that of those treated with sorafenib ( ). Furthermore, a previous real-world retrospective study showed that lenvatinib yields a high early response rate and tolerability for advanced HCC ( ). Thus, it is widely used as a first-line agent in molecular-targeted therapy for patients with unresectable HCC. HCC hemorrhaging/rupture is a relatively rare adverse event of lenvatinib administration ( ), and it has been reported only recently ( , ). However, no case report has included a histopathological evaluation of the effects of lenvatinib. We herein report an autopsy case of HCC rupture after lenvatinib introduction. A 69-year-old man (height 158 cm, body weight 68 kg, body mass index 27.2 kg/m²) was referred to our department for a workup of multiple liver tumors detected through abdominal ultrasonography. His initial blood test results were as follows: white blood cell count, 10,300 /μL; red blood cell count, 361×10⁴ /μL; hemoglobin level, 9.2 g/dL; platelet count, 78.9×10⁴ /μL; aspartate aminotransferase, 46 U/L; alanine aminotransferase, 23 U/L; alkaline phosphatase, 992 U/L; gamma-glutamyl transpeptidase, 344 U/L; total bilirubin, 0.5 mg/dL; total protein, 7.1 g/dL; albumin, 2.8 g/dL; blood urea nitrogen, 22 mg/dL; creatinine, 0.97 mg/dL; protein induced by vitamin K absence or antagonist-II, 5,210 mAU/mL; alpha-fetoprotein (AFP), 3 ng/mL; and prothrombin time, 72.7%. His blood platelet count was high; however, the reason was unclear. The patient was negative for hepatitis B virus surface antigen, anti-hepatitis C virus antibody, and several autoantibodies, and he had been consuming approximately 250 g of alcohol per day for 30 years. Contrast-enhanced computed tomography showed an undulating liver surface and blunt liver edge with an approximately 14-cm-diameter tumor, multiple liver tumors, and no ascites ( ). There were no aneurysms in the tumors, no vascular invasion, and no distant metastasis according to computed tomography. The patient was thus diagnosed with an alcoholic liver disorder and unresectable HCC of BCLC stage B ( ). His liver function was Child-Pugh A6; therefore, oral administration of lenvatinib (12 mg per day) was initiated as palliative chemotherapy. High blood pressure was not observed. However, although he remained hospitalized and had no other symptoms, he complained of nausea without abdominal pain and suddenly developed cardiac arrest 7 days after starting lenvatinib. Cardiopulmonary resuscitation was performed; however, he did not regain consciousness and died. Blood tests performed immediately after the cardiopulmonary arrest showed that the anemia had progressed (hemoglobin level from 9.8 to 7.2 g/dL). Contrast-enhanced computed tomography could not be performed during cardiopulmonary resuscitation. An autopsy was performed one hour after death. In the abdomen, large blood clots were found around the liver, with a total of 2.7 L of bloody ascites.
Macroscopically, multiple white-to-tan nodular masses were observed in the liver ( ). The largest, tan-colored tumor (14×13 cm, with a 6.5-cm crack) in the right lobe had reached the liver surface and contained coagulated blood. Furthermore, scattered hemorrhagic lesions were found in the largest tumor. Microscopically, the tan-colored tumors showed highly to moderately differentiated HCC, forming a glandular tuft-like structure with bile plugs ( ), whereas the white-colored tumors showed moderately to poorly differentiated HCC with marked nuclear atypia ( ). Hemorrhagic areas were scattered in nearly all HCCs ( ). Multicentric denatured/necrotic lesions were located adjacent to some hemorrhagic lesions ( ). Necrosis and hemorrhaging were conspicuous around the crack in the tumor. These features suggested that hemorrhaging was more pronounced in markedly necrotic areas, possibly causing increased intratumoral pressure and thereby tumor rupture. Regarding hemorrhaging, no marked difference was observed with respect to histological grade ( ) or necrosis. The background liver parenchyma did not show any abnormal histological findings. To our knowledge, this is the first case report of a patient with HCC hemorrhaging/rupture after the introduction of lenvatinib who was evaluated histologically. HCC hemorrhaging/rupture occurs in 10% of cases in the natural course ( ). The case fatality rate due to tumor rupture is 25-75%, making it a serious complication in patients with HCC ( ). A serum AFP level of ≥400 ng/mL, tumor protrusion from the liver surface, ascites, and a tumor size of ≥5 cm are considered risk factors for HCC rupture ( ). In our case, the risk of HCC rupture may have been high because of tumor protrusion from the liver surface and the large tumor size, regardless of lenvatinib treatment. HCC hemorrhaging/rupture is a relatively rare adverse effect of lenvatinib administration ( , ). A previous study showed that tumor hemorrhaging due to lenvatinib was observed in 7.4% (n=5/68) of the study population ( ). The mean period from lenvatinib administration to tumor hemorrhaging was 4.4 (±2.2) days, a relatively short duration. In addition, the risk factor for lenvatinib-induced hemorrhaging was reported to be a large tumor size (≥9 cm) ( ). Our case also met these criteria. It should be considered that lenvatinib for HCC can cause potentially fatal tumor hemorrhaging/rupture, especially in patients with large HCC and/or tumor protrusion from the liver surface, in the early phase of lenvatinib treatment. Our patient was determined to have transarterial chemoembolization (TACE) refractoriness because his tumor burden was beyond the up-to-7 criteria (tumor size and number) ( ); therefore, lenvatinib treatment was selected. However, a recent Phase II trial (TACTICS) of patients with unresectable HCC revealed that TACE plus sorafenib significantly improved progression-free survival over TACE alone ( ). In addition, an international conference presentation of a Phase II trial (TACTICS-L) showed that the combination of TACE plus lenvatinib had promising therapeutic efficacy in patients with unresectable HCC ( ). The treatment protocols involved starting sorafenib/lenvatinib two to three weeks prior to the first TACE session. Based on the findings of these recent trials ( , ), TACE plus sorafenib/lenvatinib treatment is deemed safe and is becoming a promising treatment strategy for unresectable HCC in clinical practice.
However, patients were excluded from those trials if their maximum tumor diameter was >10 cm; therefore, adverse events, such as tumor hemorrhaging/rupture, associated with lenvatinib as up-front therapy prior to the first TACE session for large HCCs remain unknown. TACE prior to lenvatinib introduction might be considered for patients with large HCCs in order to prevent tumor hemorrhaging/rupture. The mechanism underlying HCC hemorrhaging/rupture associated with lenvatinib has not yet been clarified. Hemorrhaging is occasionally caused by tyrosine kinase inhibitors that inhibit vascular endothelial growth factor (VEGF) receptors ( ). However, although necrotic lesions were also partially detected in our case pathologically, the main characteristic was spotted hemorrhaging scattered in nearly all tumors, regardless of their differentiation. This feature indicated that microbleeding occurred in the HCCs over a short period of time due to lenvatinib. However, our case has a few limitations. First, we evaluated the effects of lenvatinib on HCC only seven days after its introduction and were unable to assess changes over time. Second, the HCC rupture in this case cannot be confirmed to have been entirely caused by lenvatinib. In conclusion, we pathologically examined the autopsy findings of a patient who died due to HCC rupture seven days after the introduction of lenvatinib. Pathologically, hemorrhagic areas were observed in all HCCs, regardless of tumor differentiation. These pathological features are unusual for HCC in its natural course and are possibly due to the effects of lenvatinib.
Digital health applications in otorhinolaryngology
2e943b3a-dae2-47cd-b9dc-d74f451bf1b7
10125941
Otolaryngology[mh]
In line with the aim of this study, the DiGA directory was first analyzed and applications were identified whose indication covers a disease within the field of otorhinolaryngology (ENT). The further evaluation included applications intended exclusively for ENT diseases (e.g., applications for the treatment of tinnitus) as well as applications developed for several diseases that cover ENT conditions among others (e.g., applications for behavioral therapy after malignant disease). In addition, applications were included that address diseases treated by ENT physicians alongside other specialties (e.g., insomnia). The respective ICD codes for which each DiGA is intended were also taken into account when selecting the applications. Besides applications that have been either provisionally or permanently listed in the directory, applications that are currently delisted after a provisional or permanent listing were also included (Fig. ). For each identified application, the respective field of application is first described, followed by an analysis of the underlying evidence. The level of the underlying evidence was graded according to the recommendations of the Oxford Centre for Evidence-Based Medicine for therapeutic studies (Table ; ). Figure shows the results of the assessments of all DiGA submitted to date, which also form the basis for our evaluation. Applying the described inclusion criteria, a total of six DiGA were identified and are examined accordingly below. Two DiGA are intended for behavioral therapy of tinnitus, one DiGA for smoking cessation, one DiGA for the treatment of harmful alcohol consumption, one DiGA for behavioral therapy of insomnia, and one DiGA for alleviating the psychological and psychosomatic consequences of the diagnosis and treatment of various malignancies (Table ). Kalmeda Tinnitus-App The treatment of tinnitus follows the current S3 guideline; the essential pillar of treatment is cognitive behavioral therapy (CBT), which is recommended on the basis of a high level of underlying evidence . The recommendation also includes internet-based methods, and thus DiGA. The Kalmeda Tinnitus-App (mynoise GmbH, Duisburg, Germany) was examined in an open, controlled, randomized study that compared the effect of the application between an intervention group and a control group that was only allowed to use the application after a three-month waiting period. The primary outcome parameter was the effect on tinnitus distress (tinnitus questionnaire of Göbel and Hiller); secondary outcome parameters were changes in tinnitus distress, depressive tendency, perceived stress, and self-efficacy. A total of 187 patients were enrolled in the study, and in contrast to the control group, all parameters in the intervention group showed statistically significant improvements. Meine Tinnitus-App – das digitale Tinnitus-Counseling In contrast to the Kalmeda Tinnitus-App, the DiGA Meine Tinnitus-App (Sonorem GmbH, Hamburg, Germany) is a digital offering for the counseling of patients with tinnitus as part of initial care after examination by the attending physician.
Through the delivery of tinnitus-specific multimedia content, patients are to be educated and a basis for further therapeutic interventions established. Currently, this DiGA is only provisionally listed in the DiGA directory, and the studies demonstrating a positive effect of this treatment approach are still ongoing. The provisional listing was based on a pilot data collection from a total of 67 participants, which demonstrated an improvement in tinnitus distress (measured with the Mini-TF-12) and in disease-related difficulties in everyday life (BVB-2000). A randomized controlled trial to confirm this observation is currently recruiting patients. NichtraucherHelden-App Tobacco consumption, particularly in combination with alcohol abuse, is the most important risk factor for the development of malignancies of the mucosa of the oral cavity, pharynx, and larynx. The S3 guideline "Rauchen und Tabakabhängigkeit: Screening, Diagnostik und Behandlung" (smoking and tobacco dependence: screening, diagnostics, and treatment) recommends behavioral group and individual interventions with the highest grade of recommendation. The DiGA NichtraucherHelden-App (NichtraucherHelden GmbH, Stuttgart, Germany) is intended to support patients through CBT in lastingly overcoming nicotine dependence . This DiGA, too, is currently only provisionally listed in the DiGA directory. The listing is based on a pilot study in which the 7-day prevalence of smoking abstinence was measured in a cohort of 50 enrolled participants after a four-month period of use. A randomized controlled trial is currently recruiting patients. Somnio According to guideline recommendations, CBT should be the first treatment option for adults of all ages with insomnia [ , , ]. The DiGA Somnio (mementor DE GmbH, Leipzig, Germany) delivers evidence-based content on CBT for insomnia (CBT-I). Its positive care effect was evaluated in a randomized controlled trial . In the intervention arm, the DiGA was used and compared with a waiting-list control group. The intervention group showed significant improvements in the Insomnia Severity Index, Beck Depression Inventory, Brief Symptom Inventory, and SF-12 Health Survey. Mika The DiGA Mika (Fosanis GmbH, Berlin, Germany) aims to alleviate the psychological and psychosomatic consequences of the diagnosis and treatment of various malignancies. Patients are to be educated on various topics, empowered for self-management, and given tools with which to influence the course of their disease themselves. In a randomized controlled pilot study, an intervention group (use of Mika) was compared with a control group (standard care); the primary endpoint was psychological distress, measured with the PHQ-9. After a period of 12 weeks, a significant improvement was observed in the intervention group. The DiGA is currently not available, as it was removed from the directory at the manufacturer's request. A further randomized controlled trial is currently recruiting patients to demonstrate a positive care effect in a larger patient cohort.
Vorvida As already noted, alcohol consumption is a risk factor for the development of malignancies of the mucosa of the oral cavity and pharynx and is, moreover, a global health problem with a range of serious consequences. It has previously been shown that internet-based self-help therapy can lead to a reduction in alcohol consumption in adults . This type of intervention is correspondingly recommended in the S3 guideline "Screening, Diagnose und Behandlung alkoholbezogener Störungen" (screening, diagnosis, and treatment of alcohol-related disorders). Vorvida (GAIA AG, Hamburg, Germany) is an internet-based cognitive behavioral therapy approach with the aim of reducing alcohol consumption. In a randomized controlled trial, its effect was examined in a population of 608 adults and demonstrated after a six-month study period by means of various endpoints (self-reported alcohol consumption, drinking behavior, satisfaction) .
In the present evaluation, a total of six DiGA with a direct or indirect relation to otorhinolaryngology were identified, of which three have been permanently and two provisionally listed in the directory. One DiGA has currently (as of October 5, 2022) been withdrawn by the manufacturer. Overall, it becomes apparent that diseases for which behavioral therapy approaches are available or constitute a treatment option are particularly well suited to treatment with a DiGA.
This applies not only to DiGA used in the ENT field but also to DiGA of other specialties (e.g., for the treatment of depression or anxiety disorders). All DiGA, especially the permanently listed ones, are backed by studies with a high evidence level; for the provisionally listed ones, studies are currently underway with which a similarly high evidence level can be achieved. As a limitation, however, it should be noted that the study results, despite demonstrating a positive care effect, have not in every case been published in peer-reviewed journals. Despite the innovative procedure and the fact that "Apps auf Rezept" (apps on prescription) were unique worldwide at the time of their introduction, there are also points of criticism of DiGA . The costs that arise for health insurers from the prescription of DiGA, for instance, are discussed controversially. A survey conducted by Handelsblatt Inside among the twenty largest statutory health insurers (including AOK, TK, Barmer, DAK-Gesundheit, and other smaller company health insurance funds) found that among these insurers, which together cover 62 million insured persons, 38,000 prescriptions were approved within the first year after the introduction of DiGA. Extrapolated to the number of all statutorily insured persons (73 million), this would amount to a total of 45,000 prescribed and approved DiGA in Germany. At that time, the average price of the listed DiGA was €402, so that in the first year after the introduction of DiGA the costs for the health insurers were approximately €18 million (45,000 × €402 ≈ €18.1 million). The prescriptions and associated costs were thus still below an estimate by the Boston Consulting Group, which (also assuming that DiGA are frequently prescribed in several consecutive quarters) projected costs between €100 million and €200 million for the years 2021 and 2022. By 2025, costs of more than €1 billion per year were even forecast (assuming constant DiGA prices and a market penetration between one and two percent) . Currently, the prices for DiGA range from a minimum of €119 per quarter to a maximum of €744 per quarter. The DiGA presented here for diseases related to the ENT field fall within a price range of €203.97 to €499.00 and thus likewise show a considerable spread. It is remarkable that DiGA manufacturers do not yet have to prove effectiveness during the trial year of their respective application but can nevertheless set the price of the DiGA themselves. This has been repeatedly criticized by the umbrella associations of the health insurance funds but defended by other associations, such as the Spitzenverband Digitale Gesundheitsversorgung: only in this way, they argue, is it possible at all for smaller manufacturers to develop such applications, thereby fostering innovation. This trial year is indeed used by many manufacturers; as of March 2022, two thirds of the listed DiGA were only provisionally included in the DiGA directory. Conversely, this of course means that during this first year, applications are prescribed that are not yet backed by the necessary scientific evidence but only by preliminary results .
This is possible because DiGA are classified as medical devices of a low risk class (risk classes I or IIa, Table 8); a comparable approach would be hard to imagine, for example, in the context of the introduction of new drugs. It must certainly also be considered whether a DiGA whose effectiveness has not yet been sufficiently demonstrated might displace an established, evidence-based form of therapy, or whether the DiGA complements such a therapy. While the latter is certainly a favorable situation, in the former constellation harm to the patient may nevertheless arise, even if the DiGA itself poses only a low risk. Nevertheless, the requirements set by the legislator for demonstrating a positive care effect are high, so that manufacturers have either had to withdraw their application from the directory after provisional listing once the corresponding studies had been conducted, or have had to revise or withdraw their submissions during the application phase because of an insufficient study design . In summary, the Digitale-Versorgung-Gesetz (Digital Healthcare Act) contains several innovations, and especially the introduction of DiGA, the "Apps auf Rezept," was unique worldwide at the time and now even serves as a model for many countries. Start-ups in particular made use of this new opportunity, and DiGA were quickly developed for very different indications and clinical pictures; it soon became apparent that diseases with behavioral therapy treatment options are particularly well suited to this form of treatment. This may also have contributed to the marked differences in DiGA prescription frequency observed between specialties. There are diseases in the ENT field that are well suited to DiGA-mediated and even guideline-conform forms of therapy. In the present study, a total of five different DiGA for diseases in the ENT field are identified, of which four are currently (as of October 5, 2022) actually available and some of which are among the most frequently prescribed DiGA overall . The evidence behind these DiGA is based either on randomized controlled trials that have already been completed or on studies that are currently still recruiting. Despite the innovative character of DiGA and the demonstrated care effect of individual DiGA, there are various points of criticism, for instance regarding the prices charged for the prescription of DiGA. These may be reasons why DiGA have not yet been accepted, or at least are not frequently prescribed, by many physicians . This still-young form of therapy via digital health applications will certainly be modified in the future by new regulations or adjustments in response to the criticism raised so far. Nevertheless, this component of the Digitale-Versorgung-Gesetz represents a genuine innovation that can give many patients access to appropriate therapy. Particularly in times of a shortage of physicians (and psychotherapists), which is even more pronounced in rural regions, DiGA can play an increasingly important role.
Accelerating guideline dissemination in nursing homes during the COVID-19 pandemic: A patient-centered randomized controlled trial
80d6b136-71ea-4e98-89f3-f44a9af6f5b4
10126215
Patient-Centered Care[mh]
A landmark report by the Institute of Medicine identified a 17-year translational gap in research, that is, the lag between scientific discovery and the implementation of those discoveries into practice. Further, the report found that only 30% of emerging interventions jump this gap. Although there are numerous reasons for these delays in, or complete absence of, effective intervention implementation, a major reason is frontline providers' lack of knowledge of such discoveries. The COVID-19 pandemic presented a unique opportunity to jump the translational gap, given the worldwide focus on the increased pace of discovery and the necessity of translating effective interventions in order to save lives. , There were few locations of greater need for support early in the COVID-19 pandemic than the nursing home setting. As a result of their environment and host risk factors, nursing home residents accounted for nearly 4 of 10 COVID-19 deaths early in the pandemic. Although infectious outbreaks occur in these settings each year, COVID-19 presented a unique challenge given its newness and the resultant uncertainty in guidance in the early days of the pandemic. Although the Centers for Medicare and Medicaid Services (CMS) requires nursing homes to have an infection preventionist, the quality of infection control training and the ability to administer infection control policies in the nursing home environment are highly variable. Further, barriers to infection mitigation strategies exist in addition to a lack of formal training, including challenges related to managing resident transfers, transmission prevention, and information overload. , These challenges, coupled with the lack of an implementation science lens through which to understand how to effectively implement evidence-based infection control strategies in the nursing home setting, further heightened the difficulty of managing the COVID-19 pandemic. The effective implementation of evidence-based infection control strategies in nursing homes is limited. While the Centers for Disease Control and Prevention (CDC) published guidance to assist nursing homes in addressing the pandemic, effective implementation requires organizational capacity, staff engagement, and problem-solving. We identified Project ECHO (Extension for Community Healthcare Outcomes), an evidence-based telehealth model, , , as one pathway to overcoming this translational gap. We tested an intervention utilizing Project ECHO to virtually connect academic medicine experts with nursing home staff and administrators to proactively support evidence-based infection control guideline implementation. We hypothesized that nursing homes in the ECHO+ arm of the COVID-19 Project ECHO program would have fewer COVID-19 infections, hospitalizations, episodes of flu-like illness, and deaths, and improved quality of life (QoL) outcomes, in comparison to those in the ECHO arm. Engagement of our diverse 16-member Stakeholder Advisory Board at multiple levels of the study's planning, design, and implementation strengthened the patient-centeredness of the study. Design: A stratified cluster randomized design was employed to deliver the Project ECHO intervention. Using a 1:1 ratio, we randomly assigned 136 nursing homes (with approximately 16,700 residents) to ECHO or ECHO+. Randomization was stratified by geographic location (rural vs. urban), baseline COVID-19 infection rate (some vs. none), and facility capacity (<60 beds vs. ≥60 beds).
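The stratification scheme just described defines eight (2×2×2) strata within which facilities are allocated 1:1 to the two arms. The study does not report its randomization tooling or seed, so the following Python sketch merely illustrates one common way to implement such a stratified cluster allocation, using synthetic facility attributes.

```python
import random
from collections import defaultdict

random.seed(2020)  # illustrative only; the study's actual seed/tooling is not reported

# Synthetic facility records carrying the three stratification factors.
facilities = [
    {"id": i,
     "rural": random.random() < 0.5,            # rural vs. urban
     "baseline_covid": random.random() < 0.5,   # some vs. no baseline cases
     "small": random.random() < 0.5}            # <60 vs. >=60 beds
    for i in range(136)
]

# Group facilities into the 2x2x2 strata.
strata = defaultdict(list)
for f in facilities:
    strata[(f["rural"], f["baseline_covid"], f["small"])].append(f)

# Shuffle within each stratum, then alternate arms for a 1:1 allocation.
arms = {}
for members in strata.values():
    random.shuffle(members)
    for idx, f in enumerate(members):
        arms[f["id"]] = "ECHO" if idx % 2 == 0 else "ECHO+"
```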
Patient-centered outcomes, including the number of nursing home residents with COVID-19 infections, flu-like illness, COVID-19 hospitalizations, and deaths, as well as quality of life (QoL), were assessed at baseline (intervention start date) and at 4, 6, 12, and 18 months. Our study was guided by the RE-AIM framework to critically evaluate both the effectiveness and implementation outcomes of the proposed cluster RCT. The comprehensive study protocol has been previously published. An overview of the study methods is outlined below. Using the ECHO model, our primary aim was to compare the effectiveness of a 16-week AHRQ phase 1 COVID-19 Project ECHO intervention followed by nine optional weekly 60 min office hour sessions in AHRQ phase 2 (ECHO) with the same intervention plus 17 additional 60 min sessions focused on infection control, including an 8-week flu-focused refresher series in fall 2021 (ECHO+), in reducing the number of nursing home residents with COVID-19. Our secondary aim was to compare the effectiveness of ECHO versus ECHO+ on other patient-centered outcomes, including QoL, flu-like symptoms, hospitalizations, and deaths. Sample: National nursing home lists to assist recruitment efforts were obtained using Centers for Medicare & Medicaid Services (CMS) data, state agency and nursing home association contact websites, and our engaged stakeholders. Facility eligibility criteria for participation included U.S.-based, CMS-eligible skilled nursing facilities with access to a computer or electronic device for intervention participation. Facilities were ineligible if they had previously participated in a Project ECHO-led COVID-19 series. Approval for this study was obtained from the Penn State Institutional Review Board at the Pennsylvania State University (STUDY00015883). All participants received information about the study and were asked to give consent before participating. Recruitment began at the beginning of December 2020 and was extended until the end of January 2021. To support the interactive nature of the ECHO model, participants were assigned to cohorts, which allowed for smaller group discussions. Procedure: The intervention for this study included the Agency for Healthcare Research and Quality (AHRQ) ECHO National Nursing Home COVID-19 Action Network (hereinafter, "the Network"), supported by AHRQ in collaboration with Project ECHO at the University of New Mexico Health Sciences Center and the Institute for Healthcare Improvement (IHI). This network provided training and mentorship to nursing homes across the country to increase the implementation of evidence-based infection prevention and safety practices to protect residents and staff. Using the Project ECHO model of tele-mentoring, all nursing homes received the intervention in two sequential phases ( , ). Phase 1: Nursing homes in both study arms received a 16-week Network curriculum via real-time, interactive videoconferencing using Zoom at no cost to participants. The curriculum was developed specifically for this intervention in partnership between AHRQ, the University of New Mexico's ECHO Institute, and the IHI. Session recordings were available for those who were unable to participate live. Phase 1 sessions were up to 90 min in duration and held weekly for 4 months (16 sessions total) at regularly scheduled times.
All sessions followed the required program format, which is standard for the ECHO model, including introductions (5 min), didactic presentations (10-15 min), case presentations (30 min), a question and answer period (30 min), and close and debrief (5 min). Typically, each session included case-based discussions (1-2 cases/session) to ensure mastery of the content and skills. Participants were encouraged to ask clarifying questions and weigh in on recommendations, after which ECHO experts provided advice on addressing each case using best practices. Recommendations were summarized verbally during the session and distributed via email afterward. Phase 2: The ECHO group was offered nine optional weekly 60 min office hour sessions, in which participants could drop in on an as-needed basis to ask specific questions and receive guidance from our experts on a variety of topics. The ECHO+ group received an additional 9 weeks of live 60 min ECHO sessions, following the format described for Phase 1 and covering emerging topics developed by the research team specifically for this intervention. These topics were identified as timely and important by our stakeholders, subject matter experts, and feedback from participating nursing homes. If nursing home staff were unable to attend a session live, they were offered the recording of that session. Further, ECHO+ facilities received an additional 8-session refresher series running from September to November 2021, providing an opportunity to further cover topics that were part of the CDC infection control training and prioritized by our stakeholders and nursing home participants. Measures: The primary outcome, COVID-19 infection rate reduction in nursing homes, and the secondary outcomes were obtained using the Nursing Home COVID-19 Public File. Specifically, the variables assessed included the number of weekly and total resident admissions, the number of weekly and total resident COVID-19 deaths, the number of residents with new influenza, and the number of residents with acute respiratory illness symptoms excluding COVID-19 and/or influenza. Qualitative interviews of ECHO participants were conducted at 6 and 12 months. These interviews will provide informative insights into the implementation science behind the COVID-19 Project ECHO program. Formal results from these interviews will be presented elsewhere; however, representative quotes are described in the discussion. Data Analysis: We performed cluster-level analyses using the aggregated outcomes (e.g., infection, hospitalization) originating from the CMS weekly nursing home level data (confirmed COVID cases per 1000 residents, admissions per 1000 residents, deaths per 1000 residents, and influenza cases per 1000 residents). We computed 4-week totals (incidence) for the weeks preceding each study time point, coinciding with baseline, month 4, month 6, month 12, and month 18. Given that all of the outcomes were continuous but skewed in distribution, comparisons of all of these outcomes were made at each study time point between the two study groups using a Wilcoxon rank-sum test.
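Concretely, this cluster-level comparison pairs a per-1,000-resident normalization (described in the next section) with a two-sided Wilcoxon rank-sum test at each time point. The following Python sketch, using scipy and entirely hypothetical facility counts, illustrates the computation; it is not the study's analysis code.

```python
import numpy as np
from scipy.stats import ranksums

# Hypothetical 4-week COVID-19 case totals per facility at one time point;
# the real values came from the CMS Nursing Home COVID-19 Public File.
cases_echo = np.array([3, 0, 1, 7, 2, 0, 4])
cases_echo_plus = np.array([2, 1, 0, 5, 3, 1, 6])
beds_echo = np.array([80, 45, 120, 60, 90, 55, 100])
beds_echo_plus = np.array([75, 50, 110, 65, 95, 60, 105])

# Normalize to cases per 1,000 residents using occupied beds.
rate_echo = cases_echo / beds_echo * 1000
rate_echo_plus = cases_echo_plus / beds_echo_plus * 1000

# Two-sided Wilcoxon rank-sum test comparing the skewed, cluster-level rates.
stat, p = ranksums(rate_echo, rate_echo_plus)
print(f"W = {stat:.2f}, p = {p:.3f}")
```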
Our primary outcome was the number of nursing home residents with COVID-19 infection as reported in the Nursing Home COVID-19 Public File, self-reported by each facility and listed by week ending (with a 2-week lag in upload). Data periods for this study are defined below ( ). To determine baseline, we took cumulative data for the 4-week period prior to the ECHO cohort start date. To determine the 4-month time point, we took cumulative data for the 4-week period that coincided with 4 months post ECHO cohort start date, the conclusion of Phase 1 (16 weeks). To determine the 6-month time point, we took cumulative data for the 4-week period that coincided with 6 months post ECHO cohort start date, the conclusion of Phase 2 (25 weeks). To determine the 12-month time point, we took cumulative data for the 4-week period beginning 12 months post-baseline for our original cohorts. To determine the 18-month time point, we took cumulative data for the 4-week period beginning 18 months post-baseline for our original cohorts. To normalize data, variables that were not calculated per 1000 residents were divided by the total number of occupied beds in the facility, then multiplied by 1000. These are notated in the tables below. In total, 290 nursing homes expressed interest in our study and were assessed for eligibility. Of these, 154 facilities were excluded for reasons such as not meeting inclusion criteria and declining to participate. A total of 136 nursing homes were randomized, with 68 allocated to ECHO and 68 allocated to ECHO+ ( ). Following the intention-to-treat principle, a total of 136 facilities were included in the final analysis. Summary statistics and distributions for demographic and characteristic variables are included in . Covariates included in this study were relatively evenly distributed between the study arms, with three exceptions. There was an approximate 10% difference in facility payment model (51.41% for-profit in ECHO vs. 61.76% for-profit in ECHO+; 47.06% not-for-profit in ECHO vs. 38.24% not-for-profit in ECHO+), facility type (51.47% independent in ECHO vs. 42.65% independent in ECHO+; 48.53% networked in ECHO vs. 57.35% networked in ECHO+), and memory care status (16.18% no in ECHO vs. 25.00% no in ECHO+; 83.82% yes in ECHO vs. 75.00% yes in ECHO+) observed between the ECHO and ECHO+ study arms. None of these differences was statistically significant. Generally, attendance across both Phase 1 and Phase 2 of the COVID-19 Project ECHO intervention decreased over time. Average facility attendance can be found in . There was a stark decline in facilities attending Phase 2 sessions. Primary aim results can be found in . COVID-19 infections decreased over time from baseline to 6 months, then increased again at the 12- and 18-month timepoints. We did not complete a subgroup analysis as there were no observed differences across study arms, sample sizes would have been greatly reduced, and results would not have been informative. Missing data were minimal, due to the initial reporting requirements for nursing homes set by CMS. Hospitalizations and COVID-19 deaths decreased over time from baseline to 6 months but then increased slightly at 12 months. Influenza rates remained low throughout the study period. Infection rate, hospitalizations, and deaths ( ) were all lower at 18 months compared to baseline. A significant difference was observed in total COVID-19 deaths per 1000 residents in 4 weeks, with deaths higher in the ECHO+ study arm.
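As an illustration of the per-1,000-resident normalization and 4-week totals described above, here is a small sketch; it is a toy example, and the field names only loosely mirror the CMS Nursing Home COVID-19 Public File.

```python
import pandas as pd

# Hypothetical weekly reports for one facility over a 4-week window.
weekly = pd.DataFrame({
    "facility_id": ["A"] * 4,
    "week_ending": pd.date_range("2021-01-03", periods=4, freq="7D"),
    "confirmed_cases": [3, 1, 0, 2],
    "occupied_beds": [98, 97, 99, 98],
})

# Counts not already expressed per 1000 residents are divided by occupied
# beds and multiplied by 1000, as described in the text.
weekly["cases_per_1000"] = weekly["confirmed_cases"] / weekly["occupied_beds"] * 1000

# 4-week total (incidence) for the window preceding a study time point.
four_week_total = weekly.groupby("facility_id")["cases_per_1000"].sum()
print(four_week_total)
```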
The intervention delivered to both groups was similar through the 6-month time point, and there were no observed differences at this time point. During September-November 2021, we delivered a fall refresher series with a focus on influenza and late-breaking COVID-19 topics to the ECHO+ group. At 12 months, there was still no significant difference between ECHO and ECHO+ for infection rate, hospitalization, death, and influenza rate. Overall, there were no observed differences between the ECHO and ECHO+ study arms in the primary outcome: the total number of confirmed COVID-19 cases per 1,000 residents in four weeks across study time points. We found that the COVID-19 infection rate decreased from baseline to 6 months but then increased again at 12 and 18 months, with hospitalizations and deaths mirroring this trend. However, these findings were complicated by the evolving epidemiology of the pandemic and the rollout of COVID-19 vaccines. The CDC's Advisory Committee on Immunization Practices (ACIP) recommended that those living in long-term care facilities, including nursing homes, be offered the COVID-19 vaccine first in the initial phases of the COVID-19 vaccination program, along with healthcare professionals and non-institutionalized older adults. Nursing home residents and staff began receiving the COVID-19 vaccine following this recommendation, in collaboration with the Pharmacy Partnership for Long-Term Care Program, in January 2021. This vaccination effort contributed to a dramatic and significant decline in morbidity and mortality from COVID-19 and likely explains the declines in cases and hospitalizations seen at the 4- and 6-month ECHO timepoints. Concerns regarding the Omicron variant began in late 2021, with the first case of the Omicron variant detected in the U.S. in December 2021. Omicron and its subvariants are known to circumvent vaccine-derived immunity. Therefore, the observed increase in cases at the 12- and 18-month study time points aligns with the subvariant surges and loss of vaccine protection against infection. It is unsurprising that there were minimal observed differences in infection rate, hospitalizations, and deaths between the study arms through the study period, as vaccination rates were similar between the two ECHO cohorts. Only one study time point, at 18 months, had a statistically significant difference in total COVID-19 deaths, with deaths observed to be higher in the ECHO+ group. Rates of flu-like illness remained low throughout the entire study period, and there was no observed difference between the ECHO and ECHO+ study arms in the number of total confirmed cases of new influenza per 1,000 residents in four weeks. This is in line with national trends of influenza virus circulation throughout the COVID-19 pandemic, in which influenza-like illness (ILI) rates were unusually low. This can be attributed to the widespread preventative measures taken across the U.S., including social distancing, mask-wearing, handwashing, reduced international travel, closure of schools, and more.
In fact, when observing the COVID-19 timeline, one study conducted by the CDC found that "influenza virus circulation declined sharply within two weeks of the COVID-19 emergency declaration and widespread implementation of community mitigation strategies." Further, COVID-19 risk perception is associated with a decrease in ILI, and given that those residing in nursing homes have one of the highest risks of adverse COVID-related outcomes, it would be appropriate to surmise that this population also had a significant reduction in ILI. Attendance through the COVID-19 Project ECHO program was suboptimal. We observed a high number of facilities attending early sessions, especially those that occurred in Phase 1. These sessions were held between December 2020 and May 2021, demonstrating curriculum delivery during a time of high need. This is in comparison to Phase 2 for both ECHO and ECHO+ cohorts, which was held through late summer 2021. We hypothesize that participation declined because the later sessions occurred at a time when facilities were better prepared to handle the COVID-19 pandemic, compounded by the COVID-19 information overload observed throughout the pandemic, defined as an excessive amount of information dissemination that surpasses one's information processing capacity. It is likely that decreasing participation in Phase 2 contributed to the lack of difference found between the ECHO and ECHO+ study arms, as this is where the key differentiators of the COVID-19 Project ECHO program occurred. Throughout the COVID-19 pandemic, there was an almost insurmountable gap between rhetoric and reality in education for long-term care professionals, contributing to the translational gap. Although public health professionals consistently emphasized the importance of practicing evidence-based infection control strategies in nursing homes, there were limited interventions to actually provide long-term care professionals with the tools and knowledge for implementation. Information insufficiency is defined as the gap between what one needs to know about a given topic and one's current knowledge, or what one actually knows. During the COVID-19 pandemic, information insufficiency occurred for various reasons, including confusing and unclear guidance and ever-changing knowledge about the virus, which directly resulted in policy changes. Regulatory guidance from local, state, and federal agencies was reported by nursing home administrators and staff to be confusing and often contradictory. Several studies on the lived experiences of nursing home staff highlight that challenges in effective and reliable communication were explicitly identified as a factor that hindered the implementation of an appropriate infection control response. The ECHO model has the ability to directly address this gap between communication and practice. The Project ECHO platform promptly positioned itself at the forefront of the COVID-19 crisis and was utilized in this project to be a resource for nursing home administrators and staff, and to increase knowledge, confidence, and self-efficacy in translating evidence into best-practice care. While the majority of the study's primary and secondary outcomes had null results, the impact the intervention had on participants spans beyond our proposed outcomes.
The qualitative interviews conducted throughout the study period emphasized that the learning community created through the Project ECHO program provided a space for facilities to confirm that they were implementing evidence-based practices. One representative quote from a participant stated, " [Project ECHO] was a good opportunity to confirm the steps we had taken were in line with best practice ." Further, another participant described Project ECHO as a preferable format for reviewing information, stating, " I know some colleagues were basically taking [state guidelines] or directives and just giving them to staff. Well, something that's 12 pages long with a bunch of legal verbiage is not something that is going to be digested… ECHO was just more user-friendly for the staff and easier to implement." Another participant described that " information that we would get from the Department of Health or our local health department, they seem to be all at odds with each other. The information we received from ECHO was consistent. " In addition to its ability to provide educational support, Project ECHO provided socializing opportunities for participating staff that may have positive emotional benefits. Long-term care professionals at the frontlines of caring for our society's most vulnerable and frail population were grappling with the heightened responsibilities and professional workload introduced by COVID-19, resulting in additional work-related strain. The pandemic introduced a mental health crisis in the long-term care setting, with elevated levels of burnout among staff and administrators. The Project ECHO program was ultimately a support network for these facilities and their staff, directly impacting feelings of burnout and loneliness. One participant described that the program " made it feel like we weren't on an island by ourselves" and that "it made you feel like you weren't alone." We believe the study had several strengths, including generalizability. First, the broad inclusion criteria and the representativeness of nursing home facilities across multiple states suggest the program may apply in multiple settings. Second, the national presence of the ECHO program, highlighted by the AHRQ program, provides an opportunity for broad intervention utilization. Lastly, the use of nationally available data, both through CMS' COVID-19 data and the MDS, provides an efficient data source for the analysis of wide-scale studies. The major influential limitation of this study was the minimal difference between the study arms due to contractual requirements, which resulted in both ECHO and ECHO+ participants receiving similar interventions in Phase 1. This was a result of the launch of the national AHRQ program shortly following study funding and a concern about offering a different comparator. Another issue outside the study's control was the unpredictable rise and fall of COVID-19 infection rates over the course of the pandemic. These fluctuations made it difficult to discern the intervention's impact, if any. Further, the stress placed upon nursing home facilities during the pandemic in many cases limited their ability to engage actively in the study. Research requiring engagement remained a challenge due to facility workload, particularly related to additional burdens due to the pandemic, staff turnover, and staff burnout. Active engagement from our representative stakeholder group was helpful in developing strategies for participant engagement.
The short study period further limits our ability to detect differences in longer-term outcomes. Additionally, we were unable to prohibit participation in external COVID-19-related interventions or to determine whether such participation may have impacted our findings. In conclusion, Project ECHO provides an innovative approach to addressing the gap between healthcare guidelines and implementation. Understanding how to operationalize best practices through interaction with subject matter experts and a peer network is a unique opportunity to address leading public health challenges, like the COVID-19 pandemic. The successful adaptation of the ECHO intervention to the nursing home facility setting suggests opportunities for use across a variety of healthcare conditions and topics and may be a future pathway to overcoming the translational gap. This work was partially supported by a Patient-Centered Outcomes Research Institute® (PCORI®) Award (COVID-2020C2-10728) and partially supported by the National Center for Advancing Translational Sciences, National Institutes of Health (NIH) [Grant numbers UL1 TR002014, UL1 TR00045]. The Project ECHO intervention used in this research was partially supported by Contract No. 75Q80120C00003 from the Agency for Healthcare Research and Quality (AHRQ), U.S. Department of Health and Human Services (HHS). The content is solely the responsibility of the authors and does not necessarily represent the official views of Patient-Centered Outcomes Research Institute® (PCORI®), its Board of Governors or Methodology Committee, AHRQ, HHS, or the NIH. None
Time of Clinic Appointment and Serious Illness Communication in Oncology
ccbf4fa0-4cd4-4cd3-a855-d57edace01c4
10126780
Internal Medicine[mh]
Early serious illness communication in oncology increases goal-concordant care, decreases clinician and caregiver moral distress, and increases hospice use near the end of life. Serious illness conversations (SIC) are discussions between clinicians and patients about the progression of an advanced health condition that could adversely affect the quality of life of the patient or their caregivers. These conversations typically explore the patient's knowledge of their illness and prognosis, deliver new prognostic information, and identify next steps of medical care consistent with the patient's goals and priorities. Organizations including the National Coalition for Hospice and Palliative Care and the National Comprehensive Cancer Network recommend that oncologists initiate serious illness discussions for patients early in the cancer care continuum. In addition, a majority of surveyed patients and caregivers in oncology prefer earlier and more in-depth goals-of-care conversations. However, serious illness communication generally does not occur until approximately 1 month before death, if at all. Time pressures and decision fatigue during a busy clinic day may be contributing reasons that prevent clinicians from engaging in necessary conversations. Aspects of care that are not immediately urgent may be omitted when clinicians fall behind in their schedule throughout the day. Furthermore, as the day progresses, physicians experience a progressive inability to continue making difficult decisions after having made many already, referred to as decision fatigue. Decision fatigue is well characterized in primary care settings. For example, studies have reported that time of day is associated with lower rates of flu vaccinations, cancer screening referrals, and statin prescriptions, as well as greater unwarranted antibiotic and opioid orders by primary care physicians. However, decision fatigue in serious illness communication has not been described. Given prior evidence of suboptimal clinician decision-making in non-oncology settings in latter parts of a clinic day or session, we investigated the association between appointment time and the likelihood of serious illness conversations. Our work expands knowledge on the frequency and nature of serious illness conversations, and informs interventions that promote proactive communication and goal-concordant cancer care delivery. This work presents a secondary post-hoc analysis of a randomized clinical trial (NLM, NCT03984773) among 75 clinicians and 17 696 patients with cancer. Results from the trial were previously published. The trial was conducted between July 2019 and April 2020 and investigated the use of machine-generated mortality predictions and behavioral nudges to clinicians to promote serious illness conversations. The trial protocol and overall study were granted approval by The University of Pennsylvania Institutional Review Board with a waiver of written informed consent. We merged billing and institutional electronic health record (EHR) data from Clarity, an EPIC® reporting database, to identify a cohort of medical oncology encounters from 1 of 9 medical oncology clinics (8 disease-specific clinics within a tertiary practice, 1 general oncology clinic) within a large academic healthcare system. We studied return patient encounters with a clinician (medical oncology physician, nurse practitioner, or physician assistant) from June 17, 2019 to April 17, 2020.
We excluded new patient encounters, encounters with clinicians with <40 total appointments during the study period, and encounters after the first documented SIC within the study period (Supplementary Figure S1). While SIC conversations were not audio recorded, SIC documentation was used as a surrogate for serious illness communication, as SIC documentation is a quality metric used by many organizations, including the American Society for Clinical Oncology's Quality Oncology Practice Initiative and the Centers for Medicare and Medicaid Services Oncology Care Model, and has been used to define SIC conversations in prior work. We ascertained the presence of a SIC from either (1) a specific SIC note type in the EHR, or (2) an SIC smart phrase in clinical progress notes. A smart phrase is a pre-built template or shortcut for entering commonly used phrases, sentences, or paragraphs into a patient's EHR record. Smart phrases are designed to save time and increase efficiency in documenting patient encounters. The ACP smart phrase in the EHR is customized to pull a pre-defined SIC template, created by Ariadne Labs, into the note. Appointment times between 8am and 4pm were separated by the hour. For example, all appointments between 8am and 8:59am were assigned to 8am. Visits before 8am and after 4pm were grouped with the 8am and 4pm timepoints, respectively. Oncology clinicians (ie medical oncology physician, nurse practitioner, or physician assistant) in eligible clinics practiced in either a morning (8am to 11am) or afternoon (12pm to 4pm) session and could alternate between morning and afternoon sessions on different days. Time was indicated by grouping appointment times in the order they occurred in a session (eg 8am and 12pm were grouped as hour 1). Advanced practice providers (APPs), including physician assistants and nurse practitioners, were consistently assigned to oncology physicians, with a ratio of oncologists to APPs ranging from 1:1 to 2:1. We used the generalized estimating equation (GEE) approach, clustering by individual clinician, to estimate the probability of SIC documentation. Session hour (1-5) was included as a categorical variable to calculate the relative odds of documentation for each hour of a session after the first, and as a continuous variable for assessing the overall linear time trend. We adjusted for patient age, race, ethnicity, gender, insurance, tumor type and stage, Charlson comorbidity count, and appointment month/year. As the study period coincided with a quality improvement effort to prompt conversations among patients at risk of short-term mortality, we additionally adjusted for whether a patient's clinician received a conversation prompt for a specific encounter. To evaluate potential bias from the morning to afternoon session transition, we performed a sensitivity analysis using a restricted sample that excluded encounters from 12pm. Our final model includes 11 independent variables and is fit on 55 367 observations, which exceeds the minimum recommended sample size for an observational multivariable regression analysis of at least 650 (n = 100 + 50i, with i = 11 independent variables). Two-sided Wald tests were used to test all hypotheses, with P < .05 indicating statistical significance. Analyses were performed between December 2021 and February 2023 using R, version 4.0.3. The reporting of the study conforms to STROBE guidelines. The sample consisted of 75 oncology clinicians and 55 367 encounters with 17 696 patients.
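To illustrate the modeling approach just described: the paper's analyses were performed in R, so the following is a hedged Python sketch with synthetic data and hypothetical column names, not the trial's actual code.

```python
# Sketch of a GEE logistic model of SIC documentation, clustered by clinician,
# with session hour as a categorical predictor. Synthetic toy data only.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "sic": rng.binomial(1, 0.02, n),        # SIC documented (0/1)
    "session_hour": rng.integers(1, 6, n),  # hour 1-5 within a session
    "age": rng.normal(62, 14, n),           # one of several covariates
    "clinician": rng.integers(1, 76, n),    # clustering unit
})

model = smf.gee(
    "sic ~ C(session_hour) + age",          # hour as categorical; add covariates as needed
    groups="clinician",                     # robust SEs account for clustering
    data=df,
    family=sm.families.Binomial(),
    cov_struct=sm.cov_struct.Exchangeable(),
)
result = model.fit()

# Exponentiate coefficients to report adjusted odds ratios with 95% CIs.
print(np.exp(result.params))
print(np.exp(result.conf_int()))
```

For the linear trend test, session_hour would instead enter the formula as a continuous term.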
Patients' mean (SD) age was 61.9 (14.0) years; 53.6% were female and 70.6% were non-Hispanic White. Compared to patients without SICs, patients with SICs were more likely to have gastrointestinal (14.1% vs 26.8%) and thoracic (12.6% vs 24.7%) malignancies and to carry Medicare insurance (47.0% vs 57.4%); other demographic characteristics were similar . Unadjusted SIC documentation rates in the morning and afternoon sessions are displayed in . The documentation rate decreased from 2.1% to 1.5% over the morning clinic session (8am-12pm) and from 1.2% to .9% over the afternoon clinic session (1pm-4pm). Adjusted odds ratios (ORs) for SIC documentation rates were significantly lower for all hours of each session after the earliest hour (adjusted OR .91 [95% CI, .84-.97], P = .006 for overall linear trend) . Results were consistent in the sensitivity analysis (adjusted OR .90 [95% CI, .83-.97], P = .007 for overall linear trend). Oncology clinicians' likelihood of having and documenting serious illness conversations decreases as a clinic session progresses. Falling behind schedule and decision fatigue could be contributing reasons for this effect. Serious illness conversations involve coordinated efforts between patients, their healthcare team, and their families to discuss disease trajectory and goals of care. These conversations can require significant time commitments and can be hard to prioritize if delays interfere with clinician schedules. Furthermore, SICs in oncology often cover emotionally charged topics, like unfavorable treatment outcomes, which can discourage clinicians from engaging in such hard conversations towards the end of a clinic session as they experience decision fatigue. Ultimately, lower rates of discussions about goals of care later in a session could result in more aggressive treatment regimens and ICU admissions near end-of-life, as well as fewer hospice referrals. This work expands the literature on how time of day affects clinician decision-making, and on mechanisms for delayed serious illness conversations in oncology, which may explain the paucity of similar discussions in adjacent fields. We acknowledge some methodological limitations that should be considered in the interpretation of our results. While we only examined return visits and adjusted for important metrics of patient severity, we could not account for unmeasured confounders, such as patient and family wishes, in this observational study. However, by clustering at the level of the oncologist, we accounted for clinician-specific variation in scheduling and patient risk. Furthermore, the cohort in this observational study constitutes clinicians and patients from a single academic institution where the clinicians had been trained a priori on specific SIC EHR documentation practices, which may restrict the generalizability of our results. We also acknowledge that physicians and patients may have had serious illness conversations that were not documented, or were not documented using the serious illness conversation template in the EHR, so the frequency of these conversations may be underreported. Finally, a low baseline rate of SICs could limit the conclusions drawn from the study. However, our findings were similar to rates measured in prior studies (1.9%-4.9%). The low rate could be explained by the inclusion of all patients, not just decedents, in our analyses, and by the nature of SICs, which are more in-depth and can take longer than traditional advance care planning conversations.
Efforts to improve the quality of care should recognize the time pressure on patients and physicians, the effects of behavioral interventions, and the time costs of improving patient-physician communication. Several straightforward practice changes could address these time pressures. Proactive scheduling of high-risk patients earlier in a clinic session, or scheduling separate visits for serious illness communication, could facilitate necessary conversations and should be further studied. Alternatively, democratizing serious illness communication to other members of the health care team, including lay health workers, may offload clinicians who are under time pressures from potentially low-quality or missed serious illness communication. Future work should study the downstream effects of time-based decisions for serious illness conversations on end-of-life outcomes (ie chemotherapy treatments in the last 14 days of life, ICU admissions in the last 30 days, and late or non-referrals to hospice). In conclusion, oncologist-patient serious illness communication decreases considerably through the clinic day, reflecting potential time pressures and decision fatigue that warrant proactive strategies to avoid missed conversations. Supplemental material for this article (Time of Clinic Appointment and Serious Illness Communication in Oncology, by Likhitha Kolla, Jinbo Chen and Ravi B. Parikh, Cancer Control) is available online.
Catering of high-risk foods and potential of stored food menu data for timely outbreak investigations in healthcare facilities, Italy and Germany
7c336012-4c87-48b5-bd43-458ab4e7cd9c
10126884
Microbiology[mh]
In 2020, a total of 3086 foodborne outbreaks, including 20 017 cases and 37 deaths, were reported by the European Union member states to the European Food Safety Authority. Among foodborne outbreaks, healthcare-associated foodborne outbreaks (HA-FBOs) are of public health concern. Indeed, a literature review on HA-FBOs in 37 member countries of the Organisation for Economic Cooperation and Development retrieved 85 outbreaks occurring between 2001 and 2018, which were mainly associated with the consumption of food contaminated with Salmonella (24 outbreaks), norovirus (22 outbreaks) and Listeria (19 outbreaks). HA-FBOs result from the consumption of contaminated food served in healthcare facilities (HCFs) and represent a risk, especially for vulnerable patients (children up to the age of 5 years, elderly people, and pregnant or immunosuppressed individuals). HA-FBOs are likely underreported due to the lack of systematic surveillance of foodborne outbreaks in HCFs, since food is rarely considered a potential vehicle for healthcare-associated outbreaks, as compared to other routes of transmission. Moreover, outbreaks with relatively low numbers of cases distributed across different HCFs or protracted outbreaks (e.g. listeriosis outbreaks) can only be detected by systematic surveillance and by routine whole-genome sequencing. Examples of these include a listeriosis outbreak in Germany involving 13 cases associated with the consumption of meat in HCFs, a listeriosis HA-FBO in the UK with nine cases in different HCFs linked to the consumption of contaminated sandwiches provided by a common supplier, and a listeriosis HA-FBO in Italy among cancer and immunocompromised patients likely due to a contaminated meat slicer in the hospital kitchen. Other HA-FBOs among vulnerable patients were reported and associated with food considered high-risk for vulnerable patients in HCFs, such as (raw) pork products, oysters or (uncooked) frozen berries. Usually, in outbreak investigations, food menu data are obtained through patient interviews, which are often resource-intensive. Furthermore, this approach is subject to inaccurate patient recall of previously consumed meals, especially in outbreaks caused by pathogens with a long incubation time like Listeria monocytogenes and hepatitis A virus, possibly leading to failures in investigations or in the implication of the contaminated food product. In community settings, consumer purchase data (e.g. credit card data on food purchases) have been successfully used as an alternative or complementary data source to support outbreak investigations. Likewise, in an outbreak of listeriosis in different hospitals in Australia in 2013, an electronic menu database (which records all hospital food menu items ordered by patients during their admissions) allowed investigators to rapidly identify potential food sources. In Italy, a national guideline for food catering in hospitals, HCFs and schools was published by the Ministry of Health in 2021, which addresses various aspects of food service, including food safety. The need for food business operators to store food menu data or to keep reference samples of food is not explicitly mentioned in the guideline above. However, this is frequently requested from food business operators in the food service contracts by the institutions that manage and control the service, such as the hospitals and the local health units.
In Germany, the food safety sector provides recommendations to minimise the risk of contaminated food in hospital kitchens, as well as food safety recommendations for communal facilities with a focus on vulnerable groups. Furthermore, the public health and hospital infection control sectors publish hygiene requirements for handling food in HCFs and for immunosuppressed patients. Although there is no legal obligation for caterers in HCFs to store reference samples, except in the case of monitoring of zoonoses and zoonotic pathogens, there are recommendations in Germany to do so. In this study, we investigated the data availability, accessibility and usability of food menu data in Italian and German HCFs; we wanted to identify possible gaps and provide recommendations to better identify food vehicles associated with HA-FBOs. Study setting We conducted a survey among HCFs, jointly in Italy and in Germany, between June and November 2019, as well as in February 2021. The survey was addressed to the direction of the HCFs and completed by hospital hygienists, kitchen managers, caterers or dieticians in charge of managing the food menus for the patients. In Italy, a convenience sample consisted of 22 HCFs; 14 HCFs were selected by the Istituto Superiore di Sanità (ISS) and eight HCFs were selected by the Istituto Zooprofilattico Sperimentale dell'Abruzzo e del Molise (IZSAM). The first 14 HCFs consisted mainly of second-level paediatric hospitals (including special and reference clinical units) all over Italy. These hospitals were selected due to the vulnerability of patients to foodborne infections, and to investigate special aspects of hospital catering policies and food consumption in a specific patient population such as paediatrics. In addition, a few general hospitals were selected among the hospitals interested in establishing an active surveillance for hospital-acquired viral infections. The latter eight HCFs were selected by the IZSAM because of existing contacts and consisted of secondary hospitals and nursing homes in the Abruzzo and Molise regions. In Germany, a convenience sample consisted of 13 HCFs: six HCFs that had previously participated in a project on healthcare-associated infections in long-term care facilities and seven HCFs (primary, tertiary and specialised care hospitals) that had been collaborating in projects and studies with the Robert Koch Institute. Data collection and questionnaire In Italy, the same questionnaire was administered through different approaches: self-administration of a semi-structured online questionnaire using LimeSurvey for paediatric hospitals, and face-to-face interviews using a semi-structured static (paper) questionnaire administered with the participating HCFs in the Abruzzo and Molise regions. In Germany, the questionnaire was designed as an Adobe Acrobat form to facilitate self-administration by the participating HCFs.
The core questionnaire covered information on catering service management (in-house catering: all food preparation steps from raw ingredients to the final food served to patients undertaken within the facility; external catering: all the services, preparation and cooking steps undertaken by an external company; mixed catering: in-house catering combined with external service providers), format and storage duration of food menu data, availability of food menu data for each patient, history of food menu data by the HCF in relation to a suspected foodborne outbreak, and information on whether the HCF provided known high-risk foods, such as deli salads, raw/fermented sausage products, soft cheese, smoked fish and frozen berries. We did not include questions related to food preparation, storage conditions or overall hygiene practices, as we did not have indicators to objectify the answers. Questions on reference food samples were added only in the questionnaire in Germany, as storage of reference food samples in HCFs is recommended in parts of Germany. Data analysis Questionnaire data were only analysed descriptively (frequency and proportions). In total, 35 HCFs (22 in Italy and 13 in Germany) participated in our survey, including 26 hospitals (19 in Italy and seven in Germany) and nine nursing homes (three in Italy and six in Germany) of various sizes, according to bed capacity ( ). Catering systems included in-house, external and mixed catering. Catering activities (mixed and external catering) were mainly outsourced by Italian hospitals (18/19 hospitals), whereas in Germany, in-house catering was more often reported in hospitals (3/7) and in nursing homes (5/6) compared to the Italian hospitals (1/19) and nursing homes (1/3). The majority (17/19) of hospitals in Italy reported that a direct link between the food menu data and individual patients (i.e. documentation of patient-specific food menu choices) could be established, in contrast to half of the participating hospitals in Germany. In nursing homes, the direct link of food menu data to individual nursing home residents was uncommon both in Italy (1/3) and in Germany (1/6). Heterogeneity also existed in food menu data formats, which ranged from paper and electronic files (PDF, Word, Excel) to fully searchable electronic databases (e.g. as part of commercial software used for catering management). Electronic databases were available for most of the Italian hospitals (15/19), in contrast to the German hospitals (3/7). No electronic databases were used by the nursing homes of our study. The storage duration of menu data differed considerably between HCFs, ranging from no storage up to 10 years. We asked the German HCFs whether they collected information on who ordered but did not eat an ordered meal. This question was only answered by nine HCFs (four hospitals and five nursing homes); of these nine, only three nursing homes collected this information. High-risk foods were offered on the menu in 3/8 Italian HCFs from the Abruzzo and Molise regions, as well as in all German HCFs ( ). One hospital in Germany, in which a previous HA-FBO had occurred due to the consumption of spreadable raw fermented sausage (German Teewurst), no longer offered this food on its menu, whereas other potentially high-risk foods, such as soft cheese and smoked fish, were still offered to patients. In Germany, reference food samples from the lunch meals were taken by 11/13 HCFs. The storage time reported for these reference samples ranged from less than 7 to more than 20 days.
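As a minimal illustration of the descriptive analysis described in the methods (frequencies and proportions), the following sketch uses a toy stand-in for the survey responses; the data and column names are hypothetical.

```python
import pandas as pd

# Toy survey records: one row per participating HCF.
survey = pd.DataFrame({
    "country": ["Italy"] * 4 + ["Germany"] * 4,
    "catering": ["external", "mixed", "external", "in-house",
                 "in-house", "in-house", "mixed", "external"],
})

# Frequencies and row-wise proportions of catering type by country.
counts = pd.crosstab(survey["country"], survey["catering"])
proportions = pd.crosstab(survey["country"], survey["catering"], normalize="index")
print(counts)
print(proportions.round(2))
```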
Food menu data in HCFs The survey highlighted that the availability of patient-linked food menu data, data formats (paper, electronic data/PDF, searchable databases) and data storage duration were highly heterogeneous between the investigated HCFs and the two countries. In particular, it was not possible to link food menu items to individual patients in about half of the participating German HCFs and 14% of the Italian HCFs. We found that food menu data may not currently be analysed to support outbreak investigations in a large number of HCFs in Germany, as well as in some HCFs in Italy. A single HCF in Italy reported that food menu data were previously used in a suspected foodborne outbreak investigation; analysing these food menu data, the researchers concluded that the norovirus was likely transmitted human-to-human, and not foodborne. One HA-FBO reported by a German hospital was associated with the consumption of spreadable raw fermented sausage (German Teewurst) contaminated with Salmonella Derby, affecting very old patients in hospitals or elderly care homes. In this outbreak, food menu data were not available, as the spread was typically offered with other options, buffet style, with bread for breakfast and cold dinner. To increase their usability for outbreak purposes, food menu data should be documented for all offered meals (e.g. breakfast, lunch and dinner) and be unequivocally linked to individual patients or nursing home residents. Despite the lack of a legal obligation to store food menu data, a minimum duration of storage of menu data (at least 1 year) would be crucial for the investigation of protracted outbreaks, such as listeriosis outbreaks, or of food with a long shelf-life, such as frozen food. For instance, in a listeriosis HA-FBO in Germany in 2019, electronic food menu data were insufficiently available and not patient-specific enough to support the analysis of detailed food exposure data. Furthermore, specific information about 'consumed meals' instead of only 'ordered meals' would be beneficial. For data that are already collected in digital format, storage costs have fallen considerably in recent years. Further digitisation of hospital services, including IT solutions that allow faster and more differentiated data on patient meal requests to reach the kitchen, may be expected. The digitisation and collection of additional data will result in additional costs, including those for human resources. The cost–benefit of collecting and digitising food menu data in HCFs should be evaluated, since usage and analysis of these data may have shared benefits for different healthcare professionals such as dieticians, caterers and infectious diseases specialists, as well as for increasing patient satisfaction (subjective rating of hospital food services quality). Concerning the storage of food reference samples, it would make sense to collect samples of all meals (breakfast, lunch and dinner). However, for protracted outbreaks, the storage of reference samples may be impractical due to limited storage capacities. Safe food in HCFs The current survey suggests that despite existing food safety recommendations, patients and nursing home residents are exposed to food considered to be of high risk for HA-FBO among vulnerable patients.
Further research is needed to identify whether the presence of such food items on the menu is related to a lack of knowledge of food safety recommendations and/or reflects a demand by the patients and nursing home residents, as well as to assess whether these foods are also effectively offered to specific vulnerable patient groups (e.g. immunocompromised) in HCFs. Previous food monitoring among 1880 HCFs by the German Food Safety Authorities in 2017 also highlighted a lack of knowledge of recommendations about high-risk food by 45% of the participating HCFs. In the current study, a nursing home in Germany indicated, as a reason for not participating in the survey, that 'food poisoning is not an issue' in their HCF, highlighting the need to both increase awareness about the risk of HA-FBOs and to strengthen food hygiene recommendations among staff and food business operators in HCFs. Limitations The main limitation of our study is that a small convenience sample of German and Italian HCFs was used, and that the distribution of HCF types and sizes may not be representative. It should be noted that the answers to the questionnaire were self-reported by the respondents of the HCFs. A larger representative follow-up survey is needed to achieve more explanatory power, also regarding differences between and within nursing homes and hospitals of different sizes, organisational structures and healthcare levels. To demonstrate the use of food menu data of HCFs for outbreak investigations, we would need to compare HCFs with linked patient–food menu data to HCFs without linked patient–food menu data in outbreak situations. As hospital-acquired infections are not that frequent, simulations may be the first step. We aimed to explore the availability, accessibility and usability of food menu data in HCFs to support the identification of food vehicles associated with HA-FBOs. We found that analysing food menu data to support outbreak investigations is challenging in Italy and in Germany due to incomplete documentation.
As the survey suggests knowledge gaps on existing food safety recommendations in HCFs, we recommend further training to increase compliance with recommendations. In Italy, the results of this study were discussed with the healthcare professionals who participated in the survey in an online workshop, also as a measure to reduce knowledge gaps. It may be worthwhile to explore whether electronic bedside meal ordering systems, which already have the potential to improve dietary intake and patient satisfaction, may also provide good opportunities to store patient food menu data. Hopefully, the digitisation opportunities that arose during the COVID-19 pandemic will also be used to accelerate developments towards further digitisation of food menu data in HCFs. This will be a prerequisite to better assess the burden of contaminated food items in HA-FBOs.
Characterization of the Rhizosphere Bacterial Microbiome and Coffee Bean Fermentation in the Castillo-Tambo and Bourbon Varieties in the Popayán-Colombia Plateau
35b3f9cb-f5f2-4143-a134-14ba4c7a0316
10127060
Microbiology[mh]
Coffee is a globally significant crop cultivated in over 50 countries and is the most widely consumed beverage worldwide. The active compounds in coffee have been reported to reduce the risk of pathologies such as diabetes and Parkinson's disease. Coffee consumption is estimated at approximately 2.5 billion cups per day, and coffee stands as the fifth most traded product globally. In Colombia, coffee constitutes the primary export product. Currently, two main species of coffee are cultivated: Coffea arabica, known as arabica coffee, which represents 75–80% of global production, and Coffea canephora, known as robusta coffee, which represents about 20–25% of global production and differs from arabica coffee in terms of flavor, caffeine content, and production conditions. The soil is a dynamic and complex ecosystem that is essential for the growth and development of plants. For a coffee plant to produce 100 pounds of green coffee, it must extract approximately 1.45 kg of nitrogen, 0.28 kg of phosphorus, and 1.74 kg of potassium from the soil. Soil fertility, defined as the ability of the soil to provide essential nutrients to plant roots, can sometimes be limited. However, microorganisms can help solubilize these nutrients and make them available to plants. These microorganisms are crucial in maintaining soil fertility and plant health. The bacterial community of coffee soils is diverse and can be influenced by environmental conditions, coffee varieties, and processing methods, ultimately impacting the quality of coffee beans. Besides maintaining soil fertility, soil bacteria perform essential ecosystem functions, such as nutrient cycling, organic matter decomposition, biological nitrogen fixation, and phosphorus solubilization. As a perennial crop, coffee can harbor many beneficial microorganisms in its rhizosphere, including phosphate-solubilizing and nitrogen-fixing bacteria, which can significantly supply the plant's nutritional needs. However, the specific species of rhizosphere bacteria that provide these benefits remain poorly understood. Further research is needed to reveal their biodiversity, identify the strains that can modulate rhizosphere microbial structures, and determine their contribution to coffee bean fermentation and the quality of the resulting beverage, similar to what has been reported for products such as wine or tomato. The study of rhizosphere bacteria is a challenging task because of the large number of organisms that exist in the soil. It is essential to characterize and identify these microorganisms to advance ecological studies of plant rhizospheres and coffee cultivation. Among the modern methods available, molecular sequencing is highly effective. Sequencing comprises biochemical methods and techniques that determine the order of nucleotides in a DNA molecule, specifically regions of the 16S ribosomal RNA (rRNA) gene, which can identify microorganisms present in the rhizosphere and fruit and explain their role in production processes. Given the significant role of soil bacteria in plant health and coffee production, it is essential to recognize the impact of microbial structure and its function in the fermentation process of this crop. This research aims to identify the microbial composition of the rhizosphere of the Bourbon and Castillo coffee varieties and determine their contribution to the fermentation process of washed coffee from the Popayán-Cauca plateau in Colombia.
Recognizing the microbiological correlation between soil, plant, and fermentation process can provide critical insights into the variables that influence the quality of the final product. Finally, the document is organized as follows: Sect. 2 presents a review of the relevant literature on this topic. Section 3 describes the materials and methods used to collect and analyze the experimental data. Section 4 presents the study's results, including descriptive and inferential statistical analyses. Finally, Sect. 5 summarizes the main findings of the research and the conclusions reached. The microbial diversity in soil plays a critical role in terrestrial ecosystems' nutrient cycling and decomposition processes. Microorganisms perform various biochemical processes in the soil, such as oxidation-reduction, and engage in interspecific and intraspecific interactions. In particular, for crops like coffee, studying microbial diversity is crucial because microorganisms' habitat and biochemical processes can contribute to benefits such as increased productivity and soil conservation, which are essential for the growth of products that rely on soil nutrients. Several studies have investigated DNA sequencing techniques to understand coffee production. Silva et al. examined the microbial biota associated with coffee's dry and wet processing by taking soil samples from trees across different production cycles. Their sequencing analysis revealed that bacteria and filamentous fungi were the most commonly found organisms, but their appearances varied. They concluded that the microbial flora in coffee production is much more complex and varied during the wet stage than in the dry stage. Velmourougane et al. evaluated the long-term impact of organic and conventional coffee cultivation methods using soil DNA sequencing. Their findings suggested that organic methods resulted in higher rates of macrofauna, microbial population, and diversity than the conventional system. This indicates that coffee soil cultivated under organic systems has better long-term properties than soil under conventional ones. Another study analyzed the influence of continuous cultivation on soil chemical properties and microbial communities using DNA sequencing. The sequencing results from soil samples indicated that long-term monoculture decreased soil pH and reduced soil bacterial and fungal richness. Furthermore, Veloso et al. investigated how fermentation influences the final quality of coffee, and how the interactions between soil, fruit, altitude, and slope exposure shape the microbiome of coffee plants, using DNA sequencing. Their findings suggested that environmental factors contribute to the structure of bacterial and fungal communities and can influence the growth of these organisms. Finally, despite the importance and utility of employing various methods and techniques to study the organisms present in the soil where coffee is grown, no studies have been found, as far as the authors are aware, that investigate the relationship between the microbial flora of the coffee soil and the organoleptic properties, or how these microorganisms influence the fermentation processes in the post-production of the fruit and the quality of the beverage. Therefore, more knowledge is needed on this research topic.
Estimating the diversity and richness of the bacterial community in the rhizosphere and coffee bean fermentation

A total of 200,000 sequence reads of the 16S ribosomal RNA gene were obtained, and after filtering low-quality reads, 34,500 reads were kept for analysis. Diversity among the different samples was compared using rarefaction curves at 5% dissimilarity, which showed that all samples except the coffee fermentation time-course samples reached the saturation point (Fig. ). The diversity of microorganisms during coffee fermentation was low compared to the diverse community of the coffee rhizosphere. The Simpson 1-D, Shannon-Wiener, and Chao-1 estimators were used to evaluate the bacterial community's diversity and richness. Analysis of variance indicated an effect of fermentation time and temperature on bacterial community richness (Fig. A), except for T0. Samples T24A, T12C, and T24C showed significantly lower richness values than T0. Ambient temperature altered richness at 12 h, and by 24 h this parameter had decreased significantly; likewise, coffee beans fermented at warm temperature showed decreased richness across samples. Regarding the diversity parameter (H′) (Fig. B), the T0 sample had the highest diversity compared with the ambient- and warm-temperature samples. Ambient temperature showed the lowest diversity, followed by warm temperature, indicating that temperature modulates the diversity of the bacterial community during coffee fermentation, whereas sampling time (12 vs. 24 h) did not affect diversity. Figure C showed that the ambient-temperature trends at 12 and 24 h were similar to those in the previous panels and consistent with the diversity index results.
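For readers who wish to reproduce these summary statistics, the sketch below computes the three estimators named above from a vector of OTU read counts. It is a minimal Python illustration with made-up counts, not the study's actual pipeline (which used the Past program).

```python
# Minimal sketch: Simpson 1-D, Shannon-Wiener H', and Chao-1 from OTU counts.
# The example counts are illustrative placeholders, not the study's data.
import numpy as np

def simpson_1_minus_d(counts):
    """Simpson diversity expressed as 1 - D."""
    counts = np.asarray(counts, dtype=float)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def shannon(counts):
    """Shannon-Wiener index H' (natural log)."""
    counts = np.asarray(counts, dtype=float)
    p = counts / counts.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def chao1(counts):
    """Chao-1 richness estimator: S_obs + F1^2 / (2 * F2)."""
    counts = np.asarray(counts)
    s_obs = np.sum(counts > 0)
    f1 = np.sum(counts == 1)  # singletons
    f2 = np.sum(counts == 2)  # doubletons
    if f2 == 0:
        # bias-corrected form, equivalent to F1(F1-1) / (2*(F2+1)) when F2 = 0
        return s_obs + f1 * (f1 - 1) / 2.0
    return s_obs + f1 ** 2 / (2.0 * f2)

otu_counts = [120, 40, 33, 7, 3, 2, 1, 1, 1]  # hypothetical sample
print(simpson_1_minus_d(otu_counts), shannon(otu_counts), chao1(otu_counts))
```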
Characterization of the microbial communities

The Proteobacteria phylum was the most prevalent in the bacterial community across different sampling points and temperatures, accounting for 76.2% of the observed microbial composition. Other prominent phyla included Firmicutes (4.6%), Bacteroidetes (3.1%), and Acidobacteria (4.5%), while the remaining phyla did not exceed 3% (Fig. ). Bacterial microbiome analysis during fermentation indicated changes in microbial composition with increasing fermentation time and temperature, which favored the growth of the Proteobacteria and Firmicutes phyla. Bacteroidetes and Actinobacteria were detected at the initial sampling point (T0) but disappeared with prolonged fermentation time and higher temperature. A greater diversity of phyla was observed in the rhizosphere, including Proteobacteria, Actinobacteria, Acidobacteria, Bacteroidetes, Verrucomicrobia, Gemmatimonadetes, Chloroflexi, Planctomycetes, Nitrospira, and Fusobacteria, with the last six phyla being typical of rhizospheres. The microbial composition of the rhizosphere varied between the Castillo-Tambo and Bourbon coffee varieties. Taxonomic analysis of the fermentation process revealed the prevalence of the Enterobacteriales, Rhodospirillales, and Lactobacillales orders. In contrast, the rhizosphere was dominated by the Sphingomonadales, Sphingobacteriales, Rhizobiales, Burkholderiales, Actinomycetales, Verrucomicrobiales, Acidobacteriales, Solirubrobacteriales, Acidimicrobiales, Solibacterales, Gemmatimonadales, Nitrosomonadales, Desulfuromonadales, Ktedonobacteriales, Fusobacteriales, Syntrophobacterales, and Nitrospirales orders.

Regarding the diversity of microorganisms present in the analyzed samples, several genera were identified (Fig. ), including Verrucomicrobium, Tatumella, Planctomyces, Geobacter, Pantoea, Stella, Cupriavidus, Chitinophaga, Terrimonas, Klebsiella, Pelobacter, Nitrospira, Reyranella, Phenylobacterium, Hylemonella, Aciditerrimonas, Actinoallomurus, Streptomyces, Sphingomonas, Steroidobacter, Niastella, Thermoflavimicrobium, Nitrosovibrio, Holophaga, Bacillus, Pseudomonas, Koribacter, Arthrobacter, Rhizobium, Kaistobacter, Anaeromyxobacter, Erwinia, Pedosphaera, Candidatus Solibacter, Shigella, Rhodoplanes, Bradyrhizobium, Solirubrobacter, and Mucilaginibacter. Specifically, at time T0 the genera Pantoea, Sphingomonas, and Tatumella showed the highest abundances. At T12A, the most abundant genera were Acinetobacter, Erwinia, and Tatumella; at T24A, Tatumella, Shigella, and Pantoea; at T12C, Tatumella, Pantoea, and Weissella; and at T24C, Tatumella, Gluconobacter, and Leuconostoc. The abundance of the different microbial genera varied across times. Some genera, such as Tatumella and Pantoea, were consistently abundant across multiple time points . Others, such as Leuconostoc and Gluconobacter, were more abundant at later times, suggesting that they may play a role in the later stages of coffee fermentation .

Beta diversity pattern of the rhizosphere community and fermentation

To explore beta diversity, a Principal Component Analysis (PCA) was conducted (Fig. ) to examine the relationship between variables and Operational Taxonomic Units (OTUs). Only OTUs represented by more than ten sequences were retained, resulting in a dataset of 1265 OTUs and seven variables. Outlier analysis of the graphs did not reveal any anomalies. The first two dimensions of the PCA explained 81.06% of the total inertia of the dataset, meaning that 81.06% of the overall variability of the OTU cloud is represented in this plane. This percentage is considerably greater than the reference value of 31.68% (the 0.95 quantile), highlighting the relevance of the variability captured by the plane. The PCA confirmed the separation of the fermentation-process samples from the rhizosphere samples in the factorial plane. Dimension one opposed the fermentation samples to OTUs with strongly positive coordinates on the right of the graph; the samples T24A, T12C, T12A, T24C, and T0, which share high values for these variables, confirmed this separation. Notably, the variables T12A, T24A, T12C, and T24C were highly correlated with this dimension (0.97, 0.96, 0.97, and 0.94, respectively), indicating that the microorganisms present in these samples are alike, whereas T0 displayed a lower correlation and formed a subgroup whose microorganisms differed from those of the fermenting samples. The second group, the rhizosphere samples, was associated with dimension two, facing OTUs with strongly positive coordinates towards the top of the graph.
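As a hedged illustration of this ordination step, the following Python sketch runs a PCA on a simulated OTU abundance table after applying the same more-than-ten-sequences filter described above. The sample names mirror the paper's labels, but the counts are randomly generated, and the study's own analysis (with OTUs treated as individuals) may differ in orientation and software.

```python
# Illustrative PCA over an OTU abundance table; data are simulated.
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA

rng = np.random.default_rng(42)
samples = ["T0", "T12A", "T24A", "T12C", "T24C", "Var_Bor", "Var_Cas"]
table = pd.DataFrame(rng.poisson(5, size=(len(samples), 2000)), index=samples)

# Keep only OTUs represented by more than ten sequences overall.
table = table.loc[:, table.sum(axis=0) > 10]

pca = PCA(n_components=2)
coords = pca.fit_transform(table.to_numpy(dtype=float))
for name, (x, y) in zip(samples, coords):
    print(f"{name}: PC1={x:.1f}, PC2={y:.1f}")
print("variance explained:", pca.explained_variance_ratio_)
```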
A graphical representation (Fig. ) was constructed to classify individuals based on their distinctive sets of variables, identifying four clusters, each characterized by a set of variables listed in descending order of strength. Cluster 1 is defined by high values for variables including OTU_2101, OTU_1, OTU_2183, OTU_1267, and OTU_925, and low values for variables such as OTU_491, OTU_685, OTU_243, OTU_113, OTU_175, OTU_290, OTU_991, OTU_581, OTU_990, and OTU_190; it comprises individuals such as T12C. Cluster 2 is characterized by high values for variables such as OTU_819, OTU_715, OTU_524, OTU_1099, OTU_1079, OTU_1055, OTU_1031, OTU_643, OTU_404, and OTU_749, but low values for OTU_2018. Cluster 3 comprises individuals such as Var_Borbon and is identified by high values for variables like OTU_900, OTU_894, OTU_888, OTU_875, OTU_1116, OTU_857, OTU_852, OTU_765, OTU_727, and OTU_926. Finally, cluster 4 comprises individuals like Var_Castillo and is distinguished by high values for variables such as OTU_835, OTU_957, OTU_868, OTU_850, OTU_841, OTU_815, OTU_787, OTU_759, OTU_613, and OTU_496.

Multivariate analysis of the fermentation bacterial microbiome in relation to organoleptic properties

A multivariate analysis, as depicted in Fig. , was performed to establish a relationship between OTUs and the organoleptic properties of coffee. The analysis encompassed five individuals and 59 variables and identified no outliers. The first two dimensions accounted for 69.43% of the total variability in the dataset, signifying a strong association between OTUs and the sensory attributes of coffee. OTUs correlated strongly with specific organoleptic characteristics, including balance, flavor, residual flavor, body, aroma, and acidity. For example, beverage balance was related to OTU_51 (a Bacillus sp. not cultivable in the laboratory), while flavor was strongly related to OTU_64 and OTU_29 (uncategorized, non-cultivable bacteria). Residual flavor was closely related to OTU_31 (uncultured Solirubrobacter sp.) and OTU_2384 (Pantoea sp.), and body was related to OTU_104 (uncultured Acidimicrobium sp.) and OTU_12 (uncultured Holophaga sp.). Aroma was related to OTU_13 (Klebsiella pneumoniae), and acidity was related to OTU_46 (uncultured Conexibacter) and OTU_44 (Rhizobium sp.).
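The kind of OTU-attribute association reported here can be screened with a simple rank correlation. The sketch below is illustrative only: the abundances and aroma scores are invented placeholders, and the study's multivariate procedure was richer than a per-OTU correlation.

```python
# Illustrative screen: rank OTUs by Spearman correlation with one cup attribute.
import numpy as np
import pandas as pd
from scipy.stats import spearmanr

samples = ["T0", "T12A", "T24A", "T12C", "T24C"]
rng = np.random.default_rng(7)
abundance = pd.DataFrame(rng.poisson(20, size=(5, 59)), index=samples,
                         columns=[f"OTU_{i}" for i in range(1, 60)])
# Hypothetical SCAA-style aroma scores for the five fermentation samples.
aroma = pd.Series([7.25, 7.50, 7.75, 7.50, 8.00], index=samples)

rho = abundance.apply(lambda col: spearmanr(col, aroma).correlation)
print(rho.sort_values(ascending=False).head())  # top candidate OTUs
```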
Diversity and microbial richness in coffee beans

Various factors influence the diversity and richness of the rhizosphere microbiome during coffee bean fermentation. One of these factors is pH, which gradually decreases from 6.5 at T0 to 4 at T24 , limiting the proliferation of microorganisms not adapted to a highly acidic environment. Bacterial community structure changes as the sugars are consumed; as the pH decreases, only bacteria tolerant of more acidic environments survive . The diversity of the coffee bean microbiome is further shaped by the microorganisms' need to penetrate the interior of the root tissues and move through the vascular bundles to reach the bean ; endophytic microorganisms, which establish interactions with plant cells, possess a competitive advantage in this process. Additionally, lengthening the fermentation time reduces the availability of the substrate (sugar), which triggers resource competition among microbial communities . Plant Growth-Promoting Rhizobacteria (PGPR) are beneficial bacteria that colonize the root surface and enhance plant growth and development, and they may also influence the diversity of the rhizosphere microbiome. PGPR can contribute to the fermentation process by producing enzymes that break down complex organic compounds and release nutrients used by other microorganisms . Another critical factor affecting the diversity and richness of the rhizosphere microbiome is plant defense compounds, including phytochemicals and allelochemicals. These compounds can have both inhibitory and stimulatory effects on the growth and activity of microorganisms; their production depends on the plant species and the environmental conditions, and they can shape microbial community structure and function in the rhizosphere . In summary, multiple factors, such as pH, substrate availability, PGPR, plant defense compounds, and soil type and composition, can affect the diversity and richness of the rhizosphere microbiome during coffee bean fermentation, and a comprehensive understanding of these factors can aid in optimizing the fermentation process and improving the quality of coffee beans.
Influence of fermentation times and temperatures on bacterial community composition and microbial structure in coffee production

Regarding the characterization of microbial communities, bacterial microbiome analysis revealed that increasing fermentation times and temperatures altered the composition of bacterial communities, with the greatest growth observed in the Proteobacteria and Firmicutes phyla, while the Bacteroidetes and Actinobacteria phyla disappeared with increasing fermentation time and temperature. The rhizosphere exhibited a greater diversity of phyla, including those typical of this environment (such as Verrucomicrobia, Gemmatimonadetes, Chloroflexi, Planctomycetes, Nitrospira, and Fusobacteria), indicating that the microbial structure of the rhizosphere may be distinct from that of the fermentation process. Additionally, the proportions within the rhizosphere microbial structure differed between the Castillo-Tambo and Bourbon coffee varieties. The dominant taxonomic orders in the fermentation process were Enterobacteriales, Rhodospirillales, and Lactobacillales, while in the rhizosphere the dominant orders were Sphingomonadales, Sphingobacteriales, Rhizobiales, Burkholderiales, Actinomycetales, Verrucomicrobiales, Acidobacteriales, Solirubrobacteriales, Acidimicrobiales, Solibacterales, Gemmatimonadales, Nitrosomonadales, Desulfuromonadales, Ktedonobacteriales, Fusobacteriales, Syntrophobacterales, and Nitrospirales. These findings provide insight into the influence of fermentation times and temperatures on the composition of bacterial communities and into the distinctive microbial structure of the rhizosphere in coffee production.

The rhizosphere, the zone of soil surrounding the roots of a plant, plays a crucial role in the growth and development of coffee plants. Microorganisms within the rhizosphere can affect the plant's ability to resist pathogens, improve nutrient uptake, and optimize growth conditions. In particular, various species of Proteobacteria, Firmicutes, Bacteroidetes, and Acidobacteria participate in metabolic processes that affect the quality of coffee . For instance, some Proteobacteria, including Acetobacter, produce enzymes that oxidize ethanol into acetic acid during coffee fermentation, influencing the coffee's flavor and acidity. Furthermore, some Proteobacteria, such as Burkholderia and Pseudomonas, produce volatile compounds that contribute to the aroma and flavor of coffee, and certain Proteobacteria species may play a role in protecting against pathogens and promoting plant growth . Firmicutes such as Lactobacillus and Leuconostoc are known for their ability to ferment sugars, which can contribute to the production of volatile compounds and to the acidity of coffee; these species also produce diacetyl, a compound that affects coffee flavor .

Concerning the bacterial genera present in the fermented coffee samples, a wide range of bacterial species appears to be involved in the fermentation process, each playing a role at different stages. The results suggest a decrease in bacterial diversity as fermentation progresses, with certain bacterial species dominating the process more than others.
Overall, the results suggest a high level of bacterial species diversity in all coffee samples, indicating a complex and intricate interaction between microorganisms and coffee during fermentation. The observed decrease in bacterial diversity over time suggests that only organisms capable of surviving under such conditions, such as the hardy Tatumella and Pantoea, prevail. Specifically, the most abundant microbial genera include Pantoea, Gluconobacter, Klebsiella, and Leuconostoc, which are present in all samples and more abundant in the more advanced fermentation samples. Gluconobacter is a genus of acetic acid bacteria involved in the production of acetic and gluconic acids, and Pantoea has likewise been reported to produce organic acids, which could contribute to the characteristic taste and aroma of coffee . Klebsiella, conversely, is a genus known to break down the sugars and amino acids present in coffee, which could contribute to the release of aromatic compounds . Leuconostoc is a genus of lactic acid bacteria that produces lactic acid, which could contribute to the acidity of coffee . Ultimately, the bacterial diversity observed in the samples points to numerous microorganisms that can influence the coffee fermentation process; it is therefore imperative to properly characterize each of these microorganisms to identify those with a determining influence on the properties of the final product and, in this way, improve its quality.

Influence of coffee variety and fermentation on bacterial community diversity in coffee production

The analysis reveals that coffee variety influences beta diversity when separating samples in the plane. The biplot analysis identifies OTU_1, OTU_8, and OTU_3 as the main drivers of separation between rhizosphere and fermentation, with minor contributions from OTU_10, OTU_9, OTU_12, OTU_14, OTU_19, OTU_18, and OTU_20. Dendrogram analysis (Fig. ) confirms this separation, showing four distinct groups of variables in which the original varieties are not related to samples with varying fermentation times. This suggests that bacterial communities change as fermentation progresses because of different biological interactions, such as nutrient competition or predation, as reported in other studies involving Jeotgal , Glycine max , and coffee . When the two coffee varieties (Var_Cas and Var_Bor) are compared without considering fermentation, they are dissimilar and group in different clades, which is reflected in traits such as plant height, leaf shape, and the coffee produced from these varieties; the organisms interacting in their rhizospheres play a significant role in plant growth and its derivatives. In this line, Fig. indicates that raising the fermentation temperature heightens the metabolic rate of the microorganisms, increasing their physiological activity and accelerating the fermentation processes . A detailed analysis shows that OTU_1 is related to a group of bacteria that cannot be cultivated in the laboratory or lack taxonomic assignment, its closest relative being an uncultured Klebsiella sp., while OTU_3 is related to Pantoea agglomerans . These bacterial groups are abundant in all fermentation samples and have been reported as producers of pectinases and organic acids related to improving the organoleptic characteristics of coffee beverages .
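A minimal sketch of the dendrogram-style grouping just described: hierarchical clustering of samples by their OTU profiles. The Bray-Curtis distance and average linkage are assumptions chosen for illustration, and the input matrix is simulated rather than taken from the study.

```python
# Illustrative hierarchical clustering of samples by OTU profiles.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
labels = ["T0", "T12A", "T24A", "T12C", "T24C", "Var_Bor", "Var_Cas"]
profiles = rng.poisson(10, size=(len(labels), 500)).astype(float)

dist = pdist(profiles, metric="braycurtis")   # pairwise sample distances
tree = linkage(dist, method="average")        # build the dendrogram
# Cut the tree into four groups, matching the four clusters in the text.
for name, c in zip(labels, fcluster(tree, t=4, criterion="maxclust")):
    print(name, "-> cluster", c)
# scipy.cluster.hierarchy.dendrogram(tree, labels=labels) plots it if desired.
```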
OTU_8, OTU_10, OTU_9, OTU_12, OTU_14, OTU_19, OTU_18, and OTU_20 are related to non-culturable bacteria or bacteria without taxonomic assignment and are found mainly in the rhizosphere samples of the two coffee varieties. These results point to a wide diversity of microorganisms and open new research avenues for studying their activity and role in the rhizosphere of the Castillo-Tambo and Bourbon varieties cultivated on the Popayán plateau, and their potential relationship with the organoleptic characteristics of coffee beverages. Prior work describes the importance of soil microorganisms in the cultivation of grapevines, highlighting how they influence fermentation and wine quality ; the relevance of microbial biogeography in wine production is likewise noted, since the way microbes interact with environmental conditions drives wine quality, style, and denomination of origin.

Exploring the role of non-cultivable microorganisms in improving the organoleptic characteristics of coffee

Most microorganisms detected in this analysis could not be cultivated under laboratory conditions, emphasizing the need for further research into their potential impact on the organoleptic characteristics of coffee. Previous research on grapevine crops has demonstrated the presence and participation of particular microorganisms, such as Rhizobium Pantone , in the malolactic fermentation process, contributing to the formation of complex aromas in wine . Furthermore, the quality and distinctiveness of wine from a specific geographic region have been linked to the composition of the microorganisms present and their impact on wine fermentation attributes . Future laboratory research should identify microorganisms that can enhance the organoleptic qualities of coffee cultivated in the Cauca department, considering coffee's microbial ecology and metabolome with a view to producing a product with a potential denomination of origin. Apart from non-cultivable microorganisms, it is also crucial to consider the role of cultivable microorganisms in enhancing the organoleptic properties of coffee. Many yeast, bacterial, and fungal species have been detected in the wet-processing fermentation of coffee beans and may affect the end product's taste, aroma, and other sensory attributes; the composition and abundance of the microorganisms present during fermentation can vary with the beans' origin, processing methods, environmental conditions, and cultivation management practices . Overall, microorganisms evidently play a significant role in shaping the organoleptic characteristics of coffee, and further research is essential to explain the specific contributions of both cultivable and non-cultivable microorganisms in order to optimize the sensory qualities of this beloved beverage.

Finally, the present study has limitations worth noting. First, the results are susceptible to the influence of the agro-climatic factors to which the coffee plants were exposed during their life cycle. Second, the data were collected during a specific timeframe and may not reflect the coffee plant's overall growth and development. It is also crucial to acknowledge that the findings may vary significantly with plant age, as this study used samples only from three-year-old coffee plants.
Therefore, the results obtained cannot be extended to other age groups of coffee plants or to other coffee-growing regions of Colombia with distinct soil and climatic conditions.
In this research, the microbial community of the soil of coffee trees of the Bourbon and Castillo varieties grown on the Popayán-Cauca plateau was characterized using DNA sequencing, and the microorganisms found in the rhizosphere were compared against the organoleptic properties of the coffee, determined by cupping according to ISO 17.025. The results suggest that each coffee variety has a distinct microbial profile, which may be related to the plants' physiological, nutritional, and sanitary needs. Furthermore, the investigation revealed a rich microbial diversity in the soil and during fermentation, with several microorganisms belonging to bacterial taxa that are not amenable to laboratory cultivation; this suggests the potential for identifying microorganisms of agronomic interest and for further understanding their role in the life cycle of coffee plants. While investigating the fermentation process of coffee beans on the Popayán plateau, the phyla Proteobacteria and Firmicutes were observed to be the dominant bacterial taxa regardless of the fermentation temperature applied.
During this process, examination of the microbial community revealed a high level of diversity and richness among the microorganisms colonizing endocarp surfaces, and the bacterial community underwent structural changes in response to sampling time and temperature. Further investigations are required to fully comprehend the significance and function of these microorganisms in the catabolic process of coffee fermentation, employing methodologies similar to those used in the wine production industry. Finally, the research demonstrated a strong relationship between specific rhizospheric microorganisms and coffee's organoleptic properties, such as flavor, acidity, balance, and residual flavor. Future work could therefore focus on improving these characteristics by manipulating specific microbial traits to optimize the organoleptic properties of the coffee beverage. Many of the microorganisms that influence the physical attributes of coffee are not culturable, indicating that techniques such as metagenomics or metabolomics may be necessary for a comprehensive analysis of their role in coffee fermentation processes.

Location

The experiments were conducted at two locations: the Corporación Universitaria Comfacauca - Unicomfacauca, in Popayán, and the hacienda Los Naranjos of La Venta in the Cajibio municipality, owned by the Parque Tecnológico de Innovación del Café (TECNICAFE). The geographical coordinates of the experimental sites are 2°35'11.6" N and 76°33'11.2" W, in the Cauca department of Colombia. The crop was grown at approximately 1862 m above sea level, where the average temperature ranges between 12 and 23 °C .

Plant material

The present investigation employed plant materials from Hacienda Los Naranjos in Cajibio. Two distinct varieties of the coffee plant, Bourbon and Castillo Tambo, were chosen based on their unique characteristics and growth habits. The Bourbon variety exhibits a tall growth habit and moderate yield, with the potential to produce high-quality coffee at high altitudes ; nonetheless, it is no longer cultivated in the region because of its susceptibility to rust attacks. The Castillo Tambo variety is a hybrid of the Caturra and Timor Hybrid varieties, providing rust resistance, high productivity, excellent beverage quality, and adaptability to diverse coffee ecotypes . Samples were collected from two-year-old trees, and both the roots and the coffee beans were analyzed to evaluate the bacterial microbiome during the fermentation process and in the rhizosphere.

Fermentation trials and cup profiles

At TECNICAFE, 10 kg of cherry coffee at different stages of commercial maturity were harvested and processed. The beans were transported in refrigerated compartments to Supracafé's processing plant , where they underwent a semi-washing process using a pulper to produce Baba coffee . The first processing stage removed the exocarp through pulping, and the samples were held at 4 °C to prevent fermentation before being transported to the Corporación Universitaria Comfacauca - Unicomfacauca for further experimentation. The experimental design subjected the coffee beans to fermentation under two temperature conditions, ambient (19.5 °C) and warm (24 °C), to simulate the environmental conditions of the Popayán plateau , where the analyzed samples were obtained.
Samples were subjected to aerobic fermentation and extracted in triplicate at 0, 12, and 24 h, labeled T0 (zero fermentation time), T12A (12 h at ambient temperature), T24A (24 h at ambient temperature), and T12C and T24C (12 and 24 h at warm temperature, respectively). The samples were then stored at -20 °C for DNA extraction, and degrees Brix and pH were quantified. Upon completion of the fermentation process, the samples were washed with abundant water to eliminate the mucilage and dried to 11% grain moisture. The cup profile of the samples was determined by sizing, threshing, and sieving the beans through a 14 mesh, followed by analysis of the physical defects of the coffee based on the recommendations of the Federación Nacional de Cafeteros . Two expert, certified tasters evaluated the quality of the coffee following the method established by the Specialty Coffee Association of America (SCAA) , and the qualification values were calculated according to the ISO 17.025 procedure of Almacafé. The organoleptic properties examined in this study comprised the final evaluations of aroma, flavor, acidity, body, uniformity, balance, sweetness, and aftertaste.

Coffee rhizosphere assays

Soil samples were collected from the rhizosphere of the two coffee varieties, Bourbon and Castillo-Tambo, in Cauca, Colombia. The rhizosphere soil was carefully obtained by removing the top 5 cm of soil surrounding the coffee plant base while minimizing damage to the root system, a technique widely used in rhizosphere studies and described in the scientific literature . To differentiate between rhizosphere and bulk soil, rhizosphere samples were collected in duplicate from 15 plants per replicate, whereas bulk soil samples were collected from the same location but at least 5 mm away from the coffee plant base . To preserve the rhizosphere's integrity, the top layer of soil adhering to the roots under analysis was removed using a spatula, a method commonly employed in studies that distinguish rhizosphere soil from bulk soil . The soil samples were then transported to the biotechnology laboratory of Corporación Universitaria Comfacauca - Unicomfacauca, where rhizosphere soil from the Bourbon (Var_Bor) and Castillo-Tambo (Var_Cas) coffee varieties was extracted in duplicate from the coffee roots. The extracted samples were stored at -20 °C for further analysis.

Extraction of DNA from coffee mucilage and rhizosphere

The collected samples were maintained at -20 °C until further use. Upon thawing, the samples were immediately processed at 20 °C for DNA extraction from the mucilage and rhizosphere using a commercial DNeasy PowerSoil kit (QIAGEN), following the manufacturer's recommended procedure. The extracted DNA was diluted with ultrapure water free of DNases and RNases and stored at -20 °C until use. The quality of the extracted DNA was assessed on a 0.8% (w/v) agarose gel, and the DNA was quantified with a NanoDrop 2000 spectrophotometer (Thermo Scientific, Waltham, MA, USA) . The bacterial community in the rhizosphere of the Castillo and Bourbon varieties was amplified by Polymerase Chain Reaction (PCR) of the V4 region of the 16S rRNA genes, using the forward primer illCUs515F 5'-GTGYCAGCMGCCGCGGTAA and the reverse primer new806RB 5'-GGACTACNVGGGTWTCTAAT at a concentration of 20 ng/µL.
The reaction mixture had a final volume of 100 µL, containing 200 µM of dNTPs, 2.5 mM MgCl2, 0.5% DMSO, 1.25 U of GoTaq polymerase, and 20 ng of metagenomic DNA. PCR was performed on a GenePro BIOER thermal cycler with an initial denaturation at 95 °C for 10 min, followed by 28 cycles of denaturation at 95 °C for 45 s, hybridization at 53 °C for 45 s, and elongation at 72 °C for 5 min. Amplification products were analyzed by electrophoresis on 1% (w/v) agarose gels stained with ethidium bromide; electrophoresis was run at 100 V for 15 min on a MiniRun GE-100, and the gels were visualized on a transilluminator. The amplified fragments were purified with the commercial Accuprep kit (BIONEER) and eluted in a final volume of 30 µL. Samples were sent to the Molecular Research LP (MR DNA) laboratory in the USA for sequencing with Illumina MiSeq technology .

Statistical analysis of the samples

The sequencing data were analyzed with the Mothur platform tools . Reads that did not meet specific criteria were removed, including reads shorter than 100 base pairs (bp), reads with a sequence mismatch in the barcode, and reads that did not align to the SILVA database using the UCLUST method ; chimeras were removed from the sequence data with the UCHIME method . OTUs were formed by clustering sequences at a 95% similarity threshold, and OTUs with fewer than three sequences were eliminated to minimize sequencing errors, following procedures similar to previous studies . Microbial community richness was estimated from rarefaction curves using Mothur's resampling without replacement. Diversity indexes of the microbial community, such as Chao-1 richness and Shannon diversity (H′), were estimated with the program Past . ANOVA was used to analyze the data, and the InfoStat software was used to perform Tukey's test to compare means at a 5% significance level (P ≤ 0.05). Likewise, a multivariate analysis comprising PCA and cluster analysis was performed to identify relationships between variables and individuals. To investigate the association between genetic samples and fermentation times, OTUs with more than ten sequences were selected, resulting in 1265 individuals. To explore the correlation between OTUs and organoleptic properties, the number of individuals was reduced to 60 by eliminating those with fewer than 500 genetic sequences.
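As a hedged sketch of the ANOVA-plus-Tukey comparison described here, the Python snippet below tests a richness index across the five fermentation treatments at the 5% significance level. The richness values are invented placeholders standing in for the triplicate measurements; the study itself used InfoStat rather than Python.

```python
# Illustrative one-way ANOVA followed by Tukey's HSD (alpha = 0.05).
import pandas as pd
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical richness values for the triplicates of each treatment.
data = pd.DataFrame({
    "treatment": ["T0"] * 3 + ["T12A"] * 3 + ["T24A"] * 3
                 + ["T12C"] * 3 + ["T24C"] * 3,
    "richness":  [210, 205, 215, 180, 175, 185, 150, 145, 155,
                  160, 158, 162, 140, 138, 142],
})

groups = [g["richness"].to_numpy() for _, g in data.groupby("treatment")]
print(f_oneway(*groups))  # overall F-test across treatments
print(pairwise_tukeyhsd(data["richness"], data["treatment"], alpha=0.05))
```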
Subsequently, the soil samples were transported to the biotechnology laboratory of the Corporación Universitaria Comfacauca-Unicomfacauca, where rhizosphere soil from the Bourbon (Var_Bor) and Castillo-Tambo (Var_cas) coffee varieties was extracted in duplicate from the coffee roots. The extracted samples were stored at -20 °C until further analysis; upon thawing, they were immediately processed at 20 °C for DNA extraction from the mucilage and rhizosphere. DNA was extracted with the commercial DNeasy PowerSoil kit (QIAGEN), following the manufacturer’s recommended procedure. The extracted DNA was diluted with ultrapure, DNase- and RNase-free water and stored at -20 °C until use. The quality of the extracted DNA was analyzed on a 0.8% (w/v) agarose gel and quantified with the NanoDrop 2000 spectrophotometer (Thermo Scientific, Waltham, MA, USA). The bacterial community in the rhizosphere of the Castillo and Bourbon varieties was amplified by Polymerase Chain Reaction (PCR) of the V4 region of the 16S rRNA genes. For this, the forward illCUs515F 5’-GTGYCAGCMGCCGCGGTAA and reverse new806RB 5’-GGACTACNVGGGTWTCTAAT primers were used at a concentration of 20 ng/µL. PCR amplification, sequencing, and the statistical analysis of the resulting reads were then performed as described above.
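As a concrete illustration of the diversity calculations named above, the short Python sketch below applies the same steps to a hypothetical OTU count table: discarding OTUs with fewer than three sequences, computing Shannon diversity (H’) and a bias-corrected Chao-1, and building a rarefaction curve by subsampling reads without replacement. The table and all numbers are invented for demonstration; the study itself performed these steps in Mothur and Past, so this is only a sketch of the underlying formulas, not the authors' pipeline.

```python
import numpy as np

def filter_otus(counts, min_total=3):
    """Drop OTUs (columns) with fewer than min_total sequences overall,
    mirroring the removal of OTUs with < 3 sequences described above."""
    return counts[:, counts.sum(axis=0) >= min_total]

def shannon(sample_counts):
    """Shannon diversity H' = -sum(p_i * ln p_i) over the non-zero OTUs."""
    p = sample_counts[sample_counts > 0] / sample_counts.sum()
    return float(-(p * np.log(p)).sum())

def chao1(sample_counts):
    """Bias-corrected Chao-1 richness: S_obs + F1*(F1 - 1) / (2*(F2 + 1)),
    where F1 and F2 are the numbers of singleton and doubleton OTUs."""
    s_obs = int((sample_counts > 0).sum())
    f1 = int((sample_counts == 1).sum())
    f2 = int((sample_counts == 2).sum())
    return s_obs + f1 * (f1 - 1) / (2 * (f2 + 1))

def rarefaction(sample_counts, depths, iters=100, seed=0):
    """Mean observed richness at each sampling depth, subsampling reads
    without replacement (the resampling scheme named in the text)."""
    rng = np.random.default_rng(seed)
    reads = np.repeat(np.arange(sample_counts.size), sample_counts)
    return [float(np.mean([np.unique(rng.choice(reads, size=d, replace=False)).size
                           for _ in range(iters)]))
            for d in depths]

# Hypothetical OTU table: 2 samples (rows) x 6 OTUs (columns).
otu_table = np.array([[120, 40, 3, 1, 1, 0],
                      [ 90, 55, 0, 2, 1, 1]])
otu_table = filter_otus(otu_table)
for i, row in enumerate(otu_table):
    print(f"sample {i}: H' = {shannon(row):.3f}, Chao-1 = {chao1(row):.1f}")
print("rarefaction, sample 0:", rarefaction(otu_table[0], depths=[10, 50, 150]))
```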
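The multivariate step (PCA and cluster analysis relating OTU abundances across samples) can be sketched in the same spirit. The snippet below is a minimal illustration using scikit-learn and SciPy on a hypothetical, already-filtered abundance matrix; the original analyses were run in InfoStat and Past, so the matrix, sample labels, and cluster count here are assumptions for demonstration only.

```python
import numpy as np
from sklearn.decomposition import PCA
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical filtered OTU abundance matrix: rows = fermentation samples
# (e.g., T0, T12A, T24A, T12C, T24C), columns = OTUs. Values are invented.
X = np.array([[120, 40,  3,  1],
              [ 90, 55,  2,  1],
              [ 60, 70,  5,  2],
              [ 80, 20,  9,  4],
              [ 30, 15, 12,  6]], dtype=float)

# Convert to relative abundances so dominant OTUs do not swamp the
# ordination; PCA centers the data internally.
X_rel = X / X.sum(axis=1, keepdims=True)

# PCA: project the samples onto the first two principal components.
pca = PCA(n_components=2)
scores = pca.fit_transform(X_rel)
print("explained variance ratio:", pca.explained_variance_ratio_)
print("PC scores per sample:\n", scores)

# Hierarchical (Ward) clustering of the same samples into two groups.
Z = linkage(X_rel, method="ward")
print("cluster assignment per sample:", fcluster(Z, t=2, criterion="maxclust"))
```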
Canadian Women in Otolaryngology—Head and Neck Surgery part 1: the relationship of gender identity to career trajectory and experiences of harassment
f30537f5-2040-4190-bc75-001d1d1c9048
10127062
Otolaryngology[mh]
There is increasing awareness in the literature and in popular media about gender inequity in medicine. Though the number of women in medicine is increasing, many barriers still exist. As of 2017, 40% of practicing physicians and 63% of medical students in Canada were women, and a recent Canadian Medical Association (CMA) study projected that by 2030 over 50% of physicians will identify as female. However, there has yet to be a proportional rise in women in leadership roles. A 2020 study showed that only 18.6% of residency and fellowship directors and 5.1% of department chairs in the United States are women, and that significantly fewer of those female directors had achieved full professor rank compared with their male counterparts. Monetary compensation has also been shown to differ significantly between male and female physicians. Even accounting for location and years in practice, female physicians in Ontario bill approximately 74% of what their male counterparts do. While some might attribute this to differences in hours worked, this myth has been dispelled. Though identifying inequities is an important first step, it is much more complex to examine what factors perpetuate this inequity. Factors that have previously been attributed to the attrition of female physicians in academic medicine include lack of mentorship and role models, professional isolation, work-life imbalance, salary inequities, and bullying and harassment. In this two-part investigation, Canadian otolaryngologists were surveyed on the influence of gender on career progression and harassment in the workplace (Part I) and on family, fertility, and lactation (Part II). The present paper reports on gender differences in career progression and harassment. Data collection Institutional ethics board approval was obtained from Western University (REB# 118,283). This study was a survey of Canadian otolaryngology-head and neck surgery (OHNS) attendings (both in active practice and retired) and trainees in accredited OHNS training programs. Physicians from specialties other than OHNS were excluded. The survey was available from March to May 2021. The survey was sent via the national listserv to 838 potential respondents (551 consultants, 118 emeritus status, and 169 residents). It was then promoted via social media, including Facebook, Instagram, and Twitter. Topics addressed included demographics, residency experience, leadership and advancement, and harassment (Additional file ). Survey development involved a literature search identifying previous relevant studies for reference and question adaptation, followed by an iterative process of survey generation and editing by all authors, representing academic and community otolaryngologists in several subspecialties as well as a resident in training, to capture a breadth of experience. The survey was then translated into French to allow participation from all Canadian otolaryngologists. Responses were collected in REDCap® (Version 11.1.13. Copyright © 2020 REDCap) and compared based on respondent gender. Survey responses were anonymous. Data analysis Completed surveys were excluded if they lacked demographic data or contained only demographic data. Descriptive statistics, including means, standard deviations, and frequencies of study outcomes, were evaluated. Gender and career differences in study outcomes were explored with chi-squared tests for categorical outcomes and independent-sample t-tests for continuous variables. We employed an alpha level of 0.05 to determine statistical significance. Data were processed and analyzed using R.
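The comparisons described above were run in R; purely as a hedged illustration of the same two tests, the Python snippet below applies a chi-squared test to a hypothetical gender-by-outcome contingency table and an independent-samples t-test to two hypothetical continuous outcomes, using the study's alpha of 0.05. All counts and values are invented for demonstration.

```python
from scipy import stats

# Hypothetical 2x2 contingency table:
# rows = gender (male, female), columns = outcome (yes, no).
table = [[30, 70],
         [45, 38]]
chi2, p, dof, expected = stats.chi2_contingency(table)
print(f"chi-squared = {chi2:.2f}, dof = {dof}, p = {p:.4f}")

# Hypothetical continuous outcome (e.g., years to promotion) by gender.
male_vals = [6.1, 7.4, 5.9, 8.2, 6.8]
female_vals = [7.9, 8.8, 7.2, 9.1, 8.4]
t, p_t = stats.ttest_ind(male_vals, female_vals)
print(f"t = {t:.2f}, p = {p_t:.4f}")

alpha = 0.05  # significance threshold used in the study
print("significant at alpha = 0.05:", p_t < alpha)
```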
Demographics At survey closure, a total of 183 surveys had been returned, representing 21.8% of the potential respondents. Of those, 100 respondents identified as male (of 633 potential respondents, a 16% response rate) and 83 identified as female (of 205 potential respondents, a 40% response rate). Of the respondents, 120 (66%) identified as attending physicians and 62 (34%) identified as resident physicians; one respondent did not specify. The surveys represented a variety of subspecialties, practice environments, and years of practice experience. Of practicing physicians who specified, 54 (45%) were in academic practice, 35 (29%) practiced in the community, and the remaining 30 (25%) were in a hybrid community-academic practice. There was no significant difference between genders in practice setting (χ2 = 0.59, p = 0.75) or in resident intent to subspecialize (χ2 = 7.57, p = 0.37); however, there was a significant difference in the gender of attending surgeons by practice subspecialty (χ2 = 22.8, p < 0.001). Male respondents were more likely to have subspecialized in head and neck surgery (38%), while female respondents were more likely to have subspecialized in pediatric otolaryngology (35%). In general, female respondents were less likely to report having colleagues of the same gender, either as residents or as staff (p < 0.001). Gender-based inequality Participants were asked to what degree they agreed with statements regarding equality of treatment of residents and attendings based on gender, rated on a Likert scale.
Seventy-four percent of male respondents, but only 39% of female respondents, agreed that their department had the same expectations of residents regardless of gender. Sixty-nine percent of male respondents, but only 35% of female respondents, agreed that residents of all genders were evaluated fairly based on the same criteria. Sixty-five percent of male respondents, but only 36% of female respondents, agreed that their program treated all residents equally regardless of gender. Finally, 65% of male respondents, but only 36% of female respondents, agreed that the same leadership opportunities were open to everyone regardless of gender. In each case, female respondents were significantly less likely to agree with the statement (all p-values < 0.01). Leadership The survey included questions about academic rank, leadership positions, and the time taken to achieve these academic positions. While there was no statistical difference in academic rank between genders (χ2 = 9.99, p = 0.07), it is worth noting that only one of the 46 female respondents to this question had achieved the rank of full professor (2.2%), versus twelve of 72 male respondents (16.7%). However, there were significant differences between male and female respondents in many leadership positions. There were 64 male and 38 female respondents to this question. Twelve male respondents (19%) identified as a department chair or chief, as opposed to one female respondent (2.6%, p = 0.028). There were also 21 male site chiefs (33%) as opposed to 4 female (11%) (p = 0.011), and 18 male division chiefs (28%) as opposed to 2 female (5.3%, p < 0.01). A significantly higher proportion of female respondents reported having no leadership roles (17/38, 45%) compared with their male colleagues (14/64, 22%) (p = 0.015). There was no significant difference between genders in the roles of program director, assistant program director, or rotation supervisor (Table ). Harassment Respondents were asked whether they had experienced harassment during their residency training and to quantify the amount as “harassment free”, “subtle undertones”, “noticeable tones”, “significant”, or “unsure”. Female respondents were significantly more likely to report higher levels of harassment than their male counterparts (p < 0.01) (Fig. ). Respondents experiencing harassment were then asked to qualify the type of harassment as verbal (non-sexual), sexual (verbal), sexual (physical), racial/ethnic, or physical (Table ). The only type of harassment with a statistical difference between male and female respondents was sexual (verbal) harassment (p < 0.001). When asked who was responsible for this harassment, male respondents reported significantly more harassment in residency by leaders in their department (p = 0.02), whereas female residents experienced more harassment from patients or patients’ family members (p < 0.01). There was no statistical difference between genders for harassment from colleagues/other residents, ancillary staff, or administration (Table ). Overall, the incidence of harassment was high during training, with 37.9% of male and 75.2% of female respondents experiencing some harassment during training. Of those reporting harassment, 87% of female and 92% of male respondents reported that this harassment was verbal non-sexual. Forty-five percent of female respondents reported experiencing verbal sexual harassment during their training, while only 11% of male residents reported the same.
Similar responses were endorsed by attending physicians. Overall, 31.6% of male and 68% of female attending staff reported some level of harassment, with the degree of harassment experienced by female staff being higher (p < 0.01, Fig. ). When asked to categorize the type of harassment, female attending staff were more likely to have experienced verbal non-sexual harassment (p = 0.03), while male attending staff were more likely to have experienced racial or ethnic harassment (p < 0.01) (Table ). When asked who was responsible for this harassment, female respondents reported significantly more harassment from patients and family members (p = 0.03) (Table ). Of those experiencing harassment, 93% of female attending respondents and 65% of male attending respondents reported verbal non-sexual harassment. Forty-one percent of female attending physicians who had experienced harassment reported verbal sexual harassment, while only 22% of male attending staff reported the same; this difference was not statistically significant (p = 0.2). A free-text response was included as part of the section of the survey on harassment. The responses were varied and poignant; 32 responses were provided within this section. Thematic analysis identified prominent themes of harassment, most commonly verbal, and most often from colleagues. One respondent said: “Verbal harassment on an everyday basis which are mentally draining and annoying”. Another respondent said: “As a female junior resident, navigating relationships with nursing staff (OR, ICU, ward) or administrative staff can be challenging. Differential treatment to female vs. male trainees can be obvious, and was sometimes even hostile to women. Some hospital sites, especially those with fewer/infrequent trainees, were noticeably worse for this”.
This large survey of Canadian otolaryngologists serves to highlight the significant differences in the experiences of male and female residents and practicing physicians in this specialty. In particular, there are significant differences in the leadership roles attained by female physicians compared with their male colleagues. While there was no statistical difference in the academic rank of the respondents within our survey, there was a significant lack of female representation in high-level leadership positions such as program director and department chair. The reasons for these discrepancies are complex and likely interrelated. In terms of inequality between genders, female respondents were less likely to agree that there was equal treatment between genders starting as early as residency training. While it is impossible to quantify “equal treatment”, the fact that this perceived discrepancy exists so early on may discourage female otolaryngology residents from aspiring to an academic career or a position in leadership. In addition, 43% of surveyed American female otolaryngologists reported that having a child had influenced their decision regarding department or practice leadership roles. Having role models in leadership positions encourages young surgeons and current or future parents to see themselves in those positions in the future. As such, increasing female representation now is integral to ensuring gender diversity in leadership in the future. Harassment can also be a barrier to progression in academic rank and leadership and is not unique to any particular specialty in medicine. Recent surveys of female French intensivists and female American anesthesiologists report high levels of harassment at work.
A recent survey of the Women in Otolaryngology section of the American Academy of Otolaryngology noted that 41% of respondents had experienced at least a subtle level of harassment during their residency training. In our study, we were able to show that women report a higher level of harassment both in residency and as attending physicians. Female respondents also endorsed more harassment from patients and colleagues than their male peers. This correlated with a lower rate of female-identified surgeons in leadership positions. A significant amount of literature shows that women are less likely to pursue an academic medical or surgical career. While this study was not powered to link levels of harassment with academic progression, one can surmise that this could be one of many reasons that female surgeons are less likely to hold leadership positions in their academic institutions. Harassment does not only limit academic advancement. Workplace violence, including verbal harassment, has been linked to increased objective measures of burnout, decreased job satisfaction, and attrition. Workplace violence among healthcare workers may also cause psychological distress and decreased sleep, and ultimately has downstream effects on the quality of patient care. It is clear that minimizing harassment for all members of the care team is a crucial step in improving the experience of physicians and patients alike. Having ascertained that women in otolaryngology are less likely to hold leadership positions and more likely to experience harassment, we can start to discuss ways to mitigate these issues. In medicine, we can learn from other sectors that have already started to bridge this gap. For example, in 2018 the Harvard Business Review discussed ways to encourage women on a path to leadership in medicine. Since women often have other significant roles in their families, such as childcare or caring for ill family members, it highlighted that work-family balance would be integral to supporting that goal. Progressive policies for family leave, support for lactating parents, and childcare policies such as on-site or emergency back-up childcare could support women in furthering their academic and leadership careers. Career flexibility, such as the option to telecommute, flex-time, and flexibility in academic promotion, could also encourage more women to include leadership roles or academic duties in their career progression. In terms of mitigating bias and harassment in the workplace, it recommended implicit bias training for all staff and leaders, salary reviews to hold departments accountable for overt discrepancies between genders, and better reporting systems and legal support for those who have experienced harassment. Finally, it suggested that both formal and informal mentorship and sponsorship between women would be integral to including more female physicians at the highest levels of leadership. While implementing all of those suggestions would require a fundamental shift in the way careers in medicine are currently structured, it would not only encourage gender diversity at the highest levels of medical leadership but would also improve work-life balance for our male colleagues. The proportion of female respondents to the survey was significantly higher than that of male respondents, with 40% and 16% response rates, respectively, and this introduces the potential for selection bias. There may be several reasons for this observed difference in response rate.
The survey title hinted at its contents, which included “fertility, family planning, and lactation” and may therefore have resonated more with female otolaryngologists. Harassment was also mentioned in the title, perhaps increasing the likelihood of responses from those who had experienced harassment, both male and female. It is possible that the experience of male otolaryngologists in particular was not completely captured in the population that chose to participate. However, the high response rate among women otolaryngologists also represents a strength of this study. With over 40% of the Canadian Society of Otolaryngology’s female membership represented, the data likely depict the current landscape for female otolaryngologists in Canada reasonably accurately. An additional strength lies in the building of the survey tool: despite not being validated, it was edited and vetted by multiple members of the otolaryngology community and was translated to allow participation by our French Canadian otolaryngology colleagues. The study has several limitations, including the overall low response rate (21.8%) and the use of the Canadian Society of Otolaryngology listserv for distribution, as not all Canadian otolaryngologists are members of this society. In addition, the focus on the Canadian population may make the findings less generalizable to other populations, though several studies in other countries and specialties suggest our findings are similar. The survey utilized was not validated. Finally, all of the respondents identified as male or female, so we do not have data on career advancement or harassment among non-binary individuals. Ideally, a higher response rate would improve (but not guarantee) the chance of capturing the full diversity of experience across the spectrum of gender identity. Based on this large-scale survey of Canadian otolaryngologists, we have been able to demonstrate that female otolaryngologists occupy fewer leadership roles and experience higher levels of harassment than their male peers, both in training and afterwards. Female otolaryngology trainees also perceive a lower level of gender equality during their training than their male peers. By identifying these trends, we can work towards a safer and more equitable workplace. Additional file 1. Survey material distributed to Canadian Society of Otolaryngology—Head and Neck Surgery membership.
Biofilm and wound healing: from bench to bedside
e498caf6-9aa9-4c32-a7d5-f99f04a913e6
10127443
Debridement[mh]
Biofilms have been known since the seventeenth century, when they were first described as animalcules by Anton van Leeuwenhoek. But it was not until the twentieth century that their remarkable interaction with wound biology was unraveled. Although bacteria are ubiquitous, their existence as attached colonies enables them to assume multicellular behavior. Heukelekian and Heller in 1940 first observed that a suitable “surface” enables bacteria to grow in colonies and that, once an active bacterial slime is established, the biological process is greatly accelerated. Geesey et al. used phase-contrast microscopy and proposed that slime-enmeshed microbial colonies constituted 99% of the functional communities within which most sessile bacteria live. This led to the dismissal of the classical paradigm of a planktonic bacterial lifestyle, which had been widely accepted in the previous century. Electron microscopic studies revealed the microbiological variety and physical arrangement of this shiny, translucent layer. Characklis et al. in 1973 highlighted its tenacity against eradication methods and revealed that bactericidal hypochlorite did not reduce the slime. Although biofilms were long believed to be dreary, dull, flat layers of cells covered with slime, it was the advent of confocal microscopic imaging that opened the door to a wider and deeper perspective on wound–biofilm interaction. Biofilms have a complex architecture that makes one wonder whether simple prokaryotes could transform into eukaryotic complexities. Bill Costerton coined the term “biofilm”, and in 2002 biofilms were first described as a microbially derived sessile community characterized by cells that are irreversibly attached to a substratum, an interface, or each other, embedded in a matrix of extracellular polymeric substance (EPS) that they have produced, and exhibiting an altered phenotype with respect to growth rate and gene transcription. This led to a significant question: what is the need for bacterial evolution through such intricacies with the ultimate goal of grouped behavior? A biofilm confers on bacteria certain abilities that are absent in the free-living planktonic form. It provides a microbial home for the colonizing organisms by creating an appropriate physicochemical environment and renders protection from the host, the environment, and other competing species. Microbes existing in a biofilm are 1000–1500 times more resistant to antibiotics than in their planktonic state. Biofilms also facilitate the uptake of nutrients and the removal of metabolic products through a primitive circulatory system. Important characteristics of biofilm bacteria are gene transfer and quorum sensing, both of which confer distinct properties that provide protection, aid inter-cellular communication, and preferentially encourage the growth of beneficial species. Regulation of biofilm-related gene expression is yet another mechanism that offers protection against antibiotics. Eventually, the presence of such an “evolved species” makes the wound persistent and chronic. Even with modern technologies to study these organisms, researchers have not been able to completely understand biofilms. The ability of the microorganisms to adapt and attach to any environment poses a great challenge in the treatment of recalcitrant wounds. The transition from free-floating planktonic forms to biofilm growth involves complex, multiple signals, from spatial and temporal reorganization to changes in gene expression.
However, this “cocooned” infestation is not necessarily pathogenic. Percival et al., in their review of microbial biofilms, proposed that it is not the dormant, relatively benign, or commensal bacteria that impair the wound-healing process but rather the presence of biochemically and genetically upregulated pathogenic biofilm bacteria. Upon upregulation, these colonized, resistant, benign biofilm bacteria can revert to virulent pathogenic biofilm bacteria, causing harm, tissue damage, and finally dissemination. Further interaction of the polymicrobial biofilm community with its surrounding environment creates devastating “hyper-inflammation”, thereby establishing a chronic and recalcitrant wound-healing process. The direct effect includes the production of destructive enzymes and toxins, whereas the indirect effect promotes a hyper-inflammatory state that slowly renders the wound-healing process more bacteria-centric (controlled by bacteria) than host-centric (controlled by the host’s physiological processes). Eventually, this results in an imbalance between favorable growth factors and destructive lytic enzymes and free radicals, which subsequently affects cell proliferation and wound-healing capability. The persistent inflammation in the wound bed allows for abundant nutrient-rich exudate that helps further the bacterial cause. Furthermore, this hyper-inflammation prevents a Th2 response, the process of developing adaptive immunity, necessary to train the immune system for better recognition and killing. A. Host factors Wound depth: Although not well understood, studies involving wound samples have shown that ulcer depth is positively associated with an anaerobic environment and the proliferation of facultative anaerobic bacteria, whereas the relationship is inverse for Staphylococcus. Wound duration: Gardner et al. demonstrated that, considering the entire wound microbiome, ulcer duration correlated positively with bacterial diversity and species richness and with a relative abundance of Proteobacteria, and negatively with a relative abundance of Staphylococcus. Local tissue hypoxia: Microvascular complications lead to tissue ischemia, causing a considerable delay in wound healing. Studies have linked miRNAs to angiogenesis and various stages of wound healing. It is believed that tissue hypoxia and low oxygen tension alter their levels, impairing the wound-healing process, as evidenced by murine models of ischemic wounds. This hypoxic environment, coupled with the presence of necrosed tissue, promotes the proliferation of facultative anaerobes. Immune system: Chronic wounds exhibit a persistent inflammatory phase, causing physiological changes in the wound bed and predisposing it to colonization by a wide variety of bacterial species. It is suggested that the downregulation of TLR-2 in injured tissue impedes the immune system and inflammatory response, causing a reduced chemotactic effect that delays the recruitment of various inflammatory cells. The sustained production of pro-inflammatory cytokines, impaired immune cell function, poor angiogenic response, microvascular complications, compromised keratinocytes, downregulation of fibroblast proliferation and migration, and the subsequent decrease in production of growth factors associated with wound healing have been reported and implicated as causes of delayed healing in diabetic animal models. B. Microbial factors 1) High bacterial diversity: Dowd et al.
introduced the concept of functionally equivalent pathogroups (FEP), in which individual members of the biofilm community do not cause disease on their own; rather, it is their co-aggregation into an FEP that provides the synergistic effect. This gives the biofilm community the favorable factors necessary to maintain sustained inflammation and infection in the wound. Redel et al. observed that ecological alterations in the wound micro-environment influence the risk of wound infection. Oates et al. concluded that chronic diabetic foot wounds harbored greater eubacterial diversity than healthy skin, with Staphylococcus aureus being the most common organism. These findings were derived mostly from microbiome-based studies of wound and cutaneous samples from patients with chronic wounds, thus reflecting the in vivo situation. To understand this bacterial diversity, Percival et al. in their review reflected on the presence of anaerobes in diabetic foot ulcers (DFUs) and infection. The role of anaerobes in biofilms and their coexistence with aerobic species has been largely under investigation, but current evidence emphasizes the significance of anaerobes in multi-species biofilm communities. The true frequency of anaerobes in surgical wounds remains unclear, largely because of the diverse bacterial culture methods, the types of samples taken for analysis, and the transport media used. However, anaerobes are predominantly seen in DFUs that are deeper, more chronic, and associated with ischemia, gangrene, or a foul odor. Summarizing the available literature, the most commonly encountered bacteria in chronic wound biofilms are the ESKAPE pathogens (Enterococcus faecium, S. aureus, Klebsiella pneumoniae, Acinetobacter baumannii, Pseudomonas aeruginosa, and Enterobacter spp.). Others, such as coagulase-negative staphylococci and Proteus spp., are also involved. While the focus until now has largely been on the diverse bacterial pathogens in chronic wounds, the role of fungi (particularly Candida species) in wound biofilms is assuming increased significance. However, it should be emphasized that the majority of this literature is the result of studies on bacteria isolated from wounds rather than direct study of the biofilm in the wounds. 2) Microbial load: It is well documented that wounds are susceptible to infection when the microbial load reaches a critical level, or “critical colonization”. However, the concept is contentious. Bendy et al. in 1964 first proposed that microbial numbers play an important role in the non-healing of wounds. This was supported by other studies emphasizing a critical level that was hazardous for the healing process. However, in 1997, Robson et al. documented that healing occurred even in the presence of high bacterial counts. This observation called into question the role of microbial load in assessing the risk of wound infection and healing progression. The facts that bacteria are randomly distributed within the wound environment and that determining their density is highly subjective overturned the idea that microbial numbers alone should be used to predict wound infection. 3) Microbial pathogenicity: The polymicrobial nature of biofilms in chronic wounds produces a synergistic effect in which non-virulent bacterial species become virulent and cause damage to the host. These synergistic interactions within a biofilm eventually affect the bio-volume and bio-functionality of the biofilm.
However, it is the critical control and modulation of gene expression that enables phenotypically different forms of bacteria to survive adverse external conditions. This concept of biological insurance confers resistance to the host immune response and to various antimicrobials. In addition, the synergistic, antagonistic, and mutualistic cooperation between the microbes in the wound bed facilitates a complex yet balanced microbial community that exists in a state of homeostasis with the host wound bed. The stability and long-term survival of this microbial community are made possible by the formation of an efficient communication system that can coordinate the functions and activities of different species as well as gene expression. Studying the notion of biofilm dissemination, Guilhen et al. suggested that biofilms detach and disperse in response to various environmental and biological signals, which helps them colonize the surrounding environment. Several virulence traits of P. aeruginosa are involved in biofilm formation. Among the surface appendages, type IV pili help in adhesion during biofilm formation, flagella help in maturation of the biofilm, and the lipopolysaccharide (LPS) layer activates neutrophils to ‘trap’ pathogens, thus indirectly protecting the bacterium. Besides, the secretion systems (types III–VI) also help in inflammation and invasion. Among the secreted proteins, the EPS of P. aeruginosa is the main constituent of P. aeruginosa biofilms. Lipase A and alginate secreted by the bacteria interact in the extracellular matrix of the biofilm, resulting in drug resistance. Several quorum-sensing (QS) pathways regulate the release of further virulence factors, such as elastase, rhamnolipids, and exotoxin A, for the maturation of the biofilm. Similarly, another biofilm-forming organism, S. aureus, also possesses specific virulence traits, such as microbial surface components recognizing adhesive matrix molecules (MSCRAMMs), fibronectin-binding proteins, autolysins (AtlA), protein A, biofilm-associated protein (Bap), and teichoic acids, that help in adhesion to surfaces and host cells and in maintaining the structural integrity of the biofilm. In this context, the emerging concept of ‘theft biofilm’ should be discussed, whereby host skin lipids are ‘stolen’ by bacteria from the skin wound micro-environment to induce the excessive production of several virulence factors. In particular, researchers have shown that P. aeruginosa, which possesses the largest biofilm aggregates, is capable of using host lipid factors to upregulate the ceramidase system, which in turn augments biofilm formation. C. Environmental factors Heterogeneity within the biofilm seems mandatory to maintain its ecological stability. Hence, any exogenous or endogenous physiological and biochemical change will alter the relative microbial competitiveness within the wound and, therefore, alter this homeostasis. The demographic characteristics and personal hygiene of the patient, glycemic control, and previous antibiotic exposure all seem to impact the biofilm and its development.
A. Recalcitrant wound healing Two hypotheses have been postulated to explain the complex pathway that leads to biofilm-mediated recalcitrant wound healing. First, the specific bacterial hypothesis suggests that, although heterogeneity and complex microbial diversity are integral to the biofilm in a chronic wound, only a few bacterial species contribute to wound infection and are involved in non-healing wounds. In contrast, the non-specific bacterial hypothesis considers the whole biofilm as a unit and suggests that the complex heterogeneous microflora causes delayed wound healing. These theories are yet to be proven conclusively, as it appears that no single possibility in any given wound at any point in time is responsible for a specific outcome. Understanding them can help us use directed therapies to tackle infection and promote wound healing. Studies have demonstrated the clinical translation of metagenomics-based data. In a longitudinal, prospective study of the microbiome of diabetic foot ulcer wounds, it was shown that the microbial ‘genetic signature’ of the biofilm clearly regulated clinical outcomes. While variants of S. aureus in the wound–biofilm microbiome predicted unfavorable outcomes, commensals such as Corynebacterium striatum and Alcaligenes faecalis from the wound margins also influenced healing. In addition, wound debridement caused a significant ‘microbiome shift’ in the wound microflora, with a reduction in low-virulence pathogenic anaerobes and a better outcome, in contrast to antibiotic treatment. Quantitative estimation of bacterial aggregates from varying depths of the wound surface has revealed that S. aureus biofilms localize superficially compared with those of P. aeruginosa, which are found much deeper. Knowledge of this spatial organization of the biofilm microflora further supports the benefit of debridement in its clinical management. Another microbiome shift in wound flora is often observed with the administration of topical and systemic antimicrobials, which causes a relative increase in members of the Pseudomonadaceae at the cost of a decrease in the Streptococcaceae.
The concept of FEPs, the microbiome shifts caused by clinical interventions, and the actual impact of wound debridement on biofilm management and the promotion of wound healing have thus been better revealed by translating molecular data into clinically relevant outcomes. Tolerance and resistance to antimicrobial agents are common properties of biofilms. Tolerance to antimicrobial agents refers to the ability of the members of the bacterial biofilm to transiently withstand lethal concentrations of antibiotics or biocides, primarily by slowing down vital processes. Such inactive, non-dividing cells are called persister cells. The metabolic processes that are often the targets of antimicrobial agents are downregulated in persisters. However, persisters can revert to their normal metabolic functions and replication rate once the antimicrobial agents are removed. In this regard, small colony variants (SCVs) of S. aureus and P. aeruginosa are good examples of persistence in host cells.
The resistance of biofilms to antibiotics is impressive when compared with planktonic cells: evidence suggests that microbes in a biofilm can become 10–1000 times more resistant to the effects of antimicrobial agents. Although this fact is well established, its underlying molecular mechanisms are not completely understood. Various mechanisms for the development of antimicrobial resistance have been suggested (as detailed below), but it is probably their combination that produces the outcome. The polysaccharide matrix is suggested to act as a barrier that prevents access to the bacterial cell. De Beer et al. concluded that the limited penetration of chlorine into the biofilm matrix is an important factor in the reduced efficacy of this biocide compared with its action against planktonic bacteria. Suci et al. studied the penetration of ciprofloxacin through P. aeruginosa biofilms with the help of infrared spectrometry in a culture-based in vitro model and found that transport of the antibiotic to the biofilm–substratum interface during a 21-minute exposure to 100 µg/ml was significantly impeded by the biofilm. These results suggest that barriers to drug transport within the biofilm may be an important factor in antimicrobial resistance. However, Dunne et al., using an in vitro dialysis chamber-based model simulating infected bioimplants, failed to demonstrate sterilization of staphylococcal biofilm even though a combination of vancomycin and rifampicin improved the perfusion of the drugs, suggesting an additional mechanism of antimicrobial resistance. Anderl et al., investigating the penetration of ampicillin and ciprofloxacin through biofilms in an in vitro model, showed that despite full penetration of these antibiotics there was increased resistance of the wild-type strains to ciprofloxacin and of the mutant strains to ampicillin as well as ciprofloxacin, again pointing to other mechanisms of antibiotic resistance. Due to nutrient limitation, mature biofilms show a gradual transition of bacterial growth from slow to no growth. This physiological change accounts for their survival against antibiotics. It has also been observed that the sensitivity of both planktonic and biofilm bacteria to antimicrobials increases with increasing growth rate, indicating that a slow growth rate protects biofilm cells from antimicrobial action. Interestingly, recent studies have shown that the slow growth rate of deeper bacteria within the biofilm is not due to nutrient limitation per se but is secondary to a general stress response to, for example, temperature changes, pH changes, and the presence of other chemical agents. This hypothesis is plausible, as the stress response produces physiological changes that protect the bacteria from environmental stresses. It has also been shown that RpoS (a gene encoding a sigma factor in Escherichia coli that regulates the stress response, allowing cells not only to survive environmental adversities but also preparing them for subsequent stress) is the central regulator of this response, and its deletion results in differences in biofilm cell density.
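The penetration-barrier mechanism discussed above can be made quantitative with a standard diffusion–reaction calculation: if a drug diffuses in from the surface while being consumed or neutralized at a first-order rate inside the matrix, its steady-state concentration falls off roughly exponentially with depth. The sketch below uses hypothetical parameter values chosen only to illustrate the gradient, not values drawn from the studies cited above.

```python
import numpy as np

# Steady-state profile of a drug diffusing into a biofilm of thickness L while
# being consumed at a first-order rate k, with a no-flux substratum boundary:
#     C(x) = C0 * cosh(phi * (L - x)) / cosh(phi * L),  phi = sqrt(k / D)
# All parameter values are assumed, purely for illustration.

D = 5e-10     # effective diffusivity in the matrix, m^2/s (assumed)
k = 2.0       # first-order consumption/neutralization rate, 1/s (assumed)
L = 100e-6    # biofilm thickness, m (~100 micrometers)
C0 = 100.0    # antibiotic concentration at the biofilm surface, ug/ml

phi = np.sqrt(k / D)              # inverse penetration depth, 1/m
for x in np.linspace(0.0, L, 5):  # from surface down to the substratum
    C = C0 * np.cosh(phi * (L - x)) / np.cosh(phi * L)
    print(f"depth {x * 1e6:5.1f} um: {C:8.3f} ug/ml")
```

With these illustrative numbers the concentration at the substratum is a few hundred-fold below the surface value, consistent in spirit with the 10–1000-fold resistance range quoted above, even before slow growth or persisters are taken into account.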
QS is a cell–cell communication mechanism that synchronizes gene expression in response to population cell density. Davies et al. reported that a mutant defective in an intercellular signal molecule involved in the development of P. aeruginosa biofilms formed flat, undifferentiated biofilms that, unlike wild-type biofilms, were sensitive to the biocide sodium dodecyl sulfate. However, it has also been shown that antibiotic resistance remains unaffected in QS-defective mutants. Thus, the role of the QS mechanism in the development of antibiotic resistance needs support from further studies.
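For intuition, QS can be caricatured as a density-dependent switch: cells secrete an autoinducer as they grow, and QS-controlled genes turn on once the autoinducer crosses a threshold. The toy model below uses arbitrary illustrative parameters and does not model any specific organism or signal.

```python
# Didactic sketch of quorum sensing as a density-dependent switch: cells grow
# logistically and secrete an autoinducer; QS-controlled genes switch on once
# the autoinducer crosses a threshold. All parameters are arbitrary.

r, K = 0.8, 1e9        # growth rate (1/h) and carrying capacity (cells/ml), assumed
s, d = 1e-9, 0.3       # autoinducer secretion (nM*ml/cell/h) and decay (1/h), assumed
A_thresh = 1.0         # activation threshold (nM), assumed

N, A = 1e5, 0.0        # initial cell density and autoinducer level
dt = 0.01              # forward-Euler step, h
for step in range(int(24 / dt) + 1):
    t = step * dt
    if step % int(3 / dt) == 0:
        state = "ON" if A >= A_thresh else "off"
        print(f"t = {t:4.1f} h: N = {N:10.3g} cells/ml, A = {A:7.3f} nM, QS {state}")
    N += dt * r * N * (1 - N / K)   # logistic growth
    A += dt * (s * N - d * A)       # secretion minus decay
```

In this caricature, a QS inhibitor would act by holding the effective autoinducer signal below threshold, suppressing the QS-controlled program without necessarily killing the cells.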
The activation or repression of genes when cells attach to a surface results in the expression of the "biofilm-type increased resistance" phenotype. Induction of multidrug efflux pumps and alterations in outer membrane proteins are some of the acquired protective mechanisms that shield the bacteria from the detrimental effects of antimicrobial agents.

Host immune response resistance

The transition from planktonic form to complex biofilm produces small molecules that increase inflammation and induce host cell death. Although planktonic cells are readily cleared, biofilms reduce the effectiveness of immune cells and withstand the epithelial barrier, the host microbiome, and the various complement fractions to ensure their survival. Overall, biofilms stimulate a unique immune response that is yet to be fully understood.

Host immune responses to biofilm constituents

The deep embedment of the bacteria helps them evade the host immune system. Instead, the immune system first comes into contact with components of the EPS matrix, a diverse, hydrated mixture of extracellular DNA (e-DNA, bacterial and host), proteins, polysaccharides, and lipids. Besides being a mechanical barrier, the EPS shapes the host immune response, acting as both an immunomodulator and an immunogen. Some studies have also suggested that bacterial exopolysaccharides block the host immune response by reducing the production of pro-inflammatory cytokines and reactive oxygen species. Besides inactivating innate immunity, they also inhibit complement activation and adaptive immunity. It is important to note that the spectrum of the host response to biofilms and their specific components remains unclear, and more research is needed in this area. As in vitro studies do not take into consideration the complex wound bed environment, it is also unclear how in vitro results translate into clinical scenarios.

Host cell response to pathogenic biofilms

Neutrophils play an important role in effectively controlling and eliminating bacterial pathogens. Several mechanisms seem to be involved, including phagocytosis, release of antimicrobial peptides, release of reactive oxygen species, and formation of the web-like chromatin structures called neutrophil extracellular traps (NETs). NETs protect against large pathogens, including biofilms, that are not effectively engulfed by neutrophils alone. NETosis releases chromatin and other proteins from the neutrophils in a controlled manner, thus clearing the pathogen. Studies conducted on porcine burn wounds have clearly demonstrated that biofilms of S. aureus 'skew' the neutrophils through their leucocidins and diminish the effects of NETosis. Similarly, the LPS layer of P. aeruginosa also induces the activation of NETs, only to protect itself from other invading pathogens and strengthen its own biofilm. Pathogenic biofilms weaken the host immune cells through several mechanisms, such as immobilizing polymorphonuclear neutrophils (PMNs), decreasing the phagocytic potential of macrophages, inhibiting reactive oxygen species production, and reducing bacterial opsonization. In addition, bacteria have evolved to exploit both PMNs and macrophages to enhance hyper-inflammation, leading to a bacteria-centric immune process in which persistent hyper-inflammation is maintained in the wound bed; the resulting inflammatory exudate continually provides nutrition to the microbes. This weakened immune response, which fails to progress from innate to more organized adaptive immunity, is ineffective at controlling and killing the microbes in the long run and contributes to hyper-inflammation, causing collateral damage to host tissue through heightened levels of matrix metalloproteases, neutrophil elastases, and inflammatory cytokines.
Biofilms cannot be detected in any wound with the naked eye. The presence of a slimy, shiny, translucent layer on the wound surface is a non-specific finding and is at best indirect evidence of the presence of biofilm. To aid recognition, several clinical cues have been identified:
- Wound failing to heal despite the standard of care, or local infection persisting for more than 45 days
- Persistent formation of necrotic tissue and friable granulation tissue in the wound bed
- Failure of antimicrobial agents to facilitate healing
- A layer of slime on the wound surface that can be easily removed but rebuilds quickly
- A wound that heals partially only to break down again

These clinical cues should arouse suspicion and help initiate early biofilm-based wound care, although actual identification of biofilm requires advanced laboratory techniques. It must be emphasized that there is neither a quantifiable marker for biofilm detection nor an objective method to define the areas of a wound affected by biofilm. Molecular methods such as DNA/RNA-based analyses and metagenomic or, more recently, whole-genome sequencing are more sensitive and accurate but have their own limitations, the most significant of which is their inability to indicate whether microbial cells are viable or whether the organisms are in a biofilm or planktonic phenotype. False-positive detection of contaminating DNA from the clinical environment (including the patient's skin, surgical instruments, and gloves) is also a matter of concern. The discovery of Bap in S. aureus, as well as of Bap homologs in many other bacterial species, is revolutionizing the field of biofilm biomarkers. These proteins are present on bacterial surfaces and are reported to be involved in biofilm formation. Other biofilm matrix components such as cellulose, EPS, and e-DNA can also be used as potential biomarkers and may offer species identification. Various other approaches, such as proteomics and metabolomics, are expanding rapidly owing to their ability to probe biofilm physiology more closely and accurately. Biofilm imaging technology is an advancing field that provides greater insight into the dynamics and complexities of biofilms; the ability to visualize 3D biofilm images combined with fluorescent staining using confocal scanning laser microscopy (CSLM) allows biofilms to be observed in real time. Alongside these novel technologies, traditional in vitro culture techniques remain in use, but slow-growing persisters may not form colonies under routine culturing conditions and can thus cause false-negative results. Moreover, biofilms with a highly heterogeneous population of fastidious strains require specific growth factors for their cultivation. Even though the field of biofilm diagnosis is evolving fast, routine biofilm characterization and detection face multiple challenges. Therefore, the need for a standardized and reliable method of detection in clinical settings cannot be overlooked.
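Purely as an illustration of how the clinical cues listed above might be operationalized at the bedside, the sketch below encodes them as a simple checklist score. The field names, thresholds, and output messages are hypothetical; this is a didactic sketch, not a validated diagnostic instrument.

```python
from dataclasses import dataclass

# Hypothetical screening helper encoding the clinical cues of wound biofilm as
# a checklist. Illustrative only -- not a validated clinical tool.

@dataclass
class WoundAssessment:
    days_without_healing: int           # duration of stalled healing / local infection
    recurrent_slime_layer: bool         # slime layer reforms quickly after removal
    friable_granulation_or_necrosis: bool
    failed_antimicrobials: bool         # no healing despite antimicrobial therapy
    cyclic_partial_healing: bool        # heals partially, then breaks down again

def biofilm_suspicion(w: WoundAssessment) -> str:
    cues = [
        w.days_without_healing > 45,
        w.recurrent_slime_layer,
        w.friable_granulation_or_necrosis,
        w.failed_antimicrobials,
        w.cyclic_partial_healing,
    ]
    score = sum(cues)
    # Any single cue should arouse suspicion; several together strengthen it.
    if score == 0:
        return "low suspicion - continue standard of care"
    if score <= 2:
        return "possible biofilm - reassess; consider biofilm-based wound care"
    return "high suspicion - initiate biofilm-based wound care early"

print(biofilm_suspicion(WoundAssessment(60, True, True, True, False)))
```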
The biofilm construct, the wound micro-environment, and the intrinsic specialties of biofilm bacteria make the biofilm extremely tolerant to antibiotics and antimicrobial agents. This creates the demand for novel anti-biofilm strategies that can be used clinically at the bedside, improve therapeutic response, and provide a better clinical outcome. The Table summarizes the various anti-biofilm therapeutic strategies available.

Therapies targeting bacteria

These modalities target microbial structure and function. Their mechanisms range from direct toxicity to the bacterial cell to inhibition of bacterial enzymes and signaling pathways.

Bacteriophage therapy

Bacteriophages are viruses and the natural predators of bacteria. Their ability to negate the protective biofilm stems from several proposed mechanisms:
- High host specificity, thereby preserving beneficial bacterial flora
- Production of phage-encoded enzymes (polysaccharide depolymerase and alginase) that disrupt the biofilm matrix
- An intrinsic ability to multiply within the bacterial host cell and liberate new virus particles by bacterial cell lysis

The cell kill is highly specific, and the phage population declines as soon as the bacterial population decreases. Studies in animal wound infection models have demonstrated positive results in the early phases of biofilm formation, although the beneficial effect was not sustained in well-formed biofilm. The effect of bacteriophage in a mouse wound model against multidrug-resistant P. aeruginosa showed promising outcomes. Phage therapy was then tested in patients with non-healing infected wounds and showed significant improvement in wound healing. Needless to say, the use of bacteriophages as therapeutic agents is yet to be widely accepted. Their synergism with concurrent antibiotic use can not only enhance bacterial killing but is also likely to reduce antibiotic resistance.
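The self-limiting kinetics noted above, with phage numbers tracking their hosts, can be seen in a minimal predator–prey model. The sketch below uses simple mass-action infection with entirely hypothetical rate constants; it is a caricature, not a model of any particular phage preparation.

```python
# Minimal Lotka-Volterra-style sketch of phage-host kinetics, illustrating the
# self-limiting behavior described above: phage amplify while host bacteria
# are abundant and decline once the bacterial population collapses.
# Every parameter value is hypothetical.

r     = 0.5    # bacterial growth rate, 1/h (assumed)
delta = 1e-9   # adsorption/infection rate constant, ml/(phage*h) (assumed)
burst = 50.0   # new phage released per lysed cell (assumed)
m     = 0.1    # free-phage decay rate, 1/h (assumed)

B, P = 1e7, 1e6          # starting bacteria and phage per ml
dt = 0.001               # forward-Euler step, h (kept small for stability)
for step in range(int(48 / dt) + 1):
    if step % int(8 / dt) == 0:
        print(f"t = {step * dt:4.0f} h: bacteria {B:10.3g}/ml, phage {P:10.3g}/ml")
    infections = delta * B * P             # mass-action infection events per h
    B += dt * (r * B - infections)         # growth minus lysis
    P += dt * (burst * infections - m * P) # burst release minus decay
```

Because new phage are produced only by infecting bacteria, the phage population inevitably decays (at rate m) once its host is depleted; this is one reason phage dosing is regarded as self-limiting compared with fixed-dose antibiotics.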
Nano-antimicrobials and metals

The ability of nano-formulations to cross the biofilm barrier and overcome antimicrobial resistance has increased their popularity in recent years. Besides having intrinsic antimicrobial activity (as with silver), they also target the biofilm matrix and enhance the effect of other modalities (for example, magnetic hyperthermia-based technology). Both in vitro and in vivo studies have demonstrated that silver inhibits both early and mature biofilms. The broad-spectrum antimicrobial ability of these formulations comes from their capacity to bind bacterial structures and destabilize intermolecular adhesion bonds. Beyond nanoparticles, recent work uses nanohybrid enzymes with the aim of generating reactive oxygen species. However, their cytotoxicity at high concentrations precludes their clinical use at present. Nonetheless, their physical ability to penetrate the dense matrix, with a low likelihood of resistance developing, makes them effective against biofilms. Besides silver, other metals such as cerium and gallium also demonstrate anti-biofilm effects: they interfere with the formation and maturation of biofilm and can consequently be used as topical applications in wound care, disrupting and preventing biofilm formation. However, with only a handful of approved products, further research is needed before these metals can be used clinically as effective armaments against biofilms. Another method of increasing antimicrobial delivery to the wound site is the implantation of a biomaterial containing the desired antimicrobial. This form of local, sustained delivery has been used successfully to prevent biofilm-related wound complications, especially infections in bone. The presence of the antimicrobial at the wound site alters the wound micro-environment, disrupting the biofilm and initiating the proliferative phase and healing.

Blue light therapy

Several studies have demonstrated the positive benefits of photobiomodulation in wound healing. Although the biological mechanisms are yet to be understood completely, studies have shown that visible light between the 400 nm and 500 nm wavelengths has an antimicrobial and anti-biofilm effect. Halstead et al., in their in vitro study, tested blue light against planktonic and biofilm bacteria and showed significant bacterial sensitivity to the treatment. It is interesting to note that Gram-positive bacteria are less susceptible, and the effect on older biofilm is still a matter of debate. The ease of administration, minimal side effects, action against a wide variety of microorganisms, and low potential for tolerance make blue light a promising option in the management of chronic biofilms.

QS inhibitors

QS is an important signaling system consisting of oligopeptides that are released into the extracellular fluid and facilitate cell-to-cell communication in bacterial colonies. QS is responsible for maintaining bacterial population density and virulence factor production. Inhibiting these pathways can prevent biofilm formation and reduce bacterial virulence. Studies have shown that chlorogenic acid decreases bacterial load and accelerates healing in a mouse wound model of P. aeruginosa infection via QS inhibition. In S. aureus, QS has been shown to be inhibited by RNAIII-inhibiting peptide (RIP) and its derivatives. Owing to their marked synergistic effect with antibiotics, QS inhibitors can be used as adjuncts to increase the susceptibility of biofilms to antimicrobials. However, their toxicity to host cells at working concentrations and their reduced efficacy in in vivo models limit their clinical use at present. In this context, the polyphenolic phytochemical curcumin has been studied extensively as an anti-biofilm agent. Curcumin inhibits QS systems and disrupts biofilm formation by preventing bacterial adhesion to host receptors. Several nano-formulations incorporating curcumin are available for application both on wound surfaces and on implantable devices to prevent biofilm formation.

Matrix-degrading enzymes

Biofilm matrix degradation is yet another promising anti-biofilm strategy. The use of DNase I, Dispersin B (DspB), and α-amylase to degrade the complex biofilm structure allows increased antibiotic penetration and therefore increases efficacy. This biofilm-degrading strategy not only inhibits biofilm formation but also disrupts the mature biofilms of S. aureus, Vibrio cholerae, and P. aeruginosa. However, the cost of synthesizing pure enzymes for clinical application limits their use. Nonetheless, combining biofilm matrix-degrading enzymes with antibiotics is a highly effective tool for removing biofilms from recalcitrant wounds.

Antimicrobial peptides and natural compounds

Antimicrobial peptides are positively charged, amphipathic peptides, 15–30 amino acids in length, that can be produced by bacteria and fungi. They bind to negatively charged structural molecules on the microbial membrane and thereby exert a broad spectrum of antimicrobial activity. Their major advantage is the ability to act on the slow-growing, non-multiplying bacteria encountered in biofilms. The ability to modify their primary amino acid sequences to enhance effectiveness and stability makes them attractive anti-biofilm agents.
However, their susceptibility to body fluid pH, proteolytic activity, and ionic strength makes their clinical application challenging. Natural and plant-based derivatives have also been used as preventive measures against biofilms. In this regard, the antibacterial effects of honey, both sidr and manuka, deserve mention. These effects are presumed to be multifactorial, attributable to a substantial content of the dicarbonyl methylglyoxal (MGO), bee defensin-1, a number of phenolic compounds, and complex carbohydrates. The osmotic effect of the high sugar concentration, the low pH, and hydrogen peroxide produced by bee-derived glucose oxidase are further mechanisms of antibacterial activity. Furthermore, manuka honey has been shown to affect gene expression in methicillin-resistant Staphylococcus aureus (MRSA).

Ultrasonic treatment

Although low-frequency ultrasound alone is not effective in killing biofilm-growing bacteria, it can be combined with antibiotics to enhance antibiotic transport across the biofilm and increase the biofilm's sensitivity to antimicrobial agents. Studies have shown that this combination enhances the killing of P. aeruginosa- and S. aureus-associated biofilms and of those caused by drug-resistant E. coli. Ultrasonic therapy in the management of non-healing wounds is thus a promising non-invasive means of decreasing bacterial bioburden.

Electrical and electrochemical approaches

Recent years have seen growing interest in electroceuticals and the effect of electrical current on the various stages of wound healing. Human studies have shown that electrical stimulation increases cutaneous perfusion and accelerates wound healing. In a study by Banerjee et al., the growth of P. aeruginosa was markedly arrested in the presence of a wireless electroceutical dressing (WED), which is activated by wound exudate to generate an electric field. Through its ability to produce ROS, the dressing decreased biofilm thickness and repressed the activity of quorum-sensing genes. Similarly, another study demonstrated the ability of WED to disrupt biofilm aggregates and accelerate wound closure by restoring skin barrier function. Electroceuticals thus provide novel therapeutic options to improve wound outcomes by enhancing re-epithelialization and disrupting biofilms; their low cost, good safety profile, and long shelf life are added advantages.

Therapies targeting the wound micro-environment

Modification of local pH

The wound bed pH shifts from acidic to alkaline to neutral and then again to acidic as the wound heals. Studies have shown that the failure of most acute and chronic wounds to heal is correlated with an alkaline pH of 7.15 to 8.9. Wounds stalled in a prolonged inflammatory phase are also subject to increased protease activity, which is pH dependent. Acidification of wounds using topical acetic acid, polyacrylic acid, and polycarboxylate vinyl resins has been employed to study wound healing. It is argued that wound acidification, besides being an adjuvant to healing, controls P. aeruginosa, which is present in 40% of chronic wounds and is often resistant to antimicrobial therapy.
Negative pressure wound therapy (NPWT)

Negative pressure wound therapy (NPWT) applies continuous or intermittent sub-atmospheric pressure to the wound surface and is currently a standard of care in difficult wound management. NPWT may assist wound healing by increasing tissue perfusion and promoting the production of granulation tissue, besides reducing exudate, edema, and bacterial contamination. Recent work suggests that NPWT with instillation of antimicrobials such as diluted hypochlorous acid contributes to a significant reduction in wound bioburden and thereby shows promising results in wounds with mature biofilms. With the added advantage that bacterial resistance does not develop, this technique combined with topical antiseptics is well suited to the management of difficult-to-heal wounds.

Hyperbaric oxygen therapy (HBOT)

It is well known that persistent hypoxia in chronic wounds limits healing. HBOT is an evolving therapy in which 100% oxygen above atmospheric pressure is supplied to the tissues for a defined period, with the aim of increasing the partial pressure of oxygen in the circulation and thereby its delivery to the wound bed. It aids wound healing by improving oxygenation, decreasing inflammation, and enhancing neovascularization. Its positive effect in reducing bacterial biofilms has been demonstrated both in vitro and in vivo. This may reflect an antimicrobial effect via the induction of oxidative stress and modulation of the host immune system, apart from acting synergistically with antibiotics and thereby enhancing their effects. With almost no likelihood of bacterial resistance developing, the efficacy of HBOT as an adjunct therapy against biofilms is promising.
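The rationale for raising ambient pressure, rather than simply administering 100% oxygen, follows from a textbook Henry's-law estimate of dissolved oxygen (roughly 0.003 ml O2 per dl of blood per mmHg of arterial PO2). The figures below are standard physiological approximations, not values from the studies cited above.

```python
# Back-of-the-envelope estimate of why HBOT raises oxygen delivery: dissolved
# O2 in plasma is ~0.003 ml O2 per dl blood per mmHg of PaO2. Textbook
# approximations only, not measurements from the cited studies.

def dissolved_o2_ml_per_dl(fio2: float, ambient_atm: float) -> float:
    """Approximate dissolved O2 content of arterial plasma."""
    p_baro = 760.0 * ambient_atm        # barometric pressure, mmHg
    p_h2o, p_aco2 = 47.0, 40.0          # water vapor and alveolar CO2, mmHg
    pao2 = (p_baro - p_h2o) * fio2 - p_aco2 / 0.8   # alveolar gas equation
    return 0.003 * pao2

for label, fio2, atm in [("room air, 1 ATA", 0.21, 1.0),
                         ("100% O2, 1 ATA", 1.0, 1.0),
                         ("100% O2, 2.5 ATA (HBOT)", 1.0, 2.5)]:
    print(f"{label:26s}: ~{dissolved_o2_ml_per_dl(fio2, atm):.2f} ml O2/dl dissolved")
```

At 2.5 ATA the plasma alone carries nearly three times the dissolved oxygen of sea-level 100% O2 breathing, which underlies the improved wound-bed oxygen delivery described above.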
Surfactants

Surfactants unite compounds of different polarities and reduce the surface tension of the surrounding medium, thereby decreasing the tendency of such compounds to stick together. Besides being used as wound scrubs, surfactants can also serve as carriers of antimicrobials. In comparison with standard silver sulfadiazine cream, surfactants have pro-healing effects on full-thickness skin wounds. Surfactant polymer dressings have been shown to decrease the growth rate of both Gram-positive and Gram-negative organisms, although the resultant effect is mostly bacteriostatic. Surfactants work by disrupting the EPS matrix and converting biofilm bacteria to the planktonic phenotype, which makes the bacteria easier to remove from wound surfaces and more susceptible to antibiotics when the two are used in combination. These molecules can be used to coat dressings and sutures and carry little risk of resistance developing.

Therapies targeting bacteria and the chronic wound micro-environment

Probiotics

The use of live bacteria to achieve health benefits ranges from the simple prevention of viral gastroenteritis to the treatment of inflammatory bowel disease. Given their immunomodulatory role and ability to displace biofilm-growing pathogens, probiotics are being considered for the prevention of biofilm formation. Walencka et al., evaluating the ability of Lactobacillus acidophilus-derived substances to inhibit S. aureus and S. epidermidis biofilms, concluded that inhibition of bacterial attachment and biofilm disruption occur through influences on cell-to-cell and cell-to-surface interactions. Furthermore, Sadowska et al. observed an antagonistic effect of bacteriocin-like inhibitory substances produced by L. acidophilus against S. aureus strains. Varma et al. investigated the anti-infective properties of Lactobacillus fermentum by co-incubation with S. aureus and P. aeruginosa and observed growth inhibition, increased cytotoxicity, and decreased biofilm formation. Although these laboratory results are promising, there is still a long way to go to identify the ideal probiotic that can be used clinically as an anti-biofilm tool.

Mesenchymal stem cells

Mesenchymal stem cells (MSCs), owing to their antimicrobial effect, hold tremendous potential for wound infection management. They exert anti-infective effects through both direct and indirect mechanisms, and their ability to secrete antimicrobial peptides as well as to modulate pro- and anti-inflammatory immune responses has aroused interest in their therapeutic potential for biofilm-laden wounds. Probably the most compelling evidence comes from Johnson et al., who studied the effects of MSC administration in canine models of biofilm-infected wounds and concluded that the best outcome was obtained by co-administering activated MSCs with antibiotics; furthermore, repeated systemic administration of activated MSCs improved bacterial clearance and wound healing. Wood et al. demonstrated that human adipose tissue-derived mesenchymal stem cells (AT-MSCs) inhibited the growth of S. aureus and P. aeruginosa, an effect attributed to the secretion of antibacterial factors, enhanced phagocytosis, and reduced bacterial adhesion. Their attractive differentiation potential and ability to speed up wound healing by promoting angiogenesis and reducing scar formation make MSCs a promising tool. However, heterogeneity in their preparation, suboptimal wound bed preparation, concerns about cell viability, and the need for larger controlled clinical trials to ensure safety preclude their widespread clinical use.
Both in vitro and in vivo studies have demonstrated that silver inhibits both early and mature biofilms . The broad-spectrum antimicrobial ability of these formulations comes from their ability to bind to bacterial structures and destabilize the intermolecular adhesion bonds. Besides using nanoparticles, recent work utilizes the use of nanohybrid enzymes with the aim to activate reactive oxygen species . However their cytotoxicity at high concentrations precludes their clinical use at present. Nonetheless, their physical ability to penetrate the dense matrix with a low likelihood to develop resistance makes them effective against biofilms. Besides silver, other metals such as cerium and gallium also demonstrate anti-biofilm effects. They interfere with the formation and maturation of biofilm. Consequently, these can be used as a topical application in wound care, thus disrupting and preventing biofilm formation . However, with a handful of approved products, further research is needed for the clinical use of these metals as effective armaments against biofilms. Another method for increased delivery of antimicrobials to the wound site is through the implantation of biomaterial containing the desired antimicrobial. This method of local and sustained delivery of antimicrobials has been successfully used in preventing biofilm-related wound complications, especially infections in bones . The presence of the antimicrobials at the site of the wound affects the wound micro-environment leading to disruption of the biofilm and initiation of the proliferative phase and healing. Blue light therapy Several studies have demonstrated the positive benefits of photo biomodulation in wound healing. Although biological mechanisms are yet to be understood completely, studies have shown that visible light between the 400 nm and 500 nm wavelengths has an antimicrobial and anti-biofilm effect . Halstead et al. in their in vitro study tested blue light against planktonic and biofilm bacteria and showed significant bacterial sensitivity to the blue light treatment . It is interesting to note that Gram-positive bacteria are less susceptible and the effect on the older biofilm is still a matter of debate. The ease of administration, minimal side effects, action against a wide variety of microorganisms, and low potential for tolerance, makes them propitious in the management of chronic biofilms. QS inhibitors QS is an important signaling system consisting of oligopeptides which are released in the extracellular fluid and facilitate cell-to-cell communication in bacterial colonies. QS is responsible for maintaining bacterial population density and virulence factor production . Inhibiting these pathways can prevent biofilm formation and reduce bacterial virulence. Studies have shown that chlorogenic acid decreases bacterial load and accelerates healing in a mouse wound model of P. aeruginosa infection via QS . In S. aureus , QS has been shown to be inhibited by RNAIII inhibiting peptide (RIP) and its derivatives . QS inhibitors due to their marked synergistic effect with antibiotics can be used as adjuncts to increase the susceptibility of biofilms to antimicrobials . However, their toxic effects on the host cells at working concentration and their reduced efficacy in the in vivo model limit their clinical use at present . In this context, a polyphenolic phytochemical, curcumin, has been extensively studied as an anti-biofilm agent. 
Curcumin acts by inhibiting the QS systems and disrupts biofilm formation by inhibiting bacterial adhesion to host receptors . Several nano-formulations incorporating curcumin are available for applications both on wound surfaces and on implantable devices to prevent biofilm formation. Matrix-degrading enzymes Biofilm matrix degradation is yet another promising anti-biofilm strategy. The use of DNAase I, Dispersin B (DspB), and a-amylase to degrade complex biofilm structure allows for increased antibiotic penetration and therefore increases its efficacy . This novel biofilm degrading strategy not only inhibits biofilm formation but also disrupts the mature biofilms of S. aureu s, Vibrio cholerae , and P. aeruginosa . However, the cost of synthesizing pure enzymes for clinical application makes it expensive and limits their clinical use. Nonetheless, combining biofilm matrix-degrading enzymes and antibiotics is a highly effective tool for removing biofilms from recalcitrant wounds . Antimicrobial peptides and natural compounds Antimicrobial peptides are positively charged, amphipathic peptides, 15–30 amino acids in length that can be produced by bacteria and fungi. They bind to negatively charged structural molecules on the microbial membrane and thereby exert a broad spectrum of antimicrobial activity . The major advantage is their ability to act on slow-growing, non-multiplying bacteria as encountered in biofilms. The ability to modify their primary amino acid sequences to enhance their effectiveness and stability makes them attractive anti-biofilm agents . However, their increased susceptibility to body fluid pH, proteolytic activity, and ionic strength makes their clinical application challenging. Natural and plant-based derivatives have also been used as preventive measures against biofilms. In this regard, the antibacterial effects of honey, both sidr and manuka need mention. The antibacterial effects are presumed to be multifactorial and are thought to be due to the substantial content of dicarbonyl methylglyoxal (MGO), bee defensin-1, a number of phenolic compounds, and complex carbohydrates. Antimicrobial effects exerted by the osmotic effect of high sugar concentration, low pH, and the presence of hydrogen peroxide produced by bee-derived glucose oxidase, are other mechanisms of its antibacterial activity . Furthermore, manuka honey has been shown to affect gene expression in multi-drug-resistant Staphylococcus aureus (MRSA) . Ultrasonic treatment Although low-frequency ultrasound is not effective alone in killing biofilm-growing bacteria, it can be combined with antibiotics to enhance antibiotic transport across the biofilms by enhancing the sensitivity of biofilms to antimicrobial agents . Studies have shown that this combination helps in the increased killing of P. aeruginosa and S. aureus associated biofilms and those caused by drug-resistant E. coli . Employing ultrasonic therapy in the management of non-healing wounds is a promising non-invasive means to decrease bacterial bioburden. Electrical and electrochemical approaches Recent years have ignited the interest in electroceuticals and the effect of electrical current in various stages of wound healing . Human studies have shown that electrical stimulation increases cutaneous perfusion and accelerates wound healing . In a study by Banerjee et al., the growth of P. 
aeruginosa was markedly arrested in the presence of wireless electroceutical dressing (WED), which in the presence of wound exudate gets activated to generate an electric field. Due to its ability to produce ROS, biofilm thickness was decreased and the activity of quorum-sensing genes was repressed . Similarly, another study demonstrated the ability of WED to disrupt biofilm aggregates and accelerate wound closure by restoring skin barrier functions . These electroceuticals provide novel therapeutic options to improve wound outcomes by enhancing re-epithelization and disrupting biofilms. Its low cost, better safety profile, and long shelf life provide added advantages to its use. Bacteriophages are viruses and the natural predators of bacteria. Their ability to negate the protective biofilm stems from these proposed mechanisms: High host specificity thereby preserving beneficial bacterial flora Production of phage-encoded enzymes (polysaccharide depolymerase, and alginase) disrupting the biofilm matrix Its intrinsic ability to multiply within the bacterial host cell and liberate new virus particles by bacterial cell lysis. The cell kill is highly specific and the phage population also goes down as soon as the bacterial population decreases. Studies on animal wound infection models have demonstrated positive results in the early phases of biofilm formation although the beneficial effect was not sustained in well-formed biofilm . The effect of bacteriophage in a mouse wound model against multidrug-resistant P. aeruginosa showed promising outcomes . The phage therapy was then tested on patients with non-healing infected wounds that showed significant improvement in wound healing . Needless to say, the use of bacteriophages as therapeutic agents is yet to be widely accepted. The added benefit of their synergism with concurrent antibiotic use cannot only enhance bacterial killing but is also likely to reduce antibiotic resistance. The ability of nano-formulations to cross the biofilm barrier and overcome antimicrobial resistance has increased their popularity in recent years. Besides having an intrinsic antimicrobial activity (such as silver), these also target biofilm matrix and enhance the effect of other modalities (magnetic hyperthermia-based technology). Both in vitro and in vivo studies have demonstrated that silver inhibits both early and mature biofilms . The broad-spectrum antimicrobial ability of these formulations comes from their ability to bind to bacterial structures and destabilize the intermolecular adhesion bonds. Besides using nanoparticles, recent work utilizes the use of nanohybrid enzymes with the aim to activate reactive oxygen species . However their cytotoxicity at high concentrations precludes their clinical use at present. Nonetheless, their physical ability to penetrate the dense matrix with a low likelihood to develop resistance makes them effective against biofilms. Besides silver, other metals such as cerium and gallium also demonstrate anti-biofilm effects. They interfere with the formation and maturation of biofilm. Consequently, these can be used as a topical application in wound care, thus disrupting and preventing biofilm formation . However, with a handful of approved products, further research is needed for the clinical use of these metals as effective armaments against biofilms. Another method for increased delivery of antimicrobials to the wound site is through the implantation of biomaterial containing the desired antimicrobial. 
This method of local and sustained delivery of antimicrobials has been successfully used in preventing biofilm-related wound complications, especially infections in bones . The presence of the antimicrobials at the site of the wound affects the wound micro-environment leading to disruption of the biofilm and initiation of the proliferative phase and healing. Several studies have demonstrated the positive benefits of photo biomodulation in wound healing. Although biological mechanisms are yet to be understood completely, studies have shown that visible light between the 400 nm and 500 nm wavelengths has an antimicrobial and anti-biofilm effect . Halstead et al. in their in vitro study tested blue light against planktonic and biofilm bacteria and showed significant bacterial sensitivity to the blue light treatment . It is interesting to note that Gram-positive bacteria are less susceptible and the effect on the older biofilm is still a matter of debate. The ease of administration, minimal side effects, action against a wide variety of microorganisms, and low potential for tolerance, makes them propitious in the management of chronic biofilms. QS is an important signaling system consisting of oligopeptides which are released in the extracellular fluid and facilitate cell-to-cell communication in bacterial colonies. QS is responsible for maintaining bacterial population density and virulence factor production . Inhibiting these pathways can prevent biofilm formation and reduce bacterial virulence. Studies have shown that chlorogenic acid decreases bacterial load and accelerates healing in a mouse wound model of P. aeruginosa infection via QS . In S. aureus , QS has been shown to be inhibited by RNAIII inhibiting peptide (RIP) and its derivatives . QS inhibitors due to their marked synergistic effect with antibiotics can be used as adjuncts to increase the susceptibility of biofilms to antimicrobials . However, their toxic effects on the host cells at working concentration and their reduced efficacy in the in vivo model limit their clinical use at present . In this context, a polyphenolic phytochemical, curcumin, has been extensively studied as an anti-biofilm agent. Curcumin acts by inhibiting the QS systems and disrupts biofilm formation by inhibiting bacterial adhesion to host receptors . Several nano-formulations incorporating curcumin are available for applications both on wound surfaces and on implantable devices to prevent biofilm formation. Biofilm matrix degradation is yet another promising anti-biofilm strategy. The use of DNAase I, Dispersin B (DspB), and a-amylase to degrade complex biofilm structure allows for increased antibiotic penetration and therefore increases its efficacy . This novel biofilm degrading strategy not only inhibits biofilm formation but also disrupts the mature biofilms of S. aureu s, Vibrio cholerae , and P. aeruginosa . However, the cost of synthesizing pure enzymes for clinical application makes it expensive and limits their clinical use. Nonetheless, combining biofilm matrix-degrading enzymes and antibiotics is a highly effective tool for removing biofilms from recalcitrant wounds . Antimicrobial peptides are positively charged, amphipathic peptides, 15–30 amino acids in length that can be produced by bacteria and fungi. They bind to negatively charged structural molecules on the microbial membrane and thereby exert a broad spectrum of antimicrobial activity . 
The major advantage is their ability to act on slow-growing, non-multiplying bacteria as encountered in biofilms. The ability to modify their primary amino acid sequences to enhance their effectiveness and stability makes them attractive anti-biofilm agents . However, their increased susceptibility to body fluid pH, proteolytic activity, and ionic strength makes their clinical application challenging. Natural and plant-based derivatives have also been used as preventive measures against biofilms. In this regard, the antibacterial effects of honey, both sidr and manuka need mention. The antibacterial effects are presumed to be multifactorial and are thought to be due to the substantial content of dicarbonyl methylglyoxal (MGO), bee defensin-1, a number of phenolic compounds, and complex carbohydrates. Antimicrobial effects exerted by the osmotic effect of high sugar concentration, low pH, and the presence of hydrogen peroxide produced by bee-derived glucose oxidase, are other mechanisms of its antibacterial activity . Furthermore, manuka honey has been shown to affect gene expression in multi-drug-resistant Staphylococcus aureus (MRSA) . Although low-frequency ultrasound is not effective alone in killing biofilm-growing bacteria, it can be combined with antibiotics to enhance antibiotic transport across the biofilms by enhancing the sensitivity of biofilms to antimicrobial agents . Studies have shown that this combination helps in the increased killing of P. aeruginosa and S. aureus associated biofilms and those caused by drug-resistant E. coli . Employing ultrasonic therapy in the management of non-healing wounds is a promising non-invasive means to decrease bacterial bioburden. Recent years have ignited the interest in electroceuticals and the effect of electrical current in various stages of wound healing . Human studies have shown that electrical stimulation increases cutaneous perfusion and accelerates wound healing . In a study by Banerjee et al., the growth of P. aeruginosa was markedly arrested in the presence of wireless electroceutical dressing (WED), which in the presence of wound exudate gets activated to generate an electric field. Due to its ability to produce ROS, biofilm thickness was decreased and the activity of quorum-sensing genes was repressed . Similarly, another study demonstrated the ability of WED to disrupt biofilm aggregates and accelerate wound closure by restoring skin barrier functions . These electroceuticals provide novel therapeutic options to improve wound outcomes by enhancing re-epithelization and disrupting biofilms. Its low cost, better safety profile, and long shelf life provide added advantages to its use. Modification of local pH The wound bed pH shifts from acidic to alkaline to neutral and then again acidic as the wound heals . Studies have shown that failure of most acute and chronic wounds to heal is correlated with alkaline pH of 7.15 to 8.9 . The arduous wounds which are stalled due to a prolonged inflammatory phase are also subjected to increased protease activity that is pH dependent. Acidification of wounds using topical acetic acid , polyacrylic acid, and polycarboxylate vinyl resins have been employed to study wound healing. It is argued that wound acidification being an adjuvant to healing, controls P. aeruginosa , which is present in 40% of chronic wounds and is often resistant to antimicrobial therapy . 
Negative pressure wound therapy (NPWT) Negative pressure wound therapy (NPWT) applies continuous or intermittent sub-atmospheric pressure to the wound surface. Currently, it is a standard of care in difficult wound management. NPWT may assist wound healing by increasing tissue perfusion and help in the production of granulation tissue besides reducing exudates, edema, and bacterial contamination . Recent work suggests that NPWT with the instillation of antimicrobials such as diluted hypochlorous acid contributes to a significant reduction in wound bioburden and thereby shows promising results in wounds with mature biofilms . With the added advantage of absent bacterial resistance development, this technique in combination with topical antiseptics is ideal in the management of difficult-to-heal wounds. Hyperbaric oxygen therapy (HBOT) It is well known that persistent hypoxia in chronic wounds limits healing. HBOT is an evolving therapy in which 100% oxygen above atmospheric pressure is supplied to the tissues for a defined period with the aim to increase the partial pressure of oxygen in the circulation and thereby increase its delivery to the wound bed. It aids wound healing by improving oxygenation, decreasing inflammation, and enhancing neovascularization . Its positive effect on reducing bacterial biofilms both in vitro and in vivo has also been demonstrated . This may be due to its antimicrobial effect via inducing oxidative stress and host immune system modulation apart from acting synergistically with the antibiotics and thereby enhancing its effects . With almost no likelihood of the development of bacterial resistance, the efficacy of HBOT as an adjunct therapy against biofilms is promising. Surfactants Surfactants have the capacity to unite compounds with different polarities and reduce the surface tension of the surrounding medium and thereby decreasing their ability to stick together. Besides being used as wound scrubs surfactants can also be used as carriers of antimicrobials. In comparison to the standard silver sulfadiazine cream, surfactants have pro-healing effects on full-thickness skin wounds . Surfactant polymer dressing has been shown to decrease the growth rate of both Gram-positive and Gram-negative organisms, but the resultant effect was mostly bacteriostatic . Surfactants work by disrupting the EPS matrix and converting the biofilm bacteria to planktonic phenotype. This makes bacterial removal easier from wound surfaces and their susceptibility toward antibiotics when used in combination. These molecules can be used to coat dressings and sutures and have fewer chances of developing resistance. The wound bed pH shifts from acidic to alkaline to neutral and then again acidic as the wound heals . Studies have shown that failure of most acute and chronic wounds to heal is correlated with alkaline pH of 7.15 to 8.9 . The arduous wounds which are stalled due to a prolonged inflammatory phase are also subjected to increased protease activity that is pH dependent. Acidification of wounds using topical acetic acid , polyacrylic acid, and polycarboxylate vinyl resins have been employed to study wound healing. It is argued that wound acidification being an adjuvant to healing, controls P. aeruginosa , which is present in 40% of chronic wounds and is often resistant to antimicrobial therapy . Negative pressure wound therapy (NPWT) applies continuous or intermittent sub-atmospheric pressure to the wound surface. Currently, it is a standard of care in difficult wound management. 
NPWT may assist wound healing by increasing tissue perfusion and help in the production of granulation tissue besides reducing exudates, edema, and bacterial contamination . Recent work suggests that NPWT with the instillation of antimicrobials such as diluted hypochlorous acid contributes to a significant reduction in wound bioburden and thereby shows promising results in wounds with mature biofilms . With the added advantage of absent bacterial resistance development, this technique in combination with topical antiseptics is ideal in the management of difficult-to-heal wounds. It is well known that persistent hypoxia in chronic wounds limits healing. HBOT is an evolving therapy in which 100% oxygen above atmospheric pressure is supplied to the tissues for a defined period with the aim to increase the partial pressure of oxygen in the circulation and thereby increase its delivery to the wound bed. It aids wound healing by improving oxygenation, decreasing inflammation, and enhancing neovascularization . Its positive effect on reducing bacterial biofilms both in vitro and in vivo has also been demonstrated . This may be due to its antimicrobial effect via inducing oxidative stress and host immune system modulation apart from acting synergistically with the antibiotics and thereby enhancing its effects . With almost no likelihood of the development of bacterial resistance, the efficacy of HBOT as an adjunct therapy against biofilms is promising. Surfactants have the capacity to unite compounds with different polarities and reduce the surface tension of the surrounding medium and thereby decreasing their ability to stick together. Besides being used as wound scrubs surfactants can also be used as carriers of antimicrobials. In comparison to the standard silver sulfadiazine cream, surfactants have pro-healing effects on full-thickness skin wounds . Surfactant polymer dressing has been shown to decrease the growth rate of both Gram-positive and Gram-negative organisms, but the resultant effect was mostly bacteriostatic . Surfactants work by disrupting the EPS matrix and converting the biofilm bacteria to planktonic phenotype. This makes bacterial removal easier from wound surfaces and their susceptibility toward antibiotics when used in combination. These molecules can be used to coat dressings and sutures and have fewer chances of developing resistance. Probiotics The use of live bacteria for achieving health benefits ranges from simple prevention of viral gastroenteritis to the treatment of inflammatory bowel disease. With their immunomodulatory role and ability to replace biofilm-growing pathogens, their use is being considered for the prevention of biofilm formation. Walencka et al. in their study to evaluate the ability of the Lactobacillus acidophilus -derived substances to inhibit S. aureus and S. epidermidis biofilms concluded that inhibition of bacterial attachment and biofilm disruption occurs by influencing cell-to-cell and cell-to-surface interactions . Furthermore, Sadowska et al. observed the antagonistic effect of bacteriocin-like inhibitory substances produced by L. acidophilus against the S. aureus strains . Varma et al. investigated the anti-infective properties of Lactobacillus fermentum by co-incubating with S. aureus and P. aeruginosa and observed growth inhibition, increased cytotoxicity, and decreased biofilm formation . 
Although the results of laboratory studies are promising, there is still a long way to go before an ideal probiotic can be identified for clinical use as an anti-biofilm tool.

Mesenchymal stem cells
Mesenchymal stem cells (MSCs), owing to their antimicrobial effects, hold tremendous potential for wound infection management. They exert anti-infective effects through both direct and indirect mechanisms: their ability to secrete antimicrobial peptides and to modulate pro- and anti-inflammatory immune responses has aroused interest in their therapeutic potential in biofilm-laden wounds. Probably the most compelling evidence comes from Johnson et al., who studied MSC administration in canine models of biofilm-infected wounds and concluded that the best outcome came from co-administration of activated MSCs with antibiotics; furthermore, repeated systemic administration of activated MSCs improved bacterial clearance and wound healing. Wood et al. demonstrated that human adipose tissue-derived mesenchymal stem cells (AT-MSCs) inhibited the growth of S. aureus and P. aeruginosa, which was attributed to secretion of antibacterial factors, enhanced phagocytosis, and reduced bacterial adhesion. Their attractive differentiation potential and their ability to speed wound healing by promoting angiogenesis and reducing scar formation make MSCs a promising tool. However, heterogeneity in their preparation, suboptimal wound bed preparation, limits on cell viability, and the need for larger controlled clinical trials to ensure safety preclude their widespread clinical use.
Since biofilm is invisible to the naked eye, identification of biofilm "clinical cues" guides the astute clinician to initiate biofilm-based, multifaceted treatment early. The presence of a shiny, slimy layer on a non-healing wound bed that re-forms rapidly after removal and does not respond to standard wound care and antimicrobial intervention is arguably the best indirect evidence of biofilm in the wound. However, the World Union of Wound Healing Societies (WUWHS) position statement indicates that 'all non-healing chronic wounds potentially harbor biofilms' and insists that treatment of such wounds be directed towards disruption of biofilms and prevention of their reformation. Another consensus document records similar observations and suggests a holistic approach. It should be acknowledged, though, that most studies of biofilm-related wound infections have shown a reduction in biofilm biomass or bioburden rather than eradication; complete eradication of biofilms is extremely difficult. A pertinent question, therefore, is whether better wound healing is to be expected after reduction of biofilms or only after their complete eradication. While studies demonstrate that responses to treatment improve greatly even after a reduction in biofilm size, recurrence or reformation of biofilms poses a real challenge. As biofilm formation involves a constant balance between planktonic and biofilm-associated bacteria, it is reasonable to speculate that reducing either population helps tilt the balance towards the host's immune factors. However, as bacterial evolution has been faster than expected, other emerging strategies may be needed to outsmart this approach. Until further evidence arrives, a holistic approach is the better mode of tackling wound biofilms.

Biofilm-based wound care (BBWC) is that holistic approach to biofilm management, with an emphasis on initial aggressive debridement and cleansing to reduce the biofilm burden and increase antimicrobial susceptibility. The aim is to step down or bulk up the treatment depending on healing progression. Once the necrotic, devitalized tissue is removed and the wound bed is prepared, the step-down process ensures prevention of microbial recontamination and subsequent biofilm reformation. This can be achieved using topical antimicrobials and barrier dressings. If the wound still seems recalcitrant after 4 weeks of the chosen treatment, the patient and the wound should be reassessed and an alternative treatment strategy planned (Fig. ).
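As a schematic illustration only (the function and field names are our own, not from the source), the step-down/bulk-up logic and the 4-week recalcitrance rule described above, together with the 2-week efficacy window mentioned under Reassessment below, can be rendered as a small decision routine:

```python
from dataclasses import dataclass

@dataclass
class WoundAssessment:
    weeks_on_current_treatment: int
    infection_signs_reduced: bool   # local signs of infection decreasing?
    slough_decreased: bool          # slough burden decreasing?

def bbwc_next_step(w: WoundAssessment) -> str:
    """Schematic decision flow for biofilm-based wound care (BBWC).

    Encodes the rules from the text: judge a modality only after ~2 weeks,
    step down once the wound responds, and reassess/replan if the wound
    is still recalcitrant after 4 weeks of the chosen treatment.
    """
    if w.weeks_on_current_treatment < 2:
        return "continue: too early to judge efficacy of current modality"
    if w.infection_signs_reduced and w.slough_decreased:
        return ("step down: topical antimicrobial + barrier dressing "
                "to prevent recontamination and biofilm reformation")
    if w.weeks_on_current_treatment >= 4:
        return "reassess patient and wound; plan an alternative strategy"
    return "bulk up: repeat debridement and cleansing to cut biofilm burden"

# Example: a wound unchanged after 4 weeks triggers reassessment.
print(bbwc_next_step(WoundAssessment(4, False, False)))
```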
A. Prevention of biofilm formation
Once the early presence of biofilm is suspected, a proactive approach should be considered to reduce its burden and maturation. Newer anti-biofilm agents can specifically target the early stages of biofilm formation.

Prevention of attachment
Anti-adhesion agents such as mannosides, pillicides, and curlicides have shown very promising results in in vitro studies. Other agents such as lactoferrin, ethylenediaminetetraacetic acid (EDTA), xylitol, and honey have been shown to destabilize bacteria and block attachment. Agents that disrupt biofilm EPS (such as EDTA) or interfere with signal transduction mechanisms (such as farnesol, iberin, ajoene, and manuka honey) also prevent biofilm formation and stabilization.

Prevention of colony formation and biofilm maturation
Bacteriophages, nanoparticles, antimicrobial peptides, anti-biofilm polysaccharides, and EPS-degrading enzymes all exert a significant inhibitory effect on micro-colony aggregation and biofilm maturation.

Biofilm dispersion
This strategy is based on the principle that dispersion forces the biofilm to assume the planktonic phenotype, making the bacteria more susceptible to co-administered antimicrobial agents. Ma et al. exploited a biofilm dispersal protein, providing a new tool to facilitate biofilm dispersion. Studies have also shown that D-amino acids promote biofilm disassembly by disrupting adhesive fiber interactions. Another biofilm-disassembly molecule, norspermidine, works complementarily to D-amino acids, making both useful in anti-biofilm therapy.

B. Disruption of existing biofilm
For the wound-healing process to progress smoothly, the wound bed must be well perfused, moist, free of necrotic tissue, and clear of infection. Meticulous wound care with regular cleansing, debridement, and barrier dressings can help extirpate obstinate biofilm and promote healthy granulation tissue formation and re-epithelialization.

Regular cleansing
Regular wound irrigation should be part of routine wound management, to remove necrotic material and reduce bacterial load. Low-pressure irrigation using a bulb syringe is sufficient for most wounds; for highly contaminated wounds, high-pressure pulse irrigation using sterile saline should be considered.

Repeated debridement
Wound debridement facilitates the separation of necrotic tissue from the wound bed and can be accomplished by sharp, mechanical, autolytic, or enzymatic means. Sharp debridement, although rapid and effective at reducing bacterial load and stimulating healthy granulation tissue, has the major disadvantage of being painful. Autolytic debridement is the natural method in which proteolytic enzymes in wound fluid remove necrotic tissue from the wound bed; the process can be augmented with semi-occlusive dressings that keep the wound moist for a prolonged period. Although easy and feasible, its major drawbacks are the time taken to produce satisfactory results and a high risk of anaerobic growth, which requires frequent monitoring. Enzymatic debridement digests the proteins of dead, nonviable tissue while preserving the healthy tissue underneath. Commercially available collagenase and papain are two widely used agents. Although slow, they are effective in wounds with minimal necrotic tissue and are usually used as an adjunct to surgical debridement.
C. Prevention of biofilm reformation
Once the wound bed is adequately prepared, antimicrobial/anti-biofilm agents should be applied locally to inhibit biofilm reformation. Several antimicrobials, including acetic acid, honey, iodine, polyhexamethylene biguanide (PHMB), and silver, have been used for this purpose. Although the terms are often used synonymously, antimicrobials are broad-spectrum agents that are bactericidal or bacteriostatic, whereas anti-biofilm agents are novel compounds that act against the biofilm at various stages of its formation. As discussed above, one of the most important characteristics of biofilms is their increased tolerance to antimicrobial agents, so treatment based on laboratory-derived antimicrobial susceptibility tests may not always correlate with therapeutic success. Topical application delivers antibiotics directly to the site of infection, achieving high local concentrations with low or even undetectable serum concentrations and thus avoiding systemic side effects. Topical antibiotics are also beneficial in avascular areas that parenterally administered antibiotics cannot easily reach, and topical application may decrease the chances of developing antimicrobial resistance. In this context, antibiotic therapy may have a role in the treatment of established biofilm-associated infections and even as prophylaxis in certain circumstances. Much of the evidence for topical antimicrobials, however, derives from in vitro studies, and because of the large disparity between testing conditions and intended application, most of these anti-biofilm strategies fail in vivo. When delivering antimicrobials topically, the concept of the minimum biofilm eradication concentration (MBEC) should be kept in mind. Although MBEC is believed to be lower when the antimicrobial exposure time is longer, further studies are needed to confirm whether MBEC values from in vitro studies translate similarly to clinical infection.

D. Reassessment
Reassessment is an important aspect of determining the success of biofilm treatment. All the initial cues that led to the suspicion of biofilm should be reviewed. Parameters such as reduction in local signs of infection and decrease in slough are important determinants of successful wound healing. In addition, it is suggested that any treatment modality be given for at least 2 weeks before its efficacy is judged.
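A practical note on the MBEC readout mentioned under prevention of biofilm reformation above: as a minimal sketch (the function, threshold, and data are hypothetical, not from the source), MBEC can be read from a biofilm recovery assay as the lowest tested concentration whose recovery culture shows no regrowth, assuming regrowth decreases monotonically with concentration:

```python
def mbec(concentrations, regrowth_od, od_threshold=0.1):
    """Return the minimum biofilm eradication concentration (MBEC).

    concentrations: antimicrobial concentrations tested (any unit).
    regrowth_od: optical density of the recovery culture after exposed
    biofilms are transferred to fresh medium. Assuming regrowth falls
    monotonically with concentration, MBEC is the lowest concentration
    whose recovery culture shows no regrowth above the OD threshold.
    """
    for conc, od in sorted(zip(concentrations, regrowth_od)):
        if od < od_threshold:
            return conc
    return None  # no tested concentration eradicated the biofilm

# Hypothetical two-fold dilution series (mg/L) and regrowth readings:
concs = [0.5, 1, 2, 4, 8, 16, 32]
ods = [0.92, 0.88, 0.81, 0.64, 0.35, 0.04, 0.02]
print(mbec(concs, ods))  # -> 16
```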
Although extensive data from in vitro and in vivo animal research demonstrate the various mechanisms by which this slimy layer interferes with wound healing, most of the studies discussed above remain in the experimental phase, barring a few that have been successfully introduced into patient care. The real questions are how to detect biofilms in wound beds in real life and how much of the wound bed must be involved to cause a clinically significant delay in healing. To date, there are few data indicating the extent of biofilm needed to negatively impact healing. A non-invasive technique has been described that creates a "biofilm map" of the wound bed by "blotting" the wound and mapping it with a specific dye solution that tags the free DNA shed by the biofilm. It has given considerable clues for localizing biofilm on the wound surface and predicting wound behavior, in terms of slough development in subsequent weeks, according to the surface area stained by the dye, but more research is needed before commercial use. Since no gold-standard technique exists for visualizing and measuring biofilm in wounds, bench research forces us to reconsider whether laboratory observations can be translated into clinical practice. Evidence-based practice can only be guided by clinical research, which at present is inadequate to substantiate the in vitro anti-biofilm mechanisms tested in laboratories. Well-designed clinical trials are therefore the need of the hour to test laboratory evidence and bring these novel techniques to the patient's bedside.

Laboratory and clinical evidence now establish the bacterial biofilm as a major potentiator of wound intractability and delayed healing. The pathogenesis is thought to be multifactorial, involving diverse microbial species and their intricate interactions with host cells in the wound bed micro-environment. More needs to be understood to detect and reverse the effects of biofilms in wounds. The in vitro experiments are mostly research-based and may not have significant anti-biofilm effects in real-life scenarios; moreover, most novel diagnostic tools are not clinically available. From a therapeutic perspective, the multimodality approach is being strengthened by the search for various anti-biofilm options. Although these recent laboratory developments are promising, translational research is the need of the hour if they are to have a real impact on wound health, long-term morbidity, and quality of life. Even with mounting scientific evidence, clinical diagnosis is still limited by the so-called "diagnostic" clinical cues, and mechanical debridement along with topical antimicrobial therapy remains the pillar of wound-biofilm management.
Amino Acid Availability Determines Plant Immune Homeostasis in the Rhizosphere Microbiome
A myriad of microorganisms, including pathogens and mutualists, live in the plant rhizosphere and actively influence plant fitness ( ). To protect themselves from pathogens, plants use pattern recognition receptors (PRRs) that can specifically sense microbe-associated molecular patterns (MAMPs), which are evolutionarily conserved across diverse microbes. Perception of MAMPs results in pattern-triggered immunity (PTI), which includes a reactive oxygen species burst, calcium influx, and defense gene expression ( ). As both pathogens and commensals contain MAMPs that can be recognized by the plant innate immune system, both must suppress or evade immunity in order to colonize successfully. While mechanisms of immunity suppression by pathogenic bacteria are well established, how commensal microbes evade or suppress plant immunity to promote their own fitness is poorly understood.

Some successful pathogens can suppress PTI by injecting effector proteins into the plant cytosol via type III secretion systems (T3SS) ( ). In addition to injecting effectors, pathogenic bacteria can manipulate phytohormones to suppress host immunity. The pathogenic bacterium Pseudomonas syringae pv. tomato DC3000 (Pto) can secrete the phytotoxin coronatine (COR), which mimics the active form of jasmonic acid (JA), JA-Ile, to promote JA-dependent defense. Since the JA and salicylic acid (SA) pathways antagonize each other, inducing JA signaling with COR suppresses SA signaling, which is critical for resistance against Pto ( ). Lastly, instead of suppressing host immunity, pathogenic microbes can degrade their MAMPs to prevent recognition by PRRs. For instance, P. syringae can secrete AprA, an extracellular alkaline protease that degrades flagellin monomers, thereby avoiding immune recognition ( ).

Although commensals also have MAMPs with the potential to induce PTI, many do not induce immune responses, suggesting that commensals can suppress or evade the plant immune system ( ). The growth-promoting and biocontrol bacterial strain Pseudomonas sp. WCS365 can evade host immunity by fine-tuning biofilm formation ( ). Pseudomonas capeferrum WCS358 can suppress root local immunity by secreting organic acids to lower the pH of the rhizosphere ( ). Dyella japonica suppresses root immunity through a type II secretion-dependent mechanism without affecting rhizosphere pH ( ). Collectively, these findings indicate that rhizosphere commensals possess diverse mechanisms to modulate host immunity.

The beneficial root-associated bacterial strain P. simiae WCS417 was previously shown to suppress root immunity ( , ). It lowers the pH of the rhizosphere to a greater extent than P. capeferrum WCS358 and produces more gluconic acid ( ); however, we found that deletion of pqqF, which results in loss of gluconic acid biosynthesis and immunity suppression in P. capeferrum WCS358 (8), does not impair the ability of WCS417 to suppress rhizosphere immunity. Here, we describe a forward genetic screen that identified a novel mechanism of root immunity suppression in P. simiae WCS417, in which amino acid biosynthesis prevents rhizosphere alkalization and suppresses immunity.

Acidification is sufficient, but not necessary, for Arabidopsis immunity suppression by P. simiae WCS417
Gluconic acid biosynthesis via pqqF is necessary for Arabidopsis rhizosphere immunity suppression in P. capeferrum WCS358 and Pseudomonas aeruginosa PAO1, as measured by expression of the PTI-inducible reporter CYP71A12pro:GUS ( ).
CYP71A12 is involved in biosynthesis of the antimicrobial camalexin and is induced in the root elongation zone or maturation zone upon sensing MAMPs such as flg22 or chitin ( ). As a result, induction of CYP71A12pro:GUS reports the activation of PTI. P. simiae WCS417 produces more gluconic acid and lowers the pH of the rhizosphere to a greater extent than WCS358, suggesting that WCS417 pqqF is also required for rhizosphere immunity suppression ( ). Surprisingly, we found that the WCS417 ΔpqqF mutant can still completely suppress flg22-triggered expression of CYP71A12pro:GUS ( ). While P. simiae WCS417 acidifies seedling exudates to pH 3.7, a clean deletion of pqqF in WCS417 resulted in a significant increase in the pH of seedling exudates, to pH 5.0 ( ). In contrast, while wildtype P. aeruginosa PAO1 lowers the pH of seedling exudates to 4.0 and suppresses CYP71A12pro:GUS, disruption of PAO1 pqqF resulted in a less dramatic increase in the pH of seedling exudates, to 4.5 ( ), but in a complete loss of suppression of host immunity ( ). Collectively, these data suggest a gluconic acid-independent mechanism of immunity suppression by P. simiae WCS417.

Previous work showed that lowering the rhizosphere pH to 3.7 with hydrochloric acid (HCl) resulted in complete inhibition of CYP71A12pro:GUS expression ( ). Lowering the rhizosphere pH to between 5.5 and 4.6 resulted in partial immunity suppression, indicating that we should expect about 50% of roots to retain CYP71A12pro:GUS expression at pH 5.0, the pH of the P. simiae WCS417 ΔpqqF mutant growing in seedling exudates. We tested the effect of a pH gradient on suppression of CYP71A12pro:GUS and indeed found that adjusting the pH of the rhizosphere to 4.7-5.0 in the absence of bacteria results in intermediate suppression of plant immunity, with about 50% of roots retaining CYP71A12pro:GUS expression ( ). The ΔpqqF mutant raises the pH only to around 5.0, yet it results in complete inhibition of CYP71A12pro:GUS expression ( ). These data suggest that WCS417 possesses additional mechanisms to suppress host immunity.

P. simiae WCS417 argF is required for rhizosphere acidification and immunity suppression
To identify additional genes that are necessary for host immunity suppression, we generated an EMS-mutagenized library of P. simiae WCS417 and screened for mutants unable to suppress flg22-mediated induction of the CYP71A12pro:GUS reporter. We screened 960 EMS-mutagenized colonies of WCS417 in duplicate for their ability to suppress flg22-induced expression of the reporter. A single mutant, named 10E10, was found to be incapable of suppressing flg22-induced immunity ( ). We found that 10E10 completely failed to reduce the pH of seedling exudates, suggesting that it might contribute to immunity suppression through rhizosphere acidification ( ). Because we identified only one mutant from the screen, we wondered whether mutations in WCS417 that result in a loss of immunity suppression are rare, or whether our screen was not saturating. To test this, we determined whether we could identify an allele of pqqF in our screen. Gluconic acid has previously been shown to be required for zinc solubilization, so we tested whether we could identify a mutant unable to solubilize zinc. When bacteria are grown on zinc phosphate media ( ), only strains that can solubilize zinc phosphate produce a clear halo on the plate.
We found a single mutant, 4E4, that cannot solubilize zinc phosphate ( ). The genome of 4E4 was sequenced, and we identified mutations in both pqqF and pqqB ( , Table S2). However, 4E4 can still suppress flg22-triggered CYP71A12pro:GUS expression, which is consistent with our finding that the WCS417 ΔpqqF mutant can still suppress plant immunity. As our screen successfully identified a mutation in pqqF, mutants that fail to suppress root immunity are likely rare in P. simiae WCS417.

FIG S1: P. simiae WCS417 can solubilize ZnSO4, while the mutant 4E4 cannot. Rescreening the P. simiae WCS417 EMS library on media containing ZnSO4 and glucose identified a single mutant that could not solubilize zinc; sequencing the 4E4 mutant identified point mutations in both pqqF and pqqB. The 7A1 strain, which can solubilize ZnSO4, is shown as a positive control.

DATA SET S1: Results of the HILIC+/− metabolomics analysis. Table S1, the 35 non-synonymous candidate genes from genome mapping of the 10E10 WCS417 mutant; Table S2, the 48 non-synonymous candidate genes from genome mapping of the 4E4 WCS417 mutant; Table S3, strains used in this study.

We sequenced the genome of the P. simiae WCS417 10E10 mutant and identified 35 non-synonymous mutations with respect to the parental WCS417 strain ( , Table S1). To narrow down candidate genes, we made use of a PAO1 transposon insertion library ( ) and tested transposon insertion mutants in the PAO1 orthologs of genes from the mapping list that we hypothesized were most likely to contribute to immunity suppression ( ). We found that an insertion in argF uniquely impaired the ability of PAO1 to acidify seedling exudates ( ). ArgF encodes ornithine carbamoyltransferase, which acts in arginine biosynthesis by converting L-ornithine to L-citrulline ( , ). The mutation in argF in P. simiae WCS417 is predicted to affect its catalytic site, Ser-Thr-Arg-Thr-Arg, where a C-to-T change is predicted to convert the third arginine to cysteine ( ). Thus, we hypothesized that a loss of function of argF likely underlies the lack of immunity suppression by 10E10.

FIG S2: A screen of P. aeruginosa PAO1 transposon insertion mutants in candidate genes carrying point mutations in P. simiae WCS417 10E10 identified argF, whose insertion mutant cannot acidify seedling exudates. Genome sequencing of 10E10 identified 35 non-synonymous candidate genes ( , Table S1); orthologs were identified in PAO1. argF::Tn5-1/-2: PS417_05595, ornithine carbamoyltransferase; maeB: PS417_01950, malate dehydrogenase; PA4465: PS417_04330, glmZ(sRNA)-inactivating NTPase; PA0214: PS417_26590, malonate decarboxylase subunit epsilon; PA0575: PS417_25885, diguanylate cyclase; PA0148: PS417_03205, adenine deaminase. Statistics by two-way ANOVA with Tukey's HSD (error bars, mean ± SD; *, P < 0.05).
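As an aside on how such variant-effect calls are made, here is a minimal sketch using Biopython's standard translation; the CGC→TGC codon pair is our own hypothetical example consistent with the reported C-to-T, arginine-to-cysteine prediction (the actual codon is not given in the text):

```python
from Bio.Seq import Seq

def classify_codon_change(ref_codon: str, alt_codon: str) -> str:
    """Classify a single-codon substitution as synonymous or non-synonymous."""
    ref_aa = str(Seq(ref_codon).translate())
    alt_aa = str(Seq(alt_codon).translate())
    kind = "synonymous" if ref_aa == alt_aa else "non-synonymous"
    return f"{kind} ({ref_aa} -> {alt_aa})"

# A C->T change at the first codon position, CGC -> TGC, converts
# arginine (R) to cysteine (C), as predicted for the 10E10 argF allele.
print(classify_codon_change("CGC", "TGC"))  # non-synonymous (R -> C)
```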
We tested whether a loss of P. simiae WCS417 argF could explain the inability of the 10E10 mutant to suppress immunity. First, we confirmed the C-to-T mutation in the argF catalytic site of the 10E10 mutant by PCR. We then complemented the 10E10 mutant with argF expressed from its native promoter on the pBBR1MCS-5 plasmid and found that expression of argF under its native promoter (argFpro-argF) rescued the 10E10 mutant: the complemented strain suppressed flg22-triggered CYP71A12pro:GUS expression and acidified seedling exudates to a similar degree as wildtype WCS417 ( and ). We also made a clean deletion of argF in WCS417 and found that the ΔargF mutant phenocopied the 10E10 mutant, neither suppressing flg22-triggered CYP71A12pro:GUS expression nor acidifying seedling exudates ( and ). These results illustrate that a loss of function of argF underlies the inability of the 10E10 mutant to suppress immunity.

pqqF and argF pathways synergistically regulate rhizosphere pH
Fungal pathogens can produce glutamate or glutamine to increase the local concentration of ammonia and raise host pH to promote their own virulence ( , ). It is therefore an intriguing possibility that a loss-of-function mutation in argF results in accumulation of alkaline arginine precursors such as ornithine, polyamines, and ammonia ( ). A second, potentially confounding possibility is that arginine is limiting for growth in the rhizosphere, so that the ΔargF mutant cannot acidify simply because the lack of arginine is insufficient to support growth. If the former is true, then conditions leading to accumulation of arginine precursors, such as addition of exogenous arginine, should result in rhizosphere alkalization. If the latter is true, and arginine is required only for growth, then addition of arginine should always result in increased growth and acidification. We tested whether exogenous arginine increases or decreases rhizosphere pH in P. simiae WCS417, the ΔargF mutant, and the ΔpqqF mutant. As exogenous arginine should suppress argF expression, it should mimic the argF mutant and result in accumulation of alkaline precursors. We found no effect on pH following the addition of 1 mM exogenous arginine to wildtype P. simiae WCS417 growing in seedling exudates ( ). However, exogenous arginine fully restored acidification by both the 10E10 and ΔargF mutants, indicating that their inability to acidify is at least partly due to limited availability of arginine ( ). Indeed, we confirmed that both mutants have growth defects in seedling exudates that can be rescued by addition of exogenous arginine ( and ). Intriguingly, we found that arginine significantly increased the pH of a ΔpqqF mutant growing in seedling exudates ( ). This suggests that inhibition of argF may indeed raise pH through accumulation of arginine precursors, but that this effect may be masked by the large amount of gluconic acid produced by P. simiae WCS417: WCS417 produces large amounts of gluconic acid ( ), and we observed an increase in pH upon application of exogenous amino acids only in the ΔpqqF mutant ( ).
As a result, we hypothesized that argF might be necessary for lowering pH, but that gluconic acid might mask the effect of argF. If this is the case, we would expect exogenous arginine to restore growth, but not lower pH, in a double ΔpqqFΔargF mutant growing in seedling exudates. In contrast, if argF contributes only to growth, addition of exogenous arginine should restore the pH of the ΔpqqFΔargF mutant to the level of the single ΔpqqF mutant. Consistent with a role for ArgF in rhizosphere acidification, we found that exogenous arginine fully restored growth of the ΔpqqFΔargF and ΔargF mutants to wildtype levels ( ), yet arginine addition to the ΔpqqFΔargF double mutant resulted in a significantly higher rhizosphere pH than in the ΔpqqF mutant alone ( ). These data indicate that argF is required for regulation of rhizosphere pH independent of growth.

We initially observed that the P. simiae WCS417 ΔpqqF mutant could not acidify the rhizosphere to a level that should fully suppress CYP71A12pro:GUS expression, yet the mutant still fully suppresses the reporter, suggesting a pH-independent mechanism of immunity suppression. While exogenous arginine resulted in a rhizosphere pH of 5.8, at which we would predict no suppression of the CYP71A12pro:GUS reporter ( ), arginine treatment of the ΔpqqFΔargF double mutant fully restored immunity suppression ( ). These data indicate that, while low pH is sufficient to suppress immunity, it is not necessary in WCS417, again supporting a second, pH-independent mechanism of immunity suppression by P. simiae WCS417.

FIG S3: Addition of arginine to the P. simiae WCS417 ΔargFΔpqqF mutant restores immunity suppression, but not acidification. Exogenous arginine had no effect on the ability of wildtype WCS417 to suppress expression of the CYP71A12pro:GUS reporter in the presence of flg22, but it restored immunity suppression by the 10E10 mutant, the ΔargF mutant, and the ΔpqqFΔargF double mutant, while failing to restore acidification by the ΔpqqF and ΔpqqFΔargF mutants ( ).

Ornithine accumulation contributes to rhizosphere alkalization
To test the hypothesis that accumulation of alkaline precursors of arginine, such as glutamate and ammonia, could increase rhizosphere pH ( ), we added arginine, proline, or glutamine to seedling exudates containing bacteria, which should result in accumulation of their precursors, glutamate and ammonia ( ). As controls, we also tested exogenous methionine, leucine, tryptophan, serine, and histidine, which should not affect glutamate catabolism. To avoid confounding our findings with acidification through gluconic acid biosynthesis, we added the amino acids to the P. simiae WCS417 ΔpqqF mutant. We found that exogenous arginine, proline, glutamine, serine, and glutamate, but not methionine, leucine, tryptophan, or histidine, significantly raised the pH of the ΔpqqF mutant cultures close to the level of mock seedling exudates ( ). Moreover, this alkalization phenotype was even more dramatic in PAO1, where arginine, glutamine, and glutamate raised the pH of seedling exudates inoculated with PAO1 pqqF::Tn5 to around 8.0 ( ).
These data indicate that, similar to pathogenic fungi, bacteria supplied with exogenous arginine, proline, glutamine, or glutamate likely drive rhizosphere alkalization through ammonia accumulation.

FIG S4: Addition of amino acids downstream of glutamine biosynthesis raised the pH of P. aeruginosa PAO1 growing in seedling exudates. Seedling exudates supplemented with 1 mM amino acids were mock treated or inoculated with PAO1 or the pqqF::Tn5 mutant; arginine, proline, glutamine, and glutamate significantly raised the pH of the pqqF mutant cultures. All experiments were independently repeated at least 3 times. Statistics by one-way ANOVA with Tukey's HSD (error bars, mean ± SD; letters indicate differences at P < 0.05).

To genetically test whether a loss of arginine, proline, glutamine, or glutamate could specifically raise the pH of seedling exudates, we selected from the P. aeruginosa PAO1 transposon insertion library six mutants with insertions in genes required for amino acid biosynthesis ( , Table S3). We found that proA::Tn5 (deficient in proline biosynthesis) and gltB::Tn5 (deficient in glutamine biosynthesis) neither acidify seedling exudates nor suppress PTI ( ), consistent with the glutamate biosynthetic pathway being required for rhizosphere acidification. Two additional insertions, metZ::Tn5 (deficient in methionine biosynthesis) and leuC::Tn5 (deficient in leucine biosynthesis), also abolished acidification of seedling exudates and immunity suppression, suggesting that leucine and methionine may be limiting for bacterial growth in the rhizosphere ( ). In addition, serA::Tn5 (deficient in serine biosynthesis) suppressed host immunity and lowered the pH of seedling exudates, indicating that the rhizosphere may contain enough serine to support bacterial growth ( ). Interestingly, a hisB::Tn5 mutant (deficient in histidine biosynthesis) could also acidify seedling exudates but induced immunity on its own, further indicating that low pH alone is not sufficient for immunity suppression ( ). Collectively, these data confirm that the rhizosphere is deficient in glutamate and downstream amino acids, which bacteria must therefore actively synthesize in the rhizosphere.

FIG S5: Bacterial auxotrophs cannot suppress plant immunity or acidify the rhizosphere. (A) P. aeruginosa PAO1 amino acid auxotrophic mutants fail to acidify seedling exudates, with the exception of hisB::Tn5 and serA::Tn5. (B) Auxotrophic mutants fail to suppress flg22-induced CYP71A12pro:GUS expression, with the exception of serA::Tn5; the hisB::Tn5 mutant induces CYP71A12pro:GUS expression in the absence of exogenous flg22 treatment. All experiments were independently repeated 3 times. Statistics by one-way ANOVA (error bars, mean ± SD).
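The figure legends repeatedly cite one-way ANOVA followed by Tukey's HSD. As a minimal sketch of that analysis with common Python tooling (the pH values below are made up for illustration and are not the paper's measurements):

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical replicate pH readings of seedling exudates per treatment.
groups = {
    "mock":   [5.8, 5.9, 5.7, 5.8],
    "WCS417": [3.7, 3.8, 3.6, 3.7],
    "dpqqF":  [5.0, 5.1, 4.9, 5.0],
}

# Omnibus test: do the group means differ at all?
f_stat, p_val = f_oneway(*groups.values())
print(f"one-way ANOVA: F={f_stat:.2f}, p={p_val:.2e}")

# Post hoc: which pairs of groups differ (Tukey's HSD, alpha = 0.05)?
values = np.concatenate([np.asarray(v, dtype=float) for v in groups.values()])
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```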
As fungal alkalization through glutamate secretion is accompanied by increased fungal growth and virulence ( ), we tested whether an increase in pH might be accompanied by bacterial overgrowth. We observed that arginine-, glutamine-, and glutamate-mediated alkalization was accompanied by dramatic overgrowth of both the WCS417 and PAO1 ΔpqqF mutants ( and ). To determine whether immunity was suppressed at high pH, we tested whether flg22 could still trigger CYP71A12pro:GUS expression at pH 8 and found that it could ( ), indicating that overgrowth occurs at a pH that is not sufficient to suppress immunity. These data indicate that pH correlates with bacterial growth and suggest that, by limiting certain amino acids, plants may control rhizosphere pH and bacterial growth via bacterial arginine biosynthesis.

FIG S6: The addition of arginine, glutamine, or glutamate resulted in overgrowth of the P. aeruginosa pqqF mutant. PAO1 or the pqqF::Tn5 mutant was grown in seedling exudates supplemented with 1 mM of the indicated amino acid. Statistics by one-way ANOVA with Tukey's HSD (error bars, mean ± SD; letters indicate differences at P < 0.05).

FIG S7: Flg22-triggered immunity is intact at pH 8. Upon raising the pH to 8.0 (the pH associated with bacterial overgrowth) with KOH, flg22 still triggered expression of the CYP71A12pro:GUS reporter, meaning immunity is not suppressed at this pH.

To test our prediction that rhizosphere alkalization is due to ammonia accumulating when the arginine biosynthesis pathway is inhibited, we measured the ammonium concentration in seedling exudates. In contrast to our prediction that addition of arginine, glutamine, or glutamate would increase ammonium, we found that the ΔpqqF mutant consumes more ammonium than wildtype bacteria in the presence of these amino acids ( ). This suggests that, when arginine biosynthesis is inhibited, ammonia is converted to a distinct compound that contributes to rhizosphere alkalization.

FIG S8: Ammonium concentration does not explain alkalinization in the P. simiae WCS417 ΔpqqF mutant growing in seedling exudates treated with arginine. Ammonium was quantified in seedling exudates alone (mock) or in seedling exudates grown with WCS417 or the ΔpqqF mutant, supplemented with 1 mM of the indicated amino acid. Statistics by one-way ANOVA with Tukey's HSD (error bars, mean ± SD; *, P < 0.05; ***, P < 0.01).

To uncover the compound that caused the rhizosphere alkalization, we performed untargeted metabolomics of seedling exudates alone, or of seedling exudates containing P. simiae WCS417 or the ΔpqqF mutant, with or without arginine.
We found that mock and bacterial treatments were clearly separated in the pooled principal coordinates analysis (PCoA), as shown by PC1 (P < 0.001) ( ). Wildtype WCS417 and the ΔpqqF mutant also had distinct metabolite profiles regardless of the presence of exogenous arginine, as shown by PC2 (P < 0.002), indicating that loss of pqqF has a significant impact on bacterial metabolism ( ). While arginine did not cause a significant global change across all conditions (P < 0.1), it affected the metabolite profile of the ΔpqqF mutant to a greater degree than that of wildtype P. simiae WCS417, as shown by PC2 ( ). At the level of individual metabolites, wildtype bacteria produced a large amount of gluconic acid, whereas the ΔpqqF mutant produced no detectable gluconic acid ( ). We also found that, while seedling exudates with wildtype WCS417 contained a similar amount of arginine to exudates with no bacteria, the ΔpqqF mutant completely depleted the exogenous arginine, indicating that it converted the arginine to a distinct compound ( ). Although we observed no significant difference in citrulline between WCS417 and the ΔpqqF mutant, addition of exogenous arginine reduced the overall amount of citrulline, consistent with feedback inhibition of ArgF by arginine ( ) ( ).

We then queried our untargeted metabolomics data for compounds that uniquely accumulated in the P. simiae WCS417 ΔpqqF mutant in the presence of arginine ( ) and found that the mutant accumulates significantly more ornithine and proline than the wildtype under these conditions ( ). While proline has a neutral pKa, ornithine is an alkaline, non-proteinogenic amino acid with a pKa of 10.29, and it is the substrate of ArgF in the arginine biosynthesis pathway. Although spermidine and putrescine are also alkaline polyamines derived from ornithine, we detected only trace amounts of putrescine, spermidine, and spermine, too low to be quantified. Additionally, exogenous arginine did not significantly change the amount of malic acid in the ΔpqqF mutant relative to wildtype WCS417 ( ), suggesting that arginine addition is unlikely to have affected the TCA cycle. These data suggest that, upon arginine addition to the ΔpqqF mutant, accumulation of ornithine underlies the rhizosphere alkalization. To test the possibility that ornithine, rather than polyamines, raises rhizosphere pH, we added exogenous polyamines or ornithine to WCS417 or the ΔpqqF mutant. Exogenous ornithine or putrescine had no effect on the pH of WCS417 in seedling exudates, whereas ornithine, but not putrescine, significantly increased the pH of the ΔpqqF mutant ( ). However, ornithine addition had no effect on wildtype P. aeruginosa PAO1 or the PAO1 ΔpqqF mutant ( ), indicating that the compound responsible for alkalization in PAO1 may differ from that in WCS417.
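An illustrative calculation of ours (not from the paper) shows why ornithine accumulation plausibly drives alkalization: with a side-chain pKa of 10.29, the Henderson-Hasselbalch relation gives the protonated fraction at a near-neutral rhizosphere pH of 7.0 as

$$f_{\mathrm{protonated}} = \frac{1}{1 + 10^{\,\mathrm{pH} - \mathrm{p}K_a}} = \frac{1}{1 + 10^{\,7.0 - 10.29}} \approx 0.9995,$$

so essentially every ornithine molecule released into the exudate sequesters a proton on its amino group, shifting the medium toward alkaline pH.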
Acidification-mediated immunity suppression is peptide-specific in roots
Our data show that Pseudomonas strains have multiple, partially redundant mechanisms to acidify the rhizosphere, indicating that maintenance of the correct pH may be critical to establishing plant immune homeostasis. Previously, we found that acidification to pH 3.7 partially blocked flg22-mediated induction of the MYB51pro:GUS reporter gene but had no effect on SA-triggered NPR1pro:GUS expression or on MYB72pro:GUS expression triggered by WCS417 or WCS358 at pH 3.7 in roots, indicating that acidification impairs only a section of plant innate immunity ( ). We therefore wondered whether acidification-mediated suppression of immunity affects MAMP binding to receptors or downstream signaling. If acidification specifically blocks flg22-induced immunity, then low pH should not affect other MAMP-triggered immunity pathways. We tested the effect of acidification on the damage-associated molecular pattern AtPep1, which signals in a BAK1-dependent manner ( , ), and on chitin, a sugar polymer from fungal cell walls, which is BAK1-independent in Arabidopsis ( , ). All three elicitors (flg22, AtPep1, and chitin) share a MAPK cascade ( ). Using the marker gene reporter CYP71A12pro:GUS, we found that pH 3.7 abolished AtPep1-triggered, but not chitin-triggered, expression ( ). These data indicate that acidification likely interferes specifically with peptide-triggered immunity.

To test whether low pH affects specific defense outputs, we examined additional PTI-inducible reporters: PER5 (AT1G14550), which encodes a peroxidase ( ), and FRK1 (AT2G19190), which encodes an LRR receptor kinase ( ). Both genes are flg22- and AtPep1-inducible in the root ( ). Both flg22 and AtPep1 induced FRK1::mVenus expression, and this induction was significantly suppressed by low pH ( , ). In contrast, low pH affected neither flg22-induced nor AtPep1-induced PER5::mVenus expression ( ). These results suggest that acidification affects only a sector of PTI, and that rhizosphere pH may set the baseline of the plant immune thermostat.
Previous work showed that lowering the rhizosphere pH to 3.7 with hydrochloric acid (HCl) resulted in complete inhibition of CYP71A12 pro :GUS expression ( ). Lowering the rhizosphere pH to between 5.5 to 4.6 resulted in partial immunity suppression, indicating that we should expect about 50% of roots to retain CYP71A12 pro :GUS expression at pH 5.0, the pH of the P. simiae WCS417 ΔpqqF mutant growing in seedling exudates. We tested the effect of a pH gradient on suppression of CYP71A12 pro :GUS and indeed found that modifying the pH of the rhizosphere to 4.7–5.0 in the absence of bacteria results in intermediate suppression of plant immunity, with about 50% of roots retaining CYP71A12 pro :GUS expression ( ). The pH of the WCS417 ΔpqqF mutant is around 5.0, but results in complete inhibition of CYP71A12 pro :GUS expression ( ). These data suggest that WCS417 possesses additional mechanisms to suppress host immunity. WCS417 a rgF is required for rhizosphere acidification and immunity suppression. To identify additional genes that are necessary for host immunity suppression, we generated an EMS-mutagenized library of P. simiae WCS417, and screened for mutants that were unable to suppress flg22-mediated induction of the CYP71A12 pro :GUS reporter. We screened 960 EMS-mutagenized colonies of WCS417 in duplicate for their ability to suppress flg22-induced expression of the CYP71A12 pro :GUS reporter. A single mutant, named 10E10, was found incapable of suppressing flg22-induced immunity ( ). We found that 10E10 completely failed to reduce the pH of seedling exudates, suggesting that it might contribute to immunity suppression through rhizosphere acidification ( ). Because we only identified one mutant from the screen, we wondered if mutations in WCS417 that result in a loss of immunity suppression are rare, or if our screen was not saturating. To test this, we determined whether we could identify an allele of pqqF in our screen. Gluconic acid has previously been shown to be required for zinc solubilization, and so we tested whether we could identify a mutant unable to solubilize zinc. By growing bacteria on zinc phosphate media ( ), only the strains that can solubilize zinc phosphate will produce a clear halo on the plate. We found a single mutant, 4E4, that cannot solubilize zinc phosphate ( ). The genome of 4E4 was sequenced, and we identified mutations in both pqqF and pqqB ( , Table S1). However, 4E4 can still suppress flg22-triggered CYP71A12 pro :GUS expression, which is consistent with our finding that the WCS417 ΔpqqF mutant can still suppress plant immunity. As our screen successfully identified a mutation in pqqF , which suggests that mutants that fail to suppress root immunity might be rare in P. simiae WCS417. 10.1128/mbio.03424-22.1 FIG S1 P. simiae WCS417 can solubilize ZnSO 4 , while the mutant 4E4 cannot. Rescreening the P. simiae WCS417 EMS library on media containing ZnSO 4 and glucose identified a single mutant that could not solubilize zinc. Sequencing the 4E4 mutant identified point mutations in both pqqF and pqqB . The 7A1 strain can solubilize ZnSO4, and is shown as a positive control. Download FIG S1, PDF file, 0.5 MB . Copyright © 2023 Liu et al. 2023 Liu et al. https://creativecommons.org/licenses/by/4.0/ This content is distributed under the terms of the Creative Commons Attribution 4.0 International license . 10.1128/mbio.03424-22.12 DATA SET S1 The process of HILIC+/− shows results from metabolomics analysis. 
Data Set S1, Table S1: a total of 35 non-synonymous candidate genes from genome mapping of the 10E10 WCS417 mutant. Data Set S1, Table S2: a total of 48 non-synonymous candidate genes from genome mapping of the 4E4 WCS417 mutant. Data Set S1, Table S3: strains used in this study. Download Data Set S1, XLSX file, 0.5 MB . Copyright © 2023 Liu et al. 2023 Liu et al. https://creativecommons.org/licenses/by/4.0/ This content is distributed under the terms of the Creative Commons Attribution 4.0 International license . We sequenced the genome of the P. simiae WCS417 10E10 mutant and identified 35 non-synonymous mutations with respect to the parental WCS417 strain ( , Table S2). To narrow down candidate genes, we made use of a PAO1 transposon insertion library ( ) and tested transposon insertion mutants in PAO1 orthologs of the genes from the mapping list that we hypothesized were most likely to contribute to immunity suppression ( ). We found that an insertion in argF uniquely impaired the ability of PAO1 to acidify seedling exudates ( ). ArgF encodes ornithine carbamoyltransferase, which acts in arginine biosynthesis by converting L-ornithine to L-citrulline ( , ). The mutation in argF in P. simiae WCS417 is predicted to affect its catalytic site Ser-Thr-Arg-Thr-Arg, where a C to T change is predicted to convert the third arginine to cysteine ( ). Thus, we hypothesized that a loss of function of argF likely underlies the lack of immunity suppression by 10E10. 10.1128/mbio.03424-22.2 FIG S2 A screen of P. aeruginosa PAO1 transposon insertion mutants in candidate genes with point mutations in P. simiae WCS417 10E10 identified argF , which cannot acidify seedling exudates. Genome sequencing of 10E10 identified 35 non-synonymous candidate genes ( , Table S1). Orthologs were identified in PAO1. argF ::Tn5-1/-2: PS417_05595, ornithine carbamoyltransferase; maeB : PS417_01950, malate dehydrogenase; PA4465: PS417_04330, glmZ(sRNA)-inactivating NTPase; PA0214: PS417_26590, malonate decarboxylase subunit epsilon; PA0575: PS417_25885, diguanylate cyclase; and PA0148: PS417_03205, adenine deaminase. Statistics were calculated by using two-way ANOVA and Tukey’s HSD. Error bars represent mean +/− SD, and * indicates differences at P < 0.05. Download FIG S2, PDF file, 0.2 MB . Copyright © 2023 Liu et al. 2023 Liu et al. https://creativecommons.org/licenses/by/4.0/ This content is distributed under the terms of the Creative Commons Attribution 4.0 International license . We tested whether a loss of P. simiae WCS417 argF could explain the inability of the 10E10 mutant to suppress immunity. First, we confirmed the C to T mutation in the argF catalytic site in the 10E10 mutant by PCR. We then complemented the 10E10 mutant with argF expressed from its native promoter in the pBBR1MCS-5 plasmid, and found that expression of argF under its native promoter ( argF pro - argF ) rescued the 10E10 mutant: the complemented strain suppressed flg22-triggered CYP71A12 pro :GUS expression and acidified seedling exudates to a similar degree as wildtype WCS417 ( and ). We made a clean deletion of argF in WCS417 and found that the Δ argF mutant phenocopied the inability of the 10E10 mutant to suppress flg22-triggered CYP71A12 pro :GUS expression, and could not acidify seedling exudates ( and ). These results illustrate that a loss of function of argF underlies the inability of the 10E10 mutant to suppress immunity.
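The predicted effect of the 10E10 point mutation can be illustrated with a short translation check. A minimal sketch: the actual argF codons are not given in the text, so the CGC codon used for the affected arginine is an assumption; it simply shows that a single C-to-T change at the first codon position converts an arginine codon into a cysteine codon.

from Bio.Seq import Seq

# Hypothetical codons for the Ser-Thr-Arg-Thr-Arg catalytic motif
# (the real WCS417 argF codons are an assumption here).
wt = Seq("AGCACCCGCACCCGC")    # translates to S-T-R-T-R
mut = Seq("AGCACCCGCACCTGC")   # C->T at the first base of the final Arg codon

print(wt.translate())          # STRTR
print(mut.translate())         # STRTC: Arg -> Cys, as predicted for 10E10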
pqqF and argF pathways synergistically regulate rhizosphere pH. Fungal pathogens can produce glutamate or glutamine to increase the local concentration of ammonia and raise host pH to promote their own virulence ( , ). It is therefore an intriguing possibility that a loss-of-function mutation in argF may result in accumulation of alkaline arginine precursors such as ornithine, polyamines, and ammonia ( ). A second, potentially confounding possibility is that arginine is limiting for growth in the rhizosphere, so the ΔargF mutant may fail to acidify simply because arginine limitation prevents the growth needed for acidification. If the former is true, then conditions resulting in accumulation of arginine precursors, such as addition of exogenous arginine, should result in accumulation of alkaline arginine precursors and rhizosphere alkalization. If the latter is true, and arginine is only required for growth, then addition of arginine should always result in increased growth and acidification. We tested whether exogenous arginine would raise or lower the rhizosphere pH in P. simiae WCS417, the ΔargF mutant, and the ΔpqqF mutant. As exogenous arginine should suppress argF expression, it should mimic the argF mutant and result in accumulation of alkaline precursors. We found that addition of 1 mM exogenous arginine had no effect on the pH of wildtype P. simiae WCS417 growing in seedling exudates ( ). However, exogenous arginine fully restored acidification by both the 10E10 and Δ argF mutants, indicating that their inability to acidify is at least partly due to limited availability of arginine ( ). Indeed, we confirmed that both mutants have growth defects in seedling exudates, which can be rescued by addition of exogenous arginine ( and ). Intriguingly, we found that arginine resulted in a significant increase in the pH of the ΔpqqF mutant growing in seedling exudates ( ). This suggests that inhibition of argF may indeed result in increased pH through accumulation of arginine precursors, but that this effect may be masked by the large amount of gluconic acid produced by P. simiae WCS417. P. simiae WCS417 produces large amounts of gluconic acid ( ), and we only observed an increase in pH upon application of exogenous amino acids in the ΔpqqF mutant ( ). As a result, we hypothesized that argF might be necessary for lowering pH, but that gluconic acid might mask the effect of argF . If this is the case, we would expect exogenous arginine to restore growth, but not lower the pH, of a double ΔpqqFΔargF mutant growing in seedling exudates. In contrast, if argF only contributes to growth, addition of exogenous arginine should restore the pH of the ΔpqqFΔargF mutant to the level of the single Δ pqqF mutant. Consistent with a role of ArgF in rhizosphere acidification, we found that addition of exogenous arginine fully restored growth of the ΔpqqFΔargF and ΔargF mutants to wildtype levels ( ). However, arginine addition to the ΔpqqFΔargF double mutant resulted in a significantly higher rhizosphere pH than in the ΔpqqF mutant alone ( ). These data indicate that argF is required for regulation of pH in the rhizosphere independent of growth. We initially observed that the P. simiae WCS417 ΔpqqF mutant could not acidify the rhizosphere to a level that should result in full suppression of CYP71A12 pro :GUS expression. However, the mutant can still fully suppress CYP71A12 pro :GUS expression, suggesting a pH-independent mechanism of immunity suppression.
While exogenous arginine resulted in a rhizosphere pH of 5.8, at which we would predict no suppression of the CYP71A12 pro :GUS reporter ( ), we found that arginine treatment of the ΔpqqFΔargF double mutant fully restored immunity suppression ( ). These data indicate that, while low pH is sufficient to suppress immunity, it is not necessary in WCS417, again supporting the existence of a second, pH-independent mechanism of immunity suppression by P. simiae WCS417. 10.1128/mbio.03424-22.3 FIG S3 Addition of arginine to the P. simiae WCS417 ΔargFΔpqqF mutant restores immunity suppression, but not acidification. Exogenous arginine had no effect on the ability of wild type P. simiae WCS417 to suppress expression of the CYP71A12 pro :GUS reporter in the presence of flg22, but restored immunity suppression by the 10E10 mutant, the ΔargF mutant, and the ΔpqqFΔargF double mutant. The addition of arginine to the Δ pqqF and ΔpqqFΔargF double mutants fails to restore acidification ( ). Download FIG S3, JPG file, 0.3 MB . Copyright © 2023 Liu et al. 2023 Liu et al. https://creativecommons.org/licenses/by/4.0/ This content is distributed under the terms of the Creative Commons Attribution 4.0 International license . To test the hypothesis that accumulation of alkaline precursors of arginine, such as glutamate and ammonia, could result in an increase in rhizosphere pH ( ), we added arginine, proline, or glutamine to seedling exudates containing bacteria, which should result in an accumulation of their precursors, glutamate and ammonia ( ). We also tested the addition of exogenous methionine, leucine, tryptophan, serine, and histidine (which should not affect glutamate catabolism) as controls. To avoid confounding our findings with acidification through gluconic acid biosynthesis, we added the amino acids to the P. simiae WCS417 ΔpqqF mutant. We found that exogenous arginine, proline, glutamine, serine, and glutamate, but not methionine, leucine, tryptophan, or histidine, significantly raised the pH of the ΔpqqF mutant close to the level of mock seedling exudates ( ). Moreover, we found this alkalization phenotype to be even more dramatic in PAO1, as arginine, glutamine, and glutamate raised the pH of seedling exudates inoculated with PAO1 pqqF ::Tn5 to around 8.0 ( ). These data indicate that, similar to pathogenic fungi, exogenous arginine, proline, glutamine, or glutamate likely results in rhizosphere alkalization through ammonia accumulation. 10.1128/mbio.03424-22.4 FIG S4 Addition of amino acids downstream of glutamine biosynthesis raised the pH of P. aeruginosa PAO1 growing in seedling exudates. Seedling exudates were mock treated, or inoculated with PAO1 or the pqqF ::Tn5 mutant, and supplemented with 1 mM amino acids. Arginine, proline, glutamine, and glutamate significantly raised the pH of the pqqF mutant growing in seedling exudates. All the experiments were independently repeated at least 3 times. Statistics were calculated by using one-way ANOVA and Tukey’s HSD. Error bars represent mean +/− SD, and letters indicate differences at P < 0.05. Download FIG S4, PDF file, 0.4 MB . Copyright © 2023 Liu et al. 2023 Liu et al. https://creativecommons.org/licenses/by/4.0/ This content is distributed under the terms of the Creative Commons Attribution 4.0 International license .
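The pH comparisons in panels like FIG S4 rely on one-way ANOVA followed by Tukey's HSD. A minimal sketch of that analysis in Python; the pH values below are made-up illustrations, not the measured data.

import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical exudate pH measurements (3 replicates per treatment).
groups = {
    "mock":           [5.7, 5.8, 5.7],
    "pqqF_Tn5":       [5.0, 5.1, 4.9],
    "pqqF_Tn5 + Arg": [8.0, 7.9, 8.1],
}

f, p = f_oneway(*groups.values())
print(f"one-way ANOVA: F = {f:.1f}, P = {p:.2g}")

values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))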
To genetically test whether a loss of arginine, proline, glutamine, or glutamate biosynthesis could specifically result in an increased pH of seedling exudates, we selected 6 mutants with insertions in genes required for amino acid biosynthesis from the P. aeruginosa PAO1 transposon insertion library ( , Table S3). We found that proA ::Tn5 (deficient in proline biosynthesis) and gltB ::Tn5 (deficient in glutamate biosynthesis) neither acidify seedling exudates nor suppress PTI ( ), which is consistent with the glutamate biosynthetic pathway being required for rhizosphere acidification. Two additional insertions, metZ ::Tn5 (deficient in methionine biosynthesis) and leuC ::Tn5 (deficient in leucine biosynthesis), also resulted in a loss of acidification of seedling exudates and of immunity suppression, suggesting that leucine and methionine may be limiting for bacterial growth in the rhizosphere ( ). In addition, serA ::Tn5 (deficient in serine biosynthesis) suppressed host immunity and decreased the pH of seedling exudates, indicating that the rhizosphere may contain enough serine to support bacterial growth ( ). Interestingly, a hisB ::Tn5 mutant (deficient in histidine biosynthesis) could also acidify seedling exudates but induced immunity on its own, further indicating that low pH alone is not sufficient for immunity suppression ( ). Collectively, these data confirm that the rhizosphere is deficient in glutamate and downstream amino acids, and so bacteria must actively synthesize these in the rhizosphere. 10.1128/mbio.03424-22.5 FIG S5 Bacterial auxotrophs cannot suppress plant immunity or acidify the rhizosphere. (A) P. aeruginosa PAO1 amino acid auxotrophic mutants fail to acidify the seedling exudates, with the exception of hisB ::Tn5 and serA ::Tn5. (B) Amino acid auxotrophic mutants fail to suppress flg22-induced CYP71A12 pro :GUS expression, with the exception of serA ::Tn5. The hisB ::Tn5 PAO1 mutant induces CYP71A12 pro :GUS expression in the absence of exogenous flg22 treatment. All the experiments were independently repeated 3 times. Statistics were calculated by using one-way ANOVA. Error bars represent mean +/− SD. Download FIG S5, JPG file, 0.4 MB . Copyright © 2023 Liu et al. 2023 Liu et al. https://creativecommons.org/licenses/by/4.0/ This content is distributed under the terms of the Creative Commons Attribution 4.0 International license . As fungal alkalization through glutamate secretion is accompanied by increased fungal growth and virulence ( ), we tested whether an increase in pH might result in bacterial overgrowth. We observed that arginine-, glutamine-, and glutamate-mediated alkalization was accompanied by dramatic overgrowth of both the WCS417 and PAO1 Δ pqqF mutants ( and ). To determine whether immunity was suppressed at high pH, we tested whether flg22 could still trigger CYP71A12 pro :GUS expression at pH 8, and found that expression still occurred ( ), indicating that overgrowth occurs at a pH that is not sufficient to suppress immunity. These data indicate that pH correlates with bacterial growth, and raise the possibility that, by limiting certain amino acids, plants can control rhizosphere pH and bacterial growth via bacterial arginine biosynthesis. 10.1128/mbio.03424-22.6 FIG S6 The addition of arginine, glutamine, or glutamate resulted in overgrowth of the P. aeruginosa pqqF mutant. P. aeruginosa PAO1 or the pqqF ::Tn5 mutant was grown in seedling exudates supplemented with 1 mM of the indicated amino acid.
Statistics were calculated by using one-way ANOVA and Tukey’s HSD. Error bars represent mean +/− SD, and letters indicate differences at P < 0.05. Download FIG S6, PDF file, 0.4 MB . Copyright © 2023 Liu et al. 2023 Liu et al. https://creativecommons.org/licenses/by/4.0/ This content is distributed under the terms of the Creative Commons Attribution 4.0 International license . 10.1128/mbio.03424-22.7 FIG S7 Flg22-triggered immunity is intact at pH 8. Upon raising the pH to 8.0 (the pH associated with bacterial overgrowth) with KOH, flg22 still triggered expression of the CYP71A12 pro :GUS reporter, meaning that immunity is not suppressed at this pH. Download FIG S7, PDF file, 0.9 MB . Copyright © 2023 Liu et al. 2023 Liu et al. https://creativecommons.org/licenses/by/4.0/ This content is distributed under the terms of the Creative Commons Attribution 4.0 International license . To test our prediction that rhizosphere alkalization is due to accumulation of ammonia upon inhibition of the arginine biosynthesis pathway, we measured the ammonium concentration in seedling exudates. In contrast to our prediction that addition of arginine, glutamine, or glutamate would increase ammonium, we found that the ΔpqqF mutant consumes more ammonium than wild type bacteria in the presence of these amino acids ( ). This suggests that, when arginine biosynthesis is inhibited, ammonia is converted to a distinct compound that contributes to rhizosphere alkalization. 10.1128/mbio.03424-22.8 FIG S8 Ammonium concentration does not explain alkalization in the P. simiae WCS417 ΔpqqF mutant growing in seedling exudates treated with arginine. Ammonium was quantified in seedling exudates (mock), or seedling exudates grown with P. simiae WCS417 or the Δ pqqF mutant. Seedling exudates were treated with 1 mM of the indicated amino acid. Statistics were calculated by using one-way ANOVA and Tukey’s HSD. Error bars represent mean +/− SD, * indicates differences at P < 0.05, and *** indicates differences at P < 0.01. Download FIG S8, PDF file, 0.4 MB . Copyright © 2023 Liu et al. 2023 Liu et al. https://creativecommons.org/licenses/by/4.0/ This content is distributed under the terms of the Creative Commons Attribution 4.0 International license . To uncover the compound that caused the rhizosphere alkalization, we performed untargeted metabolomics of seedling exudates alone, or seedling exudates containing P. simiae WCS417 or the ΔpqqF mutant, with or without arginine. We found that mock and bacterial treatments were clearly separated in the pooled principal coordinates analysis (PCoA), as shown by PC1 ( P < 0.001) ( ). We found that wild type WCS417 and the ΔpqqF mutant also have distinct metabolite profiles, regardless of the presence of exogenous arginine, as shown by PC2 ( P < 0.002), indicating that loss of pqqF has a significant impact on bacterial metabolism ( ). While arginine did not cause a significant global change across all conditions ( P < 0.1), arginine affected the metabolite profile of the ΔpqqF mutant to a greater degree than that of wildtype P. simiae WCS417, as shown by PC2 ( ). When we examined individual metabolites, we found that, while wildtype bacteria produced a large amount of gluconic acid, the ΔpqqF mutant produced no detectable gluconic acid ( ).
We also found that, while seedling exudates with wild type WCS417 contain a similar amount of arginine to seedling exudates with no bacteria, the ΔpqqF mutant completely depleted the exogenous arginine, indicating that the ΔpqqF mutant converted the exogenous arginine to a distinct compound ( ). Although we did not observe significant differences in citrulline between WCS417 and the ΔpqqF mutant, addition of exogenous arginine resulted in a reduction in the overall amount of citrulline, indicating feedback inhibition of argF by arginine ( ). We queried our untargeted metabolomics data for compounds that uniquely accumulated in the P. simiae WCS417 ΔpqqF mutant in the presence of arginine ( ). We found that the ΔpqqF mutant uniquely accumulates significantly more ornithine and proline than the wild type in the presence of exogenous arginine ( ). While proline has a neutral pKa, ornithine is an alkaline non-proteogenic amino acid with a pKa of 10.29, and it is a substrate of ArgF in the arginine biosynthesis pathway. Although spermidine and putrescine are also alkaline polyamine compounds derived from ornithine, we only detected trace amounts of putrescine, spermidine, and spermine, too low to be quantified. Additionally, exogenous arginine did not significantly change the amount of malic acid in the ΔpqqF mutant relative to wildtype WCS417 ( ), suggesting that addition of exogenous arginine is unlikely to have affected the TCA cycle. These data suggest that, upon arginine addition to the ΔpqqF mutant, the accumulation of ornithine underlies the rhizosphere alkalization. To test the possibility that ornithine, but not polyamines, resulted in the increase in rhizosphere pH, we added exogenous polyamines or ornithine to WCS417 or the ΔpqqF mutant. We found that the addition of exogenous ornithine or putrescine had no effect on the pH of WCS417 in seedling exudates, and that ornithine, but not putrescine, resulted in a significant increase in the pH of the ΔpqqF mutant ( ). However, we observed no effect of ornithine addition on wildtype P. aeruginosa PAO1 or the PAO1 ΔpqqF mutant ( ), indicating that the compound that results in alkalization might differ between PAO1 and WCS417. Our data show that Pseudomonas has multiple, partially redundant mechanisms to acidify the rhizosphere, indicating that maintenance of correct pH may be critical to establishing plant immune homeostasis. Previously, we found that acidification to pH 3.7 partially blocked flg22-mediated induction of the MYB51 pro :GUS reporter gene, but had no effect on the expression of SA-triggered NPR1 pro :GUS or on MYB72 pro :GUS triggered by WCS417 or WCS358 at pH 3.7 in roots, indicating that acidification impairs only a sector of plant innate immunity ( ). Thus, we wondered whether acidification-mediated suppression of immunity affects MAMP binding to receptors, or affects downstream signaling. If acidification specifically blocks flg22-induced immunity, then low pH should not affect other MAMP-triggered immunity pathways. We tested the effect of acidification on the damage-associated molecular pattern At pep1, which is BAK1-dependent ( , ), and on chitin, a sugar polymer from fungal cell walls, which is BAK1-independent in Arabidopsis ( , ). All 3 MAMPs (flg22, At pep1, and chitin) share a MAPK cascade ( ). We tested whether acidification can also suppress chitin- and At pep1-triggered immunity using the MAMP marker gene reporter CYP71A12 pro :GUS .
We found that pH 3.7 abolished At pep1-triggered, but not chitin-triggered, expression of the CYP71A12 pro :GUS reporter ( ). These data indicate that acidification likely interferes with peptide-triggered immunity. To test whether low pH affects specific defense responses, we tested whether the expression of multiple PTI-induced genes was blocked, using additional PTI-inducible reporters. We used PER5 (AT1G14550), which encodes a peroxidase ( ), and FRK1 (AT2G19190), which encodes an LRR receptor kinase ( ). Both genes are flg22- and At pep1-inducible in the root ( ). We found that both flg22 and At pep1 induced FRK1 ::mVenus expression, which was significantly suppressed by low pH ( , ). In contrast, lower pH did not affect flg22-induced or At pep1-induced PER5 :: mVenus expression ( ). These results suggest that acidification affects only a sector of PTI, and that rhizosphere pH may be critical in determining the baseline setting of the plant immune thermostat. Here we report a forward genetic screen that identified a bacterial gene, the ornithine carbamoyltransferase argF of P. simiae WCS417, that is required for host immunity suppression, colonization, and acidification. The Δ argF mutant is auxotrophic, and exogenous arginine restored Δ argF -mediated host immunity suppression, colonization, and acidification to wildtype levels. This indicates that amino acid biosynthesis plays an important role in rhizosphere colonization. This is not the first time that amino acid biosynthesis has been shown to be necessary for root colonization ( , ). Interestingly, a previous TnSeq screen found that amino acid auxotrophs, including insertions in argF in P. simiae WCS417, exhibited enhanced fitness in the Arabidopsis rhizosphere ( ). We suspect this difference arises because a TnSeq screen is performed with a community of transposon insertion mutants, in which other mutants can provide amino acids in trans to auxotrophs. In fact, metabolic exchange, including amino acid cross-feeding among microbes, is a characteristic and reciprocal feature of microbial communities ( ). Our data suggest that the rhizosphere may be limiting in many amino acids, and that by synthesizing certain amino acids, bacteria alter the rhizosphere pH and affect plant immune homeostasis. To disentangle the role of amino acid biosynthesis in rhizosphere acidification from its role in growth, we supplied different amino acids to Pseudomonas pqqF mutants, which cannot produce gluconic acid but retain some rhizosphere acidification ( ). We found that only arginine, proline, ornithine, glutamine, and glutamate caused rhizosphere alkalization, and that arginine, glutamine, and glutamate caused bacterial overgrowth in the pqqF -deficient mutants. Metabolic profiling suggests that alkalization by Pseudomonas was likely the consequence of accumulation of ornithine, an alkaline non-proteogenic amino acid and the substrate of ArgF. Interestingly, alkalization is associated with overgrowth of pqqF -deficient mutants, which is reminiscent of alkalization-mediated invasive growth in fungi ( ). However, Pseudomonas does not appear to utilize ammonia for alkalization, as ammonia-driven alkalization requires carbon deprivation ( , ). We found alkalization in the presence of glucose, as the ΔpqqF mutant retains more glucose than the wild type because it does not convert glucose to gluconic acid.
We found that the overgrowth of the ΔpqqF mutant in the presence of arginine was accompanied by significantly more glucose consumption than in the ΔpqqF mutant without exogenous arginine. These data suggest that maintaining a balance of carbon- and nitrogen-containing compounds is essential for pH homeostasis and bacterial growth regulation. We found that, while P. simiae WCS417 produces gluconic acid and significantly acidifies the rhizosphere, gluconic acid is not necessary for immunity suppression in P. simiae WCS417, indicating that there is a distinct mechanism of immunity suppression. The ΔpqqF mutant and the ΔpqqFΔargF mutant supplemented with exogenous arginine caused an increase in rhizosphere pH, but still suppressed immunity. This indicates that there are additional pH-independent mechanisms for host immunity manipulation in P. simiae WCS417. Rhizosphere acidification seems to be a general characteristic of many root-associated microbes ( , ). Thus, rhizosphere acidification could be a conserved evolutionary trait of root-associated microbes. However, suppression of host immunity may also open the window for pathogens. In this study, we found that acidification can only dampen a sector of immunity, indicating that plants must also evolve mechanisms to counteract acidification-mediated immunity suppression, which may act as a selective force for microbial colonization. Recently, roots were shown to contain both acidic (early elongation zone and root tip) and alkaline (late elongation zone/root hair zone) domains ( ). Interestingly, we found that acidic pH does not suppress chitin-triggered immunity in the maturation zone, suggesting that acidification-mediated immunity suppression might be zone-dependent ( ) rather than ligand-dependent. Collectively, host immunity suppression is crucial for host colonization by both commensals and pathogens. Crosstalk between microbes and the plant immune system is an ongoing process. Our results highlight that, apart from serving as colonization factors and nutrients, bacterial amino acid biosynthesis plays a novel dual role in rhizosphere acidification and host immunity suppression ( ). As acidification quenches only a sector of host immunity, it is clear that more mechanisms of host immunity suppression remain to be discovered. Plant materials and growth conditions. We used Arabidopsis thaliana wild type Col-0, the CYP71A12 pro :GUS reporter ( ), and FRK1 :: mVenus and PER5 :: mVenus ( ) in our study. All the plant material used in our study was grown under the same conditions: a climate-controlled growth room at 21°C with a 16 h light/8 h dark cycle and a light intensity of 100 μmol m −2 s −1 . Plants were grown in ½x Murashige and Skoog (MS) media with 1% MES [2-(N-morpholino)ethanesulfonic acid] and 0.5% sucrose, adjusted with KOH to a pH of 5.7 ( ). Bacterial strains and growth conditions. Strains used in this study are listed in , Table S2. P. simiae WCS417 and mutants were cultured overnight in LB or King’s B at 28°C with shaking at 180 rpm. P. aeruginosa PAO1 was cultured overnight in LB at 37°C with shaking at 180 rpm. PAO1 transposon insertion mutants were obtained from the 2-allele PAO1 transposon insertion library ( ). Wildtype PAO1 and the transposon insertion mutants used in this study were cultured in LB with 25 μg/mL tetracycline at 37°C. Escherichia coli was cultured at 37°C with 15 μg/mL or 100 μg/mL gentamicin, depending on the experiment. ß-glucosidase histochemical assays.
The reporter line CYP71A12 pro :GUS in the Arabidopsis Col-0 genetic background contains the CYP71A12 promoter driving the expression of the ß-glucosidase (GUS) reporter gene ( ). Seeds were surface sterilized and grown in 48-well plates for 1 week under the conditions described above. Each well contained 300 μL ½x MS media with 0.5% sucrose. On day 8, the media was replaced with fresh ½x MS media with 0.5% sucrose. Bacteria were grown overnight in LB, washed in 10 mM MgSO 4 , and serially diluted to an OD 600 of 0.02 in 10 mM MgSO 4 . On day 9, 30 μL of bacteria were added to each well (final OD 600 of 0.002), and the plates were returned to the growth room for at least 18 h before adding flg22. On day 10, a 500 μM flg22 stock was added to a final concentration of 500 nM, and the media was replaced with GUS staining solution after 4.5 h of incubation. The GUS staining solution was made fresh from 0.5 M sodium phosphate buffer (pH 7), 0.5 M EDTA, 50 mM potassium ferricyanide, 50 mM potassium ferrocyanide, 50 mM X-Gluc (5-bromo-4-chloro-3-indolyl-beta-d-glucuronic acid), and 10 μL Triton X-100. Plates were then incubated at 37°C in the dark until the control roots treated with flg22 developed a visible blue color (approximately 3 to 4 h). Finally, to clear the tissue, the GUS stain was replaced with 95% ethanol, and the roots were washed with water afterwards. Images were taken with a Macro Zoom Fluorescence Microscope MVX10. Bacterial growth curves. Overnight cultures of bacteria were grown in LB and then serially diluted to an OD 600 of 0.2 in 10 mM MgSO 4 for growth curves. Growth curves were performed by adding 10 μL of the diluted culture to 90 μL of rich media (LB), minimal media (M9 salts supplemented with 30 mM succinate), or seedling exudates (M9 salts supplemented with 30 mM succinate, with or without 1 mM arginine). Bacterial growth was quantified by measuring OD 600 on a Versamax plate reader (Molecular Devices). Data presented in this study represent the average of 3 biological replicates. WCS417 EMS mutant library construction and screening. P. simiae WCS417r (a rifampicin-resistant variant) was mutagenized by spinning down and washing an overnight culture and exposing it to 1, 2, or 4% EMS for 1 h. Mutagenized cells were plated on King’s B with 50 μg/mL streptomycin, and it was found that after treatment with 4% EMS there was an ~100-fold increase in the number of resistant cells relative to the parental strain, so these cells were used for library construction. For library construction, mutagenized cells were plated on LB with 50 μg/mL rifampicin, and individual colonies were placed into wells of 96-well deep well plates in LB media. Each plate contained 92 EMS mutants, and 4 wells contained the parental strain as positive controls. After overnight growth, 75 μL of LB containing library bacteria were pipetted into a fresh 96-well plate, and 25 μL of 80% glycerol was added. The library was stored at −80°C. To screen the library, seedlings were grown in 96-well plates in MS media as described above. Three seedlings were grown in each well containing 100 μL MS media. The library was stamped onto rectangular plates containing solid LB media, and then subcultured into 96-well deep bottom plates containing LB. After overnight growth, the OD 600 of 4 independent wells was measured and averaged. This was used to calculate an approximate dilution factor to dilute all 96 wells to a final OD 600 of 0.05. A total of 10 μL of the diluted culture was then added to each well containing 90 μL MS, for a final average OD 600 of 0.005. The screen was repeated in duplicate, and candidates that failed to suppress immunity in both replicates were retested.
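The dilution arithmetic used in the screen (dilute to OD600 0.05, then 10 μL into 90 μL of MS for a final OD600 of 0.005) can be captured in a few lines. A minimal sketch; the measured OD value is a made-up example.

def dilution_factor(measured_od: float, target_od: float = 0.05) -> float:
    """How many-fold to dilute a culture to reach the target OD600."""
    return measured_od / target_od

measured = 1.8                     # hypothetical plate-average OD600
fold = dilution_factor(measured)   # 36-fold dilution to reach OD600 0.05
final_od = 0.05 * 10 / (10 + 90)   # 10 uL culture into 90 uL MS
print(f"dilute {fold:.0f}-fold; final OD600 in the well = {final_od:.3f}")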
Mapping WCS417 EMS mutations. To map WCS417 EMS mutations, we sequenced the genome of the parental line used to make the library, as well as the genome of each individual mutant. Genomic DNA was extracted from WCS417 and the 10E10 EMS mutant with the Puregene Core Kit (Qiagen). A PE150 short-insert library was prepared and sequenced on an Illumina HiSeq 2500 (Novogene). After adapter trimming with Cutadapt ( ), reads were aligned to the WCS417 genome using the Bowtie2 ( ) aligner with default parameters. Variant calling was performed with BCFtools ( ). Low-quality SNPs with a quality score under 20 were filtered out, and SNPs found in the parent strain were discarded from consideration.
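The final filtering step (drop variants with QUAL below 20, then subtract variants shared with the parent) is straightforward to express in code. A minimal, dependency-free sketch assuming plain-text VCF files named parent.vcf and mutant.vcf; this illustrates the logic only and is not the authors' pipeline.

def load_variants(path, min_qual=20.0):
    """Return the set of (chrom, pos, ref, alt) passing the QUAL filter."""
    variants = set()
    with open(path) as fh:
        for line in fh:
            if line.startswith("#"):
                continue
            chrom, pos, _id, ref, alt, qual = line.split("\t")[:6]
            if qual != "." and float(qual) >= min_qual:
                variants.add((chrom, pos, ref, alt))
    return variants

parent = load_variants("parent.vcf")
mutant = load_variants("mutant.vcf")
# Mutations unique to the EMS mutant, e.g. the 10E10 candidate list.
unique = sorted(mutant - parent)
print(f"{len(unique)} mutant-specific SNPs pass QUAL >= 20")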
Bacterial mutant complementation. Complementation of the 10E10 mutant was performed by PCR amplifying the coding sequence and native promoter of argF from WCS417. The PCR product, containing HindIII and BamHI restriction sites, was ligated into the plasmid pBBR1MCS-5, and the ligation product was transformed into competent E. coli DH5α cells and plated on 100 μg/mL gentamicin plates for selection of positive colonies. Positive colonies containing the pBBR1MCS-5::Pro argF - argF construct were confirmed by colony PCR, and further confirmed by Sanger sequencing. The confirmed pBBR1MCS-5::Pro argF - argF construct was then transformed into the 10E10 mutant. Generation of deletion mutants in WCS417. Clean deletions of argF or pqqF in WCS417 were made using a double-recombination method for Gram-negative bacteria with sacB counterselection ( ). Two sets of primers were designed to amplify the 500 bp flanking regions upstream and downstream of argF or pqqF . Primer 1, carrying a HindIII restriction site, and primer 4, carrying a BamHI restriction site, are the left primer of the upstream flanking region and the right primer of the downstream flanking region of the target gene, respectively. Primer 2 and primer 3 are the right primer of the upstream flanking region and the left primer of the downstream flanking region, respectively. Both primer 2 and primer 3 consist of 15 bp of the region upstream and 15 bp of the region downstream of the target gene. Thus, primer pairs 1/2 and 3/4 were used to amplify the 500 bp regions upstream and downstream of the target gene, respectively. Overlap PCR was performed with the upstream and downstream PCR products, which were digested with HindIII and BamHI, ligated into the pEXG2 suicide vector containing sacB ( ), and then transformed into E. coli DH5α. Positive colonies were selected by plating on LB plates with 15 μg/mL gentamicin and then confirmed by colony PCR. The deletion constructs for argF or pqqF were further confirmed by Sanger sequencing. The confirmed argF or pqqF deletion constructs were then transformed into competent SM10λ cells and selected on LB plates with 15 μg/mL gentamicin. Conjugation of the SM10λ strain containing the deletion construct with WCS417 was performed, and transconjugants were selected on plates containing 15 μg/mL nalidixic acid and 100 μg/mL gentamicin. Positive colonies were re-streaked and cultured overnight in plain LB. Cell pellets were diluted 10× and 100×, and each dilution was plated onto 10% sucrose plates and 100 μg/mL gentamicin plates to select for the second recombination. The Δ argFΔpqqF double mutant was made by conjugating the ΔpqqF mutant with the SM10λ strain containing the argF -pEXG2 deletion construct, and the selection was performed as described above.
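The flank design described above (500 bp of homology on each side of the target gene, with 15 bp of the opposite flank built into the inner primers) can be sketched programmatically. A minimal illustration with hypothetical coordinates; it is not the authors' primer-design tool, and a real design would also check melting temperatures, add the HindIII/BamHI tails, and reverse-complement the reverse primers.

def deletion_flanks(genome: str, start: int, end: int, flank: int = 500, overlap: int = 15):
    """Return homology arms and the junction core for an in-frame deletion.

    genome      -- chromosome sequence as a string
    start, end  -- 0-based, end-exclusive coordinates of the target gene
    flank       -- length of homology arm on each side
    overlap     -- bases of the opposite flank carried by the inner primers
    """
    upstream = genome[start - flank:start]
    downstream = genome[end:end + flank]
    # Core spanned by primers 2 and 3 across the deletion junction
    # (an actual reverse primer would be the reverse complement of this).
    junction_core = upstream[-overlap:] + downstream[:overlap]
    return upstream, downstream, junction_core

# Toy example: a 30 bp "gene" inside a placeholder sequence.
toy_genome = "AT" * 600
up, down, junction = deletion_flanks(toy_genome, start=560, end=590, flank=50)
print(len(up), len(down), junction)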
Seedling exudates. To generate seedling exudates, A. thaliana Col-0 seeds were grown in half-strength MS media containing 0.5% sucrose for 7 days, as described above. The seedling exudates were collected from all the wells, immediately syringe filtered with a 0.22 μm filter, and frozen at −20°C. Amino acid solutions. 100 mM (100×) stock solutions of L-arginine, L-proline, L-glutamine, L-glutamate, L-ornithine, L-leucine, L-methionine, L-histidine, L-tryptophan, and L-serine were made in water and filter sterilized using a 0.22 μm filter before storing at 4°C. pH assay in seedling exudates. Bacteria were grown in LB overnight and serially diluted to an OD 600 of 0.02 in 10 mM MgSO 4 . Bacteria were inoculated into 24-well plates containing seedling exudates to a final OD 600 of 0.002, and the plates were incubated in a 28°C or 37°C incubator for 18 h. Amino acids were added to a final concentration of 1 mM (a 100× dilution of the stocks) when indicated. Then, 1 mL of culture was taken directly from each well, and the OD was measured with a spectrophotometer. Each experiment included 3 technical replicates and was independently repeated at least three times. Ammonium quantification. The ammonium concentration in the pH assay was measured with an Ammonia Assay Kit (Sigma). The kit provides reagents that react with ammonia/ammonium ions, producing fluorescence signals proportional to the ammonia concentration in the sample. A total of 1 mL of each sample was taken from the pH assay and centrifuged for 5 min at 14,000 rpm. Then, 10 μL of the supernatant was used for ammonia quantification following the manufacturer’s instructions. Plates were read on a Spectramax plate reader (λ ex = 360 nm/λ em = 450 nm). Metabolomic profiling. The samples described above in the “pH assay in seedling exudates” were used for untargeted metabolomics analysis. A 60 μL aliquot of each media sample was diluted with 60 μL of acetonitrile (ACN) and vortexed for 10 s. The mixture was then centrifuged at 14,000 rpm for 15 min at 4°C. The supernatant was transferred to glass inserts in 2 mL autosampler vials for liquid chromatography-mass spectrometry (LC-MS) analysis. The analysis was performed using a Bruker Impact II Ultra-High Resolution Qq-Time-of-Flight mass spectrometer coupled to an Agilent 1290 Infinity Liquid Chromatography system. For each sample, 2 μL was injected onto an EMD Millipore SeQuant ZIC-pHILIC column (200 Å, 5 μm, 2.1 × 150 mm) for hydrophilic interaction chromatography (HILIC) separation. For negative ionization mode, the mobile phases (MPs) were 10 mM ammonium acetate in 95/5 water/ACN at pH 9.8 (MP A) and 95/5 ACN/water (MP B). The MPs for positive ionization were the same, except MP A was at pH 4.8. The same LC gradient was used for both ionization modes at a flow rate of 0.150 mL/min. The separation gradient, described as the percentage of MP B, started at 95%, dropped to 5% over 20 min, and then increased to 95% over 1 min; the column was equilibrated for 14 min between injections. A pooled quality control sample was injected at 5 different volumes for metabolic signal correction and high-quality feature selection ( ). Sodium formate was injected for mass calibration. The mass spectrometer was operated in Auto MS/MS mode. The ionization capillary voltage was 3.6 kV for negative mode and 4.5 kV for positive mode. The nebulizer gas was 1.6 bar. The dry gas was 7 L/min, and the dry temperature was 220°C. The mass range collected was from 70 to 1,500 m/z at 8 Hz. The collision energy was 20 to 50 eV. The acquired data were calibrated and processed with MS-DIAL (ver. 4.80). The resulting metabolite intensity tables were exported for statistical analysis. Fluorescence reporter imaging and quantification. FRK1 ::mVenus and PER5 ::mVenus seedlings were grown in 600 μL of ½x MS media with 0.5% sucrose at pH 5.7 in 24-well plates. On day 8, the media was replaced with 540 μL of fresh ½x MS with 0.5% sucrose at pH 5.7. On day 9, MAMPs were added to final concentrations of 500 nM flg22, 100 nM At pep1, or 0.1 mg/mL chitin. For low pH conditions, ½x MS with 0.5% sucrose at pH 3.7 was added along with the elicitors described above and incubated for 4.5 h. Images were taken with a Macro Zoom Fluorescence Microscope MVX10.
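Converting kit fluorescence readings like those in the ammonium quantification above into concentrations typically goes through a standard curve. A minimal sketch with made-up calibration values; the real kit's standards and linear range are defined by the manufacturer's instructions.

import numpy as np

# Hypothetical calibration: fluorescence (a.u.) of ammonium standards (mM).
standards_mM = np.array([0.0, 0.25, 0.5, 1.0, 2.0])
fluorescence = np.array([120, 340, 560, 1010, 1930])

# Fit fluorescence = slope * concentration + intercept, then invert.
slope, intercept = np.polyfit(standards_mM, fluorescence, 1)

def to_concentration(signal):
    return (signal - intercept) / slope

print(f"sample at 780 a.u. ~= {to_concentration(780):.2f} mM ammonium")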
Tundra Soil Viruses Mediate Responses of Microbial Communities to Climate Warming
Northern-latitude tundra soils store a considerable amount of C (estimated at 1,672 to 1,832 Pg), accounting for about half of the global soil organic carbon (SOC) reservoir ( ). Historical records show that the climate warming-related temperature increase is more rapid in northern-latitude areas than the global average ( ). The sustained temperature rise causes the highly vulnerable permafrost to thaw, stimulating the microbial degradation of SOC previously restricted by low temperature and freezing conditions ( ). This may eventually lead to large amounts of soil C being converted to greenhouse gases (mainly CO 2 and CH 4 ) and released into the atmosphere, further exacerbating climate warming ( ). Various studies from past decades have provided a mechanistic understanding of microbial community responses to warming in permafrost regions ( ). As part of a unique effort focusing on relatively well-drained upland tundra undergoing recent permafrost degradation, the Carbon in Permafrost Experimental Heating Research (CiPEHR) site was established in central Alaska in 2008 ( , ). At the CiPEHR site, the Alaskan tundra soils were subjected to in situ experimental warming (~1.1°C above ambient temperature, averaged across five winters) ( ). Previous studies found that short-term (~1.5 years) climate warming did not significantly affect the taxonomic composition of microorganisms but did significantly alter community functional structure based on GeoChip analysis ( ). Under continuous warming (~5 years), the functional traits and taxonomic composition of bacterial communities changed significantly, along with increased abundances of C decomposition and methanogenesis genes ( , ). Meanwhile, microbial responses to warming also vary with soil depth in permafrost regions ( ). In addition, the fungal community showed an enhancement in C degradation capacity and a shift in functional composition under winter warming ( ). However, most studies to date have focused only on prokaryotes and lower eukaryotes, and the responses of the viral community to climate warming have received little attention. As an essential component of the ecological community, viruses are considered to be at the center of ecological interactions in aquatic systems, influencing microbial dynamics through viral lysis, regulating microbial physiology through temperate infections and gene transfer between microbes, and directly impacting biogeochemical cycles in the ocean by encoding auxiliary metabolic genes (AMGs) ( ). Although the role of viruses in tundra soil systems remains understudied, the Carbohydrate-Active enZymes (CAZymes) and other C utilization AMGs encoded by tundra soil viruses have been confirmed to be crucial in the soil C cycle ( , ). In addition, recent studies have discovered that viruses can regulate C cycling by actively infecting organisms that are critical to the soil C cycle under subfreezing anoxic conditions, highlighting the modulatory role of viruses as a major community-structuring agent in tundra soil C loss ( , ). Thus, changes in virus-encoded AMGs and in virus-microbe interactions may also be crucial for the response of the entire tundra soil microbial system to climate warming.
Here, we performed a de novo assembly of data obtained from shotgun metagenomic sequencing of a northern-latitude permafrost region ( ) to mine and analyze viral information across different depths and temperatures, and conducted a GeoChip analysis of viral genes in this region ( ). The main objectives of this study were (i) to determine the changes in soil viral community and functional gene structures after ~5 years of experimental warming, (ii) to investigate how viral communities and functions are shaped by soil depth and other environmental factors, (iii) to explore whether viral effects are important factors causing the different feedbacks of microbial communities to warming, and (iv) to analyze virus-microbe linkages to gain a comprehensive understanding of the biogeochemical role played by viruses in tundra soil ecosystems. Our results indicated that viruses adopted different strategies to regulate microbial communities at different depths, which can cause different microbial responses to warming. Recovery and analysis of viruses from different samples. The number of recovered viruses was related to soil depth rather than temperature. With the same sequencing effort, more metagenomic viral contigs (mVCs) were recovered at 45 to 55 cm than at 15 to 25 cm (Student t test, P < 0.05) ( ). However, experimental warming appeared to have no significant effect on the number of mVCs recovered from this permafrost region at the two depths ( ). Soil depth and warming were both critical in affecting the composition and structure of viral genomes, but depth could be more important. First, the GC content of mVCs across the different data sets was compared, and the results showed that the GC content of mVCs at 45 to 55 cm was significantly lower than that at 15 to 25 cm (Student t test, P < 0.001) ( ). In addition, the GC content of mVCs at both depths was significantly reduced with experimental warming (Student t test, P < 0.001) ( ). Linear mixed-effects model analysis showed that soil depth ( R 2 = 0.1395, P < 0.001) had a greater effect on the GC content of mVCs than warming ( R 2 = 0.0272, P < 0.001) ( ). Since the N atoms per residue side chain (N-ARSC) of microbial genomes are usually driven by the surrounding environmental energy and nutrient availability ( , ), we next analyzed the N-ARSC of the recovered mVCs. The results showed that the N-ARSC of mVCs was significantly higher (Student t test, P < 0.001) at 15 to 25 cm than at 45 to 55 cm ( ). The higher N content at 15 to 25 cm than at 45 to 55 cm (Student t test, P < 0.001) indicated that these features of viral genomes might be driven by the surrounding N availability ( , ). Experimental warming resulted in a reduction in the N-ARSC of mVCs at both 15 to 25 cm and 45 to 55 cm, but the reduction was significant (Student t test, P < 0.05) only at 15 to 25 cm ( ). Further linear mixed-effects model analysis showed that soil depth ( R 2 = 0.0319, P < 0.001) also had a greater effect on the N-ARSC of mVCs than warming ( R 2 = 0.0068, P < 0.01) ( ). Drivers of viral community and functional gene composition. After dereplication of all recovered viral genome sequences, 1,385 species-level viral operational taxonomic units (vOTUs) were used for downstream analyses. vOTU rarefaction curve analysis showed that the amount of sequencing data appeared to be sufficient for detecting the dominant viral members in the permafrost soil (see in the supplemental material).
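Both genome features discussed above are simple per-sequence summaries: GC content over nucleotides, and N-ARSC as the average number of nitrogen atoms in amino acid side chains per protein residue. A minimal sketch using the standard side-chain N counts; the sequences are toy examples, not contigs from this study.

# Nitrogen atoms in each amino acid side chain (standard chemistry);
# all other residues carry no side-chain nitrogen.
SIDE_CHAIN_N = {"R": 3, "K": 1, "H": 2, "N": 1, "Q": 1, "W": 1}

def gc_content(dna: str) -> float:
    dna = dna.upper()
    return (dna.count("G") + dna.count("C")) / len(dna)

def n_arsc(protein: str) -> float:
    protein = protein.upper()
    return sum(SIDE_CHAIN_N.get(aa, 0) for aa in protein) / len(protein)

print(f"GC = {gc_content('ATGGCGCGTAAA'):.2f}")
print(f"N-ARSC = {n_arsc('MARKQW'):.2f}")   # (0+0+3+1+1+1)/6 = 1.00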
The completeness of the viral representative genomes among the 1,385 vOTUs was as follows: 2% were complete and/or high-quality genomes, 2% were medium quality, 71.6% were low quality, and the remainder could not be determined ( ). Next, we analyzed the abundance patterns of the vOTUs across different depths and temperatures. The 1,385 vOTUs were clustered across different samples based on Bray-Curtis distances, forming two main groups corresponding to the different soil depths (see ). Only 46 of the 1,385 vOTUs were detected at all temperatures and depths (see ). Most vOTUs clustered together and had more similar normalized abundances within the same soil depth ( ). Experimental warming had no significant effect on the abundance patterns of vOTUs ( ).

FIG S1 vOTU accumulation curves for the viral data sets in this permafrost region. The black lines within the box indicate the median value of the vOTU number, and the ranges of the error bars represent random replicates.

FIG S2 Analysis of viral diversity and abundance across different data sets. (A) Heatmap showing the abundance patterns of 1,385 vOTUs. Samples and vOTUs are clustered based on the Bray-Curtis dissimilarity matrix. (B) Number of vOTUs with abundance associations in the four data sets. (C to E) Shannon diversity, Simpson index, and richness indicate the diversity and richness of the viral community across different data sets. A Student t test was used to determine significance. ns, no significant difference (P > 0.05). (F) NMDS analysis of viral communities based on the Bray-Curtis dissimilarity matrix calculated from the normalized mean coverage of vOTUs. ANOSIM was applied to detect differences in viral communities between depths or temperatures.

Multiple analyses revealed that soil depth rather than warming was the main factor influencing the distribution of viral communities in this permafrost region. First, the composition and diversity of viral communities at different depths or temperatures were analyzed based on the normalized abundance of vOTUs. The Shannon diversity, richness, and Simpson index indicated that neither warming nor depth had a significant effect on the diversity of viral communities (see to ). Further analysis using nonmetric multidimensional scaling (NMDS) showed that viral communities clustered according to soil depth (see ). Analysis of similarity (ANOSIM) indicated that the viral compositions of warmed plots (P = 0.002) and control plots (P = 0.0032) were significantly different between the two soil depths (see ). Experimental warming did not significantly affect viral composition at either 15 to 25 cm (P = 0.8236) or 45 to 55 cm (P = 0.5171) (see ). The composition of viral communities was significantly correlated with environmental factors (Mantel test, R = 0.6522, P = 0.001).
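As an illustration of the community comparisons above, the sketch below derives Bray-Curtis dissimilarities from a samples-by-vOTUs abundance table and runs a small permutation ANOSIM between two groups (here standing in for the two depths). The table values and group labels are toy inputs, and the hand-rolled ANOSIM is a simplified stand-in for standard implementations.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(0)

def anosim(dist_condensed, groups, n_perm=999):
    """ANOSIM R statistic and permutation P value."""
    n = squareform(dist_condensed).shape[0]
    # Rank all pairwise distances (1 = smallest), then square-form the ranks.
    ranks = squareform(np.argsort(np.argsort(dist_condensed)).astype(float) + 1)

    def r_stat(g):
        within, between = [], []
        for i in range(n):
            for j in range(i + 1, n):
                (within if g[i] == g[j] else between).append(ranks[i, j])
        m = n * (n - 1) / 2
        return (np.mean(between) - np.mean(within)) / (m / 2)

    r_obs = r_stat(groups)
    perm = [r_stat(rng.permutation(groups)) for _ in range(n_perm)]
    p = (1 + sum(r >= r_obs for r in perm)) / (n_perm + 1)
    return r_obs, p

# Toy samples x vOTUs abundance table; rows 0-5 mimic one depth, 6-11 the other.
abundance = rng.random((12, 50))
depth = np.array([0] * 6 + [1] * 6)
bc = pdist(abundance, metric="braycurtis")
print(anosim(bc, depth))
```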
A significant canonical correspondence analysis (CCA) model (permutation test, P < 0.001) showed that total N, total C, soil thaw duration, growing season temperature, moisture, soil bulk density, plant biomass, and soil thawing depth were important environmental factors significantly controlling viral community structure (permutation test, P < 0.05) ( ). Variation partitioning analysis (VPA) showed that these eight environmental factors explained 42.9% of the variation in viral communities, suggesting that they were important drivers shaping viral community composition ( ). In contrast to shotgun sequencing, microarray-based hybridization is able to provide more quantitative information ( , ). Thus, these samples were also analyzed with functional gene arrays. GeoChip 5.0 analysis identified a total of 127 DNA viral genes from 2,848 virus-specific probes ( , ) at the two depths of this permafrost region. Among these, 69 genes were related to prokaryotic viruses and 58 genes were associated with eukaryotic viruses. The majority of the viral genes detected were related to replication (59 of 127) and viral structure (52 of 127), likely because most of the viral probes on the GeoChip were derived from these genes. In addition, some viral genes with functions related to infection (2 of 127) and lysis (14 of 127) were also detected. Experimental warming had no significant effect on viral functional gene structures based on GeoChip at either 15 to 25 cm (ANOSIM, P = 0.0841) or 45 to 55 cm (ANOSIM, P = 0.0949) ( ). Consistent with community composition, soil depth had a significant effect (ANOSIM, P = 0.0001) on viral functional gene structures based on GeoChip ( ). In addition, the function of viruses was significantly correlated with environmental factors (Mantel test, R = 0.391, P = 0.001). The CCA model (permutation test, P < 0.001) and VPA showed that six significant environmental factors, including total N, total C, growing season temperature, soil bulk density, soil thaw duration, and winter temperature (permutation test, P < 0.001), explained 49.8% of the variation in viral functional composition, suggesting that environmental factors were also important drivers shaping viral function ( ). Null model analysis was then employed to discern the relative importance of deterministic and stochastic processes in driving viral community and functional gene structures. For viral communities, only the stochastic ratio of the 45 to 55 cm control was >50%, suggesting that deterministic processes played more important roles in driving viral communities in the permafrost region ( ). Interestingly, warming significantly decreased (Student t test, P < 0.05) the stochastic ratio of viral communities at 45 to 55 cm ( ), suggesting that warming could impose deterministic effects on viruses ( ). For viral function, stochastic ratios were consistently <50%, indicating that viral functional traits were highly deterministic ( ). Neither warming nor soil depth had significant effects on the stochastic ratios of viral functions ( ).

Taxonomy and clustering of viral community. To determine the similarity of tundra soil viruses, the 1,385 vOTUs identified in this study were compared to several publicly available viral sequence data sets: (i) 2,213 bacterial viruses and 91 archaeal viruses (RefSeq v85) ( ) and (ii) 1,907 vOTUs recovered from metagenomes and viromes from Stordalen Mire bulk soil, a long-term climate change research site in northern Sweden ( ).
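The clustering analysis whose results follow is built on networks of shared protein clusters. As a sketch of the scoring idea popularized by tools such as vConTACT, the snippet below weights each genome pair by the hypergeometric probability of sharing at least the observed number of protein clusters; the genome-to-cluster sets are invented toy inputs, not data from this study.

```python
from itertools import combinations
from scipy.stats import hypergeom

# genome -> set of protein-cluster (PC) IDs; toy example only.
pcs = {
    "vOTU_1": {"PC1", "PC2", "PC3", "PC7"},
    "vOTU_2": {"PC2", "PC3", "PC7", "PC9"},
    "vOTU_3": {"PC4", "PC5"},
}
total_pcs = len(set().union(*pcs.values()))  # size of the PC universe

edges = []
for a, b in combinations(pcs, 2):
    shared = len(pcs[a] & pcs[b])
    if shared == 0:
        continue
    # P(X >= shared) when drawing len(pcs[b]) clusters out of total_pcs,
    # of which len(pcs[a]) count as "successes".
    p = hypergeom.sf(shared - 1, total_pcs, len(pcs[a]), len(pcs[b]))
    edges.append((a, b, p))

for a, b, p in edges:
    print(f"{a} -- {b}: P = {p:.3g}")  # small P = strong network edge
```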
In the network of shared protein clusters constructed using a previously described method ( ), 476 (~34%) of 1,385 vOTUs clustered with the publicly available viral data sets, yielding a total of 243 viral clusters ( ). However, only five vOTUs clustered with known viral genomes, belonging to five different viral clusters, leaving the majority of viruses in the tundra soil ecosystem unknown ( ). Approximately 42.8% (104 of 243) of the viral clusters contained vOTUs from both northern-latitude tundra and Stordalen Mire ( ). These vOTUs clustered together in the network, indicating that viral communities in tundra soil from different regions were related and suggesting a broad global geographic distribution of soil viral communities as previously reported ( and ) ( ). According to the taxonomic results of PhaGCN2 based on the latest ICTV classification ( https://doi.org/10.5281/zenodo.7442695 ), 171 (~12.3%) of 1,385 vOTUs were annotated, and 105 of 171 (~61.4%) could be further classified at the family level. Another 66 of 171 vOTUs could be classified only at the subfamily or genus level, so we assumed that they belonged to unclassified families. Also, 148 of 171 vOTUs were assigned to the class Caudoviricetes (mainly including the families Kyanoviridae , Casjensviridae , and Peduoviridae ). In addition, 23 vOTUs assigned to other classes potentially belong to the families Phycodnaviridae , Microviridae , Tectiviridae , Adenoviridae , Herpesviridae , Lipothrixviridae , and Inoviridae ( ). In general, viral taxonomic abundance had relatively similar distribution patterns at the same depth, particularly for the viral taxa with high abundance ( ). Among them, the relative abundances of Kyanoviridae and Inoviridae were dominant at 15 to 25 cm, while the relative abundances of Casjensviridae and Peduoviridae were dominant at 45 to 55 cm (Student t test, P < 0.05) ( ). Experimental warming had no significant effect on the abundance of viral taxa: the relative abundances of all family-level viral taxa at the two depths did not change significantly under warming (Student t test, P > 0.05) ( ) ( ).

Auxiliary metabolic genes harbored in viruses. In different ecological environments, viruses can participate directly in biogeochemical cycles through the encoding of AMGs ( , ). To further investigate the impact of viral communities on soil biogeochemical cycling, we examined and refined 311 AMGs encoded by the vOTUs. In general, tundra soil viruses tended to encode AMGs for “metabolism of cofactors and vitamins” (25.08%) and “carbohydrate metabolism” (22.19%) based on VIBRANT and DRAM-v annotations (see in the supplemental material). In addition, AMGs for “nucleotide metabolism” (9.97%), “glycan biosynthesis and metabolism” (13.5%), and “amino acid metabolism” (14.15%) accounted for a large proportion (see ). Based on the normalized relative abundance of AMGs ( , ), heatmap analysis showed a noticeable depth-stratified distribution of AMGs, consistent with previous findings in marine systems, sulfidic mine tailings, and other environments ( ) ( , ). Warming generally had no significant effect on the relative abundance of viral AMGs; only the relative abundance of AMGs for “protein families: metabolism” was significantly increased at 45 to 55 cm (Student t test, P < 0.05) ( ).

TABLE S1 Auxiliary metabolic gene information for viral ORFs in the permafrost region.
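A minimal sketch of how such AMG category breakdowns can be tallied is shown below; the annotation labels and counts are toy stand-ins for the VIBRANT/DRAM-v output, not the study's actual tables.

```python
from collections import Counter

# Toy AMG -> KEGG-style category assignments (hypothetical values).
amg_categories = [
    "carbohydrate metabolism", "metabolism of cofactors and vitamins",
    "carbohydrate metabolism", "amino acid metabolism",
    "glycan biosynthesis and metabolism", "nucleotide metabolism",
    "metabolism of cofactors and vitamins", "carbohydrate metabolism",
]

counts = Counter(amg_categories)
total = sum(counts.values())
for category, n in counts.most_common():
    print(f"{category}: {n} ({100 * n / total:.1f}%)")
```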
Among the viral AMGs related to carbohydrate metabolism, 21 AMGs were annotated as 17 CAZymes belonging to three CAZyme functional classes (carbohydrate-binding modules, glycoside hydrolases [GHs], and glycosyltransferases) ( ; see also ). Detailed functional annotation of these GHs showed a variety of key enzymes with degradation functions for complex polysaccharides in permafrost, mainly including β-glucosidase (GH3), cellulase (GH5), β-glucanase (GH16), β-mannosidase (GH26), chitosanase (GH46), α- l -arabinofuranosidase (GH51), and α- l -rhamnosidase (GH78) (see ). In particular, 10 of 11 GHs were found in the viral genomes of the warmed group, indicating that warming might promote the horizontal gene transfer (HGT) of GHs mediated by viruses ( ). Warming did not cause significant changes in the relative abundance of virus-carried GHs, but the relative abundance of GHs was significantly greater at 15 to 25 cm than at 45 to 55 cm (Student t test, P < 0.05). Genomic analysis of vOTUs encoding GHs revealed that many vOTUs (6 of 10 vOTUs) were lysogenic viruses containing integrase genes, and none of these GHs had previously been found in viral genomes (see ). Among them, a high-quality viral genome named ERR3313649_contig_1027 (70 kb, 91.3% completeness) encoding GH16 was identified at 45 to 55 cm under warming conditions; it contained several common viral structural proteins (substrate proteins, tail proteins, and terminase large subunit) and a lysogenic profile (integrase and some DNA metabolism modules) and was detected only at 45 to 55 cm under warming conditions (see ).

FIG S3 Genome maps of some AMG-containing contigs and structure models of these AMGs. (A) Arrows represent the locations and directions of predicted genes in the viral genomes. Genes with different functions are annotated by different colors. For CAZymes, genes that can be compared to known viruses are indicated in dark purple; otherwise, they are indicated in light purple. (B) S metabolism-related AMGs that can be compared to known viruses are indicated in dark green; otherwise, they are indicated in light green. (C) Protein structures of selected AMGs based on structural modeling using Phyre2.

Notably, sulfur (S) metabolism-related AMGs carried by viruses were also abundant in permafrost ( and ). They were mainly involved in the functions of assimilatory sulfate reduction or organic S transformation ( ). Assimilatory sulfate reduction-related AMGs carried by eight vOTUs were annotated as phosphoadenosine phosphosulfate reductase ( cysH ) and adenylylsulfate kinase ( cysC ), which have previously been identified in a variety of environments (see and ) ( ). Genomic analysis showed that all assimilatory sulfate reduction-related AMGs had previously been detected in viral genomes ( , ), and five of them were present at 45 to 55 cm under warming conditions ( ; see also ). In addition, some S metabolism-related AMGs of unknown function were found at 45 to 55 cm, and their relative abundance was significantly higher than at 15 to 25 cm (Student t test, P < 0.05) ( ).
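To illustrate the kind of filtering behind the GH and S-gene comparisons above, the following pandas sketch pulls GH-family and assimilatory sulfate reduction hits out of a hypothetical AMG annotation table and sums their normalized abundance per depth; all column names and values are assumptions.

```python
import pandas as pd

# Hypothetical AMG annotation table; the schema is an assumption.
amgs = pd.DataFrame({
    "contig":     ["vOTU_1", "vOTU_2", "vOTU_3", "vOTU_4", "vOTU_5"],
    "annotation": ["GH5",    "cysH",   "GH16",   "GH51",   "cysC"],
    "depth":      ["15-25",  "45-55",  "45-55",  "15-25",  "45-55"],
    "abundance":  [3.1, 5.4, 1.2, 2.2, 0.8],   # normalized coverage
})

# Glycoside hydrolase families follow the "GH<number>" naming convention.
ghs = amgs[amgs["annotation"].str.match(r"GH\d+$")]
sulfur = amgs[amgs["annotation"].isin(["cysH", "cysC"])]

print(ghs.groupby("depth")["abundance"].sum())     # GH abundance per depth
print(sulfur.groupby("depth")["abundance"].sum())  # S-gene abundance per depth
```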
Analysis of viral lifestyles and specific genes in tundra soil. Lysis-lysogeny decision-making in viruses is a critical factor influencing soil nutrient cycling ( ). Therefore, elucidating viral lifestyles is important in the study of viral ecology ( ). In all, 235 lysogenic vOTUs were identified in this study, comprising 76 proviruses (integrated lysogenic viruses) and 159 temperate viruses (free lysogenic viruses) (see ). The relative abundance of viruses showed that 5 years of warming did not alter the proportions of lysogenic and lytic viruses ( ; see also ). However, the relative abundances of lysogenic viruses at 15 to 25 cm were significantly higher than at 45 to 55 cm (Student t test, P < 0.01), reflecting differences in viral lifestyles between depths ( ). Notably, the relative abundances of lysogenic viruses and genes were both correlated positively ( P < 0.05) with total C and N, indicating that eutrophic conditions drove more viruses to adopt lysogenic lifestyles than oligotrophic conditions did (see and ).

FIG S4 Analysis of the lifestyle of tundra soil viruses. (A) Proportions of the number of viruses with different lifestyles across different data sets, including 15 to 25 cm (control, n = 141), 15 to 25 cm (warmed, n = 227), 45 to 55 cm (control, n = 486), and 45 to 55 cm (warmed, n = 531). (B) Relative abundances of viruses with different lifestyles across different data sets.

FIG S5 Correlation analysis of lysogenic viruses and genes with surrounding nutrients. (A) Correlation analysis between the relative abundance (normalized abundance of lysogenic viruses/total normalized abundance) of lysogenic viruses and total C and N. (B) Correlation analysis between the relative abundance of lysogenic genes (normalized abundance of lysogenic genes/total normalized abundance) and total C and N.

In addition to lysogenic signals identified in shotgun sequencing, viral functional genes detected by GeoChip also provide important information for analyzing viral lifestyles ( ). Among the viral genes detected by GeoChip 5.0 ( , ), the abundance of viral genes related to lysis function was significantly higher at 45 to 55 cm than at 15 to 25 cm (paired t test, P < 0.05), suggesting that some viruses may tend to carry more lysis genes at 45 to 55 cm ( ). It should be noted that the lysis-lysogeny decision analyzed by GeoChip could represent only a fraction of tundra viral members, since GeoChip targeted limited classes of viral lysis genes ( ). In addition, we observed that the abundance of viral structural genes was significantly lower at 45 to 55 cm than at 15 to 25 cm (paired t test, P < 0.05). This result was consistent with the differences in viral genomic features ( and ), further indicating that soil depth could be an important factor shaping the evolution of viral genomes. However, neither warming nor soil depth had significant effects on the abundance of viral genes related to replication or infection ( ).
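The nutrient correlation reported above can be sketched as follows: per-sample relative abundance of lysogenic viruses (lysogenic coverage over total viral coverage) tested against total C with a Pearson correlation. The numbers are toy values standing in for the normalized coverages and soil measurements.

```python
import numpy as np
from scipy.stats import pearsonr

# Toy per-sample values (hypothetical, not the study's data).
lysogenic_cov = np.array([12.0, 15.5, 9.8, 20.1, 18.3, 7.6])  # lysogenic coverage
total_cov     = np.array([40.0, 44.2, 38.5, 52.3, 50.1, 35.0])  # total viral coverage
total_c       = np.array([5.1, 6.0, 4.2, 7.8, 7.1, 3.9])        # soil total C (%)

lysogenic_frac = lysogenic_cov / total_cov
r, p = pearsonr(lysogenic_frac, total_c)
print(f"Pearson r = {r:.2f}, P = {p:.3f}")  # positive r would echo the text
```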
Virus-microbe lineage analysis. A total of 201 microbial metagenome-assembled genomes (MAGs) were binned from the metagenomes, including 188 bacterial and 13 archaeal MAGs spanning 12 bacterial and 2 archaeal phyla (see ). After clustering, 88 microbial operational taxonomic units (mOTUs) were used for subsequent analysis. The mOTUs were highly represented by the phyla Actinobacteria ( n = 21), Proteobacteria ( n = 14), “ Candidatus Dormibacteraeota” ( n = 12), and Acidobacteria ( n = 11). Based on the normalized abundance among the samples, microbial communities had more similar abundance patterns at the same depth, while these highly represented phyla were highly abundant at both depths (see ). Several methods were then used to predict viral hosts and to establish links between the vOTUs and mOTUs. Sequence homology and CRISPR spacers are the most effective methods to identify viral hosts ( ), and previous studies have pointed out that viruses can obtain tRNAs from their host genomes during infection ( ). These methods for finding signals of genetic exchange between vOTUs and mOTUs provided host information for approximately 19.4% of vOTUs, with the oligonucleotide frequency (ONF) method identifying possible hosts for another 2.3% of the vOTUs (see ).

FIG S6 Abundance patterns of mOTUs across different samples. Samples are clustered based on the Bray-Curtis dissimilarity. The color intensity in each panel represents the normalized abundance of mOTUs of each lineage.

TABLE S2 Host prediction of 1,385 viral operational taxonomic units.

At the phylum level, the host range of these viruses (276 of 1,385 vOTUs) covered 12 bacterial and archaeal phyla (excluding the phyla Thaumarchaeota and “ Candidatus Aminicenantes”) in our data sets (see ). We then established a network analysis between the 276 vOTUs and their corresponding mOTUs as previously described ( ) ( ). Among them, Actinobacteria (~22.8%) and Proteobacteria (~25.4%) were the most frequently predicted host phyla ( ). At 15 to 25 cm, viruses mainly infected the phyla Acidobacteria (mainly including the class Acidobacteriia ) and Actinobacteria (mainly including the classes Actinobacteria and Thermoleophilia ), which were the most abundant in the microbial community ( ). At 45 to 55 cm, viruses infecting the phyla Proteobacteria (mainly including the class Deltaproteobacteria ) and Euryarchaeota (only including the class Methanomicrobia ) were dominant, which might be related to sulfate reduction and methanogenesis ( ). In addition, we found some viruses infecting the genus Bradyrhizobium (belonging to the class Alphaproteobacteria ), potentially affecting the N cycle in the soil (see ). Under experimental warming, the number of vOTUs infecting the class Acidobacteriia increased at 15 to 25 cm, while the number of vOTUs infecting the classes Deltaproteobacteria and Betaproteobacteria increased at 45 to 55 cm ( ). Warming had no effect on the number of vOTUs infecting the phylum Euryarchaeota .
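Among the host prediction signals mentioned above, the oligonucleotide frequency (ONF) approach is easy to sketch: compare 4-mer frequency profiles of a virus and candidate hosts, favoring the smallest distance. The snippet below uses a plain Manhattan distance on toy sequences; published tools use more elaborate statistics such as d2*, so this is a simplified stand-in.

```python
from itertools import product
import numpy as np

KMERS = ["".join(p) for p in product("ACGT", repeat=4)]
IDX = {k: i for i, k in enumerate(KMERS)}

def onf_profile(seq):
    """Normalized tetranucleotide frequency vector for a sequence."""
    counts = np.zeros(len(KMERS))
    for i in range(len(seq) - 3):
        kmer = seq[i:i + 4]
        if kmer in IDX:  # skip windows with ambiguous bases
            counts[IDX[kmer]] += 1
    total = counts.sum()
    return counts / total if total else counts

def onf_distance(a, b):
    """Manhattan distance between two ONF profiles (smaller = more alike)."""
    return float(np.abs(onf_profile(a) - onf_profile(b)).sum())

virus = "ATGCGTACGTTAGC" * 50  # toy sequences, not real genomes
hosts = {"host_A": "ATGCGTACGTAAGC" * 50, "host_B": "GGGCCCGGGCCCAAA" * 50}
for name, genome in hosts.items():
    print(name, round(onf_distance(virus, genome), 4))
```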
In our data sets, most vOTUs infected a specific bacterial phylum or class ( ). However, at the species level, some vOTUs could infect multiple mOTUs (see ; see also ). Remarkably, the CAZymes and some S metabolism-related genes carried by viruses might not have been acquired from a limited number of potential hosts; rather, these viruses infected a large number of different bacterial and archaeal phyla (including the phyla Proteobacteria , Acidobacteria , Actinobacteria , Chloroflexi , Euryarchaeota , and “ Candidatus Dormibacteraeota”) (see ).

FIG S7 Analysis of virus-microbe interaction in the permafrost region. (A) Virus-host interaction network at the species level. Viral genomes (hexagons) are connected to predicted microbial host genomes (circles) by edges. Viral genomes and host genomes are sized by abundance (normalized mean coverage) across data sets. Host genomes are colored according to their taxonomy. Viral genomes encoding CAZymes or AMGs for S metabolism are outlined in red and gold, respectively. The edges between the key microorganisms of the C, N, and S cycles and their corresponding viruses are colored red, brown, and purple, respectively. (B) Correlation analysis of virus-microbe abundance for each specific lineage. Linear regression model analysis is performed based on the virus-microbe abundance correlations for the specific lineage in each data set. R2 values and Pearson’s correlations in linear regression models are presented in .

The normalized abundance determined by read mapping showed a strong correlation between viruses and prokaryotic microbes in the permafrost region ( ). In the virus-microbe linkage, the virus/microbe abundance ratios (VMRs) were greater than 1 for most specific lineages, with the class Deltaproteobacteria being the highest at 9.5, indicating that a higher percentage of the Deltaproteobacteria population was infected by viruses ( ). In addition, the VMRs of the phylum Euryarchaeota and the classes Acidobacteriia and Candidatus Saccharibacteria were all greater than 5, representing the most active lineages of virus-microbe interactions in the permafrost region ( ). Analysis of virus-microbe abundance correlations for specific lineages revealed that different virus-microbe lineages had different trends in abundance changes with sustained experimental warming ( ). Meanwhile, the specific virus-microbe lineages responded differently to warming at the two depths ( ; see also ). Under warming, the VMRs of the class Acidimicrobiia increased significantly (two-way analysis of variance [ANOVA], P < 0.05) at 15 to 25 cm, while the VMRs of the phyla Verrucomicrobia , Candidatus Cryosericota , and Gemmatimonadetes decreased significantly (two-way ANOVA, P < 0.05) at 45 to 55 cm ( ; see also and ). This indicated that viral lytic pressure on these bacteria was altered under sustained warming. In addition, we observed that the abundance relationships of a large number of virus-microbe lineages (including the classes Gammaproteobacteria , Acidobacteriia , etc.) differed significantly between depths (two-way ANOVA, P < 0.05), indicating that the infection mode of soil viruses was related to depth ( ; see also and ).
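A minimal sketch of the virus/microbe abundance ratio (VMR) used above: normalized viral coverage summed per predicted host lineage, divided by that lineage's microbial coverage. The tables and values are toy inputs, chosen so that the Deltaproteobacteria ratio echoes the 9.5 reported in the text.

```python
import pandas as pd

# Toy normalized coverages (hypothetical inputs).
viral = pd.DataFrame({
    "lineage":  ["Deltaproteobacteria", "Deltaproteobacteria", "Acidobacteriia"],
    "coverage": [38.0, 19.0, 12.0],
})
microbial = pd.DataFrame({
    "lineage":  ["Deltaproteobacteria", "Acidobacteriia"],
    "coverage": [6.0, 4.0],
})

# VMR = summed viral coverage per lineage / microbial coverage of that lineage.
vmr = (viral.groupby("lineage")["coverage"].sum()
       / microbial.set_index("lineage")["coverage"])
print(vmr)  # Deltaproteobacteria -> 9.5, Acidobacteriia -> 3.0
```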
Linear mixed-effects models were fitted for the VMR of each specific lineage that was significantly affected by both warming and depth. The results showed that soil depth, rather than warming, was the more important factor influencing the VMRs of Verrucomicrobia and Acidimicrobiia (see ). Interestingly, the VMRs of some virus-microbe lineages were also significantly correlated with some environmental factors ( ). For example, the VMRs of two usually competing groups in soil habitats, the class Methanomicrobia and the class Deltaproteobacteria , were negatively ( P < 0.01) and positively ( P < 0.05) correlated with total C, respectively ( ). This indicated that viruses may regulate the dynamics of microbial communities in response to changes in the surrounding nutrients.

TABLE S3 Linear mixed-effects model analysis and linear regressions of the normalized abundance of the microbes and their viruses.
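The depth-versus-warming comparison via linear mixed-effects models can be sketched with statsmodels as below. The grouping variable ("block") and all data values are assumptions standing in for the experimental design, so the output is illustrative only.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Toy design: two depths x two treatments within hypothetical blocks.
df = pd.DataFrame({
    "vmr":     [1.2, 1.5, 0.9, 1.1, 1.0, 1.3, 2.4, 2.8, 2.1, 2.6, 2.2, 2.7],
    "depth":   ["15-25"] * 6 + ["45-55"] * 6,
    "warming": ["control", "warmed"] * 6,
    "block":   ["b1", "b1", "b2", "b2", "b3", "b3"] * 2,
})

# Random intercept per block; fixed effects for depth and warming.
model = smf.mixedlm("vmr ~ depth + warming", df, groups=df["block"])
result = model.fit()
print(result.summary())  # compare the depth and warming coefficients
```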
Compared to marine viral communities, the functional roles that viruses play in soil ecosystems are poorly understood, largely because of the complex structure of soils ( ). With the advancement of experimental and sequencing technology, several studies on soil viruses have been conducted, involving mangrove, tundra, farmland, and other soil ecosystems ( , , ). While the response of soil microorganisms to climate warming has received extensive attention, only a few studies have focused on the response of viral communities ( , ). The present study explored the response of viral communities to warming and other environmental factors and compared the differences in viral communities at two critical depths of permafrost. A total of 1,385 vOTUs were recovered, expanding the current uncultivated viral database (IMG/VR v4) for permafrost by 1.3-fold ( ). Caudoviricetes was the dominant viral group in permafrost ( ), consistent with several other soil ecosystems such as mangrove and farmland ( , ), showing that Caudoviricetes largely represents the known diversity of double-stranded DNA viruses in soil. Environmental conditions are major drivers of soil microbial community development ( , ). Viruses shape microbial communities and associated processes in soil ( ), but it was not known whether viruses are also driven by environmental conditions ( ). Our study showed that multiple environmental factors collectively drive viral community and functional gene structures in soil above permafrost ( ), filling major knowledge gaps in understanding the linkage between viruses and edaphic conditions ( ). Markedly, soil depth, rather than the warming treatment, was a strong factor determining viral community and functional gene structures, consistent with a previous study of bacterial communities in this region ( ). The beta diversity, the abundance patterns of vOTUs, and the similarity clustering network based on viral shared protein clusters revealed that viral community composition was strongly partitioned by soil depth. Furthermore, the abundances of some viral functional gene classes and specific virus-microbe lineages were also strongly depth stratified. The long-term experimental warming did not alter the composition of viral communities at either depth but affected viral functional potential related to AMGs.
This was similar to some previous studies on microorganisms ( ), showing that functional potential was more sensitive to warming than community composition. We believe that tundra soil viruses were sensitive to warming, but viral evolution driven by warming over a short period was manifested only in changes in viral functional traits rather than in the viral community. However, some significant changes in the microbial community composition of tundra soil under warming were observed at the CiPEHR site ( , ), suggesting that viral communities may be less sensitive to warming than bacterial and fungal communities. Remarkably, virus-encoded AMGs related to C metabolism were relatively abundant, especially in warmed plots at 15 to 25 cm ( ). Recently, some virus-encoded GHs with precise functions have been identified in peatland and mangrove soils. Although their specific mechanisms remain unknown, they evidently played important roles in the degradation of complex carbohydrates ( , ). A total of 11 GH genes belonging to 9 families were identified in 10 vOTUs, most of which (10 of 11 GHs) were identified in the warmed groups ( ). Interestingly, 6 of 10 vOTUs carrying GHs were lysogenic viruses (see ). These lysogenic viruses could acquire the ability to degrade SOC from their hosts by HGT and then pass on this ability when infecting other hosts, which might be another critical way for viruses to regulate the soil C cycle ( , ). In the process of virus-host coevolution, the frequency of HGT is extremely high ( ). Climate warming has been shown to drive the evolution of microbial networks in soil toward greater complexity and stability ( , ). In this case, the acquisition of essential functional genes through HGT was the result of the coevolution of the entire microbial ecological network ( , ). Several recent studies focusing on the involvement of viruses in the S cycle have found that they could encode AMGs related to S metabolism that affect this biogeochemical cycle ( , ). The findings in sulfidic mine tailings indicated that viruses could help their hosts utilize sulfate under anoxic conditions and thereby facilitate their own reproduction and metabolism ( ). Combined with the viral feedback in tundra soil, we believe that warming might promote a reduction in redox potentials, thus allowing some viruses to carry AMGs related to the assimilatory sulfate reduction pathway at 45 to 55 cm. In addition, we also observed a reduction in virus-carried AMGs related to organic S transformation at 45 to 55 cm under warming ( ). These results suggested that viruses tended to participate in inorganic S transformation (especially sulfate reduction) under warming, which might be beneficial to viral survival. As an important process by which microorganisms participate in the S cycle, sulfate reduction also plays a key driving role in the soil C cycle ( ). We speculate that warming might promote viral participation in the soil S cycle, mainly in the process of sulfate reduction. At the same time, viral regulation of the S cycle might affect the soil C cycle, mainly by promoting the loss of SOC and ultimately aggravating climate warming ( ). A study of tundra soil microbial metabolism showed that the 16S rRNA gene-based relative abundance of Euryarchaeota at 45 to 55 cm in this permafrost region increased significantly under experimental warming ( ).
Correspondingly, a large number of vOTUs that infected Euryarchaeota were recovered from the 45- to 55-cm data set. As methanogens (particularly those using acetate for CH 4 production) were stimulated by warming, the total C content of the soil was significantly reduced ( P < 0.05). This might lead to more viral infection of methanogens (belonging to the class Methanomicrobia ) and less viral infection of sulfate-reducing bacteria (belonging to the class Deltaproteobacteria ), thus allowing the sulfate reduction pathway to dominate at 45 to 55 cm ( ). Viral regulation of the dynamic balance between these two usually competing groups might help mitigate methane production. In addition, a previous study showed a greater response of methanogenesis from acetate than of other methanogenesis pathways under warming, mainly due to plant-microbe interactions caused by the increased biomass of Eriophorum vaginatum and the prevalence of Carex rostrata ( ). Through the virus-microbe linkage, we believe that viral regulation of hosts might be another key factor underlying the response of methanogenesis. Warming and thawing might change the predation strategy of viruses on some methanogens, promoting a shift in methane production from mostly hydrogenotrophic to more acetoclastic ( , ). In conclusion, these results demonstrate the potential contribution of viral infection and lysis to biogeochemical cycling in tundra soil ecosystems under sustained warming and thawing. Viral ecology studies have shown that encoding AMGs and lysing microbes to release C are the two main pathways by which viruses affect biogeochemical cycles ( ). However, the extent to which each pathway contributes to the C cycle is not yet clear. In our research, viral life cycle strategies and viral infection patterns differed significantly between the two depths of this permafrost region ( ; see also ). At 15 to 25 cm, a higher proportion of lysogenic viruses facilitated the occurrence of HGT ( ), allowing more viruses to carry GHs and thereby participate in the C cycle. At 45 to 55 cm, some viruses tended to encode more abundant genes related to lysis, releasing C that fueled the microbial food web through a stronger viral shunt. Considering the more rapid feedback of microbial communities to warming at 45 to 55 cm indicated by previous studies ( ), we speculate that, over a short period, the contribution of the viral shunt to the C cycle might be higher than that of encoded AMGs. The differences in the main ways that viruses influence the C cycle might be an important factor underlying the different microbial feedbacks to warming at the two depths. Lysis-lysogeny decision-making in viruses may be determined by surrounding nutrients and microbial population density, based on previous studies ( , ). Normally, VMRs correlate positively with microbial population density once density is sufficiently high, consistent with the "kill the winner" hypothesis ( , ). A recent study on vertical gradients of soil viruses showed that viral abundance in agricultural soil decreased ( ) and lysogeny increased with increasing soil depth ( ). However, our study indicated that tundra soil viruses follow a different pattern. Compared to the transition zone at the permafrost-active layer boundary (45 to 55 cm), the active layer (15 to 25 cm) had greater nutrient abundance (such as total C and N) and more microbial metabolic genes ( ).
However, we observed fewer viral members and a higher proportion of lysogenic viruses at 15 to 25 cm than at 45 to 55 cm ( and ). Although these results run counter to conventional expectations, they can be well explained by the "Piggyback-the-Winner" hypothesis ( ). This hypothesis suggests that more viruses select for lysogeny when microbes are growing rapidly and metabolically active, thus conferring a competitive advantage to commensals against niche invasion ( , ). Several viral studies in coral reefs ( ) and mammalian gut ecosystems ( ) have supported this hypothesis. Therefore, "Piggyback-the-Winner" may be a widely adopted strategy for tundra soil viruses, although the factors driving this pattern are not clear. Future efforts are needed to verify the "Piggyback-the-Winner" hypothesis in tundra soil viruses and to resolve the mechanisms shaping viral life cycle strategies. Conclusions. Here, we systematically explored the composition and function of viral communities at two soil depths in response to warming in Alaskan tundra soils. The results showed that viral functions were sensitive to warming, mainly reflected in the increased number of virus-encoded GHs and the altered VMRs of specific lineages. Environmental factors were major drivers shaping viral community and functional gene structures, and soil depth had a greater impact on viral community and functional composition than experimental warming. Furthermore, the strong viral shunt and the altered predation strategies of viruses on microbes might be relevant to the greater abundance of genes related to microbial C metabolism at 45 to 55 cm. Also, the HGT mediated by lysogenic viruses at 15 to 25 cm might make an important contribution to the increase in GH abundance. In addition to the effects on the soil C cycle, some viruses were found to regulate other biogeochemical cycles by encoding genes related to S metabolism and infecting key microorganisms of the S and N cycles. In summary, our research showed that viruses contribute to the response of microbial communities to warming in diverse ways, highlighting the importance of viruses in soil ecological research. However, considering that metagenomic analysis can recover only a small portion of viral communities and is less reproducible and quantitative ( , ), a comprehensive study combining viromes, metagenomes, and GeoChip is needed to better understand the responses of viral communities and associated functional genes to warming.
Data acquisition and de novo assembly. All high-throughput sequencing reads used in this research were obtained from the NCBI Sequence Read Archive (SRA) database and derived from a previously published northern-latitude tundra soil microbiome study ( ). Two critical permafrost depths of Alaskan tundra soils at the CiPEHR site (63°52′59″N, 149°13′32″W), 15 to 25 cm (active layer at the outset of the experiment) and 45 to 55 cm (transition zone at the permafrost-active layer boundary at the outset of the experiment), were warmed in situ (~1.1°C above ambient temperature) for five winters from 2008 to 2013. Soil DNA was then extracted from the warmed and control groups and sequenced on an Illumina HiSeq 2500 instrument (150 bp, paired-end mode). In addition, data on environmental factors and GeoChip at the CiPEHR site, with the same coordinates and sampling dates, were obtained from another study ( ). Tundra soil functional genes were analyzed with a microarray-based tool (GeoChip 5.0) containing 161,961 probes belonging to 1,447 gene families, among which 2,848 were virus-specific probes ( , ). Raw signal intensities after processing and analysis were normalized by Feng et al. ( ). Raw sequencing reads for all samples (15 to 25 cm, control [ n = 6]; 15 to 25 cm, warmed [ n = 6]; 45 to 55 cm, control [ n = 6]; 45 to 55 cm, warmed [ n = 6]) were trimmed using CLC Genomic Workbench (v20.0.3; Qiagen Bioinformatics, Denmark) with the following parameters: quality score limit, 0.05; maximum number of ambiguities, 2; and minimum read length, 35 bases ( ). Next, filtered reads from each metagenome were separately assembled with CLC's de novo assembler using the default parameters for downstream analysis. Metagenome-assembled genome binning and microbial OTU grouping. For each assembly set, contigs were binned using metaWRAP v1.3.2 with the binning module (–metabat2 –maxbin2 –concoct) and the Bin_refinement module (>50% completeness and <10% contamination) ( ). The final bin sets were clustered and dereplicated at 95% average nucleotide identity (ANI) using dRep v3.2.2, generating 88 microbial operational taxonomic units (mOTUs) ( ). mOTU taxonomies were assigned using GTDB-tk v1.7.0 based on the Genome Taxonomy Database R06-RS202 ( , ). The classification results were further refined by comparison to the NCBI taxonomy. Finally, the marker genes of MAGs identified by GTDB-tk were used to construct the bacterial and archaeal trees with RAxML v8.2.12 ( ). vOTU table generation. VirSorter v1.0.3 ( ), DeepVirFinder v1.0 ( ), VirSorter2 v2.2.2 ( ), and CheckV v0.7.0 ( ) were used to recover metagenomic viral contigs (mVCs) from the contigs assembled by CLC's de novo assembler.
For contigs of ≥5 kb ( , ), mVCs were identified based on the following criteria: (i) VirSorter (virome database) categories 1, 2, 4, and 5 (for proviruses, only proviral regions were retained) ( ); (ii) DeepVirFinder (default settings) score ≥ 0.9 and P ≤ 0.01; (iii) high confidence level (score ≥ 0.9, or score ≥ 0.7 with hallmark genes) from VirSorter2 (–keep-original-sequence); and (iv) VirSorter categories 3 and 6 (proviral regions), DeepVirFinder score ≥ 0.7 and P < 0.05, and VirSorter2 score ≥ 0.5, further screened using CheckV (requiring at least one viral hallmark gene) ( ). For contigs of ≤5 kb, VirSorter2 was applied to identify viruses; only high-confidence, circular contigs were retained and further filtered by CheckV. From the final mVC data sets, proviral regions were extracted according to CheckV. Next, all mVCs were dereplicated and clustered at 95% ANI and 85% alignment fraction of the shorter sequence using CD-HIT v4.8.1, generating 1,385 viral operational taxonomic units (vOTUs) ( ). Genome completeness of these vOTUs was assessed using CheckV. Taxonomic composition of vOTUs. Prodigal v2.6.3 (-p meta) was used to predict open reading frames (ORFs) of the 1,385 vOTUs ( ). The vOTUs were classified as previously described. First, all virus-encoded proteins were used as input to vConTACT2 v0.9.19 and run against the NCBI Viral RefSeq v85 database and Stordalen Mire viruses ( , ). The vOTUs were clustered based on the similarity of shared protein clusters between genomes, yielding a viral gene-sharing network that was visualized with Cytoscape v3.8.0 ( ). Considering the major revisions of viral taxonomy by the ICTV ( , ), the vOTUs were further annotated using PhaGCN v2.1 with the latest ICTV classification ( , ). Annotation of permafrost viral AMGs. To identify AMGs in viral genomes, viral genes were examined as follows. (i) Virus-encoded proteins were input to the DRAM-v pipeline ( ). AMGs identified by DRAM-v with unknown function were then compared to the CAZymes, NCyc, and SCyc databases using DIAMOND (identity ≥ 40%, coverage ≥ 60%, and E value < 1E−10) and to the KEGG database using KofamKOALA (E value < 1E−10, and HMM score > 50) to determine their potential functions (see ) ( ). This process excluded 184 vOTUs that had been discarded by VirSorter2 ( ). (ii) Viral AMGs were also annotated with the VIBRANT pipeline (see ) ( ). From the putative AMGs verified by DRAM-v and VIBRANT, we manually curated CAZymes as well as N and S metabolism genes (see ). Finally, Phyre2 was used for protein function prediction and structural modeling (see ) ( ). Genomic analyses of viruses. The functions of viral genes other than AMGs were predicted by screening against the NCBI-nr database using BLASTp (bit score > 50, and E value < 1E−5) ( ). Genome maps of viruses were generated with Geneious ( ). For the identification of lysogenic signals (including integrase, recombinase, repressor, or provirus) ( ), virus-encoded proteins were run against the EggNOG and Pfam databases and manually confirmed by BLASTp against the NCBI-nr database (bit score > 50, and E value < 1E−5) ( , ).
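As a concrete illustration of how the multi-tool screening rules under "vOTU table generation" above can be combined, the sketch below encodes the ≥5-kb decision logic in Python. The record fields and helper names are hypothetical stand-ins for per-contig tool outputs; the thresholds are those stated in the text.

```python
from dataclasses import dataclass

@dataclass
class ContigScores:
    # Hypothetical container for per-contig outputs of the four tools.
    length: int            # contig length (bp)
    vs1_category: int      # VirSorter v1 category (1-6)
    dvf_score: float       # DeepVirFinder score
    dvf_pvalue: float      # DeepVirFinder P value
    vs2_score: float       # VirSorter2 max score
    vs2_hallmark: int      # VirSorter2 hallmark gene count
    checkv_hallmark: int   # CheckV viral hallmark gene count

def is_viral_contig(c: ContigScores) -> bool:
    """Apply the >=5-kb mVC criteria (i)-(iv) described in the text."""
    if c.length < 5000:
        return False  # <5-kb contigs follow the separate VirSorter2/CheckV path
    if c.vs1_category in (1, 2, 4, 5):                      # criterion (i)
        return True
    if c.dvf_score >= 0.9 and c.dvf_pvalue <= 0.01:         # criterion (ii)
        return True
    if c.vs2_score >= 0.9 or (c.vs2_score >= 0.7 and c.vs2_hallmark > 0):  # (iii)
        return True
    borderline = (                                          # criterion (iv)
        c.vs1_category in (3, 6)
        or (c.dvf_score >= 0.7 and c.dvf_pvalue < 0.05)
        or c.vs2_score >= 0.5
    )
    return borderline and c.checkv_hallmark >= 1            # rescue via CheckV

example = ContigScores(length=12000, vs1_category=3, dvf_score=0.75,
                       dvf_pvalue=0.01, vs2_score=0.55, vs2_hallmark=0,
                       checkv_hallmark=2)
print(is_viral_contig(example))  # True, via criterion (iv) plus a CheckV hallmark gene
```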
Host prediction of viruses. Several methods were used to assign hosts to vOTUs, including nucleotide sequence homology, CRISPR spacer matches, tRNA gene matches, and oligonucleotide frequency (ONF). First, viral sequences were homologously matched to the recovered MAGs based on shared genomic regions using BLASTn ( ). Only the best matches with hits of ≥2,500 bp and ≥70% identity were retained ( ). For some circular viral contigs or proviruses of ≤5 kb, the prediction criteria proposed by Li et al. were followed ( ). Second, CRISPR spacer sequences in viral and host genomes are reliable features for identifying recent virus-host interactions, because hosts incorporate fragments of infecting phage sequences into CRISPR arrays on their genomes ( ). CRISPR spacer sequences in MAGs were automatically identified using CRISPRCasFinder ( ) and compared to viral sequences using BLASTn with the following parameters: E value threshold of 1E−5, percentage identity of 95%, and a maximum of 1 target sequence. Third, tRNA genes in the viral sequences were identified by tRNAscan-SE v2.0.9 and aligned to the recovered MAGs using BLASTn (sequence identity and coverage ≥ 90%) ( ). Lastly, hosts were assigned to vOTUs based on the similarity of k-mer (DNA sequences of length k) frequencies between sequences; VirHostMatcher v1.0 was run with default parameters ( ), and matches with d2* values of <0.25 and the highest scores were retained. All prediction results and sources are listed in . Mapping reads to contigs and generating the mOTU and vOTU tables. The filtered reads from the 24 samples were mapped to viral contigs using BamM v1.7.3 (bamm make -t 40 -m 100M; https://github.com/Ecogenomics/BamM ), and the BamM "filter" command was used to screen the reads mapped to viral contigs with the following parameters: identity ≥ 95% and coverage ≥ 90% ( , ). CoverM v0.6.1 ( https://github.com/wwood/CoverM ) was first used to remove viral contigs covered by reads over ≤70% of their length and then to calculate the average read depth of each contig across each sample ("trimmed_mean" coverage mode, excluding the top and bottom 10% coverage of each contig) ( ). Finally, a normalized abundance was calculated for all vOTUs to allow comparison between samples: for each sample, the read count was divided by one hundred million to generate a scaling factor, and the average depth of each contig was then divided by this factor to obtain the normalized abundance ( ). For each mOTU, coverage was calculated by weighting the average depth of each binned contig by the ratio of that contig's length to the total mOTU length and summing the products; normalized abundance was then obtained in the same way as for vOTUs. Detailed normalized abundances of the 88 mOTUs and 1,385 vOTUs are listed in the mOTU and vOTU tables, respectively ( https://doi.org/10.5281/zenodo.7442695 ).
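A minimal sketch of the per-sample abundance normalization just described, assuming a pandas DataFrame of trimmed-mean depths (contigs × samples) and a per-sample read count; the values and names below are hypothetical:

```python
import pandas as pd

# Toy inputs (hypothetical values). In practice, `depth` would hold CoverM
# "trimmed_mean" depths (rows = contigs, columns = samples) and `reads`
# would hold the total read count of each sample.
depth = pd.DataFrame(
    {"S1": [12.0, 3.5], "S2": [8.0, 0.0]},
    index=["vOTU_1", "vOTU_2"],
)
reads = pd.Series({"S1": 2.4e8, "S2": 1.6e8})

# Per-sample scaling factor = reads / 1e8; normalized abundance = depth / factor,
# mirroring the normalization described above.
scale = reads / 1e8
normalized_abundance = depth.div(scale, axis="columns")
print(normalized_abundance)  # vOTU_1 in S1: 12.0 / 2.4 = 5.0
```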
Statistical analyses. The python script "get_ARSC.py" ( https://github.com/faylward/pangenomics/ ) was used to calculate the GC contents and N/C atoms per residue side chain (N/C-ARSC) of vOTUs ( ). Linear mixed-effects models were fitted using the lme4 package in R v4.0.2 ( ), with warming and soil depth as fixed effects and block as a random intercept effect ( ). ANOVA was used to calculate the P values, and the partR2 package was used to obtain the marginal R 2 (the variance explained by each fixed effect) from the linear mixed-effects models ( ). Alpha- and beta-diversity analyses of the viral community were performed using the vegan package in R v4.0.2 ( ). The Bray-Curtis distance was calculated using the "vegdist" function of vegan and used for NMDS, and ANOSIM was used to test the significance of differences between depths or temperatures in the NMDS. Differences in Shannon index, richness, and Simpson index values between depths or temperatures were tested using a two-tailed Student t test. The Mantel test was used to evaluate the correlation of environmental factors with viral community and functional gene structures; Bray-Curtis dissimilarity values represented the compositional variation of viral community and function, and dissimilarity between environmental factors was calculated as Euclidean distances based on soil winter temperature, growing season temperature, soil thaw duration, soil bulk density, total N content, and total C content. Canonical correspondence analysis (CCA) was performed with the vegan package to model the major environmental factors shaping viral community and functional gene structures ( ). To disentangle the relative importance of deterministic and stochastic processes underlying viral community and function assembly, 1,000 null models were analyzed based on Bray-Curtis distance according to a previously described, modified method ( , ). The stochastic ratio was calculated for warmed and control samples at each depth using taxonomic metrics in the NST package ( ). Lineage-specific virus-microbe abundance modules for each phylum/class were compared between depths or temperatures using two-way ANOVA, as previously described ( ). Pearson correlations were performed using the "rcorr" function of Hmisc to assess the relationships between the VMRs and environmental variables ( ).
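For readers who prefer a worked example, the mixed-model specification at the start of this section (fitted with lme4 in R by the authors) can be sketched equivalently in Python with statsmodels; the data frame and column names below are hypothetical stand-ins for the per-sample VMR table:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical per-sample table: one VMR value per plot, with the design factors.
df = pd.DataFrame({
    "vmr":     [1.2, 0.9, 1.5, 1.1, 0.8, 1.4, 1.0, 1.3],
    "warming": ["control", "warmed"] * 4,                 # fixed effect
    "depth":   ["15-25"] * 4 + ["45-55"] * 4,             # fixed effect
    "block":   ["A", "A", "B", "B", "C", "C", "D", "D"],  # random intercept
})

# VMR ~ warming + depth with a random intercept per experimental block,
# mirroring the lme4 specification vmr ~ warming + depth + (1 | block).
model = smf.mixedlm("vmr ~ warming + depth", data=df, groups=df["block"])
result = model.fit()
print(result.summary())
```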
Data availability. All raw reads used in this work were obtained from the NCBI Sequence Read Archive (SRA) database and were generated by a previously published northern-latitude tundra soil microbiome study ( ). Environmental factor and GeoChip 5.0 data for the same permafrost site were obtained from a previously published diazotrophic community study ( ). Detailed GeoChip data and environmental factor information, mOTU and vOTU tables, and mVC and MAG sequences used in this study are available at https://doi.org/10.5281/zenodo.7442695 .
Genomic Features Predict Bacterial Life History Strategies in Soil, as Identified by Metagenomic Stable Isotope Probing
Soil-dwelling microorganisms are essential mediators of terrestrial C cycling ( ), yet their immense diversity ( , ) and physiological complexity, as well as the mazelike heterogeneity of their habitats ( ), make it difficult to study their ecology in situ . A major limitation is that microbial contributions to soil C cycling cannot be defined on the basis of a discrete set of functional genes, as they can be for other biogeochemical processes (e.g., nitrification, denitrification, methanogenesis, methylotrophy, sulfate reduction, sulfide oxidation, etc.). The formation of persistent soil carbon, in particular, is largely governed by microbial macromolecules produced by anabolic processes. Life history theory has been proposed as a framework for predicting bacterial activity in soils ( ). Life history theory proposes that fitness trade-offs define competitive interactions with respect to environmental characteristics ( , ). These trade-offs are described in terms of energy allocation to growth, resource acquisition, and survival ( , , ). For example, trade-offs between bacterial growth rate and yield are thought to constrain bacterial activity with respect to environmental variability ( ). Such trade-offs can influence C fate by controlling the amount of C mineralized to CO 2 or converted into microbial products that become soil organic matter (SOM) ( ). Information about microbial ecological strategies can be used to improve the accuracy of global C-cycling models ( ). Unfortunately, bacterial life history traits resist in situ characterization, and experiments with cultured strains often ignore the complex microbe-microbe and microbe-environment interactions that occur in soil ( ). In a previous study ( ), we quantified the dynamics of C acquisition and growth for diverse soil-dwelling bacteria through multisubstrate DNA stable isotope probing (DNA-SIP) enabled by high-throughput 16S rRNA gene amplicon sequencing. This experiment tracked bacterial assimilation of nine different C sources through the soil food web over a period of 48 days (see in the supplemental material). The nine C sources were selected to represent diverse molecules, which vary in bioavailability and are derived from plant biomass degradation. We defined bioavailability based on the ability of a molecule to cross the cell membrane and quantified it operationally on the basis of solubility and hydrophobicity ( ). Through this approach, we demonstrated that Grime's C-S-R life history framework explains significant variation in bacterial growth and C acquisition dynamics in soil ( ). FIG S1 Experimental diagram of the previously described multisubstrate DNA-SIP study from Barnett et al. ( ) (top) and the newly described metagenomic-SIP sequencing, processing, and analysis described here (bottom). In the top panel, the circles represent the days when microcosms were harvested, and the filled circles represent the samples chosen for metagenomic-SIP sequencing. The original DNA-SIP study used multiple-window high-resolution DNA-SIP to identify bacterial operational taxonomic units (OTUs) that assimilated 13 C from each of the 13 C sources within the harvested microcosms. The 13 C-labeling patterns, along with population dynamics in soils (unfractionated DNA), were used to generate the three activity characteristics.
To compare genomic features in 13 C-labeled contigs within a treatment or MAG, we then averaged the characteristics of the OTUs that were either 13 C labeled in the treatment or taxonomically mapped to the MAG and 13 C labeled in the same treatment, respectively. For the metagenomic-SIP sequencing, pooled fractions between 1.72 and 1.77 g mL −1 were sequenced. 13 C-labeled contigs were distinguished as being >1,000 bp long, having at least 5× coverage in the 13 C treatment library, and having over a 1.5-fold increase in coverage in the 13 C treatment compared to its corresponding 12 C control. The C-S-R framework describes trade-offs with respect to resource acquisition and environmental variability ( , ). Competitors (C) have high investment in resource acquisition and favor intermediate levels of environmental variability. Stress tolerators (S) have low investment in resource acquisition and are disfavored by temporal variability. Ruderals (R) have low investment in resource acquisition and are favored by high levels of temporal variability. Despite the growing interest in applying life history theory to explain bacterial activity and ecology in complex ecosystems, we know little of the genetic basis of life history traits. In this study, we sought to identify genomic features that underlie bacterial life history traits linked to the C-S-R framework. Since the majority of soil-dwelling bacteria remain uncultivated and poorly described ( , ), there is great utility in identifying genomic features that predict their ecological strategies ( ). Genomic features of life history strategies have been identified in marine bacteria ( ) and proposed for soil-dwelling bacteria ( ). Genomic features associated with growth, resource acquisition, and survival are of particular interest when assessing life history trade-offs ( , , , ). Numerous genes control such quantitative traits, however, and it is difficult to predict these complex traits de novo from genomic data. We hypothesized that life history strategies impose trade-offs that alter genomic investment in gene systems (i.e., the number of genes devoted to a particular system) linked to resource acquisition (e.g., secreted enzyme production, secondary metabolite production, and membrane transport), environmental variability (e.g., transcriptional regulation, attachment, and motility), and survival (e.g., osmotic stress response and dormancy). To link these gene systems to life history strategies, we performed metagenomic analysis of 13 C-labeled DNA (metagenomic-SIP) produced in our previous multisubstrate DNA-SIP experiment ( ). The multisubstrate DNA-SIP experiment provided data on resource acquisition and growth dynamics for specific operational taxonomic units (OTUs; defined at 97% sequence identity; see ). Metagenomic-SIP allowed us to map resource acquisition and growth dynamics onto 13 C-labeled contigs and 13 C-labeled metagenome-assembled genomes (MAGs). We developed several metrics for assessing the resource acquisition and growth dynamics of bacterial taxa ( ). Resource bioavailability was determined as the average bioavailability of the 13 C-labeled C sources assimilated by an OTU.
Maximum log 2 fold change (max LFC) was determined as the maximal change in differential abundance of an OTU in response to C input. The latency of C assimilation was determined for OTUs as the difference in time between maximal 13 C mineralization and the earliest 13 C labeling for a given C source. Latency changes in proportion to the likelihood that taxa engage in primary assimilation of 13 C directly from a C source or secondary assimilation of 13 C following microbial processing. For metagenomic-SIP, we selected eight of the 13 C-labeled samples from the multisubstrate DNA-SIP experiment because these samples were enriched in genomes from taxa whose resource acquisition and growth dynamics represented extremes in the C-S-R life history framework ( and ). This strategy, by diminishing the confounding contribution of genomes from organisms having intermediate life history strategies, facilitates identification of genome features that underlie life history trade-offs. We took three approaches to analyzing these metagenomic-SIP data, each increasing in complexity: (i) a 13 C-labeled contig-based approach to assess whether community-scale genome feature enrichment correlates with resource acquisition and growth parameters, (ii) a 13 C-labeled MAG-based approach to assess whether genome feature enrichment correlates with resource acquisition and growth parameters, and (iii) a 13 C-labeled MAG-based approach to assess trade-offs between genome features predicted from the C-S-R framework. DATA SET S1 Data file (XLSX) that contains information on OTUs, MAGs, and metagenomic libraries as described in the text. FIG S2 (A) Activity characteristic values of the 13 C-labeled OTUs (as previously described in Barnett et al. ) detected in each 13 C-labeled treatment subjected to metagenomic sequencing. Briefly, maximum log 2 fold change represents the change in differential abundance from the initial condition (time zero) to the point when relative abundance was maximal for a given OTU. C source bioavailability is defined as the average bioavailability of all the substrates assimilated by each OTU, with the bioavailability of each C source defined operationally based on its mineralization dynamics (as previously described in Barnett et al. ). C assimilation latency is defined as the difference in time between the point when 13 C source mineralization was maximal and the point at which an OTU was observed to assimilate the 13 C substrate. (B) The number of genes in 13 C-labeled contigs correlates with the number of 13 C-labeled OTUs identified in Barnett et al. ( ), suggesting that we recovered the genomic content of the target, active bacteria. Gene count is normalized by the number of reads recovered from the 13 C treatment libraries ( ). Normalized number of genes found in the 13 C-labeled contigs from each treatment. The numbers within or above the bars indicate the number of genes before normalization. (C) Relationship between the normalized number of genes in the 13 C-labeled contigs and the number of OTUs 13 C labeled under the same treatment (Pearson's r = 0.795, P = 0.018). The red line represents the linear regression, with red shading indicating the 95% confidence intervals.
(D) Phylum-level breakdown of the taxonomically annotated genes in the 13 C-labeled contigs under each treatment (genes) and the phylum-level breakdown of the 13 C-labeled OTUs under each treatment in Barnett et al. ( ) (OTUs). Only genes with taxonomic annotations were used for this analysis. Gene taxonomy was assigned using the IMG pipeline, and OTU taxonomy was assigned using the SILVA 111 release. (E) Abundance of genes from each feature in the 13 C-labeled contigs from each treatment. For all features except SMBCs, feature abundance is calculated as the percentage of total protein-coding genes having the feature; for SMBCs, abundance is calculated as the number of SMBCs divided by the total protein-coding gene count. TABLE S1 Information on the metagenomic-SIP libraries. 13 C-labeled contigs were defined as being over 1,000 bp long, having at least 5× coverage in the 13 C treatment library, and having at least a 1.5-fold increase in coverage between the 12 C control and 13 C treatment libraries after accounting for sequencing depth. The gene counts include all genes predicted from the 13 C-labeled contigs in each treatment. The third approach was designed to identify bacterial life history strategies by characterizing trade-offs between genomic investment in regulatory flexibility and resource acquisition, as predicted from the C-S-R framework ( , ). We chose to assess genomic investment in regulatory flexibility as the number of transcription factors (TF) relative to the total gene number (TF/gene). Environmental variability will favor a high TF/gene ratio because transcription factors regulate gene expression in response to changes in the cellular environment ( ). We chose to assess genomic investment in resource acquisition as the number of genes encoding secreted enzymes (SE), secondary metabolite biosynthetic pathways (SM), and membrane transporters (MT). Secreted enzymes and secondary metabolites (such as surfactants, siderophores, and antibiotics) enable bacteria to access and control extracellular resources that are otherwise unavailable for membrane transport because they are poorly soluble or sorbed to the soil matrix. Membrane transporters are required for uptake of substances available in the aqueous phase. Membrane transporter activity provides the physiological foundation for the concepts of oligotrophy and copiotrophy, a life history framework commonly used to describe bacteria ( , ). On the basis of previous conceptualizations of the C-S-R framework, we predicted a trade-off whereby investment in resource acquisition (SE + SM) relative to investment in membrane transport would be highest at intermediate levels of regulatory flexibility (TF/gene) and lowest at both high and low levels of regulatory flexibility. By clustering MAGs based on these trade-offs and comparing resource acquisition and growth parameters across clusters, we demonstrate the ability of these genomic features to predict bacterial life history strategies.
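To make the two investment metrics concrete, here is a minimal sketch of how they could be computed from per-genome gene counts (Python; the count values are hypothetical stand-ins for annotation-derived tallies):

```python
import pandas as pd

# Hypothetical per-MAG gene tallies derived from annotation. `genes` is the
# total protein-coding gene count; TF, SE, SM, and MT are feature counts.
mags = pd.DataFrame(
    {"genes": [4200, 3100], "TF": [210, 95], "SE": [60, 80],
     "SM": [12, 25], "MT": [300, 140]},
    index=["MAG_1", "MAG_2"],
)

# Regulatory flexibility: transcription factors per gene (TF/gene).
mags["tf_per_gene"] = mags["TF"] / mags["genes"]

# Extracellular resource acquisition relative to uptake: (SE + SM) / MT.
mags["acquisition_ratio"] = (mags["SE"] + mags["SM"]) / mags["MT"]
print(mags[["tf_per_gene", "acquisition_ratio"]])
```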
Identification of 13 C-labeled contigs with metagenomic-SIP. We used metagenomic-SIP to enrich DNA from 13 C-labeled bacteria and to identify 13 C-labeled contigs, thereby linking genomic content to C acquisition. Overall, we recovered between 5 × 10 8 and 1.3 × 10 9 reads in each metagenome library after quality control (see in the supplemental material). Coassembly generated over 1.2 × 10 6 contigs that were >1,000 bp long, of which 639,258 were 13 C labeled in at least one treatment (>5× coverage in the 13 C treatment library and >1.5-fold enriched coverage relative to the corresponding 12 C controls). As expected, after normalizing for sequencing depth, the number of genes annotated from 13 C-labeled contigs was positively correlated with the number of 13 C-labeled OTUs in each treatment (Pearson's r = 0.795, P = 0.018) ( and c). The phylum representation observed for 13 C-labeled contigs differed somewhat from that observed for 13 C-labeled OTUs as determined by 16S rRNA gene sequencing ( ). This difference could be due to the loss of some contigs from 13 C-labeled metagenomic libraries on the basis of genome G+C content or to differences in the annotation methodologies used in metagenomic and 16S rRNA gene-based methods ( ). TEXT S1 Supporting text and references as described in the text. Genomic features of 13 C contigs explain variation in resource acquisition and growth dynamics. We first tested whether the targeted genomic features explained variation in resource acquisition and growth dynamics at the community level, as assessed across the entire collection of 13 C-labeled contigs ( ; ) and 13 C-labeled OTUs observed for each 13 C-labeled treatment ( ). This contig-based approach is meaningful because the 13 C source identity had a large and significant effect on the identity of 13 C-labeled taxa, with this variation driven by the overall dynamics of 13 C assimilation and growth, as previously described ( ). Three of the eight genomic features we examined explained significant variation in resource acquisition and growth dynamics ( ). Methyl-accepting chemotaxis protein (MCP) genes were positively correlated with max LFC (Pearson's r = 0.954, P = 0.002) ( ), indicating that these genes are frequent in taxa whose relative abundance increases dramatically in response to new C inputs. In addition, membrane transporter (Pearson's r = 0.907, P = 0.015) and osmotic stress response genes (Pearson's r = 0.938, P = 0.004) were both positively correlated with C source bioavailability ( and ). Features that did not explain significant variation in the 13 C-labeled contigs are discussed in .
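The feature-versus-activity tests above are simple Pearson correlations across the eight treatments, with P values adjusted for multiple comparisons by the Benjamini-Hochberg procedure (per the figure legends). A minimal sketch with hypothetical random vectors standing in for the real measurements:

```python
import numpy as np
from scipy.stats import pearsonr
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)

# Hypothetical per-treatment values (n = 8 treatments): one activity
# characteristic and the frequencies of several genomic features.
max_lfc = rng.normal(size=8)
features = {name: rng.normal(size=8) for name in ["MCP", "MT", "osmotic_stress"]}

# Pearson correlation of each feature frequency with the activity metric.
corr = {name: pearsonr(vals, max_lfc) for name, vals in features.items()}
pvals = [p for (_, p) in corr.values()]

# Benjamini-Hochberg adjustment across the feature tests.
rejected, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
for (name, (r, _)), p in zip(corr.items(), p_adj):
    print(f"{name}: r = {r:.2f}, adjusted P = {p:.3f}")
```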
FIG S3 Relationships between the frequency of the 8 genome features in 13 C-labeled contigs and all 3 in situ activity characteristics of 13 C-labeled OTUs across treatments. For all features except SMBCs, abundance is calculated as the percentage of protein-coding genes in 13 C-labeled contigs that are annotated within the genomic feature; SMBC abundance is calculated as the SMBC count divided by the total protein-coding gene count. Red or gray lines represent the linear relationships, with shading indicating the 95% confidence intervals. Red relationships are statistically significant, with P values adjusted for multiple comparisons using the Benjamini-Hochberg procedure ( n = 8). Correlation statistics are listed in . Soil consists of a complex matrix ( , ) in which microbial access to C is limited by spatial and temporal variability ( , ). Moisture is a major determinant of resource availability in soils, controlling soil matrix conductivity and tortuosity and thereby regulating rates of diffusion ( ) as well as sorption/desorption kinetics ( ). For these reasons, soil moisture is a major determinant of bacterial activity in soils ( ). While resource concentration is a major determinant of bacterial growth kinetics in aquatic environments, bioavailability is a major determinant of bacterial growth kinetics in soil ( ). Bioavailability, defined as the ability of a resource to cross the membrane, is determined in soil by solubility, sorption dynamics, and soil moisture ( , , ). High-bioavailability C sources (e.g., glucose, xylose, and glycerol) are highly soluble, are less likely to be sorbed to soil minerals, and are readily available for membrane transport, with their availability to cells governed primarily by diffusive transport as limited by soil moisture ( ). These substrates are degraded rapidly, and so elevated concentrations are ephemeral in soils ( ). Hence, to compete effectively for highly bioavailable C sources, bacteria must exploit ephemeral periods when their resources are present at high concentrations. Low-bioavailability C sources (e.g., cellulose and palmitic acid), in contrast, cannot be transported directly across the membrane until they are transformed by extracellular microbial products such as secreted enzymes ( , , ) or biosurfactants ( ). These substrates are typically insoluble in soils and are degraded over a span of weeks, months, or even years. Hence, to compete effectively for low-bioavailability C sources, soil-dwelling bacteria must invest in resource acquisition by manufacturing extracellular products that facilitate access to insoluble particulate materials. Chemotactic bacteria can move through soil pore water and water films, allowing preferential access to C sources detected by MCPs ( , ). MCPs are a dominant chemoreceptor family shared by diverse bacterial phyla ( , ), and they are widely recognized as directing chemotaxis ( , ). Our finding that MCP gene frequency increases in proportion to the max LFC of bacterial taxa ( ) suggests that chemotaxis is an important determinant of fitness for bacteria whose relative abundance increases dramatically during ephemeral periods of high resource availability. Similar explosive population dynamics are expected for organisms having a ruderal strategy as described in Grime's C-S-R framework ( ). Hence, we hypothesize that chemotaxis is adaptive in soils for growth-adapted bacteria that compete for ephemeral resources whose availability is driven by high environmental variability and that MCP gene count is a genomic feature that can help identify soil-dwelling bacteria having this life history trait. Membrane transport regulates resource uptake, and transporter kinetics have been described as a key determinant of copiotrophic and oligotrophic life history strategies in aquatic environments ( , ).
Hence, membrane transport is likely a key determinant of bacterial life history strategies in soil. We show that high membrane transporter gene frequency correlates with the ability of soil bacteria to acquire high-bioavailability C sources ( ). We hypothesize that a high membrane transporter gene count is adaptive for bacteria that compete for ephemeral, highly bioavailable C sources. In soil, a high membrane transporter gene count is likely indicative of more copiotrophic bacteria, with copiotrophs encompassing a wide diversity of life history strategies, including both ruderals and competitors as defined by Grime's framework ( ). We also hypothesize that a low membrane transporter gene count is likely an indicator of oligotrophic bacteria that compete for less bioavailable C sources in soil, with low MT gene frequency indicating a tendency toward specialization in the resources used in diverse soil habitats. Osmotic stress genes are affiliated with several cellular systems for surviving low water activity, including compatible solutes, aquaporins, and ion homeostasis ( , ). Osmotic stress systems are of vital importance for microbial survival in soils because of the high variation in water activity ( , ). We show that osmotic stress genes are more frequent in soil-dwelling bacteria that acquire C from highly bioavailable C sources ( ). Highly bioavailable C sources are transiently abundant in water-filled pore space when soils are moist ( ). Soil pores dry out rapidly as moisture becomes limiting; hence, we predict that the osmotic stress response is adaptive for bacteria that exploit resources present in water-filled pore space. In contrast, bacteria specializing in low-bioavailability C sources localize preferentially to surfaces ( ). Water films and biofilms are favored on soil surfaces ( ), buffering the organisms localized there from rapid variation in water activity. Our results suggest that the osmotic stress response is adaptive for soil-dwelling bacteria of more copiotrophic character (i.e., ruderals and competitors) and for those that compete for high-bioavailability substrates whose availability corresponds with rapid changes in water activity. One might naively predict that osmotic stress genes would be characteristic of organisms having a stress-tolerant life history strategy. The observation that osmotic stress genes do not predict a "stress-tolerant" bacterial lifestyle requires us to carefully consider how we define stress in bacterial ecology. Grime's original framework, from plant ecology, describes plant stress as limitation for light, nutrients, and/or water, which are resources required for plant growth ( ). This plant-centric definition of stress, based on resource limitation, conflicts with the microbiological definition, in which stress is usually interpreted as abiotic stress (e.g., tolerance to pH, salinity, temperature, or O 2 ). Bacteria that are adapted to resource limitation are typically defined as oligotrophs. Hence, Grime's "stress tolerator" strategy, interpreted in the proper ecological context, is indicative of bacteria having oligotrophic characteristics ( ) and not of those adapted for extremes of abiotic stress (e.g., extremophiles). These contrasting definitions of stress are a potential source of confusion when life history theory developed for plants is applied to bacteria. We propose that a better understanding of bacterial life history theory would be provided by interpreting the "S" in C-S-R as scarcity adapted rather than stress adapted ( ).
Genomic features of 13 C MAGs explain variation in resource acquisition and growth dynamics. A limitation of the contig-based analysis described above is that statistical power is low, since we have only 8 treatments. Hence, we also used 13 C MAGs to evaluate associations between genomic features and activity characteristics. We recovered 27 "medium-quality" MAGs ( ) from the 13 C-labeled contigs (>50% completeness and <10% contamination) ( ; ). We linked these 13 C MAGs to corresponding OTUs that were 13 C labeled in the same 13 C-labeled DNA sample at the same time, on the basis of taxonomic annotations (assigned by GTDB-tk) ( ). For example, the 13 C-labeled MAG Glucose_Day01_bin.1 was classified to the family Burkholderiaceae and was therefore linked to all Burkholderiaceae OTUs 13 C labeled in the glucose day 1 treatment. Three MAGs did not match any OTU (Cellulose_Day30_bin.7, PalmiticAcid_Day48_bin.4, and Vanillin_Day48_bin.1), and these unmatched MAGs were discarded from the analysis because growth and resource acquisition dynamics could not be assigned. While classification at the family level could group together functionally divergent taxa, the fact that these taxa were 13 C labeled by the same substrates, at the same time, and in the same samples increases the likelihood that these groupings are ecologically meaningful. For each 13 C-labeled MAG, activity characteristics were averaged across the matching 13 C-labeled OTUs ( ; ). We then evaluated the number of genes associated with each genomic feature, normalized for MAG size ( ; ). As before, membrane transporter genes were positively correlated with C source bioavailability (Pearson's r = 0.550, P = 0.043) ( ), and we found that transcription factor genes (Pearson's r = 0.881, P < 0.001) and secondary metabolite biosynthetic gene cluster (SMBC) abundance (Pearson's r = 0.712, P = 0.001) were also positively correlated with C source bioavailability ( and ). Features that did not explain significant variation in the 13 C MAGs are discussed in . FIG S4 Activity characteristics of the 13 C-labeled OTUs mapped to each 13 C-labeled MAG. Boxplots are colored by the treatment under which 13 C labeling occurred. Red bars indicate mean values. Three MAGs had no matching OTUs: Cellulose_Day30_bin.7, PalmiticAcid_Day48_bin.4, and Vanillin_Day48_bin.1. MAG and mapped OTU details are found in . FIG S5 Relationships between the abundance of all 8 genome features in 13 C-labeled MAGs and all 3 activity characteristics of OTUs matching MAG taxonomy and 13 C labeling. For all features except SMBCs, abundance is calculated as the percentage of protein-coding genes in the MAG that are annotated within the genomic feature; SMBC abundance is calculated as the SMBC count divided by the total protein-coding gene count. Red or gray lines represent the linear relationships, with shading indicating the 95% confidence intervals. Red relationships are statistically significant, with P values adjusted for multiple comparisons using the Benjamini-Hochberg procedure ( n = 8). Correlation statistics are listed in .
Having high numbers of transcription factors is thought to be an adaptive trait for microbes living in highly variable environments ( , , ). Certain taxa are known to be enriched in transcription factor families, but the evolutionary basis of this variation in gene frequency is not well established ( ). Our finding that transcription factor gene frequency correlates with C source bioavailability ( ) suggests that growth on ephemeral C sources favors high transcription factor gene count because this adaptive trait allows bacteria to respond effectively to high environmental variability. The metabolic and physiological changes induced by these transcription factors may include previously discussed features such as MCPs, membrane transporters, or osmotic stress systems. Our results support the idea that genomic investment in transcription factors is an adaptive trait that varies with the environmental variability of the ecological niche.

Secondary metabolites include a wide range of small molecules produced by organisms. Bacteria often use these molecules to interact with their environments. Examples include antibiotics that kill or prevent the growth of other organisms, signaling molecules that mediate intercellular interactions, and siderophores, chelators, and biosurfactants used to access insoluble nutrients ( ). Secondary metabolites can facilitate competition for limited resources ( , ), and they can even mediate microbial predation ( ). Production of secondary metabolites requires multiple genes often found in clusters (i.e., SMBCs) ( , ). We show that SMBC frequency correlates with C source bioavailability ( ). This finding runs counter to the idea that secondary metabolites are important for competition for low-bioavailability resources ( , , ). Given that this observation matches patterns observed for transcription factor and membrane transporter genes, we expect that SMBCs are favored by conditions of environmental variability and/or resource acquisition.

Genomic feature correlation in publicly available soil genomes and metagenomes. We observed through metagenomic-SIP that C source bioavailability correlates with membrane transporter gene, osmotic stress gene, transcription factor gene, and SMBC frequencies, and we hypothesize that these gene frequencies are predictive of an organism’s position on the copiotroph-oligotroph continuum. From this hypothesis, we predict that these genomic features should correlate in independent genomic and metagenomic data sets. We assessed these relationships in several data sets generated from a range of different soils (see ). Since membrane transporter gene frequency was significantly associated with C source bioavailability at both the community level ( 13 C-labeled contigs) and the genome level ( 13 C-labeled MAGs), we compared the gene frequencies for membrane transporter genes with those of transcription factor genes, osmotic stress genes, and SMBCs in each independent data set. A relationship between membrane transporter genes and both transcription factor and osmotic stress genes was supported in 4 of 7 independent data sets ( to ). We found no correlation between membrane transporter genes and SMBC frequencies within any of the data sets ( ). We also observed that MCP gene counts ( ) and predicted rRNA gene ( rrn ) copy number ( ) both correlate with max LFC when new C is added to soil.
We hypothesize that these traits are linked to ruderal strategies (a subset of copiotrophs); hence, we predict that rrn copy number should correlate with MCP gene frequency in independent data sets. We compared MCP gene frequency to the natural log of either rrn copy number (for RefSoil) or tRNA gene count (for reference metagenome MAGs). While the RefSoil database contains complete genomes with accurate rrn copy numbers, MAGs from metagenomic data sets do not provide accurate rrn annotations; therefore, we used tRNA gene abundances as a proxy, since tRNA gene count correlates with rrn copy number ( ). In further support of this proxy, we observed that rrn copy number and tRNA gene count are strongly correlated in RefSoil bacterial genomes (Pearson’s r = 0.792, P < 0.001). The natural log of rrn copy number was positively correlated with MCP gene abundance across the RefSoil data set ( ), yet the natural logs of the tRNA gene counts were not correlated with MCP gene abundance in any of the other data sets ( to ).

The correlational approach, as applied above, has two notable limitations. First, many of the genes in metagenomic data sets are poorly annotated. Inaccurate annotation can produce inaccurate gene counts for all of the gene systems we assessed. Second, adaptive trade-offs between gene systems will not produce straightforward correlations, because the concept of a trade-off implies an interaction whereby the adaptive benefit varies depending on the life history strategy of the organism ( ).

Trade-offs in genomic investment define life history strategies. Trade-offs occur when the benefit of a trait in a given environment differs between two groups. For example, increases in environmental variability might tend to favor more investment in resource acquisition for oligotrophic organisms (because an increase in resource variability in an environment lacking resources will tend to increase resource availability) but less investment in resource acquisition in copiotrophic organisms (because investing in extracellular products that enable resource acquisition provides little benefit in a highly disturbed environment). To detect such defining life history trade-offs among our 13 C-labeled MAGs, we examined relationships between genomic investment in regulatory flexibility, as defined by transcription factor gene frequency (TF/gene); genomic investment in resource acquisition, as defined by the sum of gene counts for secreted enzymes (SE) and secondary metabolite biosynthetic gene clusters (SM); and genomic investment in membrane transporters (MT). We sum SM and SE because these features represent genomic investment in extracellular products used for resource acquisition. The products of extracellular reactions must undergo transport across the membrane prior to their metabolism; hence, we express genomic investment in resource acquisition as the ratio (SE + SM)/MT. This ratio will be high for microbes producing numerous extracellular products and low for microbes that invest in uptake from the aqueous phase but are otherwise unable to acquire C from the soil matrix (where C is mostly present in particulate form or attached to soil minerals). Groups of genomes adapted to similar life history strategies should exhibit comparable genomic investment in these gene systems. We used k -means clustering based on TF/gene and (SE + SM)/MT to group all 27 MAGs into 3 clusters that we hypothesized would represent the C-S-R strategies.
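A minimal sketch of this clustering step in R, using the flexclust package named in Materials and Methods; the gene counts below are invented for illustration:

```r
library(flexclust)  # k-centroid clustering, as used in Materials and Methods

# Illustrative per-MAG gene counts (invented for this sketch)
mags <- data.frame(
  tf    = c(250, 300, 120, 410, 90, 180),         # transcription factor genes
  genes = c(4200, 4900, 3100, 6000, 2800, 3900),  # total protein-coding genes
  se    = c(40, 80, 25, 110, 15, 60),             # secreted enzyme genes
  sm    = c(6, 12, 4, 18, 2, 9),                  # secondary metabolite gene clusters
  mt    = c(350, 420, 180, 300, 150, 260)         # membrane transporter genes
)

# Life history signatures: regulatory flexibility and resource acquisition
mags$tf_per_gene <- mags$tf / mags$genes
mags$acq_per_mt  <- (mags$se + mags$sm) / mags$mt

# Scale and center the two signatures, then cluster into k = 3 putative C-S-R groups
x <- scale(mags[, c("tf_per_gene", "acq_per_mt")])
set.seed(1)
fit <- kcca(x, k = 3, family = kccaFamily("kmeans"))
clusters(fit)  # cluster assignment for each MAG
```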
We then determined whether the properties of the genomes in each cluster matched predictions from the C-S-R framework. We observed evidence for trade-offs in both regulatory flexibility and resource acquisition among these three clusters. Transcription factor genes tended to increase with total gene count (as expected), but TF/gene differed between the three clusters ( ). When the genome size was small, the three clusters differed little in transcription factor gene count, but as total gene count increased, the clusters diverged, with one cluster having less regulatory flexibility than the other two ( ). We also observe that SE + SM gene counts tend to increase in proportion to membrane transporter gene counts in two clusters (as expected since both are associated with extracellular resources), but the other cluster, which has the highest membrane transporter gene counts, maintains low SE + SM counts ( ). When these relationships are plotted together, we observe that one cluster tends to increase relative investment in resource acquisition [(SE + SM)/MT] along with regulatory flexibility (TF/gene), while the other two have the opposite response ( ). These three clusters demonstrate adaptive trade-offs consistent with Grime’s C-S-R framework. The scarcity strategists (i.e., oligotrophs [S]) have low regulatory flexibility ( ) and generally low genomic investment in transport ( ), but their genomic investment in resource acquisition tends to increase in proportion to regulatory flexibility ( ). That is, scarcity strategists whose ecological niches are the most constant require little genomic investment in regulatory flexibility and resource acquisition, while those whose niches are more variable require more investment in regulatory flexibility and resource acquisition. In contrast, ruderals (R) have high regulatory flexibility ( ) and high investment in transport ( ), but they have low genomic investment in resource acquisition ( and ). Finally, the competitive strategists (C) have intermediate to high levels of regulatory flexibility ( ) and intermediate investment in membrane transport ( ), but high genomic investment in resource acquisition ( ), with little relationship between resource acquisition and regulatory flexibility ( ). We expect many intermediate strategies among the C-S-R vertices, and as expected, we see that scarcity specialists adapted for high levels of regulatory flexibility are difficult to distinguish from competitive specialists adapted for lower levels of regulatory flexibility. MAGs assigned to the three clusters differ in their resource acquisition and growth dynamics, consistent with the expectations of life history theory. Ruderals and competitors acquired C sources that had significantly higher bioavailability than scarcity specialists ( ), and they also consumed a higher diversity of C sources than the scarcity specialists; this difference was significant ( ). Ruderals, however, had significantly higher max LFC relative to competitors, indicating the ability to increase population size dramatically in response to C input ( ). In terms of genomic features, we see that both ruderals and competitors have higher transcription factor and osmotic stress gene frequencies than scarcity specialists ( ), while only the ruderals have higher membrane transporter gene abundance than scarcity specialists, and these differences are significant ( ). 
Ruderals are distinguished from both competitors and scarcity specialists by their low investment in secreted enzymes and high investment in MCPs ( ). Competitors are distinguished from both scarcity and ruderal specialists by their higher investment in adhesion ( ). The general theme is that both ruderals and competitors have copiotrophic characteristics, but ruderals appear to be opportunists with adaptations that maximize their ability to exploit ephemeral resources, while competitors have a greater genomic investment in resource acquisition. Scarcity specialists appear less well adapted for regulatory flexibility and more likely to specialize in their C sources ( ). It is interesting to note that scarcity specialists did not have a high investment in adhesion genes despite their tendency to use C sources of low bioavailability. One hypothesis to explain this observation is that scarcity specialists exhibit metabolic dependency such that their access to insoluble C sources is facilitated by other members of the community. Support for this hypothesis comes from the fact that competitive specialists become labeled by low-bioavailability substrates after 7 to 14 days, while the scarcity specialists become labeled by these same substrates only after 14 to 48 days ( ).

Predicting ecological strategies from genome features. We used the parameters TF/gene and (SE + SM)/MT, defined from the three 13 C-labeled MAG clusters described above, to predict life history strategies for RefSoil genomes. The resulting RefSoil genome clusters, predicted from these genome parameters, exhibited genomic characteristics representative of the expected life history trade-offs ( to ). The relationship between TF/gene and (SE + SM)/MT is roughly triangular, as we would expect for the C-S-R framework ( ). Yet it is apparent that a vast diversity of intermediate life history strategies exists ( ), and this is also an expected result, since relatively few taxa will maximize adaptive trade-offs, while most will optimize adaptive traits to suit their particular ecological niche. Genomes having ruderal characteristics are enriched in the Gammaproteobacteria and Firmicutes ( ; ), as we would expect, though members of these phyla can be found in all three clusters ( ), owing to the vast diversity of these groups. In addition, genomes having competitive characteristics are highly enriched in the Actinobacteria and Betaproteobacteria , while genomes characteristic of scarcity specialists are enriched in the Alphaproteobacteria and other diverse phyla (e.g., Verrucomicrobia , Acidobacteria , Gemmatimonadetes , Chloroflexi , etc.) whose members are difficult to cultivate in laboratory media ( ; ). Most bacterial phyla are metabolically and ecologically diverse, and we would not expect homogeneity among species within a phylum. In addition, previous observations show that C assimilation dynamics in soil are not well predicted by phylum-level classification ( ). However, certain strategies are more common in some phyla than others, and these patterns, along with the taxonomic makeup of our MAG clusters ( ), match general expectations. Furthermore, the three clusters we defined for RefSoil genomes possess patterns of genomic investment that match predictions derived from the C-S-R framework and are consistent with predictions based on the 13 C-labeled MAGs ( ; ).

10.1128/mbio.03584-22.4 TABLE S2 Summary of parameter values for inferred life history clusters.
A symbol (+) was added every time a cluster was observed to have a significantly higher value than another cluster in both the 13 C-MAG comparisons and the RefSoil comparisons. Abbreviations are defined in the text.

10.1128/mbio.03584-22.10 FIG S6 (A) Distribution of RefSoil bacterial taxa (at the phylum or class level) across the life history strategies predicted from clusters of TF/gene and (SE + SM)/MT. Percentage of genomes from each taxon in each predicted life history strategy, with the total number of genomes used above each stacked bar. (B) Percentage of genomes in each predicted life history strategy cluster that are classified to each phylum/class. Phylum/class abbreviations are Actino., Actinobacteria ; Alpha., Alphaproteobacteria ; Bact., Bacteroidetes ; Cyano., Cyanobacteria ; Delta., Deltaproteobacteria ; Firm., Firmicutes ; Gamma., Gammaproteobacteria ; Spiro., Spirochetes ; and <10, taxa that contain less than 10 genomes. (C) Genomic investment in gene systems differs across life history strategies predicted from TF/gene and (SE + SM)/MT. Data are from RefSoil genomes with k -means clustering trained by clusters identified from 13 C-labeled MAGs. In all cases, variation across life history clusters was first tested with Kruskal-Wallis tests, and, where statistically significant ( P < 0.05), post hoc pairwise tests were performed using Dunn tests.

Conclusions. Metagenomic-SIP enables us to link genome features to growth dynamics and C acquisition dynamics of bacteria as they occur in soil. We used a targeted approach, employing data from a multisubstrate DNA-SIP experiment, to select bacterial genomes that maximize life history trade-offs. We identified genomic features (MCP genes, membrane transporter genes, osmotic stress genes, transcription factor genes, and SMBCs) that are associated with the growth and C acquisition dynamics of soil-dwelling bacteria. We also identified genomic signatures [TF/gene and (SE + SM)/MT] that represent life history parameters useful in inferring bacterial ecological strategies from genome sequence data. We show that while many intermediate strategies exist, there are diverse taxa that maximize the life history trade-offs defined by these genomic parameters. The genomic signatures we identified are readily assessed using genomic and metagenomic sequencing, and these parameters may be useful in the assessment of bacterial life history strategies.
Soil microcosms, DNA extraction, and isopycnic centrifugation. The multisubstrate DNA-SIP experiment that provided the DNA samples we used for metagenomic-SIP has been described in detail elsewhere ( ). An overview of the experimental design for this prior DNA-SIP experiment is provided for reference in the supplemental material. Briefly, a mixture of 9 different C sources was added to soil at 0.4 mg C g −1 dry soil each (each representing about 3.3% of total soil C), moisture was maintained at 50% water-holding capacity, and sampling was performed destructively over a period of 48 days. All treatments were derived from the exact same soil sample (from an agricultural field managed under a diverse organic cropping rotation), they received the exact same C sources, and they were incubated under the exact same conditions; the only variable manipulated was the identity of the 13 C-labeled C source.
Eight 13 C treatments from this prior experiment (each defined by the identity of the 13 C source and the time of sampling) were chosen for metagenomic-SIP because the previous analysis ( ) indicated that their 13 C-labeled DNA was enriched in bacteria that maximized differences in life history strategy ( and see also from the prior study ). The treatments selected for metagenomic-SIP were glucose, day 1; xylose, day 6; glucose, day 14; glycerol, day 14; cellulose, day 30; palmitic acid, day 30; palmitic acid, day 48; and vanillin, day 48. We also sampled 12 C control treatments for days 1, 6, 14, 30, and 48 to facilitate identification of 13 C-labeled contigs and improve metagenome assembly and binning ( ). DNA used in this experiment (after undergoing extraction, isopycnic centrifugation, and fractionation) was the same as described previously ( ) and was archived at −20°C for ~2 years prior to use in this study. Metagenomic sequencing. For each of the eight treatments and five controls, we combined 10 μL of purified, desalted DNA solution from each CsCl gradient fraction having a buoyant density between 1.72 and 1.77 g mL −1 . By pooling equal volumes from these fractions, we aimed to replicate the composition of the DNA pool of the entire heavy buoyant density window (1.72 to 1.77 g mL −1 ). Metagenomic-SIP simulations have demonstrated that this buoyant density range sufficiently enriches 13 C-labeled bacterial DNA ( ). DNA amplification and sequencing were performed by the Joint Genome Institute (JGI; Berkeley, CA, USA) using standard procedures. In short, DNA was amplified and tagged with Illumina adaptors using a Nextera XT kit (Illumina Inc, San Diego, CA, USA), and sequencing was performed on the NovaSeq system (Illumina Inc.). Read processing, metagenome assembly and annotation, and MAG binning. Quality control read processing and contig assembly were performed by the JGI as previously described ( ). Contigs were generated via terabase-scale metagenome coassembly from all 13 libraries using MetaHipMer ( ). Gene calling and annotation of assembled contigs were performed through JGI’s Integrated Microbial Genomes and Microbiomes (IMG/M) system ( ). Quality-filtered reads, coassembled contigs, and IMG annotations can be accessed through the JGI genome portal (CSP ID 503502, award at https://genome.jgi.doe.gov/portal/Micmetcarbocycle/Micmetcarbocycle.info.html ). We mapped reads from each library to all contigs that were over 1,000 bp in length using BBMap ( ) and then calculated contig coverages using jgi_summarize_bam_contig_depths from MetaBAT ( ). As we were primarily interested in genomes of bacteria that incorporated 13 C into their DNA, we only used putatively 13 C-labeled contigs to bin metagenome-assembled genomes (MAGs). Within each treatment, we defined a 13 C-labeled contig as having an average read coverage greater than 5× in the 13 C-treatment library and a 1.5-fold increase in coverage from the 12 C control to 13 C treatment library after accounting for the difference in sequencing depths. In calculating the fold increase in coverage, we normalized for sequencing depth by dividing coverage by read counts. We binned 13 C-labeled contigs separately for each treatment based on both tetranucleotide frequency and differential coverage with MetaBAT2 ( ), MaxBin ( ), and CONCOCT ( ). Default settings were used with the exceptions that minimum contig lengths were set to 1,000 bp for both MaxBin and CONCOCT and 1,500 bp for MetaBAT2. 
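The 13 C-labeled contig definition above reduces to two coverage filters. A minimal R sketch, with invented coverages and library sizes (the study’s coverages came from BBMap mappings summarized with MetaBAT):

```r
# Sketch: flag 13C-labeled contigs (invented values for illustration).
cov_13c   <- c(12.0, 6.2, 4.0, 30.5)  # mean contig coverage in the 13C treatment library
cov_12c   <- c( 2.0, 5.8, 0.5,  3.0)  # mean contig coverage in the paired 12C control library
reads_13c <- 8e8                      # total reads in the 13C library
reads_12c <- 6e8                      # total reads in the 12C library

# Normalize coverage by library read count, then apply both criteria:
# (1) >5x raw coverage in the 13C library;
# (2) >1.5-fold depth-normalized coverage increase over the 12C control.
norm_13c <- cov_13c / reads_13c
norm_12c <- cov_12c / reads_12c
labeled  <- cov_13c > 5 & (norm_13c / norm_12c) > 1.5
labeled  # TRUE for contigs passing both filters
```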
Final MAGs were generated by refining bins from all three binning tools using metaWRAP ( ). Coverage information used during each binning run was from the paired 13 C treatment and 12 C control libraries, not the entire set of libraries. Therefore, we ran MAG binning eight separate times, once for each treatment. MAG qualities were calculated using CheckM ( ). For further analyses, we only used MAGs with over 50% completeness and less than 10% contamination (i.e., “medium-quality” MAGs) following the guidelines for minimum information about metagenome-assembled genomes ( ). The binning approach we employed used coassembled contigs but binned these contigs separately across the eight 13 C-labeled treatments. As such, some MAGs were identified in multiple treatments if their genomes were 13 C labeled by multiple 13 C-labeled C sources. These sister MAGs might represent a single population that can derive its C from multiple C sources or functionally distinct subpopulations each preferentially adapted for a different C source. Strain heterogeneity has previously been implicated as a cause of poor binning outcomes with soil metagenomes ( ). Traditional MAGs tend to include the entire pan-genome of heterogeneous strains representing an individual taxon ( ). Our 13 C labeling-informed binning strategy should have a greater ability to differentiate functionally differentiated subpopulations than traditional binning strategies. Further characteristics of our MAGs are discussed in . Statistical analysis and computing. Unless otherwise stated, all statistical analyses were performed and all figures generated with R ( ) version 3.6.3. Code for all analyses and most processing is available through GitHub ( https://github.com/seb369/CcycleGenomicFeatures ). Testing associations between genomic features and activity characteristics. We first assessed associations between genomic features and activity characteristics by comparing the genetic composition of 13 C-labeled contigs with the averaged characteristics of the 13 C-labeled OTUs identified in each corresponding treatment from our prior study ( ). These OTUs were clustered at 97% sequence identity of 16S rRNA gene V4 region amplicons. We developed a list of eight genome features hypothesized to be associated with life history strategies and microbial C-cycling activity in soil environments as follows. (i) MCP genes were identified by the product name “methyl-accepting chemotaxis protein.” (ii) Transporter genes were identified by product names containing the terms “transporter,” “channel,” “exchanger,” “symporter,” “antiporter,” “exporter,” “importer,” “ATPase,” or “pump.” The resulting gene list was then filtered to include only those predicted by TMHMM ( ) (version 2.0c) to have at least one transmembrane helix. 
(iii) Adhesion-associated genes included adhesins and holdfast proteins and were identified by the product names “holdfast attachment protein HfaA,” “curli production assembly/transport component CsgG/holdfast attachment protein HfaB,” “adhesin/invasin,” “fibronectin-binding autotransporter adhesin,” “surface adhesion protein,” “autotransporter adhesin,” “adhesin HecA-like repeat protein,” “ABC-type Zn 2+ transport system substrate-binding protein/surface adhesin,” “large exoprotein involved in heme utilization and adhesion,” “Tfp pilus tip-associated adhesin PilY1,” and “type V secretory pathway adhesin AidA.” (iv) Transcription factor genes were first identified by product names containing the terms “transcriptional regulator,” “transcriptional repressor,” “transcriptional activator,” “transcription factor,” “transcriptional regulation,” “transcription regulator,” or “transcriptional [family] regulator,” where “[family]” is replaced by some gene family identification. Additional transcription factor genes were identified from the protein FASTA sequences using DeepTFactor ( ). (v) Osmotic stress-related genes were identified by product names containing the terms “osmoregulated,” “osmoprotectant,” “osmotically inducible,” “osmo-dependent,” “osmolarity sensor,” “ompr,” and “ l -ectoine synthase.” (vi) Dormancy-related genes covered three different mechanisms ( ). Endospore production was indicated by products containing the name “Spo0A,” though no Spo0A genes were found. Dormancy resuscitation was indicated by products containing the name “RpfC,” a resuscitation-promoting factor. Dormancy-related toxin-antitoxin systems were indicated by products containing the names “HipA,” “HipB,” “mRNA interferase MazF,” “antitoxin MazE,” “MazEF,” “RelB,” “RelE,” “RelBE,” “DinJ,” or “YafQ.” (vii) Secreted enzyme genes were first annotated against three enzyme databases to include enzymes important for the breakdown of organic matter. Carbohydrate-active enzymes were annotated by mapping protein sequences to the dbCAN ( ) database (release 9.0) with HMMER using default settings. Of these enzyme genes, only those in the glycoside hydrolase (GH), polysaccharide lyase (PL), or carbohydrate esterase (CE) groups were retained. Proteases were annotated by mapping protein sequences to the MEROPS ( ) database (release 12.3) using DIAMOND BLASTP alignment with default settings except an E value of <1 × 10 −10 . Enzymes containing an α/β hydrolysis unit were annotated by mapping protein sequences to the ESTHER ( ) database (downloaded 11 June 2021) with HMMER using default settings. While some enzymes containing α/β hydrolysis units are included among the carbohydrate-active enzymes, this group also includes lipases. All annotated enzyme genes from these three groups were then filtered to those containing a secretion signal peptide sequence annotated by SignalP ( ) (version 5.0b). Gram-positive annotations were used for any genes annotated to the Firmicutes or Actinobacteria phyla, and Gram-negative annotations were used for all others. (viii) Bacterial secondary metabolite biosynthetic gene clusters (SMBCs) were predicted using antiSMASH ( ) (version 5.1.2) with default settings. For each genomic feature, except for SMBCs, we calculated the percentage of all protein-coding genes from each 13 C-labeled contig pool (i.e., 13 C labeled in each treatment) that were annotated as described above. For SMBCs, we divided the number of SMBCs in each 13 C-labeled contig pool by the number of protein-coding genes in that pool.
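Several of the feature tallies in items (i) to (viii) above reduce to keyword matching against annotated product names followed by a simple normalization. A minimal R sketch, using the transporter terms from item (ii) and invented product strings:

```r
# Sketch: count feature genes by product-name keywords, then express the count
# as a percentage of protein-coding genes (product strings invented).
products <- c(
  "ABC-type sugar transporter",
  "methyl-accepting chemotaxis protein",
  "osmoprotectant uptake system permease",
  "hypothetical protein",
  "transcriptional regulator, LysR family"
)

transporter_terms <- c("transporter", "channel", "exchanger", "symporter",
                       "antiporter", "exporter", "importer", "ATPase", "pump")
is_transporter <- grepl(paste(transporter_terms, collapse = "|"),
                        products, ignore.case = TRUE)

freq_pct <- 100 * sum(is_transporter) / length(products)
freq_pct  # transporter gene frequency (%); SMBCs instead use cluster count / gene count
```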
We then measured Pearson’s correlation between the genomic feature abundance and each of the activity characteristics averaged across the OTUs that were also 13 C labeled in each treatment. Within this bulk measurement, a greater percentage of the protein-coding gene pool annotated to a genomic signature can indicate that (i) a greater proportion of the represented genomes contain those genes, (ii) the represented genomes have multiple copies of those genes, or (iii) there is a greater diversity of those genes within the represented genomes. To account for the increased false-discovery rate with multiple comparisons, we adjusted P values within each activity characteristic using the Benjamini-Hochberg procedure ( n = 7). Examining genomic signatures of life history strategies in MAGs. We next assessed associations between genomic features and activity characteristics by comparing the genetic composition of 13 C-labeled MAGs with the averaged characteristics of the OTUs mapping to those MAGs. As very few 16S rRNA genes were recovered and binned, we matched MAGs to 13 C-labeled OTUs based on taxonomy and 13 C-labeling patterns. MAG taxonomy was assigned using GTDB-Tk ( ). MAGs were taxonomically mapped to the set of OTUs that matched at the highest corresponding taxonomic level, and then this set of OTUs was filtered to include those that were 13 C labeled in the same treatment as the MAG. While previous observations indicate that high taxonomic ranks are a poor predictor of life history traits ( ), here, we are using taxonomy to match 13 C MAGs and 13 C OTUs that are 13 C labeled in the same sample, by the same substrate, and at the same time. In this way, our MAG-OTU matches have been filtered by function as a result of stable isotope probing, and they are not a random draw from the entire community. This approach leverages isotopic labeling to enhance the functional coherence of MAG-OTU matching while minimizing loss of information due to annotation errors and application of arbitrary sequence cutoffs. Genomic features within the contigs of each MAG were determined as described above, except that for secreted enzymes, Gram-positive or Gram-negative SignalP predictions were assigned based on MAG taxonomy. Gene and SMBC counts were adjusted as before but based on the total protein-coding gene count of the MAGs. We then measured Pearson’s correlation between the genomic feature abundance within the MAGs and each of the activity characteristics averaged across the OTUs mapped to the MAGs. To account for the increased false-discovery rate with multiple comparisons, we adjusted P values within each activity characteristic using the Benjamini-Hochberg procedure ( n = 8).
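The correlation-plus-correction pattern used above is straightforward to reproduce. Below is a minimal Python sketch: Pearson correlations of several genomic feature abundances against one activity characteristic, followed by Benjamini-Hochberg adjustment within that characteristic. All numbers are synthetic placeholders, and the study performed the equivalent analysis in R.

```python
import numpy as np
from scipy.stats import pearsonr
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)

# Hypothetical data: one value per 13C-labeled treatment (8 treatments).
# Each genomic feature is the % of protein-coding genes in that treatment's
# 13C-labeled contig pool carrying the annotation; the activity characteristic
# is averaged over the treatment's 13C-labeled OTUs from the prior SIP study.
features = {
    "MCP": rng.uniform(0.0, 0.5, 8),
    "transporters": rng.uniform(2.0, 6.0, 8),
    "transcription_factors": rng.uniform(3.0, 8.0, 8),
    "osmotic_stress": rng.uniform(0.0, 0.3, 8),
}
activity = rng.normal(size=8)  # e.g., mean latency of 13C labeling

names, pvals = [], []
for name, abundance in features.items():
    r, p = pearsonr(abundance, activity)
    names.append(name)
    pvals.append(p)
    print(f"{name}: r = {r:.2f}, raw p = {p:.3f}")

# Benjamini-Hochberg correction within this one activity characteristic,
# across the tested genomic features.
_, p_adj, _, _ = multipletests(pvals, method="fdr_bh")
for name, p in zip(names, p_adj):
    print(f"{name}: BH-adjusted p = {p:.3f}")
```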
Examining genomic signatures of life history strategies with independent studies. We analyzed publicly available soil microbiome data sets to determine whether the genomic relationships we observed in 13 C-labeled MAGs were representative of soil-dwelling bacteria. Seven data sets were chosen: RefSoil ( ), Diamond et al. ( ), Yu et al. ( ), Wilhelm et al. ( ), Wilhelm et al. ( ), Zhalnina et al. ( ), and Li et al. ( ). Assemblies from references , , and were downloaded from GenBank on 21 June 2021 (NCBI accession numbers given in ). Assemblies from and were acquired from the authors. Assemblies from Li et al. ( ) were downloaded from figshare ( https://figshare.com/s/2a812c513ab14e6c8161 ). Annotation was performed identically for all assemblies to avoid biases introduced by different annotation pipelines. Protein-coding genes were identified and translated using Prodigal ( ) through Prokka ( ). Transcription factor genes, SMBCs, and genes encoding transmembrane helices were further annotated as described above. Transporter genes, transcription factor genes, MCP genes, osmotic stress response genes, and SMBCs were identified, and abundances were calculated as described above. 16S rRNA genes and tRNA genes were identified from Prokka annotations. Pearson correlations were analyzed between transporter gene abundances and each of the transcription factor gene abundances, osmotic stress response gene abundances, and SMBC abundances, and between MCP gene abundances and the natural log of 16S rRNA gene counts or tRNA gene counts, separately for each independent data set. Within each data set, P values were adjusted for multiple comparisons using the Benjamini-Hochberg procedure ( n = 4). Using trade-offs to define and predict life history strategies. The C-S-R framework predicts evolutionary trade-offs in energy allocation to resource acquisition across habitats that vary temporally (e.g., variation in disturbance frequency). Since deletion bias in microbial genomes produces streamlined genomes of high coding density, we can assess evolutionary investment in a particular cellular system by quantifying the genomic resources devoted to the operation of that system. That is, genetic information must be replicated and repaired with each generation; hence, energy allocation to a given cellular system over evolutionary time can be assessed as the proportion of the genome devoted to that system. To identify putative life history strategies for 13 C-labeled MAGs, we used k-means clustering to group MAG-based genomic investment in transcription factors and resource acquisition. Investment in transcription factors was defined as the transcription factor gene count divided by the total gene count (TF/gene). Relative investment in resource acquisition was determined by summing secreted enzyme and SMBC gene counts, removing duplicates found in both categories, and then dividing by the number of membrane transporter genes [(SE + SM)/MT]. k-means clustering was performed using k-centroid cluster analysis with the R package flexclust ( ) after scaling and centering the two values and using k = 3. Statistical significance was assessed using the Kruskal-Wallis test, and the Dunn test was used for post hoc comparisons. We calculated the same trade-offs in genomic investment [TF/gene and (SE + SM)/MT] for RefSoil genomes. Predicted clusters for RefSoil genomes were made using these two genomic signatures as inferred by the R package flexclust ( ), using the three 13 C-labeled MAG clusters as the training data set. Differences in genomic investments for the eight previously discussed genomic features were then assessed across clusters using the Kruskal-Wallis test, with the Dunn test used for post hoc comparisons. However, in this analysis, adhesion genes were identified as genes with product names containing the terms “adhesion” or “adhesins” because the previously used product names were not found in these annotations.
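The clustering-and-prediction workflow in this subsection can be sketched as follows. The study used k-centroid clustering via the R package flexclust; the Python sketch below substitutes scikit-learn's ordinary k-means as a stand-in, and all investment values are synthetic placeholders.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

# Hypothetical genomic investments for 13C-labeled MAGs:
# column 0: TF/gene, column 1: (SE + SM)/MT.
mag_investments = np.column_stack([
    rng.uniform(0.02, 0.10, 40),
    rng.uniform(0.1, 2.0, 40),
])

# Scale and center both signatures, then cluster with k = 3.
scaler = StandardScaler().fit(mag_investments)
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
mag_clusters = kmeans.fit_predict(scaler.transform(mag_investments))

# Assign putative life history clusters to reference genomes (e.g., RefSoil)
# using the MAG-trained model, mirroring the flexclust prediction step.
ref_investments = np.column_stack([
    rng.uniform(0.02, 0.10, 10),
    rng.uniform(0.1, 2.0, 10),
])
ref_clusters = kmeans.predict(scaler.transform(ref_investments))
print(mag_clusters, ref_clusters)
```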
Offline text-independent writer identification using a codebook with structural features
79f3e932-205d-4b5f-b0cb-cde2de074945
10128938
Forensic Medicine[mh]
Manual handwriting analysis depends mainly upon the identification of distinguishing patterns in a handwriting sample, which in turn can be compared with the patterns encapsulated in a new handwritten document from a known writer . The rapid development of computing resources has enabled the forensic community to develop automated systems to replace manual handwriting analysis. Hence, many systems have been developed to solve problems connected to forensic science. Some of these systems deal directly with the problem of writer identification as a part of forensic science, while the rest address other fields. Text recognition is also a broad problem that has been studied extensively, and some of these studies address the writer recognition topic in one way or another. Writer recognition involves many problems, such as writer identification, writer verification, writer retrieval, handedness classification, and gender classification . Writer identification is the task of assigning an unknown handwritten document to a specific scribe from a group of known scribes. Writer verification is the task of comparing a sample handwritten document with another handwritten document from a known author for the sake of accepting or rejecting the authority of its scribe . According to this definition, writer identification is a one-to-many classification problem, while writer verification is a one-to-one classification problem, which implies that the identification problem is harder than the verification problem, as reported in . Over time, a writer develops a specific writing style that encapsulates features characterizing his or her handwriting. Examples of these features are the curvature of the handwritten text and the appearance frequency of basic details, such as letters or even parts of letters . Hence, writer identification can be used as a biometric technique, as each writer usually follows a writing style that differs from the styles adopted by other writers . Writer identification gains importance from a variety of applications, including forensic analysis , historical document analysis , musical score analysis , and personalized recognition systems , and this importance encourages the research community to pay close attention to writer identification. As a result, a plethora of valuable approaches has been proposed. However, writer identification is still a hard and challenging problem due to two main facts. First, some languages have different writing styles; the Arabic language, for example, can be written in several styles, including NASKH, REQA’A, and FARISI. Furthermore, some languages can be written in a cursive or separated-letters manner. Second, each writer is subject to different writing qualities, or intra-writer variations, over a short-term period. Moreover, the writing habits adopted by the same writer can develop and change over a long-term period . In this paper, an effort has been made to develop an automatic system for offline text-independent writer identification for Arabic and English handwritten scripts using the concept of a codebook with structural features. To achieve this objective, an insightful literature review of existing methods was performed to identify and address the lacking aspects of the domain. We found that although there are many studies on Arabic handwritten writer identification, very few studies have been performed on a large Arabic dataset with diverse textual content.
Furthermore, several state-of-the-art studies observe that analyzing small fragments of handwriting images is superior to analyzing the global texture of handwriting images. Therefore, we designed two effective features to capture properties of small segments from the connected component contours of handwritten documents, in order to check and confirm this observation. In summary, our contributions can be listed as follows: (i) a comprehensive literature review was performed to identify the lacking aspects of the writer identification field; this review provides insightful comments on related studies, highlighting their major contributions and limitations; (ii) two effective structural features are proposed: the first, called CONtour point CONcavity/CONvexity (CON 3 ), is extracted based on the concavity and convexity of the handwriting contour, and the second represents the contour point curve angle (CPCA); (iii) a system utilizing the proposed features in the writer identification domain is designed; and (iv) the proposed writer identification method is applied to two large datasets, the English IAM and the Arabic KHATT, the latter being a large dataset with diverse textual content. The paper is organized as follows. The literature contributions to writer identification are introduced in section 2. Section 3 presents the proposed method in detail. The datasets and the experimental results are discussed in sections 4 and 5, respectively. Finally, the paper is concluded in section 6. As stated earlier, the importance of the writer identification problem encourages the research community to pay close attention to it, and many studies have been proposed during the last few decades. Different aspects can be used to categorize these studies and to identify their pros and cons. According to the type of handwritten document sample, writer identification systems can be broadly categorized into online and offline writer identification. Online writer identification is performed during the writing process using special devices such as tablets. On the other hand, offline writer identification uses digitized images of handwritten documents after the writing process itself. Offline identification is harder than online identification due to the lack of information that is available during online identification, such as velocity, pen pressure, and writing trajectory . Based on textual content, writer identification can be divided into text-dependent and text-independent techniques . In text-dependent writer identification, writers have to copy the same text contents, which are then used in training and testing. On the other hand, text-independent writer identification imposes no restrictions on text content. Therefore, text-independent writer identification is the closest to real-world scenarios, as reported in . Considering the type of features as a functional context, the methods reported in the literature can be categorized into four main groups: structural-based, textural-based, grapheme-based, and auto-learned methods. Structural-based methods consider the allograph shapes and then apply a grapheme-emission probability distribution. For this reason, some studies in the literature refer to these methods as statistical methods, which usually require some preprocessing steps to extract the structural features.
Structural-based systems have been noted to achieve better identification rates, although at the cost of run time due to the complexity of the preprocessing steps . Such preprocessing steps include binarization, edge detection, and segmentation. On the other hand, textural-based systems are more efficient regarding runtime, as they do not need additional steps such as segmentation, but are known to have lower identification rates . Structural-based methods can be further categorized into connected component-based methods [ – ], contour direction-based methods , and contour pattern-based methods . Although structural-based methods are more intuitive and stronger, they are very vulnerable to slight variations in character or allograph characteristics, such as slant or aspect ratio. Additionally, these methods concentrate mainly on the properties of the allographs or characters themselves, thereby neglecting important information captured by the relationships between allographs drawn in the same word. Furthermore, relying on preprocessing steps is considered a major limitation of these methods, since if a preprocessing step fails, then by necessity, all successive steps, such as feature extraction and classification, are subject to failure. He et al. proposed a novel feature in which they specified a reference point inside the handwritten text and then calculated the stroke-length distribution in each direction surrounding that reference point. Their proposed feature (Junclet) is said to be a simple and efficient local descriptor. It is worth mentioning that their method can be considered one of the few methods that touch on cross-script writer identification. The method is evaluated on a new challenging dataset in which each writer participates in writing text in both the English and Chinese languages. The authors reported that the Junclet feature is scale- and rotation-invariant, and the experiments proved that the Junclet’s atomic elements are promising in different applications related to text recognition. Khalifa et al. presented a new method for writer identification that employs multiple codebooks instead of a single codebook. The algorithm uses kernel discriminant analysis via spectral regression (SR-KDA) to reduce feature vector dimensionality and avoid the overfitting problem. The method was evaluated on two public datasets, IAM and ICFHR, and the experiments showed that fusing multiple codebooks significantly enhances performance, outperforming a single codebook by almost 11%. Graz et al. presented a simple interest point-based system for writer identification using novel descriptors that capture the geometric relationships among different parts of handwritten documents. The descriptors represent the probability density functions of the distribution of strokes, junctions, endings, and loops. The proposed system has the advantages of simplicity and efficiency, but at the cost of needing a large amount of data to acquire stable models. The system was evaluated on four datasets, IAM, ICDAR’11F, ICDAR’11C, and ICDAR’13, and the authors reported that the results are comparable to those achieved by a complex interest point-based system from the literature. Tang et al. presented a system for writer identification based on some structural features.
The system extracts SIFT features and then modifies the SIFT descriptors by replacing the dominant orientation with the real orientation of an interest point. The TD descriptor was proposed to capture the relationships among the contour points of three words. The TD descriptor is a modification of the well-known hinge feature proposed in . The proposed system was evaluated on two public datasets from different language domains, and the experimental results show that the proposed method is comparable to state-of-the-art methods on the same datasets. Djeddi et al. proposed a system that uses edge-hinge, edge-direction, and run-length features. The extracted features are then used by a multiclass support vector machine (SVM) classifier, and the proposed system is one of the first to be applied to a large Arabic dataset (KHATT) involving handwritten documents from 1000 participants. The experiments showed that the combination of run-length and edge-hinge features achieved the best results at 84.10%. Textural-based methods are sometimes referred to as transformation methods because handwritten script images are considered special textures, and some transformation techniques are applied before feature extraction. Textural-based methods are more efficient concerning runtime, as they do not need additional preprocessing steps such as segmentation, but are noted to have lower identification performance . Furthermore, textural-based methods usually need more data to extract reliable and highly discriminative features, which is not the situation in real-world scenarios, where forensic experts often encounter small pieces of text to be examined against a writer’s identity. Hannad et al. presented a textural-based system for writer identification from handwritten documents. The system first divides a handwritten document into small fragments and then individually considers each small fragment as a texture. The proposed system is evaluated against the Arabic IFN/ENIT and English IAM datasets. The authors reported that the proposed system achieves 94.89% on IFN/ENIT and 89.54% on IAM using the complete set of writers from the two datasets. Chahi et al. proposed an offline handwriting writer identification system in which the learning method exploits small local regions of the handwritten document. The document is segmented into connected components, which in turn are fed into a cross multiscale locally encoded gradient patterns (CLGP) operator to compute a feature vector. As a classifier, they applied a dissimilarity measure using the Hamming distance, and the system was evaluated on six public datasets from different domains. The proposed descriptor achieved the highest results on some datasets and competitive results on the others. Abbas et al. proposed a system for multiscript text-independent offline writer identification. The proposed system builds a column histogram by crossing local binary patterns (LBP) with different parameter settings to capture the local textural information from a handwritten sample. This operator is then augmented with the oriented basic image features (oBIFs) column histogram. For the classification task, the system employs a multiclass SVM and achieves results competitive with state-of-the-art methods. Christlein et al. proposed a system for offline writer identification using a local descriptor called RootSIFT to capture the local properties of an individual’s handwritten documents.
The system replaced the kernel feature of the GMM model with SIFT descriptors and then generated GMM supervectors for each handwritten document. The proposed algorithm employs the Exemplar-SVM to train and test document-specific similarity measures. The system was evaluated on three datasets, ICDAR, CVL, and KHATT, and its performance was shown to surpass existing methods evaluated on the same datasets. Tang et al. proposed a textural-based system using two textural descriptors: the stroke fragment histogram (SFH) and the local contour pattern histogram (LCPH). The former descriptor is calculated based on a codebook, while the latter is calculated from the image’s contour points. As a result, the system achieved identification rates of 91.3% and 85.4% using the SFH and LCPH descriptors, respectively. Grapheme-based methods, sometimes referred to as bag-of-features (BOF)-based methods, depend mainly on codebook generation for a bag of words (BOW). Methods [ , , ] are examples of this type of writer identification. In this type of method, the handwritten text of the script is first segmented into small segments (graphemes), and a codebook is generated using a clustering algorithm. Each segment from the underlying script is assigned to exactly one codeword from the codebook, and the resulting codeword histogram of the script is finally used to predict the identity of its writer. Grapheme-based methods usually involve the computation of a local descriptor via grapheme blobs, vector quantization using clustering, script representation using histograms of vectors, dissimilarity measurement, and classifier learning. However, this type of writer identification method suffers from some drawbacks, notably the considerable time needed to extract and compare grapheme details. Additionally, due to the large number of grapheme features, this type of method requires substantial memory, especially for methods that apply a single clustering algorithm . He & Schomaker were inspired by the fact that the joint feature distribution of two or more properties can lead to better system performance. Accordingly, they proposed a method using two novel curvature-free features: the run-length of local binary patterns (LBPruns) and the cloud of line distribution (COLD). The former captures the joint distribution of the traditional run-length and local binary patterns, while the latter captures the joint distribution of the relations between the length of a line segment and its orientation. The system was evaluated on three datasets: CERUG, IAM, and Firemaker. Bennour et al. proposed a system for offline text-dependent and text-independent writer identification using the concept of an implicit shape codebook. The proposed system applies the Harris corner detection technique to specify the most dominant points of the input handwritten image. Patches or windows around the detected points are specified and clustered to build an implicit shape codebook. The system was evaluated on the BFL and CVL datasets, and promising results were reported. Auto-learned methods, sometimes called model-based methods, rely on deep learning techniques to extract features learned automatically by deep models. The main drawback of deep models is the need for massive amounts of labeled data for learning, which is not available in all the benchmarks usually used in the writer identification task .
According to , another important drawback of deep techniques is the difficulty of selecting the best values for a large number of parameters. Additionally, this type of technique requires very high computational time compared to traditional handcrafted methods . Furthermore, methods that apply auto-learned techniques did not win the competitions organized by the 2016 International Conference on Pattern Recognition (ICPR2016) and the 2017 International Conference on Document Analysis and Recognition (ICDAR2017). For these reasons, traditional learning methods perform better than deep ones with respect to the writer identification task, as reported in . The literature in the writer identification field is not rich in studies that use such deep learning techniques. However, some important studies were designed based on deep techniques. The methods presented in [ – ] were the first to introduce deep learning to the writer identification task. All of these methods applied the activations of a trained convolutional neural network as features. At the time, these studies applying auto-learned techniques did not set a new performance standard with respect to the identification rate on the well-known benchmarks or in organized competitions such as ICDAR2011 and ICDAR2013. However, as new techniques, they showed promising results comparable to those achieved by other methods using handcrafted features. In , the CNN features are computed from handwritten image patches, and GMM encoding is then applied to generate the feature vectors as input to the classification step. In another work , the authors apply the SIFT technique to detect key points and then feed sub-image patches around the key points into a deep CNN to extract auto-learned features from the activations of the hidden layers. Inspired by the high identification rates achieved by structural-based methods and the discriminative power of considering small writing ink strokes in characterizing writers [ , , ], this paper presents a method for offline text-independent writer identification using Arabic and English handwriting samples. This technique is mainly based on local analysis of handwriting ink strokes using two features, the contour point curve angle (CPCA) and CONtour point CONcavity/CONvexity (CON 3 ). As shown in , the proposed writer identification method mainly involves three phases: training, codebook generation, and testing. For the training phase, the proposed method applies preprocessing and structural feature extraction. Additionally, the training phase involves the calculation of the occurrence histogram, in which the most similar codeword in the codebook is specified and the corresponding occurrence histogram bin is incremented. Having extracted all features from the training dataset, the proposed method generates a codebook using a simple clustering technique, k-means clustering, to group the features into K clusters. The cluster centers compose the codebook, and each cluster center forms a single codeword. Similarly, an evaluation handwritten document in the testing set is first preprocessed, and its structural features are extracted to calculate its occurrence histogram. Having both training and testing occurrence histograms, the trained classifier can then retrieve a candidate list of the possible writers.
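To make the three phases concrete, the following Python sketch outlines one possible implementation of the pipeline. The random arrays stand in for the CPCA/CON 3 fragment features of section 3.1 (sketched there), the codeword histogram follows section 3.2, and all parameter choices (K = 16, feature dimension 28) are illustrative, not the paper's settings.

```python
import numpy as np
from sklearn.cluster import KMeans
from scipy.spatial.distance import cdist

def codeword_histogram(features, codebook):
    """Occurrence histogram: count the nearest codeword for each fragment."""
    nearest = cdist(features, codebook).argmin(axis=1)  # Euclidean distance
    return np.bincount(nearest, minlength=len(codebook))

# --- Training phase and codebook generation (toy fragment features) ---
rng = np.random.default_rng(0)
train_features = [rng.normal(size=(200, 28)) for _ in range(3)]  # 3 writers
writer_ids = ["writer_A", "writer_B", "writer_C"]

kmeans = KMeans(n_clusters=16, n_init=10, random_state=0)
kmeans.fit(np.vstack(train_features))
codebook = kmeans.cluster_centers_  # K codewords

train_hists = [codeword_histogram(f, codebook) for f in train_features]

# --- Testing phase: rank writers by distance between occurrence histograms ---
query_features = rng.normal(size=(180, 28))
query_hist = codeword_histogram(query_features, codebook)
dists = [np.linalg.norm(query_hist - h) for h in train_hists]
print([writer_ids[i] for i in np.argsort(dists)])  # candidate list, best first
```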
As preprocessing steps, this method applies the average filter on the scanned images, and then the images are binarized using Otsu thresholding , which is an efficient parameterless global binarization method. The Otsu technique is one of the most popular techniques used with clean modern handwriting images. Schomaker et al. found that there are no significant differences in binarized images obtained using Otsu, AdOtsu and other binarization methods on modern handwritten document images. Therefore, this paper adopts the Otsu threshold method for image binarization, and the connected components are detected using the 8-neighborhood connectivity strategy. The next subsections explain in detail the most dominant components of the proposed method. 3.1 Features extraction Two features are extracted from ink stroke segments while traveling along the connected component contour. Each feature exploits different curve attributes to characterize the handwriting style. 3.1.1 CPCA Feature extraction To extract this feature, the proposed method splits the connected component contour into small fragments of a specific length equal to the value specified by the NP parameter. The segmentation process starts at the contour’s first point and uses the NP parameter to specify the fragment size. The next segmentation start point is specified with the GAP parameter, which specifies the distance between the starting points of each two successive contour fragments, as depicted in . It is worth mentioning that the number of features extracted from the connected component contour can be controlled by the NP and GAP parameters. To calculate the feature vector, this method considers the extracted contour fragment’s points and then calculates the curve angle at each point P_i of the extracted fragment. To do so, this method uses two line fragments of length Є: one represents an inbound line fragment P_{i−Є}P_i to the point of interest P_i, while the other represents an outbound line fragment P_iP_{i+Є} from P_i, as illustrated in . As shown in , the inbound line fragment P_{i−Є}P_i creates an angle θ_1 with the horizontal axis, and the outbound line fragment P_iP_{i+Є} creates an angle θ_2 with the horizontal axis. θ_1 and θ_2 can be calculated using ( ) and ( ), respectively:

\theta_1 = \arctan\left(\frac{y_i - y_{i-Є}}{x_i - x_{i-Є}}\right) \quad (1)

\theta_2 = \arctan\left(\frac{y_{i+Є} - y_i}{x_{i+Є} - x_i}\right) \quad (2)

As illustrated in , using θ_1 and θ_2, the curve angle at point P_i can be estimated using ( ):

\Phi_{2\pi}(P_i) = \min\left(\lvert\theta_2 - \theta_1\rvert,\; 2\pi - \lvert\theta_2 - \theta_1\rvert\right) \quad (3)

This procedure is performed for all points of each contour fragment. Vectors containing the curve angles of each contour fragment have the dimensionality of the NP parameter. Using the resulting curve angle vector directly as a feature is not useful, because comparing curve angle features in a point-to-point manner is very sensitive to variability among different handwritten documents of the same writer. Therefore, the resulting curve angle vector is quantized into a number of angular intervals M. This vector is then augmented with both the θ_1 and θ_2 angles, which in turn are quantized into N_a angular intervals. Accordingly, the final CPCA vectors for all contour fragments are all of dimensionality M + 2N_a and are used as features of the handwritten document image.
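As a rough illustration of the preprocessing and CPCA steps, the sketch below uses OpenCV's Otsu thresholding and contour tracing as a stand-in for the 8-neighborhood connected component analysis, then builds the quantized curve-angle histograms. The parameter values (NP, GAP, Є, M, N_a) are arbitrary examples, and arctan2 replaces the plain arctangent of ( ) and ( ) for numerical robustness; this is one interpretation of the text, not the authors' code.

```python
import cv2          # requires OpenCV >= 4
import numpy as np

def contours_from_page(gray):
    """Average filter + Otsu binarization, then contour tracing."""
    smoothed = cv2.blur(gray, (3, 3))
    _, binary = cv2.threshold(smoothed, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    return [c.reshape(-1, 2).astype(float) for c in contours]

def point_angles(contour, i, eps):
    """Inbound (theta1) and outbound (theta2) direction angles at point i."""
    p_in, p, p_out = contour[i - eps], contour[i], contour[(i + eps) % len(contour)]
    t1 = np.arctan2(p[1] - p_in[1], p[0] - p_in[0]) % (2 * np.pi)
    t2 = np.arctan2(p_out[1] - p[1], p_out[0] - p[0]) % (2 * np.pi)
    return t1, t2

def cpca_features(contour, NP=20, GAP=5, eps=3, M=12, Na=8):
    """One quantized CPCA vector (dimensionality M + 2*Na) per contour fragment."""
    feats = []
    for start in range(eps, len(contour) - NP - eps, GAP):
        t1, t2 = zip(*(point_angles(contour, i, eps)
                       for i in range(start, start + NP)))
        t1, t2 = np.array(t1), np.array(t2)
        phi = np.minimum(np.abs(t2 - t1), 2 * np.pi - np.abs(t2 - t1))  # eq. (3)
        feats.append(np.concatenate([
            np.histogram(phi, bins=M, range=(0, np.pi))[0],
            np.histogram(t1, bins=Na, range=(0, 2 * np.pi))[0],
            np.histogram(t2, bins=Na, range=(0, 2 * np.pi))[0],
        ]))
    return np.array(feats)
```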
3.1.2 CON 3 Feature extraction Similar to CPCA feature extraction, this method splits a connected component contour into small fragments of a specific length equal to the value specified by the NP parameter. Additionally, the contour is segmented into small fragments using the same parameters referred to in the CPCA feature extraction. To calculate a feature vector, the method considers the points of a contour fragment and then calculates the concavity/convexity property at each point P_i of the contour’s fragment. For this purpose, the method uses two line fragments of length Є: one represents an inbound line fragment P_{i−Є}P_i to the point of interest P_i, while the other represents an outbound line fragment P_iP_{i+Є} from P_i, as illustrated in . To measure the concavity/convexity property at point P_i, this method measures the perpendicular distance from point P_i to the straight line that connects the P_{i−Є} and P_{i+Є} points. According to , this perpendicular distance P_d is given by ( ) on the basis of P_i = (x_i, y_i), P_{i−Є} = (x_{i−Є}, y_{i−Є}) and P_{i+Є} = (x_{i+Є}, y_{i+Є}):

P_d = \frac{\lvert (x_{i+Є} - x_i)(y_i - y_{i-Є}) - (x_i - x_{i-Є})(y_{i+Є} - y_i) \rvert}{\sqrt{(x_{i+Є} - x_i)^2 + (y_{i+Є} - y_i)^2}} \quad (4)

In addition to the perpendicular distance P_d, this method also uses the length of the line L_p connecting the P_{i−Є} and P_{i+Є} points as one attribute of the final feature vector. The length of line L_p is given by ( ) on the basis of P_{i−Є} = (x_{i−Є}, y_{i−Є}) and P_{i+Є} = (x_{i+Є}, y_{i+Є}):

L_p = \sqrt{(y_{i+Є} - y_{i-Є})^2 + (x_{i+Є} - x_{i-Є})^2} \quad (5)

Furthermore, this method augments the final feature vector with the angles θ_1 and θ_2 formed by the line fragments P_{i−Є}P_i and P_iP_{i+Є} with the horizontal axis. The angles θ_1 and θ_2 can be calculated using ( ) and ( ), respectively. This procedure is performed for all points of each contour fragment. Vectors containing the perpendicular distance P_d, line length L_p, θ_1 and θ_2 of each contour fragment have dimensionality NP multiplied by 4. To reduce the dimensionality of the final vector and reduce the sensitivity to variability among the handwritten samples of the same writer, this method applies quantization to each of the four attributes separately. The perpendicular distance P_d and line length L_p are quantized into N_q and N_l distance intervals, respectively, and θ_1 and θ_2 are quantized into N_a angular intervals. Therefore, the final CON 3 vectors are all of dimensionality N_q + N_l + 2N_a and are used as features for the handwritten document image.
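A corresponding sketch of the CON 3 attributes follows, implementing ( ) and ( ) per contour point and quantizing each attribute separately. The quantization ranges d_max and l_max are hypothetical, since the paper does not specify them here.

```python
import numpy as np

def con3_attributes(contour, i, eps):
    """P_d, L_p, theta1, theta2 at contour point i, per equations (4) and (5)."""
    a, p, b = contour[i - eps], contour[i], contour[(i + eps) % len(contour)]
    num = abs((b[0] - p[0]) * (p[1] - a[1]) - (p[0] - a[0]) * (b[1] - p[1]))
    pd = num / (np.hypot(b[0] - p[0], b[1] - p[1]) + 1e-12)  # guard zero length
    lp = np.hypot(b[0] - a[0], b[1] - a[1])
    t1 = np.arctan2(p[1] - a[1], p[0] - a[0]) % (2 * np.pi)
    t2 = np.arctan2(b[1] - p[1], b[0] - p[0]) % (2 * np.pi)
    return pd, lp, t1, t2

def con3_fragment_feature(contour, start, NP, eps, Nq, Nl, Na, d_max, l_max):
    """One quantized CON3 vector of dimensionality Nq + Nl + 2*Na."""
    attrs = [con3_attributes(contour, i, eps) for i in range(start, start + NP)]
    pd, lp, t1, t2 = (np.array(v) for v in zip(*attrs))
    return np.concatenate([
        np.histogram(pd, bins=Nq, range=(0, d_max))[0],
        np.histogram(lp, bins=Nl, range=(0, l_max))[0],
        np.histogram(t1, bins=Na, range=(0, 2 * np.pi))[0],
        np.histogram(t2, bins=Na, range=(0, 2 * np.pi))[0],
    ])
```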
3.2 Codebook generation and features vector computation This method constructs a codebook using the extracted features from the entire training set. The extracted features are used to train a clustering algorithm, and a specific number K of clusters is determined. According to , many clustering algorithms have been investigated, and no significant performance differences were noted among them. Hence, this method uses the k-means clustering algorithm due to its simplicity and popularity in the field; the centers of the resulting K clusters then compose the codebook, whose size is K. To calculate the final feature vector of a handwritten document, we only need to construct an occurrence histogram with a number of bins equal to the K codewords. To do so, feature extraction is first applied to the handwritten document, and the similarity between each extracted feature and each codebook codeword is then determined using the Euclidean distance. The most similar codeword is identified, and its corresponding histogram bin is incremented by one. Finally, the constructed histogram represents the final feature vector of the handwritten document, with dimensionality equal to the size of the codebook.
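The nearest-codeword assignment described above can be illustrated with a tiny worked example; the codebook and fragment features below are hypothetical two-dimensional values chosen only to make the bin increments easy to follow.

```python
import numpy as np
from scipy.spatial.distance import cdist

# Hypothetical codebook of K = 3 codewords in a 2-D feature space.
codebook = np.array([[0.0, 0.0],
                     [1.0, 0.0],
                     [0.0, 1.0]])

# Four fragment features extracted from one document.
doc_features = np.array([[0.1, 0.1],    # nearest codeword 0
                         [0.9, 0.2],    # nearest codeword 1
                         [0.8, -0.1],   # nearest codeword 1
                         [0.2, 1.1]])   # nearest codeword 2

nearest = cdist(doc_features, codebook).argmin(axis=1)  # Euclidean distance
histogram = np.bincount(nearest, minlength=len(codebook))
print(histogram)  # -> [1 2 1]: the document's final feature vector
```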
To calculate a feature vector, the method considers the points of a contour fragment and then calculates the concavity/convexity property at each point P i of the contour’s fragment. For this purpose, the method two line fragments of length Є. One of them represents an inbound line fragment P i −Є P i to the point of interest P i , while the other represents an outbound line fragment P i P i +Є from P i , as illustrated in . To measure the concavity/convexity property at point P i , this method measures the perpendicular distance from point P i to the straight line that connects both P i −Є and P i +Є points. According to , this perpendicular distance P d is given by ( ) on the basis of P i = (x i , y i ) , P i-Є = (x i-Є , y i-Є ) and P i+Є = (x i+Є , y i+Є ) . P d = a b s x i + Є − x i y i − y i − Є − x i − x i − Є y i + Є − y i s q r t x i + Є − x i 2 + y i + Є − y i 2 (4) In addition to the perpendicular distance P d , this method also uses the length of line L p connecting both P i-Є and P i+Є points as one attribute of the final feature vector. The line L p length is given by ( ) on the basis of P i-Є = (x i-Є , y i-Є ) and P i+Є = (x i+Є , y i+Є ) . L p = s q r t y i + Є − y i − Є 2 + x i + Є − x i − Є 2 (5) Furthermore, this method augments the final feature vector with the angles θ 1 and θ 2 formed by the line fragments P i-Є P i , P i P i+Є and the horizontal axis. Angles θ 1 and θ 2 can be calculated using ( ) and ( ), respectively. This procedure is performed for all points of each contour fragment. Vectors containing the perpendicular distance P d , line L p , θ 1 and θ 2 of each contour fragment have dimensionality N P multiplied by 4. To reduce the dimensionality of the final vector and reduce the sensitivity to variability among the handwritten samples of the same writer, this method applies the concept of quantization on each of the four attributes separately. Perpendicular distance P d and line L p are quantized into N q and N l distance intervals, respectively, and θ 1 and θ 2 are quantized into N a angular intervals. Therefore, the final CON 3 vectors are all of dimensionality N q +N l +2N a and are used as features for the handwritten document image. To extract this feature, the proposed method splits the connected component contour into small fragments of a specific length equal to the value specified by the NP parameter. The segmentation process starts at the contour’s first point and uses the NP parameter to specify the fragment size. The next segmentation start point is specified with the GAP parameter, which specifies the distance between the starting points of each two successive contour fragments, as depicted in . It is worth mentioning that the number of features extracted from the connected component contour can be controlled by the NP and GAP parameters. To calculate the feature vector, this method considers the extracted contour fragment’s points and then calculates the curve angle at each point P i of the extracted fragment. To do so, this method uses two line fragments of length Є . One of them represents an inbound line fragment P i −Є P i to the point of interest P i , while the other represents an outbound line fragment P i P i −Є from P i , as illustrated in . As shown in , the inbound line fragment P i−Є P i creates an angle θ 1 with the horizontal axis, and the outbound line fragment P i P i+Є creates an angle θ 2 with the horizontal axis. θ 1 and θ 2 can be calculated using ( ) and ( ), respectively. 
This method constructs a codebook using the features extracted from the entire training set. The extracted features are used to train a clustering algorithm, and a specific number K of clusters is determined. According to , many clustering algorithms have been investigated and no significant performance differences were noted among them; hence, this method uses the k-means clustering algorithm due to its simplicity and popularity in the field, and the resulting K clusters define the size of the constructed codebook. To calculate the final feature vector of a handwritten document, we only need to construct an occurrence histogram with a number of bins equal to the K codewords: as described at the beginning of this section, each extracted feature increments the bin of its Euclidean-nearest codeword, and the resulting histogram, with dimensionality equal to the codebook size, is the final feature vector of the document.
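As an illustration of the codebook stage, the sketch below (ours; it uses scikit-learn's KMeans, and the histogram normalization is our assumption, as the text does not specify one) builds the K-codeword codebook from the pooled training descriptors and turns a document's descriptors into its occurrence histogram:

import numpy as np
from sklearn.cluster import KMeans

def build_codebook(train_features, K):
    # Cluster all fragment descriptors of the training set into K codewords
    return KMeans(n_clusters=K, n_init=10, random_state=0).fit(train_features)

def occurrence_histogram(doc_features, codebook):
    # Assign each fragment descriptor of the document to its Euclidean-nearest
    # codeword and count how often each codeword is hit
    labels = codebook.predict(np.asarray(doc_features))
    hist = np.bincount(labels, minlength=codebook.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)  # normalization is our assumption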
The experimental study of the proposed method is carried out on two public datasets: KHATT from an Arabic background and IAM from an English background. 4.1 KHATT dataset The KHATT dataset was prepared by a research group at King Fahd University of Petroleum and Minerals (KFUPM). It consists of 1000 handwritten forms written by 1000 distinct writers from different countries, with each writer contributing four paragraphs. The first and fourth paragraphs are fixed text, while the second and third paragraphs were selected uniquely from 46 information sources. In other words, the second and third paragraphs have different textual content for each writer, while the first and fourth contain the same text content for all writers. The designers of this dataset carefully selected the words and letters of the fixed paragraphs' text so that they contain all possible forms of Arabic letters in both cursive and separated writing manners. This paper kept the complete set of KHATT samples, and because the method performs text-independent writer identification, one of the two fixed paragraphs is discarded to preserve the credibility of the results. Hence, the paragraph images were divided into two sets: the first, containing one of the unique paragraphs and one of the fixed paragraphs, is used in the training step, and the other, containing the remaining unique paragraph, is used for system testing. 4.2 IAM dataset The IAM dataset is an English dataset in which 657 writers wrote different numbers of pages of varying length; some writers wrote full pages, while others wrote only a few lines. This research adopts the IAM modifications made by : they chose two random handwritten pages for contributors who had more than two pages and divided the page strictly in half for writers who had only one page. As a result of this preparation, the IAM dataset used in this experiment involves lowercase handwritten pages from 650 writers, with two samples per writer; one is used to train the system, and the other is used for system testing. The datasets used for conducting the experiments were introduced in the previous section. The identification rate achieved by this method depends on the values of two main groups of parameters for each of the proposed features: 1) key-internal parameters and 2) common parameters. As an example of key-internal parameters, during CPCA feature extraction, experiments show that the length of the line segment Є and the angular quantization M have direct impacts on the identification rate achieved by this feature. Furthermore, during CON 3 feature extraction, we found direct impacts of other parameters on the results: the angular quantizations of θ1 and θ2, the length of the line segment Є, the distance quantization of the perpendicular distance, and the length quantization of the line connecting the endpoints of the curve fragment. Additionally, the parameters NP and GAP were found to affect the identification rates of both proposed features. All these parameters were tested and selected empirically for each feature on each dataset. Finally, some literature studies [ , – , , ] reported that the χ2, Euclidean and Manhattan metrics achieve close results, without providing evidence of the superiority of a specific metric; therefore, this experiment uses the χ2 metric because it is widely used in many applications with the same background. The analytical results of the variations of the key-internal parameters are presented in section 5.1. The common parameters are shared among all writer identification systems, as this group of parameters is related to the task itself.
This group of common parameters includes the size of the codebook, the amount of available text in the handwritten document, and the number of scribes involved in the experiments. Section 5.2 presents the analytical results of the variations of the common parameters. In this experiment, multiclass SVM and nearest neighbor (NN) methods are used as classifiers. With the nearest neighbor classifier, the proposed method computes the distance between the feature vector of the query handwritten document and all feature vectors of the reference base and selects the writer of the reference document with the minimum distance. The nearest neighbor is a simple classifier that is effective, does not need to be trained, and does not require human intervention . The proposed method implements the multiclass SVM using the one-against-all technique . The SVM depends on a selected kernel function to perform complex data transformations, which in turn are used to maximize the separation boundaries between data points based on their labels or classes. A radial basis function (RBF) kernel is used to train the SVM model, and the regularization parameter C is set to 1 for all classifiers.
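The nearest-neighbor identification with the χ2 metric mentioned in the previous section can be sketched as follows (our illustration; the text does not spell out which χ2 variant is used, so a common symmetric definition is assumed):

import numpy as np

def chi2_distance(h1, h2, eps=1e-10):
    # One common chi-squared distance between two occurrence histograms
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

def identify_writer(query_hist, reference_hists, reference_writers):
    # Nearest neighbor: return the writer of the reference document with
    # the smallest dissimilarity to the query document
    dists = [chi2_distance(query_hist, ref) for ref in reference_hists]
    return reference_writers[int(np.argmin(dists))]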
5.1 Key-internal parameters analysis This section presents the findings of a set of experiments evaluating the impacts of the internal parameters of the proposed CPCA and CON 3 features on writer identification performance on both the IAM and KHATT datasets. As stated in sections 3.1.1 and 3.1.2, the feature descriptors comprise key parameters such as the contour fragment length NP, the distance GAP between the starting points of each two successive contour fragments, the line fragment length Є, and the quantizations N_a and M of the contour point angles. Additionally, the proposed CON 3 feature comprises two further parameters: (i) the perpendicular distance quantization (N_q) and (ii) the quantization of the line length L_p into N_l intervals. The results reported in Tables 2–7 correspond to the optimal values of these parameters, specified empirically for each dataset through a series of experiments. On the IAM dataset, the parameters of both the CPCA and CON 3 features are set to the optimal values NP = 35, GAP = 15, N_a = 12, M = 8, Ɛ = 9, N_q = 10, and N_l = 12 to achieve the highest identification rates of 96% and 96.3% using CPCA and CON 3, respectively. Similarly, on the KHATT dataset, the highest identification rates of 86.6% and 88.2% using CPCA and CON 3, respectively, are achieved with the optimal values NP = 40, GAP = 20, N_a = 8, M = 6, Ɛ = 15, N_q = 14, and N_l = 12. From the definition of the proposed CPCA and CON 3 features, there is a correlation between the NP and GAP parameters; therefore, the identification rate is recorded for each pair of values (NP, GAP), with the GAP parameter varied from 5 up to the value of NP for each setting of NP on both the IAM and KHATT datasets. From Figs and , it can be seen that the highest identification rate on the IAM dataset is achieved with the settings NP = 35 and GAP = 15, and this observation is consistent for CPCA, CON 3 and their combination. On the other hand, Figs and show that the highest identification rate on the KHATT dataset is achieved with NP = 40 and GAP = 20, again for CPCA, CON 3 and their combination. By fixing these two parameters to their optimal values and continuing to test the influence of the other parameters on the overall system performance, we found a significant impact of the angular quantization parameter N_a of the two proposed CPCA and CON 3 features on both the IAM and KHATT datasets. shows that the proposed method achieved the highest identification rate when N_a is set to 12 and 8 on the IAM and KHATT datasets, respectively. It is worth mentioning that we fix the optimal settings of the parameters checked so far when checking the remaining parameters in the successive experiments. As shown in , the line segment parameter values Ɛ = 9 and Ɛ = 15 prove to be the best settings on the IAM and KHATT datasets, respectively. Additionally, the proposed CPCA and CON 3 features and their combination performed consistently across the two datasets. As stated in section 3.1.2, the proposed CON 3 feature is augmented with the distance quantizations of the perpendicular distance P_d and the line length L_p, and the experiments evaluated the possible impacts of these two parameters on the overall performance of the proposed system. shows that the system achieved the highest identification rate using the CON 3 feature when N_q and N_l are set to 10 and 12, respectively, on the IAM dataset; likewise, shows that the highest identification rate on the KHATT dataset is achieved when N_q and N_l are set to 14 and 12, respectively. Additionally, section 3.1.1 detailed the definition of the CPCA feature, one of whose attributes is the point curve angle Φ, calculated for each point of the contour segment. The resulting angles are quantized into M angular intervals, and this quantization was evaluated to check its possible impact on the overall identification rate of the system. shows that the system achieved the highest identification rate using the CPCA feature when M is set to 8 and 6 on the IAM and KHATT datasets, respectively. These key-internal parameters are critical: their settings are user-specified and depend on the dataset and its configuration. As an example, we showed in section 3.1.1 that the user can control the number of extracted features using the NP and GAP parameters, which we think is useful, especially when the available handwritten text is limited; however, such settings may lead to some deterioration of the overall system performance due to the variation in the number of extracted features. 5.2 Common parameters analysis As mentioned above, the common parameters are not related to a specific writer identification method; they are relevant to all systems in the writer identification domain. The next subsections discuss the possible impacts of these parameters on the overall performance of the proposed writer identification system. 5.2.1 Sensitivity to the amount of available text This method conducted a series of runs to evaluate the possible impacts of the amount of text available in a handwritten document for both the training and testing sets. The available text varies from one writer's document to another; hence, the experiments were performed on only a subset of writers, namely, 100 writers who had at least 45 and 30 connected components in the training and testing sets, respectively. The system uses the nearest neighbor classifier, and the codebook size is set to 100 during this series of runs.
The intention here is to evaluate the possible impacts of the amount of available text on the overall system performance. Therefore, the system starts with 45 connected components for training and 30 connected components for testing, and the number of connected components is then increased gradually for both. It is worth mentioning that we fixed the number of connected components instead of the number of words, as different words can lead to different numbers of connected components. presents the results of this experiment on both the IAM and KHATT datasets using the proposed CPCA and CON 3 features and their combination. As expected, the more text available, the better the identification rate. A higher identification rate was observed when 90 and 60 connected components were considered in training and testing, respectively. As presented in , the proposed method achieved good results even with small amounts of training and testing text, especially on the IAM dataset. However, one can easily note a relatively large gap in the scored results between the cases of small and large amounts of text on the KHATT dataset. This observation is natural and confirms the literature observation about the complexity of the Arabic language. 5.2.2 Sensitivity to the number of writers Regarding the number of writers, which reflects the stability of the system, this method conducted a series of runs to evaluate the impact of this parameter on the overall identification rate. The experiment changes the number of writers in each run: the system starts by considering 50 writers from the IAM dataset and then increases the number of writers by 50 in successive runs until the total number of writers in the database is reached. The same strategy is applied to the KHATT dataset, but considering 100 writers in the first run and increments of 100 writers in the successive runs. The codebook size is set to 200 during this experiment, and the resulting identification rates on the IAM and KHATT datasets using both classifiers are shown in Figs and . With the IAM dataset, a slight degradation is observed in the scored identification rate, which is natural behavior for writer identification systems, as reported in . However, the proposed method achieved stable performance given the small differences between the identification rate on a small set of writers and on the whole set of IAM writers: the identification rate starts at 100% for 50 scribes and deteriorates slightly, as the number of scribes increases, to 92.9% for the complete set of IAM writers using the combination of CPCA and CON 3 features. The situation is different for the KHATT dataset: it can be seen from that the performance deterioration is relatively large, starting at 100% for 100 writers and dropping to 77.8% for the complete set of KHATT writers using the combination of CPCA and CON 3 features.
This observation is natural, as it confirms the literature belief that the Arabic language is more complex than the English language. Additionally, it confirms that a higher number of writers and higher content variety lead to more complexity and lower system performance. It can be observed that the system did not achieve its best identification rate during this experiment. This is because we did not apply the optimal values for some parameters, such as the codebook size; these parameters need to be re-tuned each time the dataset or the number of writers is changed. 5.3 Sensitivity to the codebook size The codebook size has been proven to have a large impact on both the identification rate and the efficiency of writer identification systems . Following this literature observation, the proposed method conducted a series of runs to evaluate the impact of the codebook size on the overall performance of the system. The system starts with 100 codewords, the number of codewords is increased by 100 in each successive run, and the incrementing stops when the system reaches 1000 codewords. Tables – present the identification rate of the proposed method on both the IAM and KHATT datasets using both the nearest neighbor and SVM classifiers. It is clear that the codebook size has a large impact on the scored identification rate; the system achieved the highest rate when the codebook size was set to 500 and 800 on the IAM and KHATT datasets, respectively. An interesting observation can be seen from Tables – : the nearest neighbor classifier achieved better performance than the support vector machine in all cases on both datasets, which confirms the superiority of the nearest neighbor classifier over other classifiers reported by . Additionally, the CPCA and CON 3 features and their combination achieved their highest identification rates with the same codebook size on the same dataset, and this observation is consistent on the IAM and KHATT datasets using both classifiers, as depicted in Figs and . 5.4 Writer identification Writer identification is the task of recognizing the writer of a query handwritten document by retrieving a candidate list of documents that are similar to the query document. This candidate list is ordered by increasing dissimilarity to the query document, and the hit list is the head of the candidate list. This experiment adopted typical hit list sizes of 1, 5, 7, and 10; accordingly, top1, top5, top7 and top10 represent the overall identification rate of each feature on the considered datasets. Top1 counts a query as correct if and only if the document at the top of the hit list is written by the same writer as the query document; similarly, for top5, top7, and top10, the method counts a query as correct if one of the top five, top seven, or top ten documents is written by the same writer as the query document. The scored writer identification rates using the nearest neighbor and support vector machine classifiers are presented in Tables – . As discussed previously, the identification rate is expressed by the top1, top5, top7, and top10 measures, and it is consistent for the CPCA and CON 3 features and their combination on the two datasets. The combination of CPCA and CON 3 features achieved the best identification rate, reaching 98.2% and 89.5% on the IAM and KHATT datasets, respectively, while Tables , , and show that the system achieved good identification rates using each of the CPCA and CON 3 features individually on both datasets.
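To make the top-k measures concrete, the following sketch (ours, not the authors' evaluation code) computes the top1/top5/top7/top10 rates from a matrix of query-to-reference dissimilarities:

import numpy as np

def topk_identification_rates(dist_matrix, query_writers, ref_writers, ks=(1, 5, 7, 10)):
    # dist_matrix[q, r]: dissimilarity between query q and reference document r.
    # A query is a hit at rank k if any of its k nearest reference documents
    # was written by the same writer.
    order = np.argsort(dist_matrix, axis=1)  # candidate list per query
    n_queries = dist_matrix.shape[0]
    rates = {}
    for k in ks:
        hits = sum(query_writers[q] in {ref_writers[r] for r in order[q, :k]}
                   for q in range(n_queries))
        rates["top%d" % k] = hits / n_queries
    return rates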
This paper conducted a comparative study with dominant methods proposed in the literature. However, we emphasize that it is quite difficult to compare a specific work with others for two main reasons. First, some works utilize datasets that are not publicly accessible. Second, even when the dataset is public, researchers often use it with different configurations. To ensure a fair performance comparison of our system with the state-of-the-art systems on the writer identification task, we only considered studies that evaluate their methods on the whole set of scribes of both the IAM and KHATT datasets. It is worth mentioning that we adopted the same configuration of the IAM dataset used in . Regarding the KHATT dataset, we excluded one of the two fixed paragraphs of each form, as discussed in section 4.1, and the dataset is divided into 70% and 30% subsets for training and testing, respectively. The top1 identification rates achieved by our system and the current state-of-the-art systems on the IAM and KHATT datasets are shown in Tables and , respectively. Comparing the writer identification rates scored on the IAM dataset, the method proposed in this paper achieved 98.2%, the highest identification rate among the state-of-the-art methods, including deep learning-based ones. The next best methods are the systems presented in , both with top1 identification rates of 97.8%. Among the deep learning-based methods, the method proposed by achieved the highest identification rate of 97.5%, followed by the method proposed by , which scored 97.27%. From , one can easily observe that the handcrafted-features-based systems surpassed the systems developed using deep learning techniques. This observation supports the conclusions of [ – ] and of the winners of the 2016 International Conference on Pattern Recognition (ICPR2016) and 2017 International Conference on Document Analysis and Recognition (ICDAR2017) competitions. Few studies have been carried out on the KHATT dataset in comparison with the IAM dataset. summarizes the state-of-the-art methods that considered the whole KHATT dataset: the method proposed by achieved the highest top1 identification rate of 99.6%, followed by another method proposed by the same researchers (97.2%). The next best method is the system presented in , with a top1 identification rate of 95.6%. Our proposed method ranks fourth among the state-of-the-art methods summarized in if we exclude , as that study considered only 828 of the 1000 scribes who contributed to the KHATT dataset. Interestingly, the three state-of-the-art methods that outperformed our proposed method appear not to follow a common practice in text-independent writer identification: they did not indicate removal of the fixed paragraphs of the KHATT dataset. By contrast, our proposed method belongs to the text-independent group, which requires the dataset configuration to exclude the repeated text; for this reason, we exclude one of the two fixed text paragraphs, as mentioned in section 4.1. None of the methods in [ , , ] that outperformed our proposed system on the KHATT dataset state which group (text-dependent or text-independent) their systems belong to; however, their dataset setup does not exclude the repeated text (fixed text paragraphs), which in turn leads to a higher identification rate, as reported by .
Furthermore, this approach-wise difference leads to the use of different handwritten samples per writer, which does not guarantee a fair performance comparison . Finally, we think this drop might also be caused by a shortcoming of the fragmentation step of the proposed method itself, which loses some information from the handwritten connected component contour. As mentioned in subsections 3.1.1 and 3.1.2, the proposed method segments the contour into segments of fixed length specified by the NP parameter, starting from the leftmost point on the contour and continuing in the clockwise direction until the last point of the contour is reached. The method discards the last segment if its length is less than the NP parameter. This strategy causes a loss of information from the connected component contour, which may affect the ability of the system to accurately identify the query writers. 5.5 Misclassification errors analysis The proposed method is subject to some misclassification errors, which are mainly due to two causes. 1) The same writer may draw the same connected component or basic shape in different ways. For example, Arabic is one of the languages with multiple writing styles, in which a writer does not follow a single style while producing a draft. This produces different shapes for the same connected component or basic shape, which eventually affects the ability of the system to accurately classify the query writers. Examples of this case are shown in : shows that the letter SIN 'س' is drawn differently by the same writer from the KHATT dataset, and shows that the letter 'S' is drawn differently by the same writer from the IAM dataset. 2) Some connected components or basic shapes are drawn similarly by different writers, which is another case in which the proposed method may misclassify the query writers. shows that the connected component 'لى' is drawn similarly by writer #575 and writer #137 from the KHATT dataset, and shows that the letter 'F' is drawn similarly by writer #2 and writer #21 from the IAM dataset. 5.6 Computation time analysis The proposed writer identification approach was implemented on an AURORA R7 computer with an eighth-generation Intel Core i7 processor at 3.2 GHz and 16 GB of RAM running the Windows 10 operating system. The system was developed on the MATLAB R2015a platform. A series of experiments was carried out to empirically evaluate all parameters that we believe may affect system performance, and the results are reported in this section. These extensive experiments require long execution times (days) for two main reasons: (i) the complete sets of scribes of large-scale datasets are considered, and (ii) the experiments are diverse. Having the preprocessed data and the system parameters set to their optimal values enables us to measure the runtime cost of the different steps of the system using both the CPCA and CON 3 features on the IAM and KHATT datasets. As depicted in Figs and , the occurrence histogram calculation step clearly takes much more time than the other steps because of the huge number of comparisons and distance calculations needed to build the histogram. This observation is consistent across the two datasets using both the CPCA and CON 3 features. Additionally, Figs and show that almost all processing steps of the system take noticeably more time on the KHATT dataset than on the IAM dataset.
This observation is natural for two reasons: (i) the KHATT dataset is larger than the IAM dataset in terms of the number of scribes, and (ii) the KHATT forms are richer in text than the IAM forms, which leads to a larger number of extracted features. The processing time of the KHATT testing set was 120.5 and 131.4 minutes, i.e., each form takes roughly 7.2 and 7.9 seconds using the CPCA and CON 3 features, respectively. It is worth mentioning that our implementation has not been optimized with respect to runtime, because processing time is not considered a key performance indicator for offline systems . Moreover, there is no need for real-time operation in the offline writer identification task . It is also difficult to compare execution times with the benchmark methods due to differences in hardware and software configurations.
This paper presented an effective method to extract two new features from the segmented contour fragments of offline handwritten images. These features are intended to capture structural properties of a handwritten contour, including concavity and convexity attributes and curve angles. The extracted features were evaluated in the writer identification domain using the concept of a codebook. Two public benchmarks, the Arabic KHATT and English IAM datasets, were used to assess the identification rate of the proposed method and to compare it with state-of-the-art methods. Empirical results showed that the combination of the CONtour point CONvexity/CONcavity (CON 3) and Contour Point Curve Angle (CPCA) features scored the highest identification rate on both datasets, and that the CON 3 feature achieved better results than the CPCA feature on both datasets. Furthermore, to the best of our knowledge, the experimental results showed that the proposed method outperforms several state-of-the-art systems on the IAM dataset and provides very competitive results on the KHATT dataset. This paper also highlighted the impacts of common parameters, such as the codebook size, the amount of available handwritten text, and the number of writers involved in the experiment, on the overall system identification rate. The major limitation of the proposed method lies in its dependency on some empirically set internal parameters, such as the segment size, gap, and angle intervals; these internal parameters need to be varied as functions of the underlying script. This limitation, and an investigation of the large difference in performance between the IAM and KHATT datasets, will be addressed in future work. Finally, we intend to evaluate the proposed features on other problems in the text recognition domain.
Colchicine — From rheumatology to the new kid on the block: Coronary syndromes and COVID-19
13edd117-347e-441e-9668-7512f7440cce
10129269
Internal Medicine[mh]
Colchicine (Central illustration) is a potent plant alkaloid with toxic effects, used in herbal medicine and obtained from the seeds of the autumn crocus ( Colchicum autumnale ). Colchicine is fatal at doses of 1 mg/kg body weight, but at therapeutic doses it is characterized by pleiotropic effects . Colchicine is commonly used in the treatment of gout, familial Mediterranean fever (FMF), Behçet’s disease, pericarditis, coronary artery disease (CAD), and other inflammatory diseases . Colchicine has multiple mechanisms of action . The best known are inhibition of microtubule polymerization (at a low dose) and stimulation of microtubule depolymerization (at a higher dose) . Microtubules are a key component of the cytoskeleton and are involved in many cellular processes, such as maintaining cell shape, transfer of intracellular substances, secretion of cytokines and chemokines, cell migration, regulation of ion channels, and cell division . Colchicine is an antimitotic agent that blocks cell division during metaphase . In addition to this primary, microtubule-based mechanism of action, the anti-inflammatory and immunomodulatory properties of colchicine encompass several other pathways. In small doses, colchicine reduces the level of E-selectin on vascular endothelial cells, which prevents neutrophils from adhering to their surface; in higher doses, it promotes the shedding of L-selectin from neutrophils, preventing these cells from interacting with the vascular endothelium . Moreover, colchicine inhibits a key activator of innate immunity, the NALP3 inflammasome (which is stimulated by severe acute respiratory syndrome coronavirus 2 [SARS-CoV-2] and which activates caspase 1), and inhibits the release of chemotactic factor from neutrophils, thereby reducing the subsequent recruitment of neutrophils to various tissues . Colchicine inhibits the activation and release of interleukin 1 (IL-1) and interleukin 8 (IL-8), as well as of superoxide, from neutrophils . An important mechanism of action of colchicine is stimulation of the maturation of dendritic cells, which then become antigen-presenting cells . In addition, colchicine inhibits vascular endothelial growth factor (VEGF), which reduces the proliferation of vascular endothelial cells . Interestingly, a study in rats showed that colchicine reduced the release of tumor necrosis factor alpha (TNF-α) by lipopolysaccharide-induced macrophages . In a mouse brain macrophage cell line, colchicine inhibited adenosine triphosphate (ATP)-induced release of interleukin 1 beta (IL-1β) by preventing microtubule rearrangement and limiting activation of the Ras homolog gene family member A (RhoA)/Rho-associated protein kinase (ROCK) pathway . In the presence of colchicine, murine peritoneal macrophages demonstrated lower ATP-induced permeability to ethidium bromide and less formation of reactive oxygen species (ROS) and nitric oxide (NO), and less release of IL-1β . All of these compounds amplify the inflammatory process and are responsible for interstitial infiltration and lung collapse; through the various mechanisms described above, colchicine can minimize such damage .

Gout
A randomized study by Terkeltaub et al. in 184 patients with acute gout assessed the effects of high (4.8 mg in total over 6 h; n = 52) and low (1.8 mg in total over 1 h; n = 74) doses of colchicine versus placebo (n = 58) on the endpoint of a ≥ 50% pain reduction at 24 hours without rescue treatment.
It was shown that pain relief in the first 24 hours was achieved by 23 (31.1%) patients in the low-dose group (p = 0.027 vs. placebo), 18 (34.6%) patients in the high-dose group (p = 0.103 vs. placebo), and 29 (50.0%) patients in the placebo group. There was no statistically significant difference in the incidence of adverse events between low-dose colchicine and placebo (odds ratio [OR] 1.5; 95% confidence interval [CI] 0.7–3.2). Thus, it was concluded that low doses of colchicine provide effective control of pain attacks during gout exacerbations . The safety and efficacy of naproxen and low-dose colchicine in the treatment of gout attacks were analyzed in a randomized trial by Roddy et al. . Adults with a gout flare, recruited from 100 general practices, were randomized equally to naproxen (750 mg immediately, then 250 mg every 8 hours for 7 days) or low-dose colchicine (500 μg 3 times per day for 4 days). The study included 399 people, 200 of whom took naproxen and 199 colchicine. There was no significant between-group difference in average pain-change scores over days 1–7 (colchicine vs. naproxen: mean difference −0.18; 95% CI −0.53 to 0.17; p = 0.32). During days 1–7, diarrhea (45.9% vs. 20.0%; OR 3.31; 95% CI 2.01–5.44) and headache (20.5% vs. 10.7%; OR 1.92; 95% CI 1.03–3.55) were more common in the colchicine group than in the naproxen group, whereas constipation was less common (4.8% vs. 19.3%; OR 0.24; 95% CI 0.11–0.54). Thus, there was no difference in pain intensity over the 7 days between individuals with a gout flare randomized to naproxen and those randomized to low-dose colchicine; naproxen caused fewer side effects, supporting it as the first-line treatment of gout flares in primary care when there are no contraindications . Based on these results, it can be concluded that naproxen may be a good alternative for patients with contraindications to colchicine or taking medications that may interact with it: naproxen is mainly excreted unchanged and only partly metabolized by CYP1A2 and CYP2C9, in contrast to colchicine, which is metabolized by CYP3A4.

Osteoarthritis of the knee
In a randomized study, Aran et al. assessed the effect of colchicine administration on the course of osteoarthritis of the knee. The study included 61 postmenopausal patients with primary knee osteoarthritis, randomized into two groups receiving colchicine 0.5 mg twice daily or placebo, with acetaminophen at a dose below 2 g/day as the analgesic. Acetaminophen use was significantly lower in the colchicine group (879.3 ± 369.7 mg/day) than in the placebo group (1620.7 ± 393.1 mg/day; p < 0.001). The improvement at the end of 3 months was significantly greater in the colchicine group for both the patient global assessment and the physician global assessment compared to the placebo group (11.14 ± 4.06 vs. 3.14 ± 2.18, p < 0.001, and 9.83 ± 3.80 vs. 3.72 ± 3.35, p < 0.001). Thus, this study demonstrated the effectiveness of colchicine in reducing pain in patients with knee osteoarthritis . In a study by Erden et al. , involving 60 patients with knee osteoarthritis, the impact of adding colchicine to paracetamol therapy on the course of the disease and on total antioxidant capacity was assessed; patients were assigned to receive paracetamol alone (3 g/day) or paracetamol (3 g/day) plus colchicine (1.5 mg/day) for 6 months.
It was shown that clinical parameters assessed using the WOMAC scale (Western Ontario and McMaster Universities Osteoarthritis Index) improved significantly in patients using colchicine (p < 0.05). Total antioxidant capacity increased only in the colchicine group, while the concentration of malondialdehyde was significantly reduced in both groups; the activities of catalase and superoxide dismutase and the concentration of glutathione did not change in either group. Thus, colchicine was found to reduce the blood concentration of malondialdehyde and increase total antioxidant capacity in patients with knee osteoarthritis, which may indicate that colchicine has a modifying effect on the course of the disease . Similar results were obtained by Leung et al. in a randomized, placebo-controlled, double-blind clinical trial that included 109 patients with knee osteoarthritis randomized to oral colchicine 0.5 mg twice daily or placebo. Colchicine intake significantly reduced the mean serum concentrations of high-sensitivity C-reactive protein (hs-CRP; p = 0.008) and C-terminal telopeptides of type I collagen (p = 0.002). In addition, colchicine tended to reduce serum levels of inflammatory markers such as IL-6, IL-8, TNF-α, CD14, and IL-18, but these differences were not statistically significant, and there was no reduction in knee osteoarthritis symptoms during the 16-week study period. Thus, in this study colchicine reduced biomarkers of inflammation and high bone turnover known to be associated with the severity and risk of progression of degenerative disease, but did not reduce symptoms . A systematic literature review and meta-analysis by Restrepo-Escobar et al. evaluated the efficacy and safety of colchicine in the treatment of knee osteoarthritis (primary or related to the deposition of calcium pyrophosphate crystals), covering 5 randomized clinical trials with 223 patients. Colchicine was shown to reduce pain intensity by at least 30% compared to control (OR 9.96; 95% CI 2.29–43.36) and to improve patient functioning (OR 8.92; 95% CI 2.30–34.65). Thus, colchicine appears to be an effective and safe treatment alternative for adult patients with knee osteoarthritis, whether primary or related to calcium pyrophosphate crystal deposition; its use reduced pain and improved patients' functioning .

Behçet's disease
The efficacy of colchicine in the treatment of Behçet's disease was assessed in a randomized study by Davatchi et al. . The study included 169 patients with Behçet's disease, randomized to colchicine (1 mg/day) or placebo for 4 months; after 4 months, the treatments were crossed over (colchicine to placebo, placebo to colchicine) for another 4 months. The main outcome was the overall disease activity index (Iran Behçet's disease dynamic activity measure [IBDDAM]). With placebo, IBDDAM worsened from 3.17 to 3.63 (p = 0.08); with colchicine, it improved from 3.35 to 2.75 (p < 0.0001). Moreover, oral aphthosis, genital aphthosis, pseudofolliculitis, and erythema nodosum improved significantly with colchicine but not with placebo, and the difference between the results for men and women was not statistically significant. Thus, colchicine reduced the activity of Behçet's disease . The study by Sun et al.
assessed the anti-inflammatory efficacy of the combination of levamisole and colchicine in 64 patients with the mucocutaneous type of Behçet's disease, with measurement of serum concentrations of TNF-α, IL-6, and IL-8. Levamisole was administered at 50 mg twice daily for patients weighing 30–50 kg, or 50 mg 3 times daily for patients weighing 50–80 kg, for 3 consecutive days at the beginning of each 2-week interval; colchicine was administered at 0.5 mg once or twice daily according to the severity of the disease. In the 43 patients with serum TNF-α, IL-6, and IL-8 levels above the upper limit of normal, treatment with levamisole and colchicine for 0.5–11.5 months decreased the levels of these proteins statistically significantly (p < 0.001). Thus, treatment with levamisole and colchicine may result in a significant decrease in serum IL-6, IL-8, or TNF-α in patients with the mucocutaneous type of Behçet's disease .
Pericarditis, post-pericardiotomy syndrome, and postoperative AF
A meta-analysis of 17 prospective clinical trials conducted by Papageorgiou et al. assessed the effect of colchicine on the prevention and treatment of cardiovascular diseases (pericarditis, post-pericardiotomy syndrome, and postoperative atrial fibrillation [AF] recurrence). The meta-analysis included 2082 patients who received colchicine and 1982 controls, with a mean follow-up of 12 months. Colchicine administration significantly reduced the risk of recurrent pericarditis/post-pericardiotomy syndrome (OR 0.37; 95% CI 0.29–0.47; p < 0.001). In addition, colchicine treatment significantly reduced the risk of recurrent AF, by as much as 46%, in patients after cardiac surgery or pulmonary vein isolation (OR 0.54; 95% CI 0.41–0.70; p = 0.001). Thus, colchicine was found to be effective against recurrent pericarditis/post-pericardiotomy syndrome and against recurrent AF after surgery . These results are consistent with those previously obtained by Imazio et al. , who conducted a meta-analysis of 5 randomized clinical trials including 795 patients with pericarditis, with an average follow-up of 13 months, in which the effectiveness of colchicine in the primary and secondary prevention of pericarditis was analyzed.
Colchicine administration was shown to be associated with a reduced risk of pericarditis during follow-up (relative risk [RR] 0.40; 95% CI 0.30–0.54; p < 0.001) . Similar results were obtained by Lennerz et al. in a meta-analysis of 5 randomized clinical trials: treatment with colchicine reduced postoperative AF by 31% compared with placebo or usual care (18% vs. 27%; RR 0.69; 95% CI 0.57–0.84; p = 0.0002), and the length of hospital stay after cardiac surgery decreased by 1.2 days with colchicine (95% CI −1.89 to −0.44; p = 0.002) . A meta-analysis of 6 randomized clinical trials by Salih et al. , involving 1257 patients, also demonstrated the efficacy of colchicine in reducing the risk of postoperative AF (OR 0.52; 95% CI 0.40–0.68; p < 0.001) . An interesting randomized, double-blind, placebo-controlled study by Shojaeifard et al. assessed the efficacy of colchicine in the prevention of acute pericarditis (through its influence on constrictive physiology) after cardiac surgery. Patients (n = 160) were randomized to receive colchicine 1 mg/day from 48 hours before surgery and 0.5 mg twice daily for 5 days after surgery, or placebo. One week after surgery, the incidence of constrictive physiology was lower in the colchicine group (13% vs. 23%), but the difference was not statistically significant. After 4 weeks of follow-up, 19 (23%) patients in the placebo group and 9 (11%) in the colchicine group had constrictive physiology, while 2 of 11 patients (18.2%) recovered; this difference was statistically significant (p = 0.038), and no new case of constrictive physiology appeared between weeks 1 and 4 of observation. Thus, short-term use of colchicine reduced constrictive physiology at 1 month, but not at 1 week, after open-heart surgery . In conclusion, colchicine is a safe and effective drug in the prevention of pericarditis and postoperative AF. Colchicine, together with acetylsalicylic acid (ASA) and non-steroidal anti-inflammatory drugs (NSAIDs), is the drug of first choice in the treatment of acute pericarditis and, as an adjunct to ASA/NSAID therapy, in the prevention of recurrence (class and level of recommendation: IA) . The anti-inflammatory treatment regimen proposed by the European Society of Cardiology (ESC) for acute pericarditis (first-line therapy) is as follows:
— ASA 750–1000 mg every 8 hours for 1–2 weeks (decrease doses by 250–500 mg every 1–2 weeks);
— ibuprofen 600 mg every 8 hours for 1–2 weeks (decrease doses by 200–400 mg every 1–2 weeks);
— colchicine 0.5 mg once daily (< 70 kg) or 0.5 mg b.i.d. (≥ 70 kg) for 3 months (tapering not mandatory; alternatively 0.5 mg every other day [< 70 kg] or 0.5 mg once daily [≥ 70 kg] in the last weeks) .
The treatment regimen proposed by the ESC for acute recurrent pericarditis (first-line therapy) is:
— ASA 500–1000 mg every 6–8 hours (range 1.5–4 g/day) for weeks to months (decrease doses by 250–500 mg every 1–2 weeks);
— ibuprofen 600 mg every 8 hours (range 1200–2400 mg) for weeks to months (decrease doses by 200–400 mg every 1–2 weeks);
— indomethacin 25–50 mg every 8 hours (start at the lower end of the dosing range and titrate upward to avoid headache and dizziness) for weeks to months (decrease doses by 25 mg every 1–2 weeks);
— colchicine 0.5 mg twice daily, or 0.5 mg once daily for patients weighing < 70 kg or intolerant of higher doses, for at least 6 months (tapering not necessary; alternatively 0.5 mg every other day [< 70 kg] or 0.5 mg once daily [≥ 70 kg] in the last weeks) .
It is also worth mentioning that in a study by Morel et al. , involving 10 patients, colchicine treatment of pericarditis and its recurrences in patients with systemic lupus erythematosus was effective and safe.

Secondary cardiovascular prevention and in-stent restenosis
The interest in colchicine in the treatment of CAD is justified by its anti-inflammatory and anti-atherosclerotic effects . One month of therapy with colchicine at a dose of 0.5 mg/day was found to decrease the concentration of inflammatory markers in the blood of patients with chronic CAD . In a randomized, controlled study by Martínez et al. , involving 40 patients with acute coronary syndrome (ACS) and 10 patients with stable coronary disease, the effect of colchicine administration on the patients' immune profile was assessed. Subjects were randomized to receive oral colchicine (a 1 mg dose followed by 0.5 mg an hour later) or no colchicine 6 to 24 hours prior to cardiac catheterization, and a rapid and significant reduction in the cardiac production of pro-inflammatory cytokines such as IL-1β, IL-6, and IL-18 was demonstrated . Administration of low-dose colchicine to patients with ACS led to the stabilization of atherosclerotic plaque in the coronary vessels . Moreover, colchicine significantly reduces the local production of chemotactic factors such as chemokine ligand 2 (CCL2), C-X3-C motif chemokine ligand 1 (CX3CL1), and (slightly) chemokine ligand 5 (CCL5) in patients with ACS . The efficacy of adding colchicine to therapy for secondary cardiovascular prevention in patients with CAD was the subject of a meta-analysis of 4 randomized clinical trials conducted by Samuel et al. . The meta-analysis included 11,546 patients with stable coronary disease or ACS, who received colchicine (n = 5774; dose 0.5 mg/day) or placebo/no colchicine (n = 5820). Compared with placebo or no colchicine, colchicine was associated with a statistically significant reduction in myocardial infarction (MI) (hazard ratio [HR] 0.62; 95% CI 0.36–0.88; p < 0.05), ischemic stroke (HR 0.38; 95% CI 0.13–0.63; p < 0.05), and the urgent need for coronary revascularization (HR 0.56; 95% CI 0.30–0.82; p < 0.05). However, no statistically significant effect of colchicine was demonstrated on cardiovascular mortality (HR 0.82; 95% CI 0.46–1.18) or on deep vein thrombosis or pulmonary embolism (HR 1.13; 95% CI 0.43–1.84), and its effect on the risk of AF was at the borderline of statistical significance (HR 0.86; 95% CI 0.67–1.04).
Thus, in secondary cardiovascular prophylaxis, the addition of low-dose colchicine to standard treatment reduces the incidence of major cardiovascular events, with the exception of cardiovascular mortality, compared with standard therapy alone . The randomized, double-blind, placebo-controlled COLCOT trial (Colchicine Cardiovascular Outcomes Trial) conducted by Tardif et al. , involving 4745 patients recruited within 30 days of MI, assessed the efficacy of colchicine in secondary cardiovascular prevention. Patients were randomized to receive low-dose colchicine (0.5 mg/day; n = 2366) or placebo (n = 2370). A statistically significant 50% reduction in the risk of urgent hospitalization for angina leading to coronary revascularization (HR 0.50; 95% CI 0.31–0.81) and a 74% reduction in the risk of stroke (HR 0.26; 95% CI 0.10–0.70) were demonstrated. Numerical, non-significant reductions were also observed in the risk of death from cardiovascular causes (HR 0.84; 95% CI 0.46–1.52), sudden cardiac arrest (HR 0.83; 95% CI 0.25–2.73), and recurrent MI (HR 0.91; 95% CI 0.68–1.21) . Further analyses of the study population showed that the greatest benefit in secondary cardiovascular prevention was achieved by patients who received colchicine within the first 3 days after the onset of MI . In the randomized, double-blind, placebo-controlled LoDoCo2 trial (Low-Dose Colchicine-2), conducted by Nidorf et al. in 5522 patients with chronic coronary disease, the effect of colchicine administration on the risk of cardiovascular events was analyzed. Patients received either colchicine 0.5 mg once daily (n = 2762) or placebo (n = 2760), with a median observation time of 28.6 months. The primary endpoint (cardiovascular death, MI, ischemic stroke, or ischemia-driven coronary revascularization) was reduced by 31% in the colchicine group (HR 0.69; 95% CI 0.57–0.83; p < 0.001) . The most recent meta-analysis, of 12 randomized clinical trials, by Bytyçi et al. summarized knowledge on the efficacy of colchicine in patients with CAD (colchicine: n = 6351; placebo: n = 6722), with a mean follow-up of 22.5 months; the risks of major adverse cardiac events, all-cause death, cardiovascular death, recurrent myocardial infarction, stroke, and hospitalization were analyzed, and the results are shown in . The authors of the meta-analysis indicate that colchicine appears to be a promising therapeutic option for the effective prevention of cardiovascular events; its effect on cardiovascular and all-cause mortality deserves more research, especially long-term . To sum up, colchicine administered at a dose of 0.5 mg/day shows a beneficial effect in the secondary prevention of cardiovascular disease . However, it should be mentioned that the ESC and Canadian Cardiovascular Society (CCS) guidelines for ST-segment elevation MI/non-ST-segment elevation MI (STEMI/NSTEMI) contain no recommendation on the use of colchicine, which still limits its use in secondary prevention. A meta-analysis of 9 randomized clinical trials conducted by Masson et al. , including 6630 patients at high cardiovascular risk, analyzed the effect of colchicine administration on the risk of stroke; 3359 subjects were allocated to receive colchicine and 3271 to the respective control arms. The incidence of stroke was lower in the colchicine group than in the placebo group (OR 0.33; 95% CI 0.15–0.70).
Thus, it was found that colchicine significantly reduces the risk of stroke in patients at high cardiovascular risk . The previously cited meta-analysis by Bytyçi et al. also showed that the use of colchicine reduced the risk of stroke . In-stent restenosis, the recurrence of stenosis in the treated artery, is sometimes observed after angioplasty; its incidence after percutaneous coronary intervention (PCI) is approximately 8% at 1-year follow-up. Risk factors (p < 0.05) for restenosis after PCI include postoperative high-sensitivity CRP level (OR 2.309 per mg/L; 95% CI 1.579–3.375), postoperative homocysteine level (OR 2.202 per μmol/L; 95% CI 1.268–3.826), history of diabetes (OR 1.955; 95% CI 1.272–3.003), coronary bifurcation lesions (OR 3.785; 95% CI 2.246–6.377), and stent length (OR 1.269 per mm; 95% CI 1.179–1.365) . The use of modern drug-eluting stents (DES) and drug-eluting balloons (DEB) in the treatment of CAD has significantly reduced the incidence of restenosis, one of the major complications associated with bare-metal stents . Considering the important role of the inflammatory process in the pathogenesis of restenosis , the use of colchicine would be expected to prevent this complication. In the meta-analysis conducted by Papageorgiou et al. , however, colchicine administration was not shown to reduce the risk of in-stent restenosis (OR 0.61; 95% CI 0.24–1.57; p = 0.31). In a randomized, placebo-controlled study, Deftereos et al. analyzed the effect of colchicine administration on the incidence of in-stent restenosis in patients with diabetes and a contraindication to DES implantation (n = 196). Patients were randomized to receive colchicine 0.5 mg twice daily or placebo for 6 months. The frequency of in-stent restenosis was 16% in the colchicine group and 33% in the control group (OR 0.38; 95% CI 0.18–0.79; p = 0.007), and lumen area loss was 1.6 mm² (interquartile range 1.0–2.9 mm²) in patients treated with colchicine versus 2.9 mm² (interquartile range 1.4–4.8 mm²) in the control group (p = 0.002). Thus, the use of colchicine in diabetic patients after PCI with a bare-metal stent is associated with less neointimal hyperplasia and a reduced incidence of in-stent restenosis . A recently published meta-analysis of 10 studies by Tien et al. showed that the use of colchicine significantly reduced the risk of restenosis after PCI (OR 0.46; 95% CI 0.23–0.92).
The results of experimental studies have shown that the NLRP3 inflammasome can be activated by various SARS-CoV-2 proteins and may then be involved in the development of acute respiratory distress syndrome (ARDS), a complication of coronavirus disease 2019 (COVID-19). Colchicine, as an important inhibitor of the NLRP3 inflammasome, may therefore be useful in the treatment of COVID-19 . In an experimental study in rats with oleic acid-induced ARDS, the efficacy of colchicine in preventing lung damage was investigated. Rats received colchicine 1 mg/kg body weight or placebo for 3 days prior to induction of ARDS. Tests performed four hours after induction showed that colchicine reduced the histological area of lung damage by 61%, reduced pulmonary edema, improved oxygenation of lung tissue by increasing PaO2/FiO2 from 66 ± 13 mmHg (mean ± SEM) to 246 ± 45 mmHg, and decreased pCO2 and respiratory acidosis.
In addition, colchicine decreased pulmonary neutrophil recruitment and the activation of circulating leukocytes. The researchers concluded that colchicine significantly reduced lung damage in this experimental ARDS model . In a clinical trial conducted by Fiorucci et al. involving 30 patients with mild-to-moderate idiopathic pulmonary fibrosis, the effects on the clinical course of the disease of prednisone (n = 11; 1 mg/kg bw/day), prednisone + cyclophosphamide (n = 9; 0.5 mg/kg bw/day and 100 mg/day), or prednisone + colchicine (n = 10; 0.5 mg/kg bw/day and 1 mg/day, respectively) were assessed. Clinical parameters were evaluated before treatment and at 6-month intervals for 18 months; side effects and 3-year survival were also studied. None of the regimens was shown to influence the course of idiopathic pulmonary fibrosis; however, treatment with colchicine plus prednisone produced fewer side effects, and reassessment showed a significant reduction in dyspnea (p < 0.01). There were no significant differences in survival between the 3 groups . In an interesting clinical study by Scarsi et al. , involving 262 patients with COVID-19, the effectiveness of colchicine (n = 122) was compared with standard therapy (n = 140). Standard COVID-19 therapy included hydroxychloroquine and/or intravenous dexamethasone and/or lopinavir/ritonavir; antiviral drugs were discontinued in the colchicine (1 mg/day) group because of potential drug interactions. Patients with COVID-19 receiving colchicine had lower mortality at day 21 of observation than those receiving standard therapy (survival 84.2% vs. 63.6%; p = 0.001). The results of this promising study indicate that administering colchicine to patients with COVID-19 is justified and may have potential benefits . In a prospective, randomized, controlled study conducted by Deftereos et al. , involving 105 patients with COVID-19, the effect of adding colchicine to standard therapy on the concentrations of cardiac and inflammatory biomarkers and on the course of COVID-19 was assessed. Patients were divided into a group receiving standard therapy (n = 50) and a group additionally receiving colchicine (n = 55), administered as a 1.5 mg loading dose followed by 0.5 mg 60 min later and maintenance doses of 0.5 mg twice daily, along with standard treatment, for up to 3 weeks. Standard treatment consisted mainly of chloroquine, hydroxychloroquine, or azithromycin, as well as ritonavir, lopinavir, or tocilizumab. Patients who received colchicine had a statistically significantly longer time to clinical deterioration, but there was no reduction in mortality and no significant differences in the levels of high-sensitivity cardiac troponin or CRP. Thus, it seems that colchicine may improve the prognosis of COVID-19 patients . In a prospective, randomized, double-blind clinical trial conducted by Salehzadeh et al. , the effect of colchicine administration on the course of the disease (symptoms, hospitalization time, incidence of other diseases) was assessed in 100 patients with COVID-19. Subjects were randomized to hydroxychloroquine or hydroxychloroquine + colchicine, with colchicine given at 1 mg/day for 6 days.
It was shown that the compared groups differed statistically significantly in hospitalization time: 8.12 days in the control group vs. 6.28 days in the study group (p = 0.001). Moreover, in patients with COVID-19 receiving colchicine, the incidence of fever was significantly lower than in the control group (2% vs. 22%; p = 0.02). Thus, it seems that adding colchicine to COVID-19 treatment may limit the severity of the disease and the length of hospital stay . The randomized, double-blind, placebo-controlled clinical trial COLCORONA by Tardif et al. , involving 4159 non-hospitalized patients with COVID-19 (confirmed by polymerase chain reaction [PCR]) at high risk of severe disease, assessed the efficacy of colchicine versus placebo. Colchicine was administered at 0.5 mg orally twice daily for the first 3 days and then once daily for the next 27 days. Colchicine use was found to reduce hospitalizations by 25% (OR 0.75; 95% CI 0.57–0.99), the need for mechanical ventilation by 50% (OR 0.50; 95% CI 0.23–1.07), and the mortality rate by 44% (OR 0.56; 95% CI 0.19–1.66). In the colchicine group, the most common adverse reaction was gastrointestinal complaints (23.9% vs. 14.8%). The results of this very promising study may be a breakthrough in the outpatient treatment of COVID-19 patients . The recently published COLORIT study (COLchicine versus Ruxolitinib and Secukinumab in Open-label Prospective Randomized Trial in Patients with COVID-19) indicated the benefits of the following regimen as effective anti-inflammatory treatment for COVID-19: colchicine 1 mg for 1–3 days, then 0.5 mg for 2 weeks. This treatment proved more effective than the administration of the expensive anticytokine drugs ruxolitinib and secukinumab. More aggressive protocols have used a 1.5 mg loading dose of colchicine, an additional 0.5 mg an hour later, and then 0.5 mg twice daily for 3 weeks . Concerns about the impact of chronic diseases and medications on the risk of SARS-CoV-2 infection and the severity of COVID-19 prompted Haslak et al. to conduct a study involving 404 children with autoinflammatory diseases (AIDs). Based on epidemiological interviews and retrospective analysis of the patients' medical records, it was found that during the COVID-19 pandemic 375 patients were treated with colchicine and 48 with biologic drugs. Twenty-four patients were admitted to hospital with suspected SARS-CoV-2 infection, and the infection was confirmed in 7 of them; all patients recovered, and no serious complications were found. The researchers conclude that pediatric patients with autoinflammatory diseases treated with colchicine or biologics may not be at increased risk of either SARS-CoV-2 infection or severe COVID-19 . The results of a meta-analysis of studies assessing the impact of colchicine on the prognosis of patients with COVID-19 are presented in . Currently, several clinical trials are underway to assess the efficacy and safety of colchicine in the treatment of COVID-19 ( ClinicalTrials.gov ). Overall, the results of studies and meta-analyses indicate that colchicine may be effective in treating patients with COVID-19.

Familial Mediterranean fever
The therapeutic efficacy of colchicine was assessed in a double-blind, placebo-controlled study by Dinarello et al. , involving 11 patients with FMF, which showed high effectiveness of colchicine in reducing the number of attacks of the disease (p < 0.001) compared to placebo.
Resistance to orally administered colchicine is observed in a proportion of patients with FMF. The pathomechanism of this resistance is unknown, but these patients were found to have lower concentrations of colchicine in mononuclear cells . In short-term (12-week) studies of intravenous colchicine administration in patients with FMF refractory to oral administration, good efficacy was demonstrated . The study by Grossman et al. evaluated the safety and efficacy of long-term intravenous colchicine administration in patients with FMF refractory to the orally administered drug. The study included 15 patients with frequent attacks of FMF despite the maximum tolerated dose of oral colchicine (2–3 mg/day), who were treated with weekly intravenous injections of 1 mg of colchicine for at least 12 months. Treatment effectiveness was assessed from changes in the frequency, duration, and severity of FMF attacks, and safety on the basis of adverse events. The mean duration of treatment with intravenous colchicine was 5.16 ± 2.85 years. Decreases were observed in the mean monthly rates of abdominal attacks (from 5.6 ± 3.7 to 1.9 ± 3.3; p = 0.0009), joint attacks (from 6.5 ± 5.1 to 1.6 ± 1.6; p = 0.01), and total attacks (from 22.3 ± 16.2 to 7.4 ± 5.7; p = 0.002). The incidence of adverse events was low and mainly related to the gastrointestinal tract, and no serious adverse events were reported. Thus, long-term treatment with intravenous colchicine in patients who do not respond to oral colchicine is effective and safe . Overall, administration of oral or intravenous colchicine is effective and safe in patients with FMF. Colchicine is also effective in the treatment of chronic urticaria, as well as of PFAPA syndrome (periodic fever, aphthous stomatitis, pharyngitis, and cervical adenitis) .
Oral colchicine is well absorbed from the gastrointestinal tract, reaching the maximum concentration in the blood within 1 hour. 30–50% of the drug is bound to plasma proteins, and it penetrates into tissues (leukocytes, kidneys, liver, spleen). Colchicine is partially acetylated in the liver and then slowly metabolized in other tissues. It is mainly excreted in the feces, and 10–20% in the urine. During use, the drug accumulates in tissues. Only 10% of the drug is excreted within 24 hours of taking a single dose; excretion of the active substance may continue for more than 10 days after the end of treatment. The half-life of the drug (T1/2) in people with normal renal function is 4.4 hours, and 18.8 hours in people with renal insufficiency.
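To make the quoted half-lives concrete, a worked example of standard first-order plasma elimination may help; this is an illustrative textbook model only, not taken from the cited source, and (as noted above) tissue accumulation means whole-body elimination is much slower than plasma kinetics alone would suggest:

$$C(t) = C_0\,e^{-kt}, \qquad k = \frac{\ln 2}{T_{1/2}}.$$

With $T_{1/2} = 4.4$ h, $k = \ln 2 / 4.4 \approx 0.158\ \text{h}^{-1}$, so after 24 h only $e^{-0.158 \times 24} \approx 2\%$ of a dose remains in plasma; with renal insufficiency ($T_{1/2} = 18.8$ h, $k \approx 0.037\ \text{h}^{-1}$), roughly $e^{-0.037 \times 24} \approx 41\%$ remains, which illustrates why repeated dosing accumulates markedly when renal function is impaired.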
The safety of colchicine in various diseases was assessed in a meta-analysis of 35 randomized clinical trials by Stewart et al., published in 2020 and including 8659 patients. Subjects were assigned to receive colchicine (n = 4225), placebo (n = 3956), or a comparator (n = 411). The most common diseases in which colchicine was administered were gout, liver cirrhosis, and acute pericarditis. The percentage of patients taking colchicine with any adverse event was 21.1% (95% CI 19.9–22.4%), while in the reference groups it was 18.9% (95% CI 17.7–20.1%). The meta-analysis showed that the overall estimated relative risk for any adverse event in patients using colchicine compared to the reference groups was 1.46 (95% CI 1.20–1.77, p < 0.001). The most common adverse events were diarrhea and gastrointestinal disorders. The subgroup analysis showed that patients with liver disease had the highest relative risk of any adverse event (RR 5.92; 95% CI 2.08–16.82). However, no statistically significant differences in the RR of adverse events were found between patients with various diseases who used colchicine. Moreover, no statistically significant differences in the relative risk of any adverse events were demonstrated for different durations of colchicine intake or different doses (< 50 mg; 50 to < 100 mg; ≥ 100 to 300 mg; and > 600 mg). No adverse event was found to be related to a patient's death. Thus, colchicine increases the incidence of diarrhea and gastrointestinal adverse events but does not increase the incidence of increased liver enzymes, hepatitis, hepatotoxicity and hepatic abnormalities, dysesthesia in the legs and paresthesia, muscle symptoms (myalgia, muscle cramps, elevated creatine phosphokinase and muscle weakness), infections (urinary tract infection, parotitis, shingles, upper respiratory tract infection, nasopharyngitis and sinus congestion), or hematological disorders (anemia, bone marrow toxicity, leukopenia and purpura). Colchicine also does not increase mortality. The results of this meta-analysis indicate that colchicine is a safe and well-tolerated drug. Another clinically important concern is the safety of colchicine use in pregnant women. This issue was the subject of a meta-analysis of 4 clinical trials conducted by Indraratna et al., covering 2125 pregnant women. The effect of colchicine intake (n = 550) on the risk of miscarriage and fetal malformation was assessed compared to women who did not take colchicine (n = 1575). The indications for the use of colchicine were FMF, Behçet's disease, and others. In the analyzed studies, the daily dose of colchicine was 1–2 mg. In the group of women receiving colchicine due to FMF, there was no statistically significant effect of taking colchicine during pregnancy on the risk of miscarriage, fetal malformation, cesarean section, or reduced birth weight. Interestingly, the analysis of all women taking colchicine (regardless of the indication for its use) showed a statistically significant reduction in the risk of miscarriage, an increased risk of premature delivery, a reduction in gestational age and a reduction in birth weight. Thus, treatment with colchicine did not significantly increase the incidence of fetal malformations or abortions during pregnancy. The results of this meta-analysis provide clinically significant information that supports the conclusion that colchicine treatment should not be discontinued in the case of FMF during pregnancy. The safety of colchicine may be limited by the presence of contraindications to its use and interactions with other drugs. Colchicine is metabolized by cytochrome P450 3A4 (CYP3A4) and P-glycoprotein; therefore, the use of drugs that are also substrates for these enzymes may result in lethal adverse effects. Hence, it is important to ask patients if they are using colchicine before prescribing any other medications. The study by Imai et al. assessed the number of prescriptions issued for potentially interacting drugs in patients who were taking colchicine at the same time. The study included 3302 patients regularly taking colchicine. Based on the analysis of prescriptions for other drugs issued to these patients, it was found that 43 (1.3%) of them were taking drugs that were strong inhibitors of CYP3A4 or P-glycoprotein (clarithromycin, cyclosporin and itraconazole). Of these 43 patients, 11 had renal and/or hepatic impairment at baseline. Moreover, it was found that patients with Behçet's disease had the highest risk of interaction with colchicine (OR 4.93; 95% CI 2.12–11.5; p < 0.001). Among patients with Behçet's disease receiving colchicine and an interacting drug, 25% had renal and/or hepatic impairment. Significant drug interactions with colchicine were found in 1% of patients with gout and Behçet's disease. Thus, in clinical practice, it is always necessary to determine whether a patient taking colchicine has contraindications to its use and to check whether other prescribed drugs interact with colchicine. In summary, colchicine is well tolerated and safe, including in pregnant women. In clinical practice, however, the use of colchicine in patients with contraindications, or its combination with interacting drugs, significantly reduces the safety of this drug. Colchicine, a drug with a long tradition, is now experiencing a renaissance. It is a safe and well-tolerated drug. Colchicine is used in rheumatology, cardiology, and neurology, and recently more and more data have emerged on its effectiveness in the treatment of COVID-19 patients. Colchicine may become an important drug in the early stages of COVID-19 treatment.
Real-World Effectiveness and Safety of Two-Drug Single Pill Combinations of Antihypertensive Medications for Blood Pressure Management: A Follow-Up on Daily Cardiology Practice in Douala, Cameroon
9b85055b-ce54-4163-ac2d-fd054fb4ca1f
10129918
Internal Medicine[mh]
Hypertension is a major preventable cause of cardiovascular disease, including stroke, heart failure, and kidney disease, globally. In sub-Saharan Africa (SSA), hypertension affects about one-third of adults. Despite advances in the understanding of pathophysiology and in the therapeutic strategies to manage hypertension, SSA patients continue to have increased morbidity and mortality from hypertension-mediated organ damage (HMOD), due to poor treatment and control rates. Also, compared with other ethnic groups, black patients have a higher prevalence and earlier onset of hypertension, coupled with poorer prognosis. Therefore, early treatment of hypertension and the attainment of blood pressure (BP) goals within the shortest time possible are important to prevent HMOD. BP targets are usually difficult to achieve, and most patients require a combination of two or more drugs. Indeed, several studies showed that adequate BP control in most patients can only be guaranteed by a combination of at least two classes of BP-lowering medications, preferentially in single pill combinations (SPCs) to improve adherence. In line with this, the International Society of Hypertension (ISH) and the European Society of Cardiology/European Society of Hypertension (ESC/ESH) guidelines recommended the use of SPC therapy with either two or three antihypertensive drug classes to initiate treatment in people with hypertension. Five major drug classes have proven evidence of their ability to reduce BP and cardiovascular morbidity and mortality in western countries, including beta-blockers, calcium channel blockers (CCB), renin–angiotensin system inhibitors (RAASi) (angiotensin-converting enzyme inhibitors and angiotensin receptor blockers), and diuretics (DIU) (thiazides and thiazide-like diuretics such as indapamide); the last three classes are currently the mainstay of treatment. The Pan African Society of Cardiology (PASCAR) recommended that these same drug classes, preferentially in SPCs, be used to reduce the burden of hypertension in Africa. Furthermore, PASCAR recently endorsed the ISH guidelines. However, data are lacking to support these recommendations in the context of SSA. Some studies, mainly of observational design and small sample size, suggested that CCB and diuretics are more efficacious as monotherapies in terms of BP-lowering efficacy than RAASi. These studies did not focus on initiating therapy with combinations. The CREOLE trial is, to our knowledge, the only contemporary large randomized clinical trial on the treatment of hypertension in SSA to report the efficacy of two-drug combinations of antihypertensive agents (DIU + CCB, CCB + RAASi, and RAASi + DIU); its findings showed better efficacy for combinations containing amlodipine and a low rate of adverse events for all three combinations. However, these were free pill combinations, while current practice shows a strong shift toward prescribing SPCs in SSA. In this study, we aim to provide real-world data on the effectiveness and short-term safety of different two-drug SPCs of BP medications used in daily cardiology practice in a low-middle-income setting. Study Design and Setting We conducted an in-depth retrospective analysis of the pharmacological strategies of all patients included in the hypertension registry and treated with two-drug SPCs at two reference centers in Douala (Douala General Hospital and Cardiovascular Center of Douala).
The registry was initiated in January 2010, and the current analysis focused on patients included until January 2020. The institutional review board of the Faculty of Health Sciences, University of Buea, approved the study; we further obtained administrative authorization from the participating institutions and conformed to the principles outlined in the Declaration of Helsinki. Eligibility Criteria All patients aged 30 years or older with a clinical diagnosis of hypertension by their cardiologist were included in the registry. Participants who reported being on two-drug SPCs and who had at least three office visits in the 4 months following initiation of drug therapy were considered for analysis. Patients were excluded if they had insufficient diagnostic evidence of hypertension or if they had secondary hypertension. Also excluded were patients with hypertensive disorders in pregnancy, those with clinical manifestations of HMOD (stroke, heart failure, coronary artery disease or renal failure), and those who had a change of the active molecule in their SPC during the 16-week follow-up period. Study Procedure and Data Collection During the study period, diagnosis of hypertension was based on office BP per the ESC/ESH international guidelines, with a threshold of systolic blood pressure (SBP) ≥ 140 mmHg and/or diastolic blood pressure (DBP) ≥ 90 mmHg. Patients diagnosed with hypertension in the two reference settings were systematically screened for other cardiovascular risk factors and clinical cardiovascular, neurological, or renal manifestations. They were invited to undergo minimum laboratory testing including fasting blood sugar, serum sodium and potassium, serum creatinine, total cholesterol, and a 12-lead standard electrocardiogram (ECG). The requirement of an ambulatory BP measurement (ABPM), repetition of laboratory measurements, and any other laboratory tests were left entirely to the judgment of the attending physician. The general recommendation for treatment of hypertension in our setting during the study period was to initiate a two-drug SPC when baseline BP was at least 20/10 mmHg (systolic/diastolic) above the target of less than 140/90 mmHg, thus being at least 160/100 mmHg. When the BP target was not achieved after 4 weeks, the attending cardiologist was free to decide on dose escalation or change of medications. We collected sociodemographic data, associated cardiovascular risk factors, HMOD, weight (in kilograms), height (in meters), BP (mmHg), drug history, and antihypertensive drug adverse events. Outcomes were changes in BP after 16 weeks of follow-up, attainment of the BP target value (less than 140/90 mmHg), and drug tolerance (drug adverse events self-reported by the patient, and/or drug tolerance as perceived by the cardiologist). Office BP was measured before initiating any drug therapy and at 4, 8, 12, and 16 weeks from the index date of two-drug SPC initiation. BP was measured during each visit three times in a seated position using a validated electronic sphygmomanometer (OMRON HEM) at 2-min intervals. The recorded office BP was the average of the second and third measurements. Heart rate was also measured simultaneously. Body mass index (BMI) was calculated by dividing the weight by the height squared. Antihypertensive drug adverse events were assessed clinically on the basis of self-report from the patient and evaluation by the attending cardiologist at any visit.
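As a minimal illustration of the two derived measurements described above (recorded office BP as the mean of the second and third readings, and BMI as weight divided by height squared), a hypothetical helper — not part of the study protocol, with assumed function names — could look like this:

```python
# Hypothetical helpers illustrating the derived measurements described in the
# Methods; names and example values are assumptions, not from the study.

def recorded_office_bp(readings):
    """readings: three (systolic, diastolic) pairs taken 2 min apart.
    Returns the mean of the second and third measurements, per the protocol."""
    (_s1, _d1), (s2, d2), (s3, d3) = readings
    return (s2 + s3) / 2, (d2 + d3) / 2

def bmi(weight_kg, height_m):
    """Body mass index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

# Example: readings of 168/100, 164/98 and 160/96 mmHg -> recorded BP 162/97
print(recorded_office_bp([(168, 100), (164, 98), (160, 96)]))  # (162.0, 97.0)
print(round(bmi(80, 1.70), 1))                                 # 27.7
```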
Regardless of the commercial name and dose of medications, participants were categorized into three subgroups based on the active substances present in the reported SPCs used for treatment. These were CCB + DIU, CCB + RAASi, and RAASi + DIU. The diuretic classes considered were either thiazide or thiazide-like diuretics. Statistical Analysis Data were analyzed using R version 4.2.1. Missing data for BP in the subsequent follow-up visits were replaced with BP values from the most recent visit for the corresponding participant. Baseline characteristics were compared between the different classes of two-drug SPCs; continuous variables were presented as the mean and standard deviation for symmetrically distributed data and the median and interquartile range for skewed distributions. Categorical variables were presented as frequencies and percentages. Mixed repeated measures analysis of variance (ANOVA) was used to analyze BP at different time intervals. Greenhouse–Geisser and Huynh–Feldt corrections were applied to correct for violation of the assumption of sphericity. A mixed linear repeated-measures model was used to evaluate the change of SBP from baseline to week 16, while controlling for age, gender, and baseline SBP. The chi-squared test was used to compare the proportion of patients with controlled BP. In addition, a logistic regression model was fitted to evaluate BP control at weeks 8 and 16 while controlling for baseline SBP, age, and gender, in order to assess the association between the different SPCs and BP control; the proportion of adverse drug reactions was compared across the three groups.
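The analyses above were run in R 4.2.1. Purely as an illustrative sketch of the same pipeline — last-observation-carried-forward imputation, a mixed model for SBP, and logistic regression for BP control — here is a rough Python/statsmodels equivalent; the file and column names (patient_id, week, sbp, dbp, spc_group, age, gender, baseline_sbp) are hypothetical, and this is not the authors' actual script:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Long-format follow-up data, one row per patient per visit (hypothetical file).
df = pd.read_csv("bp_followup.csv").sort_values(["patient_id", "week"])

# Carry the most recent visit's BP forward to replace missing follow-up values.
df["sbp"] = df.groupby("patient_id")["sbp"].ffill()
df["dbp"] = df.groupby("patient_id")["dbp"].ffill()

# Mixed linear model of SBP with a per-patient random intercept,
# controlling for age, gender, and baseline SBP.
mixed = smf.mixedlm(
    "sbp ~ week * spc_group + age + gender + baseline_sbp",
    data=df, groups=df["patient_id"],
).fit()
print(mixed.summary())

# Logistic regression for BP control (<140/90 mmHg) at week 16.
wk16 = df[df["week"] == 16].copy()
wk16["controlled"] = ((wk16["sbp"] < 140) & (wk16["dbp"] < 90)).astype(int)
logit = smf.logit(
    "controlled ~ spc_group + baseline_sbp + age + gender", data=wk16
).fit()
print(logit.summary())
```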
Of a total population of 549 patients with hypertension on combination therapy, 377 (68.7%) participants were on two-drug fixed-dose combinations at baseline, including 123 on CCB + DIU, 96 on RAASi + CCB, and 158 on RAASi + DIU (Fig. ). Their mean age was 54.6 (± 11.3) years, similar across the different subgroups of SPCs. There were 187 (49.6%) male participants. Also, half of the participants reported a history of hypertension, with a median duration of 2.0 (IQR 1.0–7.0) years. The baseline SBP was 168.3 (± 12.5) mmHg and was slightly higher in participants on RAASi + CCB combinations compared to the other two subgroups. In addition, the baseline DBP was 99.2 (± 10.0) mmHg, with no significant differences between the different subgroups of SPCs. Table summarizes the baseline characteristics of study participants by subgroup of SPCs used for treatment of hypertension. Effectiveness of Different Dual Therapy Combinations on Blood Pressure The distribution of active molecules in the different SPCs is shown in Fig. . Of the 283/377 (75.1%) patients taking an SPC containing a diuretic, 212/377 (56.2%) were taking indapamide and 71/377 (18.8%) were on hydrochlorothiazide. An SPC containing perindopril was prescribed in 59.8% of the 254 (67.4%) patients on RAASi, while amlodipine was prescribed in 95.9% of the 219 (58.1%) patients receiving an SPC with a CCB. A decrease of 34.3 (± 14.2) mmHg in SBP was observed at week 16, with no significant difference between the three groups of SPCs ( p = 0.118). There was a slightly greater decrease in SBP in participants on SPCs containing a CCB compared to participants on SPCs comprising RAASi + DIU. However, this was not statistically significant across the different time periods of follow-up (Table ). A mixed linear model adjusted for age, gender, and baseline SBP revealed that the follow-up period had a significant main effect on the change in SBP from baseline to 16 weeks, F(3, 1122) = 36.03, p < 0.001, but there was no significant main effect of the type of SPC on SBP change. Similarly, no significant interaction effect was observed between the follow-up period and the type of SPC. From week 4 to week 16, there was a linear trend in SBP change, with a mean change at 4 weeks and 16 weeks of 28.7 (16.6) mmHg and 34.3 (14.2) mmHg, respectively. A post hoc analysis of the various follow-up periods revealed a significant difference in SBP change between weeks 4 and 16, p < 0.01. Further analysis of the BP changes from baseline across the different time points revealed no significant interaction effect between follow-up time and type of SPC on SBP (Fig. ). There was no significant interaction effect between follow-up time and the group of SPCs on SBP, F(5.5, 1028.3) = 1.55, p = 0.16. Likewise, the main effect of the type of SPC on BP control was not statistically significant, F(2, 374) = 3.1, p = 0.05 (Fig. ). However, there was a significant main effect of the period of follow-up on SBP, F(2.75, 1028.3) = 943.3, p < 0.001. Pairwise post hoc comparisons for follow-up time and SBP were statistically significant across the different follow-up periods. Furthermore, no significant interaction effect was observed for the SPC type and follow-up time on DBP change, F(5.7, 1073.9) = 0.7, p = 0.6. Similarly, there was no significant main effect of the class of SPC on BP, F(2, 374) = 1.0, p = 0.3 (Fig. ). There was a significant main effect of follow-up time on DBP, F(2.9, 1073.1) = 594.4, p < 0.001.
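The exact model specification is not reported in the text; one plausible random-intercept formulation consistent with the description above (an assumption, not the authors' stated model) is

$$\mathrm{SBP}_{ij} = \beta_0 + \beta_1 t_j + \beta_2\,\mathrm{SPC}_i + \beta_3\,(t_j \times \mathrm{SPC}_i) + \beta_4\,\mathrm{age}_i + \beta_5\,\mathrm{sex}_i + \beta_6\,\mathrm{SBP}^{(0)}_i + u_i + \varepsilon_{ij},$$

where $u_i \sim \mathcal{N}(0, \sigma_u^2)$ is a per-patient random intercept, $\varepsilon_{ij} \sim \mathcal{N}(0, \sigma^2)$ is the residual error, and $\mathrm{SBP}^{(0)}_i$ is baseline SBP; the reported $F$ tests would then correspond to the time, group, and time × group terms.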
Rate of BP Control Two-thirds of patients had their BP controlled at 16 weeks of treatment, with a slightly higher percentage among participants on a combination with a CCB (Table ). Logistic regression models were fitted to estimate the effect of SPCs on blood pressure control at week 8, while controlling for baseline SBP, gender, and age. With the RAASi + DIU group used as the reference group, the odds ratios of BP control at week 8 were 1.40 (95% CI 0.85, 2.33) and 0.79 (95% CI 0.46, 1.36) for the CCB + DIU and RAASi + CCB groups, respectively. Similarly, the odds ratios for BP control at week 16 were 1.27 (95% CI 0.75, 2.17) and 0.83 (95% CI 0.48, 1.45).
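For interpretation, the odds ratios above are exponentiated logistic regression coefficients; under the usual Wald construction (assumed here, as the text does not specify it),

$$\mathrm{OR} = e^{\hat\beta}, \qquad 95\%\ \mathrm{CI} = \Big[\, e^{\hat\beta - 1.96\,\widehat{\mathrm{SE}}(\hat\beta)},\; e^{\hat\beta + 1.96\,\widehat{\mathrm{SE}}(\hat\beta)} \Big],$$

so, for example, the week-8 OR of 1.40 with 95% CI 0.85–2.33 crosses 1 and therefore does not indicate a statistically significant difference from the RAASi + DIU reference group.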
Adverse Events Rates of adverse events (Table ) did not differ significantly among the three groups. The overall incidence of adverse events was 3.4%, and fatigue was the most frequently reported. Unproductive cough was reported by three participants. This observational study of patients who had been receiving one of the three most recommended SPCs of antihypertensives (CCB + DIU, RAASi + CCB, and RAASi + DIU) in our hypertension registry in the city of Douala revealed three major findings. First, we found a similar reduction in the BP variables and control rates across the 16 weeks of follow-up, although there was a significant advantage in BP control rates for SPCs with CCB + DIU at 8 weeks. Second, two-thirds of patients had their BP controlled at 16 weeks of treatment. Third, the overall incidence of adverse events was low (3.4%), with similar rates among the three groups. The above suggests that in black patients residing in SSA, the three most recommended SPCs are equally effective in reducing and controlling BP, with a slight advantage for combinations with a CCB in reducing BP, as described by the CREOLE trial, and that all three are equally safe (Table ). Approximately 150 million Africans are hypertensive, and most are either not aware, not treated, or poorly controlled; since this number may grow, PASCAR has proposed strategies for intensive BP control. In populations residing in SSA, hypertension develops at younger ages, is often more severe in terms of BP levels, and is associated with low control rates, resulting in a higher incidence of organ damage including heart disease, kidney disease, and stroke. Experts have advocated that hypertension control worldwide, and especially in Africa, is feasible if SPCs were widely prescribed, as this could increase the number of patients with controlled hypertension by 80 million and could prevent two million stroke and heart attack events and more than 600,000 cardiovascular-related deaths over 5 years. With 62.3% of our patients having their BP controlled at 16 weeks of treatment, our findings strongly support the view that early prescription of effective SPCs would help attain BP targets. The effectiveness of SPC antihypertensive drugs has been proven elsewhere, in addition to improved patient adherence (which is unfortunately low in hypertension) and more rapid and greater BP lowering compared to the use of single-drug medications. Whether these benefits apply to all ethnic groups at the same magnitude and for all SPCs, and whether adverse effects are similar, needed more investigation in the black population residing in SSA.
Although black patients with hypertension are traditionally known to show a reduced antihypertensive response to RAASi monotherapy, while usually responding more effectively to thiazide or thiazide-like diuretics and CCBs, there was no special emphasis on a combination based on CCB + DIU until the CREOLE trial, which found that combinations of amlodipine + hydrochlorothiazide or amlodipine + perindopril are more efficacious than perindopril + hydrochlorothiazide in reducing BP in black patients as part of free two-drug combinations, with similar rates of adverse events among the three treatment groups. On the basis of this trial, the ISH guidelines recommended the elective prescription of either an SPC of CCB + RAASi or CCB + DIU to treat hypertension in black patients, with a preference for ARBs due to the frequent risk of angioedema with ACE inhibitors among black patients. With similar rates of BP control and adverse events across the three groups over 16 weeks, our findings support the CREOLE trial in the sense that a combination of RAASi + DIU is also effective in lowering and controlling BP with no excess adverse events. The implication of these findings is that in our clinical practice settings, all three recommended combinations should be encouraged for use. If a selection were to be made, the clinician's decision must be guided by criteria other than effectiveness or short-term safety, including availability, price, and the patient's profile. The latter point is important because HMOD is common in Africa and SSA patients with hypertension have an average of 3.7 cardiovascular risk factors. Strengths and Limitations This study has several limitations including non-randomization, its short treatment period, the relatively small-sized treatment subgroups, and the reliance on office BP alone (without out-of-office BP monitoring) to evaluate drug effectiveness. While comparisons between groups were limited, it should be noted that for a real-world study, our patients' baseline characteristics were grossly similar. The SPCs used in this study were quite heterogeneous (type and dosage) and we did not assess medication adherence, which could have affected BP control. In addition, by excluding individuals with clinical manifestations of HMOD and those who had their treatment changed during follow-up, we likely selected a population with higher chances of having a better response to antihypertensives. Although these shortcomings may limit our ability to extrapolate our findings on the observed effectiveness and tolerance of the three SPCs evaluated, these findings are consistent with previous literature demonstrating that SPCs are effective in improving hypertension control and are well tolerated. In addition, long-term studies in western countries demonstrated that using SPCs to treat hypertension is associated with a significant reduction in cardiovascular events. Large-scale studies are needed in our populations to ascertain the long-term tolerance and effectiveness of SPCs.
The findings of our study suggest that the three two-drug fixed-dose combinations CCB + DIU, CCB + RAASi, and RAASi + DIU are highly effective in reducing and controlling BP in patients without clinical evidence of HMOD, with low and similar rates of short-term adverse effects. Although long-term data are still needed to investigate whether this translates into a similar reduction in major cardiovascular and renal events, we believe that SPCs, rather than free dual-pill therapy, represent a simple and potentially low-cost intervention that could significantly reduce the burden of hypertension in our setting. Specific interventions to address the problem of hypertension in Cameroon must therefore include the incorporation of these SPCs in the country's essential medicines list, and they must be made widely available and affordable. However, we recognize that in individuals with mildly elevated BP, monotherapy would still be appropriate.
The emotions experienced by family medicine residents and interns during their clinical trainings: a qualitative study
4d3059e1-334d-4642-9ba9-f642e572122d
10130843
Family Medicine[mh]
Medical education is typically regarded as highly stressful (Ofri, ). Doctors are trained in a culture that has high expectations. High achievers compete with each other, with little or no room for error, in order to strive for excellence. Moreover, medical students and residents are confronted with uncertainties in their role descriptions, and they witness illness, suffering and death as part of their everyday duties. Both medical students and family medicine residents work under pressure, which may easily lead to burnout (Soler et al., ). In a European General Practice Research Network (EGPRN) study including 12 European countries ( n = 1393), 43% of respondents scored high for "emotional exhaustion burnout", 35% for "depersonalization" and 32% for "personal accomplishment", with 12% scoring high burnout in all three dimensions (Soler et al., ). In a Turkish study, Kosan et al. reported burnout in about 70% of family physicians ( n = 246); their study showed particularly high levels of "emotional exhaustion" (Kosan et al., ). Under this heavy workload, medical students and residents do not have time to think, talk or reflect on their emotions (Helmich et al., a). In fact, being aware of and able to regulate emotions is essential to the doctor–patient relationship and to building medical teamwork. Furthermore, being aware of and able to understand and manage emotions in oneself and others is critical for medical students' and family medicine residents' personal wellbeing (Satterfield and Hughes, ; Shapiro, ). Medical students and family medicine residents may face intense emotions in patients, causing similar emotional reactions in themselves (Helmich et al., b). However, evidence shows that emotional learning processes tend to be underestimated (Karnieli-Miller et al., ), and doctors may seem reluctant to confront their own emotions (Helmich et al., a; b). Being able to understand and regulate emotions is considered a critical feature of medical students' and residents' overall clinical performance, including diagnostic processes, medical decision-making, and interpersonal relationships (Croskerry et al., ). However, only a few publications have demonstrated the need for a set of skills that medical students or residents should develop in dealing with emotions (Satterfield and Hughes, ; Shapiro, ; Cherry et al., ). Therefore, the aim of this study was to explore family medicine residents' and final-year medical students' emotions during their clinical trainings, the emotions experienced during their patient encounters, and the strategies they use to regulate emotional experiences and responses to stress. This qualitative study was performed with 15 family medicine residents and 24 final-year medical students, using a convenience sample from two medical faculties, to explore and analyze their emotions during their clinical trainings. Data were gathered by means of focus group interviews, including six interviews conducted and recorded through online meetings. An information meeting on the subject was held for the residents and students, and volunteers were included in the study. Interns and residents who agreed to participate in the study were invited to the online platform where the interview would take place. The meeting was held with a facilitator and an observer. At the beginning of the interview, the participants were informed about the subject and purpose of the study.
In addition, their verbal consent for the study and the recordings was obtained. The interviews were recorded both as audio-video and through observer notes. Sessions lasted approximately 45 min to 1 h. The interviews were conducted by three researchers (OT, SP, SH) who were trained in qualitative research. Participants were asked about demographic characteristics in the first part of the interview, and then semi-structured questions developed from a literature review were asked. The main questions included in the focus groups were: What kind of emotions in general do you experience in clinical settings? What kind of emotions do you experience during patient–physician interviews? How have you been able to cope with emotionally difficult situations? A focus group format was selected for this study, as this method is useful for exploring the views, opinions, knowledge, experiences and needs of participants. With relatively few qualitative studies on the topic, an inductive thematic analysis approach was selected for this exploratory study. A preconceived theoretical framework was not used; instead, the researchers allowed themes to emerge as the data were analyzed. Data were analyzed using a "thematic analysis" approach. In this analysis, we used the six steps proposed by Braun and Clarke (Braun and Clarke, ). During the initial step, two researchers reviewed the transcripts to better understand the content. In the second step, the primary researcher labeled the important issues in the data; the codes were then labeled based on the interpretations using open coding. Transcripts were independently read and coded by the two authors. In the third step, relationships among the concepts were investigated and grouped into themes. In the fourth step, discussion and comparison of coding led to the identification of themes. In the fifth step, each theme was defined and named. Finally, in the sixth step, the relationships between the themes were discussed and a report was written. This was reviewed by the research team and revised through discussion. A final code book was agreed upon that included clear definitions of themes and sub-themes. Since the interviews reached saturation in terms of content, they were terminated at the end of the sixth focus group meeting. Each interview took an average of 45–60 min. Six video recordings were made with the permission of the participants during the interviews. The audio recordings of the interviews were transcribed by the researchers who conducted the interview on the same day. This study was approved by the local Research Ethics Board. Permission was obtained from the ethics committee for the video recordings as well. The informed consent form of the study was verbally explained to the participants and their verbal consent was obtained for the study and video recordings. All data were analyzed anonymously. Our research was prepared in accordance with the Declaration of Helsinki, as revised in 2000. A total of 39 participants were interviewed. Of these, 26 were female. The mean age was 25.41 ± 1.72 years (min 23–max 32). Fifteen were second-year family medicine residents and had experience of a GP placement in their rotations. Fourteen were final-year medical students (interns) from a governmental medical school, while 10 were interns from a foundation university. Demographic details are shown in Table .
The interns and residents were happy to share their emotions frankly and were grateful to the researchers, as no one had asked them about their feelings before. Themes and subthemes are described below, with participant quotations identified by the participant's gender, profession (intern or resident), faculty (Foundation University: FU; Governmental University: GU) and age, given in parentheses (Table ). Overall, three main themes emerged from our data regarding residents' and interns' emotions. These were the "clinical climate's role", "emotions during patient encounters" and "coping strategies with negative emotions". The main themes and the subthemes are demonstrated in a concept map in Figure . Our first theme was the "clinical climate's role". When interns or residents start a new rotation, they are unfamiliar with the new environment, and if no one provides an orientation, or if the medical staff (nurses, residents, etc.) are too busy and do not seem very helpful, the newcomers may feel that they are not welcome in that clinic and report negative feelings from then on. Moreover, they also state that in some clinics they are ignored, or not heard or valued. Therefore, the subthemes associated with our first theme were "feelings of being unfamiliar", "feelings of not being understood, heard or valued" and "communication problems due to uncertain role descriptions", reflecting that the students/residents were not used to the routines of the clinic; all of these can prevent the achievement of the goals of clinical education. All of these experiences can have negative effects on the students' thought content and emotional world. The emotion labels related to these subthemes were feelings of worthlessness, helplessness, tension and anxiety, followed by frustration and uncertainty. Below are some examples related to this theme: "….in the clinic I'm sick of trying to be useful, what do I do to the nurse? …she doesn't record or doesn't want to understand, even if I express it as accurately as I can. I don't feel understood, I feel like worthless, feel very helpless" (F, Intern, FU, 24y). "…again, I found myself alone dealing with this acute medical emergency, no one else is helping me in the clinic." (M, Resident, GU, 25y). In Turkey, final-year medical students are called "intern doctors"; they are no longer students, but they have not yet graduated as medical doctors. For this reason, they express that they have problems with other healthcare team members because of this uncertainty. Interns also stated that they had difficulties in the clinic, in relations with patients and their relatives, in relations with other healthcare professionals, and in recognizing their own roles and responsibilities. Interns stated that defining their roles in the clinical environment is important not only for themselves, but also for other healthcare professionals, patients and educators, and that they have difficulties in clinical settings when roles are not well defined. Below is an example related to this subtheme: "Different attitudes created confusion about what we should do and should not do. They did not allow us to do anything when we were interested. When we kept our distance and stood on the sidelines, we were pressured to answer why we weren't interested this time" (M, Intern, FU, 25y). Our second theme was "emotions during patient encounters".
Under this theme, we identified two subthemes: "feelings depending on patients' condition" and "hiding emotions during patient encounters". "Feelings depending on patients' condition" Both residents and interns stated that their feelings changed according to the patient's condition. If they found the patient too complex, they had feelings of insufficiency. They thought that they had insufficient medical knowledge and skills. They felt uncertain of their medical knowledge and skills and feared making mistakes, followed by stress and anxiety: "….Besides, if the patient is a complicated case, I feel stressed and I have thoughts like what would I do if I were alone, could I be enough?" (F, Intern, FU, 24y) "… during our medical education we memorize the rarest things, syndromes etc. But when it comes to patient encounter, I can't help, but, feel stressed and inadequate…." (F, Resident, GU, 29y) On the other hand, if the chief complaint seemed too minor, they expressed feelings of anger toward the patient or losing their calm: "….in the ER, in the middle of the night that man comes with first degree burn in his hand. I really get mad, because he is stealing my time and he is stealing from other patients' time…" (M, Resident, GU, 29y) If the residents or interns thought that they had managed the patient well, they had feelings of happiness, compassion, relief, joy, pride, satisfaction and confidence: "…It's a relief when you examine and treat such a patient. I feel joy, happiness and satisfaction after seeing the patient…" (M, Intern, FU, 24y). "Hiding emotions during patient encounters" Another subtheme was "hiding emotions during patient encounters". Both residents and interns stated that they struggled with hiding their emotions from patients. The majority of interns and residents believed that physicians should hide their emotions from their patients and that showing emotions openly was unprofessional. Furthermore, they thought that becoming more apathetic and being able to distance themselves from patients would be an asset in their medical career. Although they experienced positive emotions such as excitement, joy, pride and happiness, they thought that they should not share these feelings with their patients. They also stated that they should hide negative emotions like burnout/exhaustion, boredom, stress, upset, sadness, anger and anxiety. Below are some examples: "…when physicians make decisions that will affect the life of that patient, only science can influence medicine, we must act without causing emotionality" (M, Resident, GU, 25y). "…even if we are doctors, we are all human, we have feelings. But when we are seeing the patient that sentimentality should stay out of the door" (F, Intern, FU, 23y). The emotion labels related to these subthemes under the theme of "emotions during patient encounters" were excitement, stress, feelings of insufficiency and inadequacy, feelings of anger toward the patient or losing calmness, uncertainty, and feelings of happiness, compassion, relief, joy, pride, satisfaction and confidence, followed by burnout/exhaustion, boredom, upset, sadness, anger and anxiety. Our third theme was "coping strategies with negative emotions". Residents and interns were seen to cope with emotions in various ways. Under this theme, we identified three subthemes: "emotional awareness", "accepting the situation" and "loss of feelings". "Emotional awareness" Residents and interns mentioned that awareness of their own emotions and expressing their feelings was the first step toward improving coping strategies and better clinical outcomes. Below are statements from the residents and interns regarding this theme: "I know that I need to calm my current emotional intensity, calm it down. After I calm down, I communicate with the patient. If I don't communicate with the person I'm having trouble with, I can't relax.
This is how I deal with my problems…” ”(F, Intern, FU, 24 y) . “It’s like being… I’m trying to get over it this way. I use positive language to inspire myself. I have difficulties with words while emphasizing the importance of positive language for coping” (F, Resident, GU,32 y) “Accepting the situation” On the other hand interns and residents cope with their emotions reminding themselves that emotionally intense situations will be part of their job and they need to get used to it. “…I try to adapt and accept the problems I face, thinking of the worst. When my patient died, I could not cope. Every time I go to the hospital I wondered if my patient will die today too? I will lose another patient every week I come. ….but I don’t know how it happened, frankly, …then maybe I accepted. This is my profession, I am a healthcare worker and it is in the nature of my profession to lose people as well as to win. Knowing this, I must continue my profession”(F, Intern, FU,26 y) . “I started to think that the feeling of acceptance begins with the expression of your own emotions first. …I don’t compare myself to anyone. I accept this way. It’s your problem…I mean something you can deal with. Maybe I need time. Maybe I need another feeling. Hope or motivation” (F, Resident, GU,,27 y) . The emotion labels related to these subthemes under the theme of “Coping strategies with negative emotions” were the “calming down, feeling of acceptance, and hope for the future”. “Loss of feelings” Our third subtheme was the “ loss of feelings ”. Seeing too many patients and trying to hide emotions may have caused the loss of feelings. In addition to that losing feelings could be related with denial of situations in the experience of patient encounters. Residents and interns described this situation as “behaving like a robot”. They think that they distance themselves from the patient’s feelings in order to protect themselves from distress: “In order to be able to do this profession, I say to myself: You are a doctor and everyone has a duty, a purpose for existence. That patient…she needs you and you should do your best for her well-being. You cannot feel sorry for her” (M, Intern, FU, 24y) . “As I am still a student, I feel a slight excitement and also stress. Being a doctor is like a role played most of the time, and we all play the subconscious doctor type with our own language. I can also connect it to some kind of an auto-pilot” (M, Intern, FU, 23y) . “…examining too many patients makes you numb-you became like an automated robot”(M, Resident, GU,27 y) . Residents and interns mentioned the importance of awareness of their own emotions and expressing their feelings was the first step in improving coping strategies and better clinical outcomes. Below are the statements of the residents and interns regarding this theme: “I know that I need to calm my current emotional intensity, calm it down. After I calm down, I communicate with the patient. If I don’t communicate with the person I’m having trouble with, I can’t relax. This is how I deal with my problems…” ”(F, Intern, FU, 24 y) . “It’s like being… I’m trying to get over it this way. I use positive language to inspire myself. I have difficulties with words while emphasizing the importance of positive language for coping” (F, Resident, GU,32 y) On the other hand interns and residents cope with their emotions reminding themselves that emotionally intense situations will be part of their job and they need to get used to it. 
We have shown the overall emotions in a word cloud in Figure . The most commonly perceived emotions were “tension and anxiety”, followed by “happiness, compassion and excitement”. “Burnout, exhaustion, boredom, stress, anger and upset” were also frequently mentioned feelings. To the best of our knowledge, this is the first study to explore the emotions of family medicine residents and interns during their clinical trainings in Turkey. During clinical trainings, the emotional experiences of both family medicine residents and interns are critical for several reasons. First, emotions have important implications for cognitive processes such as learning and motivation (Pekrun et al., ). Second, emotions help residents’ and interns’ professional identity development (Helmich et al., ; Helmich et al., ; Dornan et al., ). Third, during challenging conditions, high stress and anxiety may cause a decline in empathy, blunting of emotions, and, consequently, burnout (Lee et al., b; Soler et al., ; Paro et al., ; Romani and Ashkar, ). Therefore, awareness of one’s own emotions can help with coping with emotional reactions, and through this skill residents and interns can promote their professional well-being. For this reason, there is a need to identify and reflect on emotional experiences in medical education (Post, b; Thommasen et al., b). In this study, our first theme was “the clinical climate’s role”.
In Turkey, family medicine residents and interns must complete core rotations that may last up to 4 months or longer at the hospital. During these rotations, they have similar responsibilities to the residents of that clinic. Because rotating trainees are sometimes perceived as people who will simply come and go, the climate in a particular clinic may not be accommodating. The subthemes associated with our first theme were “feelings of being unfamiliar”, “feelings of not being understood, heard or valued” and “communication problems due to uncertain role descriptions”; because interns/residents were not used to the routines of the new clinic, all of these can prevent the achievement of the goals of clinical education. Medical education programmes are designed as a rotation through a series of departments at regular intervals. When residents or interns start a new rotation, they need supervision and support to adapt to a new clinical setting (Holmboe et al., ). Therefore, to overcome feelings of unfamiliarity, there should be orientation sessions whenever a resident or intern starts a new rotation. Many residents find transition experiences difficult and stressful (Brennan et al., ; Sturman et al., ; Coakley et al., ). A supportive and positive climate created by clinical consultants may decrease the initial stressors for trainees (Wiese and Bennett, ). In addition, in this study we demonstrated that family medicine residents and interns struggle with emotional strains during their clinical trainings and patient–doctor interviews. The family medicine residents and interns mentioned feeling various emotions, as well as a blunting of feelings, in the descriptions that formed our three main themes: “the clinical climate’s role”, “emotions during patient encounters” and “coping strategies with negative emotions”. The most commonly perceived emotions were “tension and anxiety”, followed by “happiness, compassion and excitement”. “Burnout, exhaustion, boredom, stress, anger and upset” were also frequently mentioned emotions. Several studies show that medical students experience intense emotions during their patient encounters (Pitkälä and Mäntyranta, ). For example, Clay et al. described themes of emotions as follows: sorrow, gratitude, personal responsibility, regret, shattered expectations and anger (Clay et al., ). Other studies, mainly focusing on empathy, reveal feelings of uncertainty and helplessness (Halpern, ; Nevalainen et al., ; Neumann et al., ; Burks and Kobus, ; Nevalainen et al., ; Preusche and Lamm, ). In addition, in an interprofessional study, the emotions of anxiety, sadness, empathy, frustration and insecurity were reported during difficult healthcare conversations (Martin Jr et al., ). Despite these studies, the emotions of interns and family medicine residents have rarely been the focus of systematic research. In the literature, students’ or residents’ intense emotions have typically attracted attention in challenging situations such as anatomy dissections, autopsy encounters, or patient deaths (Bamber et al., ; Sándor et al., ; Trivate et al., ). Outside these highly emotional situations, the emotions of medical students and residents have not been the main character on the clinical research scene.
Therefore, the researchers of the current study consider the gratitude that interns and residents expressed for being allowed to voice their emotions to be a finding in itself: it shows a critical need for reflection/mindfulness sessions in which residents and interns learn how to become aware of their emotions and then how to express and regulate them during their clinical trainings. Beyond the intense emotions encountered during anatomy dissections, autopsy sessions, or patient deaths, it has been stated that patient encounters and communication practices, which play a major role in the development of professional identity, are also emotionally challenging and stressful for students and residents (Sharif and Masoumi, ; Arieli, ). In our study, both residents and interns revealed that their feelings changed according to the patient’s condition. If they found the patient too complex, they had feelings of insufficiency, believing that their medical knowledge and skills were inadequate. They feel uncertain about their medical knowledge and skills and fear making mistakes, which is followed by stress and anxiety. When extreme or prolonged, stress can create several health problems, including burnout. Physical and mental health problems may appear due to burnout and may have direct effects on the quality of care provided to patients. Stress in the workplace has been identified as a major problem for family physicians (Post, a). The term burnout is defined as a combination of emotional exhaustion, feelings of depersonalization and a perceived lack of personal accomplishment. A survey of rural family physicians in 2001 showed a self-reported burnout rate as high as 55% (Thommasen et al., a). It is known that burnout rates increase during residency; therefore, interventions in medical education are necessary to help residents and interns identify and cope with the emotions they may feel (Soler et al., ; Lee et al., a; Romani and Ashkar, ). Reflection is needed to normalize their emotions. This may allow residents and interns to develop resilience to emotionally challenging situations. On the other hand, if the residents or interns think that they have managed the patient well, they feel happiness, compassion, relief, joy, pride, satisfaction and confidence. On the contrary, if the chief complaint seemed too minor, especially during ER visits, the residents and interns expressed anger toward patients or mentioned losing their temper. Similarly, Isbell et al., using grounded theory, conducted 86 semi-structured qualitative interviews with experienced emergency department (ED) providers. They found that patients triggered both positive and negative emotions. In their study, providers described frustration with certain types of ED visits, especially inappropriate visits for services that the ED does not need to provide (eg, treatment for seasonal colds) (Isbell et al., ). Our final theme was “coping strategies with negative emotions”. Residents and interns cope with their emotions in various ways. Under this theme we identified three subthemes: “emotional awareness”, “accepting the situation” and “loss of feelings”. Self-awareness is the ability to know one’s emotions, strengths and weaknesses, and it is one of the vital components of emotional intelligence.
Emotional intelligence is the ability to be aware of and understand emotional states in oneself and others and to regulate one’s emotions effectively. It is well established that emotional intelligence is positively associated with communication skills in medical students’ and residents’ performances. Therefore, emotional intelligence is an important skill that should be incorporated into residents’ and interns’ formal professional skills training (Salovey and Mayer, ; Gross and John, ; Libbrecht et al., ; Bourgeon et al., ). Several studies have demonstrated emotional detachment among medical students in emotionally challenging situations, mostly because of self-preservation, meaning that students think they must distance themselves from the patient’s feelings to protect themselves from distress (Doulougeri et al., ). Emotional suppression is used as a coping mechanism, and distancing from patients is also considered a strategy for managing the stress of breaking bad news (Neumann et al., ; Burks and Kobus, ; Eikeland et al., ; Toivonen et al., ). Emotional detachment, or loss of feelings, was also described in our results. Our findings reveal that residents and interns lack proper coping strategies during challenging patient encounters. The study findings resonate with those of previous studies dealing with emotional detachment as a coping strategy. For example, Gaufberg et al. reported that medical students described the need to actively suppress emotions in response to the powerful incidents of hospital life (Gaufberg et al., ). In medical education, the hidden curriculum may encourage the suppression of emotions and distancing from the patient as an unwritten cultural norm. Several studies report depersonalization, burnout and loss of empathy during medical training (Coulehan and Williams, ; Hojat et al., ; Coulehan, ; Thomas et al., ; Neumann et al., ; Neufeld and Malin, ). To the best of our knowledge, this is the first study investigating family medicine residents’ and interns’ emotions during their clinical trainings in Turkey. Our findings suggest a need to further evaluate residents’ and interns’ emotions during their clinical trainings using ecological momentary assessments (EMAs). EMAs study people’s thoughts and behavior in their daily lives by repeatedly collecting data in an individual’s normal environment, at or close to the time the behavior occurs. In addition, investigations of a larger national sample of residents’ and interns’ emotions will facilitate the understanding of emotions during patient encounters and will help the implementation of reflection or emotion regulation skills programs in Turkey. Conversely, several methodological limitations must be considered when interpreting our findings, which has implications for guiding future research. First, this study included only two medical schools in Istanbul with a relatively small number of residents and interns; second, all data were self-reported and subjective, in line with the nature of qualitative studies. Therefore, the generalizability of our findings may be limited, but they highlight the need for education on emotions (Hamilton-West et al., ). Emotions are critical in medical education; therefore, residents and interns should be empowered with the skills to acknowledge, accept and regulate them. Overall, our results have several practice implications.
Educators need to understand that challenging encounters evoke many complex emotions in students. First, emotions should be actively discussed in communication skills training during the undergraduate years. Students should be encouraged to accept their emotional experiences and supported in finding strategies for coping with them. Second, emotions arising in authentic clinical education should be systematically reflected upon. Third, medical teachers need education in reflecting on emotions as part of their teaching practices so that they can constructively address emotional issues. Identifying and normalizing uncomfortable emotions and developing new ways to help learners cope and adapt while remaining empathic and emotionally available to their patients are very important (Toivonen et al., ). Emotions should be explicitly incorporated into medical education, and interns and residents should be supported in coping with these emotions in order to foster their professional growth and well-being.
Impact of pharmacogenomic
2f846cef-28ed-44c2-bccb-dcee3887bf47
10131438
Pharmacology[mh]
Fluoropyrimidines are integral chemotherapy drugs in the treatment of gastrointestinal cancers . In European oncology practice, the fluoropyrimidines 5-fluorouracil (5FU) and the orally bioavailable capecitabine are amongst the most commonly prescribed drugs for systemic therapy of tumours arising from the lower gastrointestinal tract (colon, rectal, anal canal) and upper digestive tract (oesophagogastric, hepatopancreatic biliary). Fluoropyrimidines are also commonly used in the management of head and neck and breast cancers . Severe fluoropyrimidine chemotherapy toxicity occurs in approximately 30% of recipients . Toxicity is characterised by myelosuppression, gastrointestinal toxicity (including diarrhoea and mucositis), hand-foot syndrome and cardiac toxicity . Mortality from severe fluoropyrimidine toxicity occurs in 0.1–0.5% of patients . Dihydropyrimidine dehydrogenase (DPD) is a critical protein in the enzymatic degradation of fluoropyrimidines. In the rate-limiting step of fluoropyrimidine clearance, 5FU is converted to 5-dihydrofluorouracil by DPD in the liver, and DPD is responsible for 80–85% of the catabolism of fluoropyrimidines to inactive metabolites . Genetic variants in the corresponding encoding DPYD gene occur in approximately 8% of patients of Caucasian ethnicity. The variants DPYD *2A (c.1905+1G>A), c.2846A>T, DPYD *13 (c.1679T>G) and c.1236G>A are associated with attenuated enzymatic activity and more frequent fluoropyrimidine-related adverse events . Following the pivotal study by Henricks et al. , which demonstrated that pre-emptive fluoropyrimidine dose reductions based upon DPYD variants reduce severe toxicity, DPYD variant testing and pre-emptive fluoropyrimidine chemotherapy dose reduction were implemented by the Gastrointestinal Unit at the Royal Marsden Hospital in November 2018. In this study, we conducted an audit to assess the implementation of DPYD variant–guided dosing and its effect on severe toxicity. To identify patients receiving fluoropyrimidine chemotherapy, we interrogated the Royal Marsden Hospital pharmacy database for all prescriptions containing fluoropyrimidines (5FU and capecitabine), either in combination with other chemotherapy drugs and/or radiotherapy, for gastrointestinal cancers after implementation of DPYD testing (1st December 2018 to 30th June 2019). Patients who had previously received systemic fluoropyrimidines were excluded from the study. To capture patients with homozygous DPYD variants, who may not have received fluoropyrimidines, we also searched for patients receiving raltitrexed, a non-fluoropyrimidine chemotherapeutic. DPYD sequence variants were analysed in DNA extracted from EDTA whole blood. The common sequence variants DPYD c.1905+1G>A, c.2846A>T, c.1679T>G, c.1236G>A and c.1601G>A were genotyped by TaqMan assay (Applied Biosystems) using an AriaMx Real-Time PCR instrument (Agilent). Treating physicians were provided with pathology reports that recommended initial dose reductions of 50% for heterozygous DPYD *2A or c.1679T>G variants; 25% for c.1236G>A and c.2846A>T; or 20% for a c.1601G>A variant. Patients with a heterozygous DPYD variant received an initial dose at the discretion of the treating physician. If the initial doses were tolerated, fluoropyrimidine doses were escalated for subsequent cycles. DPYD homozygous variant carriers did not receive fluoropyrimidines. DPYD wild-type carriers were dosed according to standard of care.
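The dose-adjustment protocol described above amounts to a simple lookup from detected variant to first-cycle dose reduction. The sketch below (Python) is purely illustrative of that logic under the stated rules; the function and variable names are hypothetical and this is not the unit's actual software. Homozygous carriers, for whom fluoropyrimidines were withheld, and compound heterozygous carriers, who were dosed individually, are flagged for specialist review rather than dosed automatically.

```python
# Illustrative sketch of the initial dose-reduction rules described above.
# Names are hypothetical; this is not the unit's clinical software.

DPYD_DOSE_REDUCTION = {        # fraction of standard dose withheld at cycle 1
    "c.1905+1G>A": 0.50,       # DPYD*2A
    "c.1679T>G":   0.50,       # DPYD*13
    "c.1236G>A":   0.25,       # HapB3
    "c.2846A>T":   0.25,
    "c.1601G>A":   0.20,       # DPYD*4
}

def initial_dose(standard_dose_mg, variants, zygosity="heterozygous"):
    """Suggest a first-cycle fluoropyrimidine dose from DPYD genotype.

    Returns None when automatic dosing is inappropriate (homozygous or
    compound heterozygous carriers), mirroring the protocol's requirement
    for individual specialist decisions in those cases.
    """
    if not variants:                               # wild type: standard of care
        return standard_dose_mg
    if zygosity == "homozygous" or len(variants) > 1:
        return None                                # withhold / review individually
    reduction = DPYD_DOSE_REDUCTION.get(variants[0], 0.0)
    return standard_dose_mg * (1.0 - reduction)

# Example: a heterozygous DPYD*2A carrier on a nominal 2000 mg dose
print(initial_dose(2000, ["c.1905+1G>A"]))   # -> 1000.0
```

As in the audit itself, doses started this way could then be escalated in subsequent cycles if the first cycle was tolerated.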
Data on patient demographics, tumour, treatment and toxicity by CTCAE v4.03 criteria were collected retrospectively from electronic medical records, discharge summaries and chemotherapy charts. The primary aim of the study was to compare the frequency of severe toxicity (grade ≥ 3) between DPYD wildtype and variant carriers under genotype-based dosing. Other outcomes of interest were compliance with routine testing, turnaround time of DPYD testing, and the frequency of hospital admissions, treatment cessations and deaths due to fluoropyrimidine toxicity. Comparisons of outcomes between groups were analysed using risk ratios and Fisher's exact test. All P values were two-sided, and a P value < 0.05 was considered statistically significant. Statistical analysis was performed in RStudio version 0.99.447. Patient and public involvement Patients and the public were not involved in the design, conduct, reporting, or dissemination plans of this research. To compare the frequency of severe toxicities, we identified patients commencing fluoropyrimidine chemotherapy following implementation of routine DPYD testing in November 2018. Between 1st December 2018 and 31st July 2019, we identified 542 patients commencing fluoropyrimidine chemotherapy, of whom 150 were not receiving fluoropyrimidines for the first time. The remaining 392 patients were naïve to fluoropyrimidine chemotherapy. The DPYD-guided cohort comprised the 370 patients (94.6%) who underwent DPYD variant testing prior to receiving fluoropyrimidine-containing chemotherapy (Fig. ). Of the patients in the analysis ( n = 370), the median age was 64 years (range 30–90). The majority were male (64.3%) and of white ethnicity (274, 74.1%). The predominant tumour type was colorectal cancer ( n = 209, 56.5%), followed by oesophagogastric ( n = 93, 25.1%) and hepatopancreatic biliary ( n = 52, 14.1%). The majority of patients had an ECOG performance status of 0 ( n = 100, 27%) or 1 ( n = 254, 68.6%). The characteristics of the DPYD variant and wildtype cohorts were similar (Table ). DPYD testing For DPYD testing, the median time from blood draw to result was 6 days (IQR 5–7, range 0–18). Twenty-two DPYD tests were missed, with the majority of the missed tests (17/22, 77.2%) occurring within the first 2 months of testing implementation. Amongst the 370 patients who underwent DPYD genotyping, 36 variants were detected in 33 patients (8.9%). The most common variants were c.1601G>A ( n = 16, 4.3%), followed by c.1236G>A ( n = 11, 3.0%), c.1905+1G>A ( n = 4, 1.1%), c.2846A>T ( n = 4, 1.1%) and c.1679T>G ( n = 1, 0.3%). Thirty patients had a single heterozygous variant and 3 patients had a compound heterozygous variant. All compound heterozygous variants involved the c.1601G>A variant, which co-occurred with c.1236G>A in two patients and with c.1905+1G>A in one patient. Concurrently, we identified 18 patients who received raltitrexed following DPYD genotyping implementation, none of whom were carriers of homozygous DPYD variants. Treatment The majority of the patients were receiving fluoropyrimidine chemotherapy with curative intent ( n = 207, 55.9%) or as first-line therapy ( n = 359, 97%). Fluoropyrimidines were most commonly administered as part of a chemotherapy doublet regimen ( n = 219, 59.2%), single-agent therapy ( n = 92, 24.2%) or triplet therapy ( n = 59, 15.9%).
In 64 patients (17.3%), fluoropyrimidines were administered in combination with radiotherapy. Capecitabine was the most frequently prescribed fluoropyrimidine ( n = 236, 63.8%), with the remainder of the patients receiving 5FU ( n = 134, 36.2%). The proportions of characteristics were similar between the DPYD variant and wildtype cohorts. The relative dose intensity of the initial cycle of fluoropyrimidine was 54.2% (range 37.5–75%) for DPYD variant carriers and 93.2% (42.9–100%) for DPYD wildtype carriers. The two patients with compound heterozygous variants c.1236G>A/c.1601G>A commenced fluoropyrimidines at 50%. One patient with the compound heterozygous variant c.1905+1G>A/c.1601G>A was treated with an initial dose intensity of 41% (Table ). DPYD wildtype vs DPYD variants To understand the impact of pharmacogenomic-guided dosing on DPYD variant carriers, we compared the toxicities of wildtype and variant carriers. In total, 4 patients (12.1%) in the DPYD variant cohort experienced grade ≥ 3 toxicity, with 2 patients experiencing gastrointestinal toxicity and 1 patient severe haematological toxicity. By comparison, 89 patients in the wildtype group experienced grade ≥ 3 toxicity, with haematological (46, 13.6%) and gastrointestinal (29, 8.6%) toxicities the most frequent. Eleven patients, all in the wildtype cohort, experienced cardiac toxicity of any grade. There were no statistically significant differences in the frequency of grade ≥ 3 adverse events between DPYD variant and wildtype carriers (Table ). In keeping with the intended dosing strategy, 10 patients (30.3%) in the DPYD variant cohort had a dose escalation after the first cycle of fluoropyrimidines, whereas only 7 patients (2.1%) in the wildtype cohort received a dose escalation ( P < 0.00001). Dose reductions were more common in the wildtype cohort than in the variant cohort (18.3% vs 3.0%, P = 0.0261). The frequency of early treatment discontinuation (within the first two cycles of therapy) was similar in the wildtype and variant cohorts (3.3% vs 6.1%, P = 0.3245). Fluoropyrimidine dose reductions within the first two cycles of chemotherapy were also similar between the groups (6.1% vs 10.7%, P = 0.2282). The clinical characteristics of the 4 carriers of DPYD variants experiencing severe fluoropyrimidine toxicity are detailed in Table . Three of these patients were receiving FOLFIRINOX (5FU, oxaliplatin, irinotecan) triplet chemotherapy, which is frequently associated with severe toxicity . In two patients, grade ≥ 3 toxicity occurred after 8 cycles of treatment, which would be consistent with toxicity from non-fluoropyrimidine agents. Amongst DPYD wildtype carriers, grade ≥ 3 toxicity occurred in 10 patients (12.0%) receiving single-agent chemotherapy and was significantly higher in patients receiving doublet chemotherapy (53/198, 26.8%; P = 0.0074) and triplet chemotherapy (26/56, 46.4%; P < 0.0001).
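For readers who wish to reproduce the headline comparison, the following minimal sketch (Python) applies the analysis named in the methods, a risk ratio and Fisher's exact test, to the grade ≥ 3 toxicity counts reported above (4/33 variant carriers vs 89/337 wildtype carriers); it is an illustrative reanalysis, not the authors' original R code.

```python
# Illustrative reanalysis of the primary comparison using reported counts.
from scipy.stats import fisher_exact

variant_tox, variant_n = 4, 33    # DPYD variant carriers with grade >=3 toxicity
wild_tox, wild_n = 89, 337        # wildtype carriers (370 tested - 33 variants)

# 2x2 table: rows = cohort, columns = [toxicity, no toxicity]
table = [[variant_tox, variant_n - variant_tox],
         [wild_tox, wild_n - wild_tox]]

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
risk_ratio = (variant_tox / variant_n) / (wild_tox / wild_n)

print(f"risk ratio = {risk_ratio:.2f}, Fisher exact P = {p_value:.3f}")
```

Run on these counts, the two-sided test is consistent with the paper's conclusion that the difference in grade ≥ 3 toxicity between the cohorts is not statistically significant.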
Following the pivotal prospective study by Henricks et al. , we implemented routine DPYD pharmacogenomic-guided dosing for patients commencing fluoropyrimidines for gastrointestinal cancers. Since DPYD testing was initiated at the Royal Marsden Hospital, nationwide guidelines have been implemented, with DPYD genotype testing widely available through the NHS England Genomic Test Directory. Our study demonstrates that routine DPYD variant testing prior to the initiation of fluoropyrimidines can be successfully integrated into clinical practice. The compliance rate for DPYD testing was > 90% and the median laboratory turnaround time was 6 days. Patients were often tested for DPYD variants at the time of their initial medical oncology clinic appointment. Given the lead times for other investigations, such as diagnostic imaging and chemotherapy day-unit scheduling, this turnaround time was not thought to delay the commencement of chemotherapy. As the rollout of DPYD variant testing continues through the NHS England Genomic Test Directory, turnaround times are likely to improve. With the adoption of in-house testing, the turnaround time at our institution is now within 48 h. The overall prevalence of the four DPYD variants recommended for testing ( DPYD *2A, c.1236G>A, c.2846A>T, c.1679T>G) in our population (20/370, 5.4%) was slightly lower than, but consistent with, previous data in Caucasian populations. The c.1236G>A (HapB3) is the most prevalent clinically significant variant (2.4%), followed by c.1601G>A ( DPYD *4) (2.0%), c.1905+1G>A ( DPYD *2A) (0.8%), c.2846A>T (0.4%) and c.1679T>G ( DPYD *13) (0.06%) . The frequency of these variants is lower in other ethnic groups, and further research is required to optimise a pharmacogenomic dosing approach in these populations . In our clinical practice, we predominantly used 50% dose reductions of fluoropyrimidines for the first cycle of chemotherapy. Whilst this is recommended for heterozygous DPYD *2A and c.1679T>G variant carriers, we also extended it to carriers of other variants. We based this practice on prospective data suggesting that 25% dose reductions are inadequate to mitigate toxicity with the c.1236G>A and c.2846A>T variants . The incidence of grade ≥ 3 toxicity was not statistically different between patients with DPYD variants and wildtype. Taking into account the propensity for fluoropyrimidine toxicity in DPYD variant carriers, our result does suggest some level of effectiveness of upfront dose reductions. The incidence of severe fluoropyrimidine toxicity amongst carriers of DPYD variants (4/33, 12.1%) was similar to other real-world case series of pharmacogenomic-guided fluoropyrimidine dosing . Despite pre-emptive fluoropyrimidine dose reductions, four patients with DPYD variants experienced severe toxicity due to the use of fluoropyrimidines with other cytotoxic drugs. Three of these patients were receiving FOLFIRINOX chemotherapy, which is associated with high rates of severe toxicity . Severe toxicity occurred in 26.8% of DPYD wildtype patients receiving doublet and 46.4% receiving triplet combination therapy.
Whilst this may be seen as an argument about the limitations of DPYD variant testing, it underlines the need to develop pre-emptive strategies to reduce toxicity with fluoropyrimidine combination regimens. DPD phenotypic testing (uracil), which was not performed in this study, may have identified additional patients at risk of fluoropyrimidine toxicity . The most important limitation of our study is its retrospective design and sample size from a single institution. Only 33 (8.9%) carriers of DPYD variants were identified, which is insufficient to provide a definitive estimate of the incidence of severe toxicities. Indeed, it is possible that some DPYD wildtype carriers harbour a different DPYD variant that was not detected with the available assays. Severe toxicity is also further confounded by the use of combination regimens, which could be independent of DPYD variant status. Due to the low number of patients with DPYD variants, we could not perform meaningful analyses of populations at higher risk of toxicity, including recipients of concurrent radiotherapy , female sex , age and adjuvant/metastatic intent . We did not record creatinine clearance, which is a known risk factor particularly for capecitabine toxicity; however, upfront reductions are routinely indicated in our practice in patients with lower creatinine clearance (30-50 mL/min). The reporting laboratory provided fluoropyrimidine dose reduction recommendations rather than the DPD activity score. The use of the activity score provides user-friendly dosing guidance, particularly if multiple clinically significant DPYD variants are detected. Following the implementation of pre-emptive DPYD variant testing, routine testing of the c.1601G>A variant is no longer recommended. Whilst one case series reported the c.1601G>A variant as a clinically significant predictor of fluoropyrimidine toxicity , this was not confirmed in a meta-analysis . Furthermore, biochemical analysis suggests this variant does not result in loss of enzymatic activity . In our study, we did not record the efficacy outcomes of the treatments; due to the sample size and heterogeneity of the cohorts, any analysis would have been insufficiently powered to demonstrate a conclusive result. A case–control study suggested that a pre-emptive dose reduction strategy results in outcomes similar to those of patients with DPYD wildtype . Furthermore, pharmacokinetic studies suggest that AUC drug exposure with DPYD -guided dosing is similar amongst variant and wildtype patients . Whilst further study is necessary, the strategy of dose escalation based upon tolerance was successful in ten patients (30.3%) and mitigates these efficacy concerns. In conclusion, our study demonstrates successful implementation of routine DPYD variant testing amongst fluoropyrimidine-naïve patients with gastrointestinal cancers. With genotype-guided dosing, severe toxicity amongst DPYD variant carriers occurred at rates comparable to those of DPYD wildtype carriers. Our data support routine testing and pre-emptive dosing strategies, which are now standard practice within the NHS.
Enhancing Public Health Communication Regarding Vaccine Trials: Design and Development of the Pan-European VACCELERATE Toolkit
a96f3810-e0dd-48b6-a168-7c49f5adedf6
10131613
Health Communication[mh]
VACCELERATE is an independent, innovative, and transparent Pan-European academic network that aims to harmonize multinational vaccine trial initiatives and to map the capacity of vaccine clinical trial sites and laboratories with standardized methods and protocols across Europe. The network identifies and provides access to state-of-the-art vaccine trial sites to accelerate the development of vaccines and recruits volunteers for vaccine trials through the VACCELERATE Volunteer Registry . The main goal of the VACCELERATE Volunteer Registry is to establish the first transnational harmonized trial participation platform, serving as a single entry point for the European region. The VACCELERATE Volunteer Registry , under the mandate of the European Union’s Health Emergency Preparedness and Response Authority (HERA) , provides a sustainable platform for the recruitment of potential volunteers for clinical studies during the current COVID-19 pandemic and future epidemics. According to the literature, one of the main challenges for a volunteer registry is registering a large and diverse pool of dedicated potential trial participants so that suitable candidates can be recruited and matched to a given clinical trial. Therefore, considerable effort is spent on strategies to improve communication and convince potential volunteers to take part in vaccine trials. The enrollment of volunteers in vaccine trials is also influenced by factors such as complacency, conspiracy theories on social media regarding COVID-19, convenience, and confidence (the “Three Cs” model of vaccine hesitancy) ; disinformation ; fake news ; fear ; and an increasing level of health misinformation . The latter leads to an infodemic of misleading information with immediate effects on public health, refusal, and hesitancy to participate in clinical trials . It should be noted that health misinformation fuels fear and lack of trust, which are major barriers to enrollment in clinical trials, including cancer trials . Recently, Luís et al reported a lack of public information about COVID-19 vaccine trials among several population groups (eg, minorities and underrepresented populations or individuals), while the available tools for trial participation were also sparse. The main purpose of this study is the design and development of a standardized toolkit, based on the Social Cognitive Theory, to increase positive attitudes and access to trustworthy information, and thereby improve access and recruitment to vaccine trials. Our secondary objective is to foster community engagement, through the VACCELERATE Volunteer Registry and smart technology, in forthcoming clinical trials . The VACCELERATE toolkit has been translated into several languages and is freely and easily accessible to facilitate dissemination among the participating countries of the VACCELERATE Consortium .
The development of the VACCELERATE educational and promotional tools included the following steps: (1) conceptualization of the idea and content based on the selected educational aims and objectives; (2) selection of targeted population groups (eg, older individuals, children, adolescents, and minorities); (3) prioritization of trustworthy sources and websites for seeking information about COVID-19 vaccine trials for the developed materials; (4) graphic design (eg, visualization, audio, product, print, and animation design); and (5) acquisition of a digital object identifier (DOI) for each produced material from a DOI registration agency. More details on the developed tools are presented in the Results section. The content of the educational and promotional tools was created and reviewed by a group of experts in the fields of infectious diseases, vaccines, and pedagogical research. Two graphic designers developed and selected the appropriate color palette and audio (music, speaker voice, and dubbing) for the video story-tales and rendered the produced material in a pedagogically effective, attractive, and eye-catching way. The developed tools were (1) icons related to the COVID-19 pandemic (eg, use of a face mask, syringe, microscope, vaccine, handwashing, virus, etc); (2) different versions of logotypes for the VACCELERATE project; (3) typefaces; (4) samples of fonts in different sizes; (5) color palettes (primary and secondary colors); (6) imagery templates and guidelines for social media (eg, Twitter and Instagram); and (7) human characters (ie, medical staff, minorities, and older individuals) as prototypes for any future VACCELERATE promotional and/or educational material. Special attention was also paid to fostering inclusiveness among the characters (different age groups, different ethnic and other minority groups, citizens with disabilities, migrants, etc). The provided tools also include QR codes with a URL. The selected URL can be a website (as shown on the VACCELERATE website ) or selected animation videos from trustworthy sources (eg, COVID-19 Vaccines Global Access [ie, COVAX]; the European Centre for Disease Prevention and Control; the European Patients’ Academy on Therapeutic Innovation; the European Vaccine Initiative; Gavi, the Vaccine Alliance; the United Nations International Children's Emergency Fund; and the World Health Organization). In addition, the produced puzzles test memory in a team-based manner and help develop hand-eye coordination. Finally, the developed tools have been designed in such a way that they can be adjusted to local, country-specific requirements and to the need to build confidence among participants in the safety and efficacy of COVID-19 vaccines and the health care system, a key parameter for improving recruitment among minority communities. VACCELERATE Flyers The flyers (see ) provide the appropriate amount of information to the public regarding the registration process and the VACCELERATE Volunteer Registry in general. The flyers are A3-sized (29.7×43.18 cm) and include a short description of the VACCELERATE Volunteer Registry objectives and volunteer selection process. In addition, they contain the appropriate contact details (email ID) and link (along with a QR code) for registration in the Volunteer Registry . The registration link directs the interested volunteer to a brief survey (which takes 5-10 minutes to complete) that collects comprehensive information required for clinical trials.
The flyer invites all citizens to register, including those with prior COVID-19 vaccination, with specific questions regarding COVID-19 infection; invitations to eligible volunteers for future vaccine trials will be sent out by the Coordination Office of VACCELERATE (University Hospital Cologne, Cologne, Germany). Finally, the flyers have been translated into 9 languages (Cypriot Greek, English, French, German, Greek, Hebrew, Italian, Portuguese, and Spanish). VACCELERATE Posters The developed posters (see ) are A0-sized (84.1×118.9 cm) and depict human characters (ie, medical staff, minorities, and older individuals, among others) and a QR code including a URL for the VACCELERATE Volunteer Registry website , as well as catchy slogans such as “Be the missing piece” and “Together we can tackle the COVID-19 pandemic.” The posters have been translated into 7 languages, namely English, French, German, Greek, Hebrew, Italian, and Spanish, in collaboration with individual partners and national coordinators of the VACCELERATE Consortium. The VACCELERATE Volunteer Registry can be promoted for participation in COVID-19 vaccine trials by affixing these prototype posters to walls at sites or in buildings with high visibility (eg, hospitals, universities, malls, and public buildings). VACCELERATE Extended Brochure (Booklet) The extended VACCELERATE brochure (see ) comprises valuable information regarding the VACCELERATE Volunteer Registry, such as “What is the VACCELERATE Network,” the mission and objectives, the current promotion activities, the importance of registration, and a description of the registration process. The brochure also contains key information for volunteers, such as the possibility of withdrawing their registration at any time. Finally, the extended brochure includes a QR code that directs the interested reader to the VACCELERATE Volunteer Registry website after scanning it with a smart device. VACCELERATE Interactive Educational Cards The educational cards enhance general knowledge and address misinformation about COVID-19 vaccines, vaccine trials, and COVID-19 vaccination, as well as voluntary trial participation of children and adults. Each set of educational cards represents an educational aim and comprises a specific number of cards and target groups, as shown in . The interactive educational cards were prepared with the help of 2 graphic designers and under the guidance of a pediatrician and several VACCELERATE participants. The educational cards include a QR code with a URL promoting educational material on one side and additional text script and characters on the other. The URL can be a website (eg, the one shown on the VACCELERATE website ) or selected videos from trustworthy sources and produced animation videos from VACCELERATE. VACCELERATE Puzzles The VACCELERATE puzzles were developed using graphic techniques under the guidance of the VACCELERATE participants, including a pediatrician, with a focus on younger children (aged <12 years) in order to raise their awareness of clinical trials, vaccines, and participation in trials and to promote inclusiveness.
The link to the produced puzzles is provided in . One of the developed puzzles comprises 25 pieces and depicts a variety of human characters (ie, medical staff, minorities, older individuals, and children, among others) obtained from the VACCELERATE toolkit, including the appropriate link to the VACCELERATE Volunteer Registry . The concept herein is to include as many human characters as possible to highlight the importance of inclusiveness and diversity in vaccine trials, especially for underserved population groups such as racial and ethnic minorities and older individuals. The other 2 smaller puzzles (8 pieces each) were developed to familiarize children with symbols and icons related to the COVID-19 pandemic and to help them understand and recognize the effectiveness of vaccines in minimizing the risk of infection and the harmful effects of COVID-19. VACCELERATE Animation Videos Educational Videos (Adult and Pediatric) The first produced educational video (see ) was mainly focused on adults and designed using advanced animation techniques, appropriate subtitles based on a storyboard and educational aims, and audio settings (eg, music and a speaker voice). The video covered the following: (1) highlighting and expanding information regarding clinical trials and their necessity and usefulness for public health (eg, good clinical practices, vaccine trial phases and monitoring, and applied safety protocols); (2) the contribution of volunteers (ie, citizens, patient advocacy groups, and underserved populations) to this effort of capacity mapping and building of registries; and (3) volunteer safety, benefits, and any potential risks during a clinical trial. The second educational video (see ) was mainly aimed at children and adolescents. Special attention was given to including informative animations while delivering the learning aim and keeping each video segment short. The produced video included the following sections: (1) providing information about the significance of vaccine trials; (2) sharing knowledge and information about the high value of vaccination as the most powerful “weapon” to prevent morbidity and mortality associated with infectious diseases; (3) acknowledging the advancements in technology and safety procedures, which have contributed significantly to the development, evaluation, approval, and monitoring of safe and effective vaccines; and (4) highlighting the contribution and importance of pediatric volunteers in vaccine clinical trials, their safety, and the benefits and risks of participating in clinical trials. Promotional Video The promotional video (see ) presents short videos recorded by the VACCELERATE work package leaders, together with the National Coordinators’ pictures, to promote, inform about, and describe the VACCELERATE Volunteer Registry through specific text scripts. The aim of the video is the inclusiveness and representation of all participating European Union countries and European Union–associated countries of the VACCELERATE Consortium. Currently, we have active representation from 17 countries (Austria, Belgium, Cyprus, Czech Republic, Germany, Greece, Ireland, Israel, Italy, Hungary, Lithuania, the Netherlands, Norway, Portugal, Spain, Sweden, and Turkey) in the VACCELERATE Volunteer Registry.
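Since the flyers, posters, brochures, and cards all embed QR codes resolving to the registry URL, generating such a code is a small scripted step. The sketch below (Python, using the open-source qrcode package) illustrates one possible way to do this; the URL shown is a placeholder, not the registry's actual address.

```python
# Minimal sketch: generate a print-ready QR code for a registry URL.
# Requires: pip install qrcode[pil]
import qrcode

REGISTRY_URL = "https://example.org/vaccelerate-volunteer-registry"  # placeholder

img = qrcode.make(REGISTRY_URL)           # returns a PIL image object
img.save("vaccelerate_registry_qr.png")   # embed in poster/flyer artwork
```

Any smart device camera can then resolve the printed code to the registration survey.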
Principal Findings This paper presents the first set of harmonized promotional and educational tools of the VACCELERATE Volunteer Registry . The produced material, tailored to the selected population groups, includes flyers, posters, interactive educational cards, puzzles, and animation videos. The tools can be downloaded from the official VACCELERATE website and cited with appropriate references based on their own DOIs.
It should be noted that the provided educational material can be adapted to user needs and may also be used as base prototypes for further improvement and the production of related tools. To ensure high vaccination rates in populations and a large number of participants in vaccine trials, it is necessary to provide effective, targeted, and trustworthy educational and promotional tools regarding vaccines and vaccine trials, guaranteeing high diversity and inclusiveness in vaccine trials without underrepresentation of minorities. The developed standardized toolkit is one example of information from trustworthy sources that can be used against health misinformation and disinformation, lack of trust, fear, and fake news. Furthermore, it can also be used to increase literacy and understanding of the importance of vaccine clinical trials in the general population as part of an emergency preparedness plan. In addition, the promotional and educational material can help bridge the gap in public information (eg, communication approach, target audiences, and type of media produced) regarding COVID-19 vaccine trials, which is covered in only 2.51% of the collected media and only in 4 languages (English, French, German, and Czech) . Another challenge is using visual material and stories to inform children about the procedures and importance of vaccine trials, volunteer participation in them, and vaccination. Limitations The impact and assessment of the promotional and educational material are beyond the scope of this study. Nevertheless, the material is currently being evaluated for vaccine trials this year by another expert group (WP4) of the VACCELERATE Consortium. Conclusions Communicating trustworthy information to the public about public health issues and vaccine research or trials constitutes the most important preventive strategy against infectious diseases and the threats of future emerging infectious diseases. Future initiatives could involve further improving the dissemination process for the developed tools in order to increase the number of volunteer registrations, in particular among population groups that remain underrepresented in vaccine trials. Another initiative could be the design of an innovative video game providing training and educational material for vaccine clinical trials and addressing vaccine hesitancy and refusal. Other prospective initiatives could be studies based on citizen science methods (eg, community-based participatory action research ) for vaccine research and future epidemics, engaging community members and service providers as partners in the research process and providing them the appropriate educational tools for tackling current and future public health issues. Such actions will eventually contribute to reducing the knowledge gaps and hesitancy regarding participation in vaccine trials in the general population.
Preliminary Attainability Assessment of Real-World Data for Answering Major Clinical Research Questions in Breast Cancer Brain Metastasis: Framework Development and Validation Study
bb293e20-b1d5-4d28-aa36-6780055f9723
10131620
Internal Medicine[mh]
Brain metastasis (BM) is a major cause of mortality in patients with breast cancer, and it increases the difficulty of treatment. Advancements in treatment and the development of brain imaging technology have increased the survival of patients with metastatic breast cancer, leading to an increased incidence of BM . Nonetheless, the opportunity to participate in prospective randomized clinical trials (RCTs) is typically only available to a limited number of patients with breast cancer brain metastasis (BCBM). Design challenges, including heterogeneity of patients, varying definitions of clinical end points, and different methods to assess these end points, have led to excluding most patients with BCBM from RCTs . Consequently, while the incidence and survival duration have increased, clinical research methods for BCBM remain limited . Meanwhile, real-world evidence (RWE) in oncology has rapidly gained traction in recent decades, with the potential to answer clinical questions that cannot be directly or completely addressed by RCTs . Integrating real-world data (RWD) into clinical research promises to contribute to more sustainable research designs, including extension, augmentation, enrichment, and pragmatic design . To support the need for RWE generation, medical institutions have provided RWD through the construction of clinical data warehouses (CDWs) based on electronic health records (EHRs) . Nevertheless, clinical research using RWD is still limited because of concerns regarding confidence in nonrandomized RWD analyses and the shortage of best practices for extracting, harmonizing, and analyzing RWD in ways that improve transparency and reproducibility . To address these concerns, researchers have emphasized the importance of comprehensively understanding data representation and data content while clarifying research questions . However, pragmatic screening methods to determine whether the content of a data source is sufficient to answer the research questions before conducting research with RWD have not yet been established. Accurate but oversimplified instructions could lead to confusion among clinical researchers who seek to select the optimal RWD source for their hypothesis and research purpose . Specifically, specifying the research question and determining the appropriate level for understanding massive and complex RWD is still challenging for clinical researchers. The vagueness inherent in this complex process at an early stage of research is one cause of concern and controversy regarding the utility of RWD for generating scientific evidence . Therefore, effective screening methods are needed to determine whether the contents of a data source are sufficient to answer the research questions before conducting studies using RWD . This study suggests a method to screen specific data sources for their ability to address a clinical research question at the preliminary step of research design and evaluates the method’s utility in assessing data attainability for BCBM. Overview In line with major clinical research questions on BCBM, this study evaluated screening performance using the Preliminary Attainability Assessment of Real-World Data (PAR) framework, which can assist clinicians in assessing data attainability from an RWD source at a preliminary stage of the study design. This study was divided into 2 phases.
In the preparation phase, we first identified clinical questions related to current unmet needs according to the perspectives of clinical experts on BCBM for evaluation of the framework. In the evaluation phase, the working group assessed the preliminary data attainability of the listed clinical questions with a specific data source, using the PAR framework presented in the following section. Data attainability was defined in this study as the availability of data for reconstruction and extraction required from a particular data source to conduct clinical research regarding sample size, data fields, and content. Data Source and Population The Samsung Medical Center Breast Cancer Registry (SMC BCR) is a hospital-based real-time registry implemented in March 2021 that leverages the institution’s anonymized and deidentified CDW platform: the Data Analytics and Research Window for Integrated Knowledge (DARWIN)–C. The inclusion criteria for the SMC BCR comprise the intersection of three patient conditions: (1) visited at least one of the medical oncology, breast surgery, or radiation oncology departments, (2) diagnosed with breast cancer under codes C50 or D05 of the International Classification of Diseases codes, and (3) aged older than 15 years at enrollment . The number of breast cancer patients in SMC BCR was 45,129, and the registry covered the period from June 1995 to December 2021. The SMC BCR consists of 24 base data marts that represent disease-specific breast cancer characteristics and care pathways. The registry provides clinical variables in the form of structured data, including demographics, diagnostic history, treatment information (operation, chemotherapy, and radiation therapy), laboratory test results, and featured data fields extracted from free-text records, such as pathology, radiology, or genomic laboratory reports . For BCBM patient identification, we used a BCBM data mart in the SMC BCR. The BCBM data mart contains predefined and preprocessed data from patients (n=1443 up to December 2021) with at least one BM indicator after a breast cancer diagnosis. The indicators, defined by clinical experts, are as follows: craniotomy record (payment-based), whole-brain radiotherapy treatment (regions for radiation treatment are brain, whole brain, and partial), gamma knife (code for gamma knife), and intrathecal methotrexate treatment. The data quality of the BCBM data mart was evaluated using manual chart reviews performed by 2 clinical nurses with expertise in data management; data validity was greater than 98% via 10% random sampling. Participants Clinicians Interviewed to Develop the Key Clinical Questions Seven breast cancer experts from 6 hospitals in the Republic of Korea participated in a survey. Two breast cancer experts were interviewed in advance to define clinical questions regarding BCBM. Working Group for Data Attainability Assessment A working group was formed for data attainability testing. It comprised people with clinical expertise and at least 10 years of experience in clinical research across various interdisciplinary areas, including medical informatics and epidemiology. Clinical Question Preparation and Validation We established clinical questions through 2 rounds of interviews with 2 breast cancer clinical experts in April and June 2021 . From the interviews, we identified clinical questions that reflected the most recent and internationally relevant unmet clinical needs in BCBM . 
We conducted a survey of experts’ opinions to validate the clinical significance and the feasibility of these clinical questions. Seven breast cancer clinical experts participated in the survey from September 27 to October 5, 2021. In this survey, clinical experts scored clinical significance, research availability, data attainability, and method suitability using EHR data on a 5-point Likert scale (range 1-5). and show the average scores and correlation analysis results. All data analyses and calculations were performed using Microsoft Excel (2016 version; Microsoft Corp). Graphs and plots were designed using R (version 4.0.2; R Foundation for Statistical Computing). List of clinical questions developed from interviews with experts. Clinical question A Is there a difference in survival outcomes between the brain as the primary metastasis site and brain metastasis accompanied by systemic metastasis? Clinical question B Does systemic treatment for brain metastasis patients affect survival outcomes according to subtype? Clinical question C Does the timing of systemic treatment initiation affect survival outcomes when brain metastasis is accompanied by systemic metastasis? Clinical question D Is there a difference in the prediction of brain metastasis in patients who previously received trastuzumab alone for neoadjuvant therapy and in patients who received pertuzumab and trastuzumab for neoadjuvant therapy? Clinical question E Can any record of neurological symptoms described by patients be translated as a surrogate factor of brain metastasis? PAR Framework The PAR framework was proposed to assess the data attainability of a data source from the perspective of a particular clinical question in the early research stages. The PAR framework has 4 sequential stages, starting with clarification of the clinical question . Stage 1: Operational Definitions of Variables In this step, we identify variables inherent in the clinical question, describe them in natural language, and clarify operational definitions of the variables at 2 distinct levels. One level is the research-variable level and includes dependent or independent variables at the description level of the research hypothesis. The other level is a data-variable level that can be used as an atomic condition to declare the research variable. For example, “overall survival (time)” can be one of the research variables that the researcher intends to observe, and “first diagnosis date” and “death date” are the data variables that constitute the research variable. Stage 2: Data Matching (Structural/Semantic) This stage involves matching the data variables with specific data fields in the data source selected as the research material. Data fields refer to stored data elements, such as the column name of the data table, that depend on the data source. By contrast, the data variable is a unified conceptual term. A one-to-one direct match has priority; however, a surrogate definition for the data variable is explored by combining the values from multiple data fields. Stage 3: Data Screening and Extraction Here, the actual data values are extracted and screened from the data source for each data variable identified in the previous stages. Stage 4: Data Attainability Diagramming Finally, we draw a diagram showing the extracted sample size according to the respective clinical question and matching process from the sub–data sets acquired in stage 3. 
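To make stages 1 and 2 concrete, consider a minimal sketch based on the paper's own "overall survival (time)" example. The data-field names on the right-hand side of the mapping are assumptions, not the registry's actual schema; the surrogate for "systemic metastasis event time" follows the expert-defined substitution reported later in the Results.

```python
# Minimal sketch of PAR stages 1 and 2. Field names such as BCR.DX_FIRST_DATE
# are hypothetical; only the variable relationships come from the text.
from datetime import date

# Stage 1: a research variable is declared from atomic data variables.
def overall_survival_days(first_diagnosis_date: date, death_date: date) -> int:
    """Research variable 'overall survival (time)' built from two data variables."""
    return (death_date - first_diagnosis_date).days

# Stage 2: each data variable is matched to a concrete data field in the source;
# where no one-to-one match exists, a surrogate field is recorded instead.
data_variable_to_field = {
    "first diagnosis date": "BCR.DX_FIRST_DATE",          # hypothetical direct match
    "death date": "BCR.DEATH_DATE",                       # hypothetical direct match
    "systemic metastasis event time":
        "BCR.PALLIATIVE_TX_START_DATE",                   # surrogate, per expert opinion
}

print(overall_survival_days(date(2015, 3, 1), date(2021, 9, 1)))  # 2376
```

Recording the mapping explicitly, as in the dictionary above, is what makes the stage 2 result communicable and reusable across studies.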
Ethical Considerations The study protocol was approved by the Institutional Review Board of the SMC (2021-09-036), which waived the need for informed consent, as the study data were deidentified and anonymized data were extracted from the CDW. This study followed the Standards for Quality Improvement Reporting Excellence (SQUIRE) guidelines .
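Before turning to the results, the SMC BCR inclusion logic described above can be made concrete. The following is a minimal sketch, assuming hypothetical table layouts and column names (dept, icd10, age_at_enrollment, patient_id); only the three criteria themselves come from the registry definition.

```python
# Hedged sketch of the SMC BCR inclusion criteria: the intersection of three
# patient conditions. Table and column names are illustrative assumptions.
import pandas as pd

def bcr_cohort(visits: pd.DataFrame, diagnoses: pd.DataFrame,
               patients: pd.DataFrame) -> pd.Series:
    """Return patient IDs meeting the intersection of the three conditions."""
    # (1) Visited medical oncology, breast surgery, or radiation oncology.
    dept_ok = visits.loc[visits["dept"].isin(
        ["medical oncology", "breast surgery", "radiation oncology"]), "patient_id"]
    # (2) Diagnosed with breast cancer under ICD codes C50 or D05.
    dx_ok = diagnoses.loc[diagnoses["icd10"].str[:3].isin(["C50", "D05"]),
                          "patient_id"]
    # (3) Aged older than 15 years at enrollment.
    age_ok = patients.loc[patients["age_at_enrollment"] > 15, "patient_id"]
    cohort = set(dept_ok) & set(dx_ok) & set(age_ok)  # intersection of all three
    return pd.Series(sorted(cohort), name="patient_id")
```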
The working group assessed data attainability pertaining to the 5 clinical questions through stages 1 to 4. In stage 1, the research and data variables were defined based on the SMC BCR data structure . For clinical question E, the variable “neurological symptoms” did not match any structured or semistructured fields in the data source. Furthermore, terminological code systems covering “neurological symptoms” had not been applied in the EHR prior to the SMC BCR. Therefore, we concluded that clinical question E could not be properly answered using our data source. In stage 2, the data fields were matched with the defined operational data variables. At this stage, some questions required variable replacement to match data in the SMC BCR. The variables “BM event time,” “death date,” “neoadjuvant regimen start date,” and “neoadjuvant regimen name” were respectively coupled with corresponding data fields in the SMC BCR. By contrast, for “systemic metastasis event time,” a one-to-one direct match was not possible for any data field within the data source. Hence, we used “palliative treatment start date” as a surrogate variable, following clinical expert opinion, which reflected institutional treatment protocols. In stage 3, the actual data values identified during the previous stages were extracted. The interdisciplinary team, including clinical experts, continued to validate the contents of the extracted data sets. Through this cross-validation, we confirmed whether the data were logically aligned with previously well-known clinical evidence. Data sets were extracted for 4 of the 5 clinical questions proposed by the breast cancer experts from a clinical perspective. At the final stage, only clinical question D had a data set obtained from a directly matched data variable ( D). We gathered data for 4292 patients treated with trastuzumab from 2006 to 2021 from the BCR. Among these patients, 1382 received trastuzumab as a neoadjuvant treatment: 832 received combined trastuzumab and pertuzumab neoadjuvant treatment, and 550 received trastuzumab neoadjuvant treatment alone.
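As an illustration of how a directly matched field such as "neoadjuvant regimen name" could drive this cohort split, the following is a hedged sketch; the DataFrame layout and column names are assumptions, and the grouping rule simply mirrors the comparison described above (trastuzumab alone vs trastuzumab plus pertuzumab).

```python
# Hypothetical sketch of splitting the clinical question D cohort by regimen.
import pandas as pd

def neoadjuvant_group(regimen_name: str) -> str:
    name = regimen_name.lower()
    if "trastuzumab" in name and "pertuzumab" in name:
        return "trastuzumab + pertuzumab"
    if "trastuzumab" in name:
        return "trastuzumab alone"
    return "other"

neoadjuvant = pd.DataFrame({
    "patient_id": [1, 2, 3],
    "neoadjuvant_regimen_name": [
        "docetaxel + trastuzumab", "trastuzumab + pertuzumab", "AC-T"],
})
neoadjuvant["group"] = neoadjuvant["neoadjuvant_regimen_name"].map(neoadjuvant_group)
print(neoadjuvant.groupby("group")["patient_id"].nunique())
```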
The remaining clinical questions required substitutional variables owing to a lack of matched variables in the CDW system. Therefore, we obtained appropriate data sets for the other clinical questions from the data source with a defined surrogate variable. To derive the data set for the clinical question “Does systemic treatment for brain metastasis patients affect survival outcomes according to subtype?” the BCBM mart and clinical subtyping information based on immunohistochemistry test results were used ( B). Clinical subtyping information was available for 662 patients who had test results for estrogen receptor, progesterone receptor, or human epidermal growth factor receptor 2 (HER2) items. HER2 results were curated based on supplementing the silver in situ hybridization or fluorescence in situ hybridization tests. Systemic treatment records were only available for 498 patients. The number of patients with any hormone receptor (HR)/HER2-positive clinical subtype was 180 (36.2%), the number with triple-negative breast cancer (TNBC) was 175 (35.1%), and the number with an HR-positive/HER2-negative clinical subtype was 143 (28.7%). In the subgroup with no systemic treatment records, 54 patients (33%) were classified into the HR-positive/HER2-negative group, 44 patients (26.8%) into the HR/HER2-positive group, and 66 patients (40.2%) into the TNBC group. In stage 2, the data-matching step, a surrogate data variable, “systemic treatment in a palliative setting,” was set to represent the presence of systemic metastasis. Among the 1443 BCBM patients, 956 were classified as having BM with systemic metastasis based on records of both BM-related and systemic treatment in a palliative setting. Of these 956 patients, 237 were classified as having BM that later progressed to systemic metastasis. Data attainability was verified for 2 clinical questions: “Is there a difference in survival outcomes between the brain as the primary metastasis site and brain metastasis accompanied by systemic metastasis?” and “Does the timing of systemic treatment initiation affect survival outcomes when brain metastasis is accompanied by systemic metastasis?” ( A and C). During the process of clearly defining research variables in terms of data fields, the research questions were stratified into 3 types. Among the 5 clinical questions suggested by clinical experts, we gathered data sets for 4 using the stages suggested above. Only 1 clinical question could be answered using directly matched data variables. The remaining questions could be answered using surrogate variables . Principal Findings In this study, we propose the PAR framework for data attainability screening at the preliminary step and evaluate its utility with clinical questions that reflect the most recent and internationally relevant unmet needs of clinicians in BCBM . A survey was conducted to evaluate the clinical significance of the clinical questions. The mean score was 4.37 (range 3.57-5.00). We found that the correlation between scores given by experts was higher for the questions with higher average scores ( and ). RWE generation has received attention in the BCBM therapeutic area owing to limited clinical trial opportunities despite increasing clinical importance.
However, incomplete gold standards for RWD study protocols and the unpredictable “hidden labor” of the secondary use of clinical data serve as barriers when clinical researchers attempt to design the most suitable methods to address their research questions . We identified particular gaps when SMC built a site-specific CDW platform and developed a BCR as its first implementation case. After the release of the SMC BCR, clinical researchers attempted to generate RWE by using this registry, especially in areas with significant unmet needs, such as BCBM. Nevertheless, in our experience, clinical research using RWD has not been as successful as expected even though well-known technical barriers have been addressed; the CDW’s functional user interface provides clinicians with access to deidentified, anonymized, high-quality data sets. The greatest challenge faced by clinical researchers at the next step was securing a sufficient understanding of the data to avoid information distortion during the research data preparation stage. Therefore, we proposed the PAR framework to assess the feasibility of research at the preliminary stage and evaluated this framework with BCBM clinical questions with various entry points. To the best of our knowledge, a systematic framework to explore research feasibility at the conjunction between the data source and clinical questions has yet to be presented. Without such a framework, sample size, data fields, and content are often not properly accounted for. The methodology of this study can contribute to the acceleration of RWE generation by strengthening the transparency and reproducibility of the RWD research process and lowering the entry barrier for clinical researchers. Clinical researchers with research questions derived from empirical insight in clinical practice have difficulties securing an understanding of accumulated RWD and solidifying their study designs. In contrast to conventional medical research methodologies, the reliability and validity of research variables in RWD studies are established at a post– rather than pre–data collection stage. In addition, RWD is a conceptual collective term encompassing all data obtained through health care activities, and the content varies by data source . Therefore, research using RWD requires significant and iterative effort prior to formal hypothesis testing, including selecting an appropriate data source, curating the data, repurposing it, and preprocessing it . As a result, local information system expertise and deep content knowledge are often required to understand the idiosyncratic manner in which data sources are captured and stored . Depending on the level of the data structure, data extraction frequently requires a high level of technical training as well. Through the process of matching research variables with data fields, the PAR framework illustrates how clarifying research questions refines the range of data that should be analyzed before addressing a specific research question. Consequently, the time and effort needed to understand the selected data source could be greatly reduced. The advantage of the PAR framework is that the results of stages 1 and 2 are explicitly communicated and can be reused.
For example, in clinical question D, the research variable “time from neoadjuvant to BM” can be measured as the time between “neoadjuvant regimen start date” and “BM event date.” This level of operational definition can be reviewed and reused by peers with clinical expertise regardless of the storage structure of the data source. Above all, this reusability, which enhances reproducibility, is further extended when applied to data sources that adopt not only institution-specific structures but also common data models, such as the Observational Medical Outcomes Partnership (OMOP) common data model or the Informatics for Integrating Biology & the Bedside (i2b2) data model . As specific data field conditions and query code written against standardized data models and vocabularies can be reused, the accumulation of operational definitions at the data level can be considered an additional knowledge base. Furthermore, the accumulation of these definitions enhances consistency in the conduct of RWD research by enabling discussion of content validity in a more comparative manner. Accordingly, connecting formative efforts in the RWD research process, from data storage to processing, is a promising way to ensure the reliability of research outcomes . CDWs contain clinical data from EHRs for retrospective analysis, enabling clinical researchers to utilize RWD for research directly, and the scope of RWD use has significantly expanded over the past few decades . However, we identified several challenges while conducting RWD studies using the SMC BCR with these clinical questions. A deficiency in the exact data variables for the questions recognized by clinical researchers was detected in stage 1. Researchers need to reassess stages 1 and 2 when no directly matching data field exists for a research variable. A surrogate definition could be considered as a substitution. For example, we adopted an indirect variable to represent systemic metastasis using the data for systemic treatment and used the start date of the palliative treatment cycle as the date of metastasis. Since most metastatic breast cancer patients with systemic metastasis receive systemic treatments, data on these indirect variables were readily available from the SMC BCR. However, when using an indirect variable as a surrogate, the validity of the research outcome is lower than when using a direct variable, which could be a primary limitation of RWD studies. Alternatively, if the variables will be frequently used for the generation of RWE, construction of featured data marts based on raw data and local practice rules is recommended. Since the SMC CDW has a well-constructed BCR with a BM mart, it was relatively easy for the investigators to extract the necessary data variables. Additionally, it was not possible to extract key variables for clinical question E. This result aligned with the experts’ survey scores for the least feasible and least suitable questions using EHR data. Symptoms described by patients were only recorded in EHRs as free text, and there was no specific template or location. It was not possible to preprocess this information using a rule-based semantic engine for the CDW. Despite capturing symptoms of patients’ complaints, integrating the information into the CDW was difficult, because the location and template of the data were not aligned across departments. Moreover, the terminology for neurological symptoms is not standardized and is subject to a relatively high level of cultural dependency.
To address this, integrating other types of data, such as prospective cohort studies or patient-generated data from mobile or watch-type devices, can be considered. If investigators continue to track patient-reported outcomes through cohort studies and integrate this information with other variables in CDWs, the use of RWE based on CDW data can be increased. Limitations This study is limited in its generalizability. We assessed a single clinical domain, BCBM, and extracted data from a single data source, the SMC BCR. Further application of the PAR framework in different domains or with different types of RWD will be needed to develop the framework. However, the PAR framework and training case presented in this study could help guide clinical researchers in assessing preliminary attainability for future studies using RWD. Conclusions We proposed and evaluated the PAR framework to assess data attainability to answer major clinical research questions in BCBM. The adoption of the PAR framework is associated with improved efficiency in clinical research using RWD at the preliminary stage. This framework could contribute to improving the quality of RWD-based clinical research by enhancing its transparency and reproducibility.
Impact of COVID-19 control measures on
f2cfb283-6806-491c-805a-bc9e9806eb7c
10131741
Microbiology[mh]
The authors declare no conflicts of interest.
The Use of Digital Health Services Among Patients and Citizens Living at Home: Scoping Review
36481d3a-b3bc-40a0-bc76-fed85e0ce0f1
10131924
Patient-Centered Care[mh]
The use of digital health services has become increasingly relevant for health care professionals, patients, and citizens as the COVID-19 pandemic has challenged the health sector worldwide [ - ]. However, what we consider as digital health has evolved over time. At the time that Frank first introduced the concept, digital health was considered mainly in terms of internet-based functions, such as finding information on the internet, or as a means of health e-commerce and also as internet-based applications for integrating information from different information systems. In 2001, Eysenbach used the term “eHealth” not only to mean that health services and health-related information are delivered or enhanced using information and communication technology (ICT) but also in a wider sense as a networked way of improving health care with the help of ICT. Eysenbach stated that eHealth is not just about the technical development of services but also about the development of different attitudes and ways of thinking. Eysenbach presented the 10 e’s (eg, efficiency, evidence based, empowerment, encouragement, ethics, and equity) that are inseparable from the concept of eHealth. Since then, further clarification and updating of the term “eHealth” have been called for . Today, the term “digital health” encompasses many technologies beyond internet-based solutions. In addition to digital health, terms such as “digital health services,” “eHealth,” and “telemedicine” are used with slightly different meanings . These solutions not only include internet-based ICT solutions but also other types of technologies, such as artificial intelligence, wearables, and mobile apps. The World Health Organization (WHO) considers digital health services as a secure and cost-effective use of ICT for providing access to health and health-related fields, such as health surveillance, education, knowledge, and research. The European Commission (EC), in contrast, emphasizes the concept of digitalization and considers digital health services as either partly or fully digitalized by using digital elements and solutions to provide health services. According to the EC , digitalization is not only a technical but also an organizational and cultural process. In this review, digital health services are considered in their broad concept, covering all kinds of technology solutions used for delivering health care services digitally. The development of digital services in health care plays an important role in involving individuals in managing their health and maintaining activity in managing their health and overall well-being . This can be described as a paradigm shift toward participatory medicine, of which a cornerstone is full patient access to their medical records . The paradigm shift from traditional to modern medicine enhances shared decision-making between the patient and the health care professional as well as democratization of care, thus leading to a more equal patient–health care professional relationship . To be able to participate actively in decision-making, patients need health literacy skills that enable them to obtain and understand health information and share their preferences, values, and experiences with health care professionals . In addition to activating patient participation, the development of and the increase in digital health services are aimed at enhancing the efficiency and quality of services and providing services more cost-effectively from the service provider's point of view. Digital health services can include many examples of solutions for patients and citizens. In this review, digital health services refer to all possible technology-based solutions that enable health management while living at home. These solutions include technologies operated via computers, tablets, and mobile phones, as well as wearable and monitoring software for measuring and collecting data on the user’s health . The definition of health is more complex. In 1946, WHO defined health as a “state of complete physical, mental, and social well-being and not merely the absence of disease or infirmity.” Later, WHO expanded the definition to also mean a resource for life and the continuous process by which people promote their health. Due to the complexity of the definition and its implications for areas such as health policy and services, new definitions are required . In this review, health is considered in its broader context, as defined by WHO . As the use of various technologies is becoming more common in health care, more patient engagement and activation are required. In this setting, activation of patients refers to not only knowledge and skills but also confidence to manage one’s health. This is considered a prerequisite for a patient to make informed choices concerning their care . This scoping review aims to explore the publications published since 2010 studying the use of different kinds of digital health services among patients and citizens living at home. The review’s focus is solely on technology solutions that can be used in the home environment, thus highlighting the various possibilities of digital health services.
Between the customer and the service provider, digital health services, such as patient portals, provide a completely new opportunity for arranging care regardless of time and place [ , , ]. The value of care is created and defined in terms of meeting the patient’s needs and thus affecting the quality and cost-effectiveness of the care and the performance of the health care provider . The use of digital health services depends on many factors . Patients and customers may have positive attitudes toward using digital services, especially when they perceive digital health services as useful and easy to use . Even among elderly people, satisfaction with and preparedness to use digital health services have been observed [ - ]. According to studies conducted during the pandemic, patients stated that they were willing to continue using digital health services even after the pandemic . Design This scoping review was conducted using the methodological framework of the Joanna Briggs Institute (JBI) . A scoping review approach can be chosen for a range of reasons [ - ]. In this paper, the scoping review method was chosen to map the extent of the literature on this specific topic, to objectively summarize the available evidence, and to identify knowledge gaps and thereby contribute to future research. Based on the reasons for conducting a scoping review, no critically appraised or synthesized answer to the research question is offered; rather, the aim is to provide evidence of the particular phenomenon . Scoping Review Question The research question for this scoping review is: How are digital health services used among patients and citizens while living at home? Inclusion Criteria The inclusion criteria were identified in relation to the research question with the help of the Population, Concept, and Context (PCC) framework .
The population for the research question comprised all patients and citizens who use digital health services, and the concept was digital health services. In this study, we defined digital health services as any solutions that use different information technologies. In this review, a wide range of study designs, such as randomized controlled trials, cohort studies, cross-sectional studies, and reviews, were considered. Protocols that provide a plan for a review or study were excluded from the review. The context in this review was the home environment; thus, studies in which digital health services were used elsewhere, such as in hospitals or long-term care facilities, were excluded. Studies were also excluded if the use environment was not apparent. In the search, papers published in open access and peer-reviewed scientific journals between January 1, 2010, and March 8, 2022, were retrieved. The search included journal papers published in English, German, or Swedish. Search Strategy The online databases Scopus, PubMed, and CINAHL were used to retrieve journal papers concerning the use of digital health services among patients and citizens while living at home. The search was conducted on March 9, 2022. The database searches resulted in 152 papers in CINAHL, 28 papers in PubMed, and 239 papers in Scopus. Keywords related to digital health and the use of digital health services were used to carry out the search. The keywords were patient , customer , effectiveness , impact , effect , util* , ehealth , digital service , electronic health , digihealth , telehealth , telemedicine , m-health , digital health , healthcare , health care , hospital , health , and care . They were combined in various ways using the Boolean operators AND and OR. An information specialist at the University of Eastern Finland assisted in refining the search strategy. The search strategies are presented in . Study Selection and Inclusion The selection procedure and data extraction were performed by the first author of the paper. The studies were then reviewed and selected in 3 stages. Studies that did not meet the inclusion criteria were excluded at each stage accordingly. Initially, the search in the 3 databases identified 419 papers. The database search results were then uploaded to the ProQuest RefWorks citation manager. After excluding 167 (39.9%) duplicates in the first stage, 252 (60.1%) papers were eligible for further screening. In the second stage, the titles and abstracts of the papers were screened, and 106 (42.1%) papers were rejected because they did not meet the inclusion criteria; 146 (57.9%) papers were eligible for full-text review. A full-text review was conducted in the third stage, and finally, 88 (60.2%) papers were selected for this review. The procedure of this scoping review is presented in , which is based on the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) flow diagram . The main reason for rejection at full-text review (n=22, 15.1%) was that the papers were written from the provider’s or caregiver’s point of view. Other reasons for rejection were that the studies did not clearly state whether the use of digital health care services took place at home or elsewhere (n=13, 8.9%) or that they were conducted in a hospital environment, a long-term care facility, or a location other than home (n=8, 5.5%).
Further reasons for rejection were that the papers discussed future possibilities (n=6, 4.1%) or dealt with using electronic health record (EHR) data for study purposes or with EHR standards (n=5, 3.4%). Additional reasons for exclusion included describing the general use of digital health services (n=3, 2.1%) or describing the use of digital health services from a technical (n=1, 0.7%) or theoretical (n=1, 0.7%) point of view. The characteristics of the papers and the extracted data are presented in the Results section and summarized in the Discussion section. Data Analysis The final data for this review are presented in alphabetical order in . For each included paper, the following information was recorded: author, year of publication, country of origin, objective, study design, population, device and use, and main results. “Device and use” was chosen as the primary theme based on the objective and main research question of this scoping review. The analysis was performed deductively using the framework developed by Harst et al . The framework classifies interventions into 5 clusters: telemonitoring, teleconsultation, telediagnosis, teleambulance/tele-emergency, and digital self-management .
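To make the deductive analysis more concrete, the 5-cluster scheme can be sketched as a simple keyword-based classifier. The following Python snippet is a hypothetical illustration only: the cluster names come from the Harst et al framework as quoted above, but the keyword lists, the matching rule, and the example description are our assumptions, not the review's actual coding procedure.

```python
# Illustrative sketch (not the review's method): encoding the 5 intervention
# clusters of the Harst et al framework and deductively assigning a study to
# clusters based on keywords found in its intervention description.
# Cluster names follow the framework; the keywords are hypothetical.

CLUSTERS = {
    "telemonitoring": ["remote monitoring", "vital signs", "sensor", "spirometry"],
    "teleconsultation": ["video visit", "video consultation", "telephone call"],
    "telediagnosis": ["remote diagnosis", "telediagnosis"],
    "teleambulance/tele-emergency": ["fall alert", "emergency response", "alarm"],
    "digital self-management": ["patient portal", "self-management app", "reminder"],
}

def assign_clusters(description: str) -> list[str]:
    """Return every cluster whose keywords appear in the description;
    a study may overlap across clusters, as the review itself notes."""
    text = description.lower()
    return [cluster for cluster, keywords in CLUSTERS.items()
            if any(keyword in text for keyword in keywords)]

# Example: a home monitoring study with video follow-up spans two clusters.
print(assign_clusters("Remote monitoring of vital signs with monthly video visits"))
# -> ['telemonitoring', 'teleconsultation']
```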
Characteristics of the Included Papers Altogether, 88 papers were included in the review, all written in English. Geographically, over half (50/88, 56.8%) of the papers (their first authors) were from the United States [ - ].
Further, 8 (9.1%) papers were from Australia [ - ], 4 (4.5%) each from the Netherlands [ - ] and Germany [ - ], and 3 (3.4%) from Canada [ - ]. China , India , Norway , and Thailand were each represented in 2 (2.3%) papers. Other countries represented were Bangladesh , the Czech Republic , Denmark , the United Kingdom , Greece and Finland (a joint paper) , Italy , Jamaica , Libya , Saudi Arabia , South Africa , and Turkey . Of all the included papers, 15 (17.0%) were published in JMIR publications, 6 (6.8%) in BioMed Central (BMC) journals, 4 (4.5%) in the Journal of Telemedicine and Telecare , 3 (3.4%) in the Journal of the American Medical Informatics Association , 3 (3.4%) in Telemedicine and e-Health , and 3 (3.4%) in British Medical Journal (BMJ) publications. In addition, 2 (2.3%) papers were published in Rheumatology Advances in Practice , 2 (2.3%) in Journal of the American Medical Association (JAMA) publications ( JAMA Network Open and JAMA Surgery ), and 2 (2.3%) in the Journal of Substance Abuse Treatment . The remaining papers (n=50, 56.8%) were each published in a different journal. Moreover, 74 (84.1%) of the 88 papers were published between 2017 and 2022, and 38 (43.2%) were published in 2021 and 2022 alone, largely reflecting the COVID-19 pandemic. Characteristics of the Population The population characteristics of the included papers were classified according to the patients’ age group (pediatric patients, adults, older people). Of the included studies, 35 (39.8%) papers had solely adults (>18 years of age) as participants, 25 (28.4%) papers had both adults and older people as participants, and 8 (9.1%) papers had only older people. However, the definition of older people varied across studies. In 5 (5.7%) papers, the participants were pediatric patients. All age groups were represented in 6 (6.8%) papers. The age of the participants was not clearly defined or distinguished in 9 (10.2%) papers. Most of the participants in the studies had a medical condition that required consultation, surveillance, or monitoring. The most common were chronic conditions, such as cardiovascular disease, cancer, diabetes, and arthritis. In addition, behavioral health issues and substance use disorders were among the conditions observed in the studies. In some studies, the health issues related to general medical conditions, or no specific health condition was given. Use of Digital Health Services The results of the search showed that the use of digital health services can be extensive, serving many different purposes and population groups. The results were analyzed according to the framework given by Harst et al , which classifies interventions into 5 clusters, as discussed earlier. In this Results section, the purpose of using digital services is roughly categorized according to the clusters presented in the framework. It should be noted that some of the included studies may overlap across clusters. Most studies in this review fall into the teleconsultation and telediagnosis clusters given by Harst et al , as the use of digital health services occurred mainly as video (or virtual) visits [ - , - , , , , , , , ] and in some cases led to a diagnosis (eg, Atilgan et al ). Examples of the study cases concerning video visits are presented in . These examples show that video visits are used in different kinds of populations with varying conditions.
A video connection could also be used in combination with patient portals, access to EHRs, and different devices and apps . Video consultations offer possibilities for first and follow-up visits at the clinic [ , , ], for peri- and postoperative sessions , and for medication and psychotherapy sessions . In addition to the concepts of video visits and videoconferencing, expressions such as video consultation and video encounter were used when patients had consultations with health care professionals via a video connection. In addition, telehealth visits , platforms , eHealth , telephone (or voice) calls [ , , , , , , - , - , , , , , ], text messages , and mobile apps were mentioned as means for consultation. Looi et al found that during the second and third quarters of 2021, the telephone was the means most used for telehealth visits in private psychiatry practice in Australia. Whether in the form of video visits, telephone calls, or other means, the use of virtual communication has increased sharply due to the COVID-19 pandemic. Many countries still face challenges in implementing the widespread use of digital health services according to national and international guidelines . An important advantage of digital health services is their potential to activate citizens and patients to participate, to engage more in maintaining their own health, and to support shared decision-making between health care professionals and patients . Thus, the use of digital health services may play a central role for patients and citizens in the self-management of their health. In the Harst et al framework, this refers to the self-management cluster . Digital services for self-management among the studies included the use of portals or eHealth platforms [ , , , , , , ], online programs, and mobile [ - ] and social media platforms . For example, a patient-reported outcome (PRO) assessment conducted via an iPhone was used to support self-management and clinical decision-making for patients with cancer and seemed to be highly acceptable among patients . Social media platforms were used, for example, as part of a multifaceted eHealth intervention, including websites, digital monthly newsletters, and social media platforms, among patients diagnosed with nonspecific low back pain; this, however, did not show any effectiveness in improving the patients' back pain beliefs or in decreasing disability and absenteeism . Through different means (internet, portals, etc), citizens and patients can use and explore their personal health records in support of self-management, for example, during and after hospital discharge from cardiac care , for obtaining personalized recommendations on actions concerning one’s health , or for enabling communication between patients and different health care settings in a prototype study . Personal health records were also seen as useful for keeping track of different kinds of health-related needs, such as medications . Digital health services respond widely to the need to acquire information about health-related issues. The internet was used as an important source of health information in several studies [ , , , , , , , , , ]. Athanasopoulou et al studied the use of the internet for health-related purposes among Finnish and Greek patients with schizophrenia spectrum disorders and found that such use was similar among the patient groups.
However, Finnish patients considered the internet the second-most important source of health information, while Greek patients considered it the least important . The internet was also used, for example, for online education modules , social media and video services , and the promotion of clinical decision-making . In addition to self-management at home, digital services can be used for remote monitoring. In line with Harst et al , this review treated telemonitoring as a cluster of its own. Remote monitoring has played a central role in opening up possibilities to provide digital health services while patients live at home [ , - , , , , , , , , ]. In these studies, monitoring was mainly used for recording vital parameters and transmitting the data from the home to health care professionals at clinics. Examples of study cases concerning remote monitoring are presented in . The examples indicate that remote monitoring can be used effectively at home for different purposes. Monitoring vital signs [ , , , ] or other values for disease management, as well as performing, for example, spirometry at home , shows the possibilities of using digital services from a distance. To a lesser extent, the included studies mentioned the use of digital services as alarms, alerts (eg, fall alerts), or reminders [ , , , ]. In its simplest form, patients can use a reminder to remember their health-related appointments and tasks as well as to take medications [ , , , ]. Reminders can also be a function of a remote monitoring device , as can device notifications . Medication reminders offer further possibilities, such as improving medication adherence . Personal emergency response systems are used by patients as fall alert systems, at least in the United States, and they include the use of a help push button worn as a necklace or a bracelet, an in-home communication system, and an emergency response center . In their study, Agboola et al found that the use of fall alert systems combined with personal medical records can enable improvements in health outcomes in older patients with chronic medical conditions. These digital services that include alarms, alerts, and reminders can be linked to the teleambulance/tele-emergency cluster, as they can react rapidly to, for example, the patient’s health status, if needed . Several literature reviews have identified a variety of digital health services and provided insights into using different technologies in health management for patients living at home [ , , , , , , , , , ]. These digital health services can belong to different clusters of the framework, for example, teleconsultation and self-management. The reviews have found the use of the telephone, text messages via the telephone, video calls, apps such as WhatsApp, personal health records, and social media and digital and online platforms to be of importance. The use of online education and video consultation and teleconferencing via mobile phone seemed to be useful, for example, for patients undergoing bariatric surgery . The literature review by Kuwabara et al showed that digital technology can be used to improve patient education and the skills needed for using digital health services. There are, however, barriers to the use of digital health services, and the study by Chitungo et al highlighted the infrastructure-related challenges of using digital services.
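As a rough sketch of the home-to-clinic data flow that the telemonitoring studies above describe, the Python snippet below packages a home reading, flags out-of-range vital signs so the clinic can react early, and serializes the result for transmission. All field names, thresholds, and values are hypothetical illustrations of the pattern, not details drawn from any included study.

```python
# Minimal sketch of a home telemonitoring reading being checked before
# transmission to the clinic. Schema and thresholds are hypothetical; real
# services would use clinician-configured, patient-specific limits.
import json
from datetime import datetime, timezone

SAFE_RANGES = {"heart_rate_bpm": (40, 120), "systolic_mmhg": (90, 180)}

def package_reading(patient_id: str, vitals: dict) -> str:
    """Flag out-of-range values and serialize the reading for upload."""
    alerts = [name for name, value in vitals.items()
              if name in SAFE_RANGES
              and not (SAFE_RANGES[name][0] <= value <= SAFE_RANGES[name][1])]
    return json.dumps({
        "patient_id": patient_id,
        "recorded_at": datetime.now(timezone.utc).isoformat(),
        "vitals": vitals,
        "alerts": alerts,  # a non-empty list signals the clinic to follow up
    })

print(package_reading("demo-001", {"heart_rate_bpm": 134, "systolic_mmhg": 128}))
```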
Principal Findings The aim of this review was to identify and summarize evidence on how digital health services are being used among citizens and patients while living at home. The focus of the review is on the patient perspective, as the development and use of digital health services are considered a central tool for activating citizens and patients to manage and maintain their own health . When Frank , some 2 decades ago, first introduced the concept of digital health, the idea was primarily based on the internet and the functions it enabled. The idea of digital health, or eHealth, as Eysenbach named the phenomenon, was then broadened to cover functions that are delivered or enhanced by using ICT in a networked way. Eysenbach also pointed to the need for a change in mindset and attitude. Since the beginning of the 21st century, the pace of the development and use of technology has been remarkable . This has been accompanied by a shift from traditional health care to more patient-centered medicine, which is also seen in the results of this review: patients and citizens are actively using digital health services and producing information about their health, whether for clinical decision-making concerning their care or voluntarily with the help of remote monitoring devices, transmitting information via portals, apps, or other services, all while continuing their lives at home [ - , ]. Based on the analysis using the framework presented by Harst et al , which consists of 5 clusters, this study found that digital health services are used widely for different health-related purposes. The clusters in the framework include teleconsultation, telediagnosis, telemonitoring, digital self-management, and teleambulance/tele-emergency. Most of the digital health service usage discovered in this review falls into the teleconsultation cluster, as the studies involve the use of video or other virtual means of consultation or visits, which partly seems to be a consequence of the COVID-19 pandemic [ - , - , , , , , , , ]. There are, however, also studies that concurrently fit into the telediagnosis cluster, offering the possibility to access health care using a video connection and thus enabling an early diagnosis [ , , ]. The results indicate that the use of video consultations will continue in the future, as they are seen as a compatible and cost-effective way of providing consultation across medical specialties and with different kinds of tools, such as WhatsApp or Zoom [ , , , , , , ]. The use of, or the possibility for, video consultation is especially important for rural and other areas with long distances, though infrastructural or cultural issues may currently prevent or delay the use of digital services in some locations [ , , , ]. According to the results, telephone calls were still used frequently for contacting patients [ , , , , , , , , - , , ]. In some studies, most consultations were done over the phone [ , , , ]. For instance, in private Australian psychiatric clinics, short consultations (less than 30 minutes) were conducted mostly over the phone at the beginning of 2020 . In addition, in sub-Saharan Africa, the telephone played a crucial role in consultations at the beginning of the pandemic despite the many challenges faced in the area .
Digital self-management, especially the use of the internet, was found to be important for searching for health information and for accessing portals, platforms, websites, web videos, online modules, and web-based programs [ , , , , , , , , , ]. The increasing use of the internet and other information sources to search for health-related information highlights the need to promote the development of the skills needed to acquire and understand relevant information . Using the internet for information searching, video watching, and education modules requires skills in, for example, eHealth literacy and in the overall use of technology. According to the results, apps were actively used for different kinds of functions, such as transmitting and communicating [ , , ], searching for information , controlling medication , obtaining education , accessing medical records , and managing disease [ , , , ]. Apps also enable the collection of PROs among patients with prostate cancer; in this study, nearly all patients reported that using a smartphone app is easier than or equivalent to paper and pen . Of the various technological solutions used, mobile phones were used in a variety of ways for social media, apps , text messaging , and virtual visits . Remote monitoring of health was used in different kinds of situations. Patients used remote devices, for example, to record 1 or several vital signs [ , , , , ], to perform home spirometry , to register inhalations , and to monitor blood glucose data, perform insulin therapy, and transmit the data to the clinic . Instructions on the use of monitoring devices were given to patients before starting the monitoring at home [ , , , ]. According to the studies, the use of remote monitoring devices to collect and transmit data has a favorable effect on clinical decision-making concerning care, while at the same time, remote monitoring made it possible for patients to live at home [ , , , , ]. Remote monitoring at home seemed to have a positive effect on care, as life-threatening situations could be observed early , better treatment results were attained , and self-management (especially of chronic conditions) was made easier . Remote monitoring was also found to be a reliable, sustainable, and cost-effective part of patient care . In nearly all studies, the health care provider was actively involved in the care process of managing and monitoring patient health or health information, such as vital signs [ , , , ]. In some cases, however, digital services for remote monitoring did not lead to better results (eg, in care adherence) [ , , ]. Decision-making was also mentioned separately in a few papers that considered the effect of information obtained via the internet on joint decision-making with health care professionals . Digital health services were, according to the studies, used widely in different kinds of population groups, ranging from children (eg, ) to older people (eg, [ , , ]). Geographically, the studies in this review were concentrated to a great extent in the United States, but European countries, Australia, and China were also represented. Some studies addressed the challenge of less industrialized countries, where the infrastructure may not yet be adequate for the widespread use of digital health services .
However, digital health services provide a way to engage patients more actively in their own care by offering new modes of use (eg, the internet, health records, apps, and other technical solutions), which contribute to acquiring health information and building up knowledge on health . Overall, the results of this scoping review indicate that digital health services offer many options for self-care while living at home. Generic services, such as information searching, can be used more autonomously and for self-management, whereas tailored services can be used more for the consultation and management of specific diseases or conditions. As Harst et al mention, digital services may well fit into more than 1 cluster in their framework. The COVID-19 pandemic has clearly provided an impetus for offering citizens alternative ways to use services enhanced by new technologies in many sectors, not least in health care [ - ]. These services are mainly independent of time and place and thus promote equity in society by providing services from a distance (eg, in rural areas). However, a certain level of digital infrastructure is needed for the implementation of digital services, and this is still lacking in many countries, as reflected in the studies in this review . Sociodemographic factors can also be a barrier to accessing digital services [ , , , ]. The digital divide and the worldwide disparity in the development of digital health services are perhaps also mirrored in the geographical distribution of the studies in this scoping review. Strengths and Limitations This scoping review was conducted to identify and summarize how patients and citizens use digital health services while living at home. Specifically, population characteristics, the digital services used, and outcomes were identified. The objective of this review was not to evaluate the quality of the evidence but to chart the literature in 3 databases (Scopus, PubMed, and CINAHL) concerning the use of digital health services in managing patients’ and citizens’ health while living at home. Results from other sources (gray literature), such as books, book chapters, and websites, were not included. Solely open access scientific journal papers were included in this review. The review focused on digital services used at home by citizens and patients and therefore did not consider the services that patients use in hospitals or home-like environments, such as elderly care homes. The health care provider viewpoint was not the topic of this review, although the provider is actively involved in the care process. As only open access papers were considered, relevant papers and the range of gray literature could have been missed. The search was conducted using specific keywords, search terms, and other inclusion criteria, so relevant documents on this broad topic may have been missed. Health can be considered in a broad or specific sense, but in this review, the concept of health was used as defined by WHO. In this sense, a limitation of this study is that it covered only health care services, as health can be seen (as defined by WHO) as the sum of the physical, mental, and social aspects of one’s well-being. In some papers, issues such as drug and other substance use disorders or alcohol abuse were discussed . In the northern European context, these belong primarily within the purview of social services. Using social services as a keyword might therefore have yielded additional relevant results.
For instance, in Finland, substance abuse services belong to general social services under the Social Welfare Act . The analysis in this review is roughly based on the framework of Harst et al . The framework and its clusters do not necessarily provide a fully adequate measure for analyzing a vast range of digital services, which is also noted by Harst et al . The strength of the review lies in its ability to describe how widely digital health services can be used in different kinds of populations living at home. The review illustrates various potential user groups and different forms of digital services and, thereby, possibilities for the future development of digital health services. The review points out the importance of information for clinical decision-making concerning treatments and also the need for patients and citizens to acquire the skills to search for, use, and understand health-related information. The results also indicate that digital health services may not be suitable for all population groups. In many studies, facilitators and barriers affecting the use of digital services have been described, but this was not the main topic of this review. The review also notes that a discussion of the development of a more equal distribution of digital services worldwide may have value; however, this was beyond the remit of this review. Conclusion The results of the review point to the various possibilities of using digital health services while living at home. The use and further development of digital services still face challenges on many levels. However, patients engaging in their own care while living at home indicates a shift from more traditional health care to a modern era in which care can be provided and managed irrespective of time and place. The use of digital services also indicates a shift to more patient-centered care and to engaging the patient as part of the decision-making process concerning their health.
Lessons for Vietnam on the Use of Digital Technologies to Support Patient-Centered Care in Low- and Middle-Income Countries in the Asia-Pacific Region: Scoping Review
cb7faabf-257e-474e-b02b-0c1b9007d6fe
10132046
Patient-Centered Care[mh]
Background Vietnam’s health care landscape is changing. The country’s population is aging rapidly, with more than 1 in 5 Vietnamese citizens predicted to be aged >65 years by 2050 . It is forecast that Vietnam will transition from its current classification as an aging country, where 7% of the population is aged ≥65 years, to an aged country (ie, 14% of the population aged ≥65 years) in just 16 years . By contrast, the nearby countries Thailand and Singapore will take 20 years and 22 years, respectively, to reach this point . This rapid aging is contributing to a shift in disease burden from communicable diseases to noncommunicable diseases (NCDs), that is, diseases that are not transmitted among persons but rather result from genetic, physiological, environmental, and behavioral factors . In 2019, NCDs such as cardiovascular diseases, diabetes, and Alzheimer disease made up 8 of the top 10 causes of death in Vietnam for males and females across all age groups . Furthermore, global health estimates published by the World Health Organization in 2020 showed that the percentage of deaths caused by NCDs in Vietnam has increased from 73% to 81% in <20 years . This presents a major problem for the country’s health care system. NCDs are typically chronic and multimorbid and therefore require coordinated, long-term care . Preventive measures for NCDs are also challenging, given the numerous risk factors associated with NCD onset . The prevention and management of NCDs consequently demand considerable resources from all areas of the health care system. At the same time, potential infectious disease outbreaks continue to threaten the health care system, and additional resources must remain on standby to cope with such eventualities. Vietnam’s existing health care system is not adequately resourced to meet these challenges. Health disparities are evident in many parts of the country, especially in rural areas, and the population faces inequitable access to quality, patient-centered health care . This raises concerns, since patient-centered care is widely considered to be an effective approach to health care from the perspective of patients, families, and health care professionals and may also reduce health care costs [ - ]. Vietnam must therefore explore and implement advanced solutions for the provision of patient-centered care, with a view to simultaneously reducing pressures on the health care system. The use of digital health technologies (DHTs) may be one of these solutions. Digital health refers to “the use of information and communications technologies in medicine and other health professions to manage illnesses and health risks and to promote wellness” . This may include but is not limited to the use of wearable devices, mobile health (mHealth), telehealth, health IT, and telemedicine. DHTs have been shown to be effective in supporting the management of both NCDs, such as diabetes and cardiovascular disease, and infectious diseases, such as COVID-19 [ - ]. Evidence suggests that DHTs may also support several dimensions of patient-centered care, such as health knowledge, self-efficacy, quality of life, and access to health care . Objectives Although there is increasing research demonstrating the value of DHTs in general, the potential of DHTs to support patient-centered care in Vietnam has thus far been relatively unexplored. Many neighboring low- and middle-income countries (LMICs) in the Asia-Pacific region (APR) are already exploring or implementing DHTs within their health care systems.
This offers Vietnam the opportunity to gain insight into the effective use of DHTs from countries that share economic and cultural similarities, and to apply these learnings when developing its own approach to the use of DHTs to support patient-centered care for patients with communicable diseases and those with NCDs. This paper therefore aimed to identify the application of DHTs to support the provision of patient-centered care in LMICs in the APR and to draw lessons for Vietnam.
Study Design

A scoping review protocol was developed and registered on the Open Science Framework. The review was undertaken using the following established methodologies: (1) identifying the research questions; (2) identifying relevant studies; (3) study selection; (4) charting the data; and (5) collating, summarizing, and reporting the results. Reporting was in line with the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews) guidelines.

Review Questions

The research questions guiding this review were as follows: (1) What types of DHTs are being used in LMICs in the APR? (2) What patient-centered outcomes are associated with the use of DHTs? (3) What are the enablers and barriers for the use of DHTs to support patient-centered care outcomes? (4) What lessons can Vietnam learn when developing its own approach to the use of DHTs to support patient-centered care?

Search Strategy

Using search terms related to DHTs and patient-centered care, we conducted a comprehensive search in January 2022 in the following 8 electronic databases: MEDLINE, PubMed, Embase, EMCare, PsycInfo, Ovid Nursing Database, Web of Science, and Scopus. The search strategy for MEDLINE is presented in . Studies were considered eligible if they met the following criteria: (1) published in English or Vietnamese; (2) set in LMICs in the APR; (3) discussed communicable diseases or NCDs; and (4) discussed the application of DHTs to support patient-centered care with regard to patient-centered outcomes, barriers and enablers for the use of DHTs, and policy or practice outcomes. LMICs were defined according to the 2022 World Bank country classifications: low-income economies (gross national income [GNI] of ≤US $1085 per capita), lower middle-income economies (GNI between US $1086 and US $4255 per capita), and upper middle-income economies (GNI between US $4256 and US $13,205 per capita). When selecting LMICs for inclusion, we applied the World Bank definition of the APR, which includes countries geographically neighboring Vietnam within East Asia and the Pacific. The search was not limited by publication date or type, although publications that did not present outcomes (eg, protocol papers) were excluded at the screening stage. The database searches were supplemented by manual searches and reference checking as appropriate.
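To make the country-eligibility criterion concrete, the following is a minimal sketch (Python) applying the 2022 World Bank income bands quoted above. The thresholds come from the text; the helper names and the example GNI figures are illustrative assumptions, not data from the review.

```python
# Sketch: classify countries into 2022 World Bank income groups by GNI per capita (US$).
# Thresholds are those quoted in the Search Strategy; the sample figures below
# are illustrative placeholders, not values used in the review.

def income_group(gni_per_capita: float) -> str:
    """Return the 2022 World Bank income classification for a GNI per capita."""
    if gni_per_capita <= 1085:
        return "low-income"
    if gni_per_capita <= 4255:
        return "lower middle-income"
    if gni_per_capita <= 13205:
        return "upper middle-income"
    return "high-income"  # high-income economies fall outside the LMIC criterion

def is_eligible_lmic(gni_per_capita: float, in_world_bank_apr: bool) -> bool:
    """A study setting qualifies if the country is an LMIC within the World Bank's
    East Asia and Pacific region (the APR, as defined above)."""
    return in_world_bank_apr and income_group(gni_per_capita) != "high-income"

# Hypothetical usage: GNI figures here are invented purely for illustration.
for country, gni, apr in [("Country A", 3560, True),
                          ("Country B", 11200, True),
                          ("Country C", 55000, True)]:
    status = "eligible" if is_eligible_lmic(gni, apr) else "ineligible"
    print(country, income_group(gni), status)
```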
Duplicates were removed using an EndNote library (Clarivate), and the remaining titles were imported into Covidence software (Veritas Health Innovation Ltd) for screening.

Study Selection

Interrater reliability of the screening process was established using an initial selection of 5 publications that were independently screened at the title and abstract and full-text levels by 4 reviewers. Discrepancies in screening decisions were discussed and resolved by consensus before the final inclusion and exclusion criteria were agreed upon. The titles and abstracts of the remaining publications were then independently screened by 2 reviewers per publication, after which 2 reviewers completed a full-text review of each publication remaining thereafter. Conflicts were resolved through consensus.

Data Extraction

A data extraction form was developed to identify the key characteristics of each study as well as relevant information regarding the application of DHTs in the provision of patient-centered care. Seven reviewers independently extracted the data and resolved inconsistencies through discussion with 2 additional researchers. The variables included authors, publication year, country of origin, aims, settings, study design, methodology, type of DHT, reported outcomes, enablers, barriers, and policy and practice implications.

Data Synthesis

Thematic analysis was used to synthesize and report the findings, following the approach described by Braun and Clarke. This involved (1) familiarization with the data, (2) searching for themes, (3) reviewing the themes, (4) defining and naming the themes, and (5) producing the report. Outcomes were considered patient centered if they mapped against established definitions and determinants of patient-centered care, that is, health care that aligns with patients' values, needs, and preferences and that increases patient autonomy and involvement in their care. Systems-level determinants of patient-centered care were also considered, including factors related to system characteristics, structures, and processes, as well as external policies, regulations, and resources. Further to the thematic analysis, the full texts of the selected articles were analyzed to identify the types of DHTs used. The DHTs were then grouped according to the classifications set out in the National Institute for Health and Care Excellence (NICE) evidence standards framework for DHTs. This framework classifies DHTs by intended purpose and stratifies them into 3 tiers based on the potential risk to service users and to the system (tiers A, B, and C). Tier A comprises DHTs intended to save costs, release staff time, or improve efficiency; tier B includes DHTs that help citizens and patients to manage their own health and wellness; and tier C comprises DHTs used for treating and diagnosing medical conditions or for guiding care choices. Each tier is further divided into subcategories that relate to the intended purpose of the DHT in question.

Ethical Considerations

Ethics approval was not required for this review paper.
Overview

A total of 264 publications were identified through the database search, of which 45 (17%) were included in the final analysis. Of these 45 articles, 19 (42%) were quantitative studies (including 3 randomized controlled trials, 16% of the quantitative studies), 7 (16%) were qualitative studies, 4 (9%) were mixed methods studies, 6 (13%) were technical reports, and 9 (20%) were review papers. The included articles were published between 2010 and 2021, with the majority published in 2020 (11/45, 24%) and 2021 (13/45, 29%). Most of the studies were conducted in India (14/45, 31%) and China (11/45, 24%). Studies from Malaysia, Pakistan, Bangladesh, Indonesia, Nepal, Sri Lanka, Thailand, and Vietnam were also included.

Of the 45 publications, 20 (44%) focused on NCDs (eg, diabetes, cardiovascular disease, and mental health conditions), 3 (7%) focused on communicable diseases (eg, COVID-19, tuberculosis, and acute diarrhea), and 5 (11%) encompassed both communicable diseases and NCDs, whereas the remaining publications (17/45, 38%) did not focus on a specific health condition or did not include this information. Of the 45 publications, 33 (73%) presented data on novel or specific DHTs, whereas the remaining studies (12/45, 27%) mapped the existing landscape of DHT-supported health care in specific countries or populations or provided evidence on enablers and barriers for DHT-supported health care. Detailed characteristics of the included studies are presented in .
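As a quick consistency check on the figures above, the study-design counts sum to the 45 included articles, and the randomized controlled trial percentage is reported against the 19 quantitative studies rather than the full set. A minimal sketch (Python), using only the numbers stated in this section:

```python
# Sketch: reconcile the study-design counts reported in the Overview (45 articles).
design_counts = {
    "quantitative": 19, "qualitative": 7, "mixed methods": 4,
    "technical report": 6, "review": 9,
}
total = sum(design_counts.values())
assert total == 45  # matches the number of included publications

for design, n in design_counts.items():
    print(f"{design}: {n}/{total} = {n / total:.0%}")

# The 3 randomized controlled trials are a share of the 19 quantitative
# studies, not of all 45 articles: 3/19 ≈ 16%.
print(f"RCTs among quantitative studies: {3 / 19:.0%}")
```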
Classification of DHTs

Among the 33 studies presenting novel or specific DHTs, the majority of the DHTs were classified as tier C (n=15, 45%), followed by tier B (n=14, 42%) and tier A (n=4, 12%). The DHTs identified were further classified by intended purpose, as summarized below and in the sketch that follows the list.

Summary of identified DHTs, classified according to the National Institute for Health and Care Excellence evidence standards framework for digital health and care technologies:

Tier A
- System services: health information systems and mobile apps (a medical document digitizer and a patient appointment flow optimizer)

Tier B
- Communicating about health and care: telemedicine and teleconsultation platforms (telephone, SMS text messaging, and video) and appointment reminders
- Health and care diaries: mobile apps to track and record users' health information for self-monitoring
- Promoting good health: internet-based health information and nonpersonalized health education via SMS text messaging

Tier C
- Inform clinical management: mobile apps that allow remote monitoring of patient health information to provide personalized recommendations directly to users and to inform clinical decision-making by health care professionals
- Diagnose a condition: an artificial intelligence–based self-diagnosis tool; mobile-based assessment tools; and a software solution allowing mobile monitoring, assessment, and diagnosis
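One way to make the classification operational is to encode the framework's tiers and intended-purpose subcategories as a lookup structure and tally the identified DHTs against it. Below is a minimal sketch (Python) assuming the counts reported above (4 tier A, 14 tier B, and 15 tier C DHTs among the 33 studies); the dictionary layout is an illustrative choice, not part of the NICE framework itself.

```python
# Sketch: the NICE evidence standards tiers, with the intended-purpose
# subcategories and example technologies identified in this review.
NICE_TIERS = {
    "A": {  # saves costs, releases staff time, or improves efficiency
        "system services": ["health information system", "document digitizer",
                            "appointment flow optimizer"],
    },
    "B": {  # helps citizens and patients manage their own health and wellness
        "communicating about health and care": ["telemedicine platform",
                                                "teleconsultation",
                                                "appointment reminders"],
        "health and care diaries": ["self-monitoring app"],
        "promoting good health": ["internet-based health information",
                                  "SMS health education"],
    },
    "C": {  # treats or diagnoses conditions, or guides care choices
        "inform clinical management": ["remote-monitoring app with personalized advice"],
        "diagnose a condition": ["AI self-diagnosis tool", "mobile assessment tool",
                                 "mobile monitoring/assessment/diagnosis software"],
    },
}

# Counts of classified DHTs reported in this review (n=33 studies).
tier_counts = {"A": 4, "B": 14, "C": 15}
assert sum(tier_counts.values()) == 33
for tier, n in tier_counts.items():
    print(f"Tier {tier}: {n}/33 = {n / 33:.0%}")
```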
Patient-Centered Outcomes of DHTs

In studies that evaluated the outcomes of DHT use, DHTs supported patient-centered outcomes at both an individual level and a systems level.

Individual-Level Outcomes

At an individual level, DHTs increased the accessibility of health care and health-related information, supported individuals in self-managing their health, and led to improvements in clinical and quality-of-life outcomes.

Increased Accessibility

Nedungadi et al designed and pilot-tested a self-monitoring system for managing well-being in rural areas of India. The device was shown to accurately monitor patient conditions, was easy to understand and valued by patients, and could be delivered at a low cost. Overall, the device contributed to filling a gap in access to health care and health-related information in rural Indian locations, where health care resources are scarce. An mHealth device for diabetes management and education developed in China showed similar results. The multimedia teaching platform within the device enabled users to easily access diabetes-related information, which had been identified as an unmet need within the diabetes community. Secondary benefits of DHTs that increased accessibility included reduced travel times and reduced health care–related costs for both patients and their families. The findings about increased accessibility and the associated benefits were further confirmed in qualitative studies that explored the benefits of mHealth and internet hospitals, as well as in review papers that addressed this topic. Overall, these DHTs promoted patient-centered outcomes by providing flexible health care options that met patients' individual access needs.

Improved Self-management

DHTs were shown to support patients in self-monitoring and self-managing diabetes, COVID-19 infection, medication adherence, cardiovascular disease, and general health, increasing patients' involvement in their own health care. For example, Vitale et al developed a diabetes telemanagement system that improved diabetes self-management in terms of the frequency of blood glucose monitoring and the frequency of insulin use. A COVID-19 symptom monitoring system developed by Lim et al supported patient decision-making with regard to symptom severity and the actions required. Chew et al found evidence of improved medication adherence among people taking long-term medications who used a novel medication adherence app. Finally, a systematic review that explored the use of DHTs for self-management of cardiovascular disease reported that mHealth platforms can improve patient knowledge and confidence in self-management, increase active symptom monitoring and recording, and improve adherence to medications and appointments.

Improved Clinical and Quality-of-Life Outcomes

In a cluster randomized controlled trial, Guo et al compared the use of an mHealth platform with usual care for the management of patients with atrial fibrillation. Patients in the intervention group had significantly lower rates of the composite outcome of ischemic stroke/systemic thromboembolism, death, and rehospitalization (P<.001), as well as consistently lower heart rates, than those receiving usual care. In a cross-sectional study by Vitale et al, patients with diabetes who received comprehensive care involving teleconsultations achieved better intermediate health outcomes than those receiving usual care (ie, significantly lower glycated hemoglobin [P=.003], cholesterol [P<.001], and diastolic blood pressure [P=.02]). Finally, a WeChat-based intervention implemented in China reduced depressive symptoms in participants who took part for a 3-month period. In contrast to these studies, a randomized controlled trial assessing the effectiveness of an SMS text messaging system for managing coronary heart disease reported no significant changes in any of the clinical outcomes measured.

A single study specifically reported on quality-of-life outcomes related to DHT use. Gupta et al developed and implemented a telemedicine device for otology screening in rural India. Use of the device resulted in 265,615 referrals, and 45% (9443/20,986) of the referred patients who reported for and received treatment described a "significant improvement in their quality of life." Finally, in an evaluation of a mobile obstetrics monitoring platform, patients who received care using the platform almost unanimously reported an increased feeling of safety while being remotely monitored, which may be considered an aspect of quality of life. On the whole, these DHTs contributed to patient-centered outcomes by enabling access to health care that was effective and that met patients' clinical and psychosocial needs.

Systems-Level Outcomes

DHTs also supported systems-level determinants of patient-centered outcomes, including increased efficiency, reduced strain on health care resources, and support for patient-centered clinical practice.

Increased Efficiency

In some of the studies (6/45, 13%), DHTs reduced the amount of time clinical staff spent undertaking administrative tasks and thereby increased their availability for tasks that directly benefited patients. Ali et al trialed a mobile app for document digitization in hospitals and found a considerable time reduction in data aggregation and data transfer activities. Similarly, an eHealth system implemented at a primary health care center in India reduced the amount of time that staff spent generating reports. Another group of researchers developed a mobile app to improve patient flow during hospital appointments. The app was shown to reduce the number of times patients requested appointment information from hospital staff and to reduce the amount of time staff spent seeking appointment-related information and responding to patients.
A qualitative study reported that mHealth allows staff to have timely access to patient records at the time of treatment, a finding echoed in the evaluation of the mobile obstetrics monitoring platform described earlier, which allowed health care workers to remotely view patient records. A narrative review paper also highlighted the role of DHTs in increasing operational efficiencies.

Reduced Strain on Health Care Resources

DHTs supported reducing strain on health care resources in 3 ways. First, DHTs were shown to facilitate remote monitoring of patients' health status; for example, the COVID-19 symptom monitoring system designed by Lim et al monitored patient stability, connected health care professionals and patients via teleconsultations, and alerted patients and health care professionals to changes in, and worsening of, symptoms. Most of the patients were thereby enabled to recover at home rather than needing to be hospitalized, reducing unnecessary use of health care resources while simultaneously supporting patient preference. Reductions in emergency conditions and resultant hospitalizations also emerged as benefits of remote monitoring in several other publications. Second, DHTs reduced the need for referral to other health care professionals or diagnostic services. According to a narrative review, teleconsultations coupled with services such as teleradiology and telepathology may enable patients to receive advice and diagnoses in a shorter time and without referral to specialists, thereby freeing up specialist availability while also providing patients with faster access to the care they need. Third, DHTs enabled patients to be triaged to a care mode that best suited their needs and preferences. As in the case of remote monitoring, this enabled some patients to receive remote rather than face-to-face care, contributing both to meeting patient preferences and to conserving clinic and hospital resources.

Support for Patient-Centered Clinical Practice

In 9% (4/45) of the studies, DHTs were reported to support health care staff in efficient decision-making, accurate assessment, and timely diagnosis. This enabled health care staff to provide care that was closely aligned with patients' clinical needs, whereby patients received the right care at the right time. DHTs also improved continuity of care (eg, by reducing repeated patient and health care provider interactions) and facilitated multidisciplinary teamwork by connecting different health care professionals remotely. For patients, this translated into a more seamless health care journey and enabled them to receive holistic care from a number of disciplines when required.

Enablers and Barriers for the Use of DHTs for Patient-Centered Care

Enablers and barriers for the use of DHTs for patient-centered care emerged at the level of the device or platform, the user, and the broader environment. At the device or platform level (ie, the characteristics and design of DHTs), the most commonly reported enablers related to the ability of DHTs to meet users' individual needs, such as integrating easily with users' lives or workflows and being adapted to local languages, cultures, and literacy levels. Co-design methodology was commonly suggested as an enabler for developing DHTs that aligned with users' individual needs.
Incorporation of direct support from health care professionals (eg, teleconsultations) was also recommended, as was ensuring that users found DHTs easy to use. Although no barriers to patient-centered outcomes pertaining to the characteristics or design of DHTs were specifically reported in these studies, each of the enablers, if unmet, could act as a barrier (eg, DHTs that are not adapted to local languages). At the user level, the availability of technical support and of user education and training with regard to DHTs was the most commonly reported enabler, whereas low literacy and low technical literacy emerged as user-level barriers in several of the publications (9/45, 20%). Finally, at the broader environmental level, governance that ensures the security, privacy, and integrity of DHTs, as well as cross-sectorial collaboration between and within government and nongovernment sectors on the development, implementation, and promotion of DHTs, were the most frequently reported enablers. Limited user access to DHT infrastructure (especially among populations residing in rural areas and those with low socioeconomic status) and a lack of policies and protocols to guide the implementation and use of DHTs emerged as common barriers at the environmental level.
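The three analytic levels used above lend themselves to a simple structured summary of the findings. Below is a minimal sketch (Python) organizing the most frequently reported enablers and barriers by level; the structure and wording are illustrative condensations of this section, not an exhaustive coding scheme from the review.

```python
# Sketch: enablers and barriers for DHT-supported patient-centered care,
# grouped by the three levels used in this review.
FINDINGS = {
    "device/platform": {
        "enablers": ["fits users' lives and workflows",
                     "adapted to local language, culture, and literacy",
                     "co-designed with end users",
                     "built-in professional support (eg, teleconsultations)",
                     "ease of use"],
        # No device-level barriers were reported directly; each enabler,
        # unmet, may act as a barrier.
        "barriers": [],
    },
    "user": {
        "enablers": ["technical support", "user education and training"],
        "barriers": ["low literacy and technical literacy"],
    },
    "environment": {
        "enablers": ["governance for security, privacy, and integrity",
                     "cross-sectorial collaboration"],
        "barriers": ["limited access to DHT infrastructure",
                     "lack of implementation policies and protocols"],
    },
}

for level, groups in FINDINGS.items():
    print(level.upper())
    for kind, items in groups.items():
        for item in items:
            print(f"  {kind[:-1]}: {item}")  # "enabler"/"barrier"
```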
Regarding the user level, the availability of technical support and user education and training with regard to DHTs was the most commonly reported enabler, whereas low general and technical literacy emerged as a user-level barrier in several of the publications (9/45, 20%). Finally, at the broader environmental level, governance that ensures the security, privacy, and integrity of DHTs, as well as cross-sectoral collaboration between and within government and nongovernment sectors on the development, implementation, and promotion of DHTs, were the most frequently reported enablers. Limited user access to DHT infrastructure (especially among populations residing in rural areas and those with low socioeconomic status) and a lack of policies and protocols to guide the implementation and use of DHTs emerged as common barriers at the environmental level.

Principal Findings

This scoping review is the first to bring together evidence regarding the use of DHTs to support patient-centered care in LMICs in the APR, contributing to an increase in the knowledge base about the value of DHTs in non-Western countries. The findings suggest that many LMICs in the APR are successfully using DHTs to support the equitable provision of patient-centered care and simultaneously reduce pressures on their health care systems. To optimize success when developing its own approach to the use of DHTs to address the country's specific health care challenges, Vietnam should take advantage of the lessons learned by these neighboring countries. In line with the findings of previous studies, DHTs were shown to be a viable option for the management of NCDs such as diabetes, cardiovascular disease, and depression in LMICs in the APR, which is encouraging given the rise of NCDs in Vietnam. Evidence from a study on remote monitoring of patients with COVID-19 infection also demonstrated the potential of DHTs to conserve health care resources in the face of communicable disease outbreaks. Perhaps most promisingly, DHTs were able to increase access to health-related information and health care services in rural and low-income areas. This suggests that DHTs may go some way toward addressing the health disparities that persist in Vietnam. However, both the development and implementation of DHTs to support patient-centered care were not without challenges. Although several enablers for the use of DHTs were identified, so too were many barriers at the user and environmental levels. These barriers need to be considered and accounted for when planning for widespread use of DHTs. It is also important to note that, although no barriers pertaining to the individual characteristics or design of DHTs were specifically reported, a lack of evidence regarding device- and platform-level barriers in the included studies does not indicate the absence of barriers at this level. Each of the enablers reported at the device and platform levels may also act as a barrier to patient-centered outcomes if not fulfilled. The same is true of enablers at the user and environmental levels. Therefore, policy makers should take a holistic view of enablers and barriers and give equal weight to both when planning for DHT use. There are several recommendations that Vietnamese policy makers may consider. First, it is important to emphasize stakeholder engagement.
According to our findings, DHTs that strongly align with the needs of end users reflect patient-centered principles and are likely to enable patient-centered outcomes. Therefore, it is crucial to engage with end users at all stages of DHT development and implementation to understand and meet their needs. This could be achieved by adopting a co-design approach. Co-design has been well established as a methodology for the development and ongoing improvement of health care services. It is defined as "collective creativity as it is applied across the whole span of a design process," which in this context refers to the involvement of a diverse range of stakeholders (ie, patients and their carers, health care professionals, researchers, and technology designers) throughout the development and implementation of DHTs. Several studies identified in this review used elements of co-design to determine the feasibility, acceptability, and usability of DHTs, and this approach was strongly recommended to increase the likelihood of patient-centered outcomes. Second, measures are needed to strengthen digital literacy among the Vietnamese population. According to the Global Competitiveness Index 4.0 2019 Rankings, which measure digital skills among the active population on a scale ranging from 1 to 7, Vietnam achieved a value of 3.8. Although Vietnam is ranked fourth in a list of 8 Association of Southeast Asian Nations member states in terms of digital literacy, it comes in 97th when compared with global estimates among 141 countries. Our findings demonstrated that low digital literacy is a key barrier to the uptake of DHTs for both patients and health care staff and as such may restrict access to patient-centered health care if not addressed. Digital literacy education is therefore required. A multimodal approach is needed, including school-based education, community education and adult learning, and workforce training and development. This would enable the Vietnamese population to develop strong foundations in digital literacy, as well as improve their digital literacy in later life. For guidance, Vietnam may look to the World Health Organization's Global Strategy on Digital Health 2020-2025, which includes improved digital literacy in its strategic objectives. Our findings suggest that, even with adequate digital literacy, users value the availability of technical support and training for the use of specific DHTs. As such, DHT developers should consider this an essential component of their product. Third, support is needed for the improvement of DHT infrastructure. Population-wide access to affordable and reliable mobile devices, computers, internet and mobile networks, and electricity is essential for the use of DHTs, according to the studies included in this review. Promisingly, the Vietnamese government has already committed to improving the nation's digital infrastructure in its National Digital Transformation Program 2025-2030. Targets include improvements to internet and mobile networks throughout the country, as well as the establishment of a telemedicine unit in 100% of health care services. However, no measures or targets relate specifically to improving digital infrastructure in rural and low-income regions of Vietnam. Our findings indicate that these regions may require additional support to achieve equitable access to DHTs, using measures such as subsidized access to these technologies and inclusion of DHT-supported care in medical insurance plans.
Fourth, our findings highlight cross-sectoral collaboration on the development, implementation, and promotion of DHTs as a key enabler for the use of DHTs to support patient-centered care. This aligns with findings from previous studies. A coordinated, whole-system approach is needed to overcome the complexities and costs of implementing a comprehensive DHT system in Vietnam. This should involve collaboration between the ministry of health and other relevant government departments (ie, the ministry of science and technology and the ministry of information and communications), as well as local and international collaboration with health care providers, the private sector, researchers, technology developers, social entrepreneurs, and consumers. Fifth, efforts must be made to strengthen the governance of health-related cybersecurity. The use of DHTs to record, store, and share health data increases the risk of privacy and security breaches. Data security is a key concern for end users according to this review and is likely to affect the uptake of DHTs. Adequate governance of DHTs with regard to cybersecurity is therefore needed to protect users' information and to promote trust in the security and integrity of DHTs. Currently, data protection laws in Vietnam are fragmented. Cyberinformation security (ie, information exchanged in a telecommunications or computer network environment) is governed by Law No. 86/2015/QH13 (2015), whereas cybersecurity falls under Law No. 24/2018/QH14 (2018). Although the law on cybersecurity recognizes health as an information system critical for national security, no detail on specific regulations or protections is provided with regard to health data in either of these laws. To promote trust, specific regulations concerning the protection of health data that are recorded, stored, and shared using DHTs are required. Resources must also be dedicated to enforcing the resultant regulations. Sixth and last, we found that a lack of policies and protocols to guide DHT implementation and use was a common barrier to the application of DHTs to support patient-centered care. Conversely, clear positions and policies at the government level promoted confidence in DHTs within the broader community and guided their appropriate use. The Vietnamese government therefore has the opportunity to lead the way in DHT uptake in the nation. Developing and publishing guidelines on the use of DHTs, including patient-centered DHT-supported care, would set the standard for quality use of DHTs and inform best practice for both health care professionals and technology developers. Vietnam could look to neighboring countries as an example. In India, the ministry of health and family welfare collaborated with the Medical Council of India to develop telemedicine guidelines. These guidelines supported the health care system in adapting quickly to telehealth during the COVID-19 pandemic while still maintaining a consistent standard of patient-centered care. Without such guidelines, many disparate approaches to telehealth could have resulted, and the quality of DHT-supported care would have been likely to vary.

Limitations

This study may have been limited in its ability to provide a comprehensive overview of the use of DHTs to support patient-centered outcomes in all LMICs in the APR. Although the scoping review methodology allowed for a broad search, it was necessary to limit the search to papers published in English and Vietnamese.
This may have excluded publications available in other regional languages.

Conclusions

The use of DHTs is a viable option to increase equitable access to quality, patient-centered care across Vietnam and simultaneously reduce pressures faced by the health care system owing in part to a rapidly aging population and an increase in NCDs. Vietnam can take advantage of the lessons learned by other LMICs in the APR when developing its own approach to the use of DHTs. The following strategies are recommended: (1) emphasize stakeholder engagement, (2) strengthen digital literacy, (3) support the improvement of DHT infrastructure, (4) increase cross-sectoral collaboration, (5) strengthen governance of cybersecurity, and (6) lead the way in DHT uptake. Mapping existing DHT applications in Vietnam and evaluating their effectiveness should be considered for future research. Investigations of the needs, preferences, and experiences of key Vietnamese stakeholders (eg, patients and their carers, health care workers and providers, and DHT developers) related to DHTs are also needed. This information would support the Vietnamese government to develop a national road map for DHTs and align the approach to DHT use most closely with local needs.
Managing children with daytime urinary incontinence: a survey of Dutch general practitioners
d70f2265-f5bd-40f3-8826-61f0716f378f
10132240
Family Medicine[mh]
Urinary incontinence (UI) is the involuntary leakage of urine from the age of 5 years or older. Associated with shame, stress and social difficulties in children and parents alike, it affects self-confidence and impairs quality of life. The treatment of UI in the Netherlands is multidisciplinary. Parents first seek help from a general practitioner (GP) or youth healthcare practitioner. If these physicians cannot resolve the problem, they can refer the child to a paediatrician or (paediatric) urologist. Although the Dutch associations of Urology and Paediatrics have collaborated to create guidance for assessing and treating daytime UI, no specific guideline exists for GPs. Studies in New Zealand and Australia showed that confidence in managing daytime UI seems to vary in primary care. No comparable studies are available in the Netherlands, and it is unclear how Dutch GPs approach daytime UI in children, how confident they are with this care, and on what basis they refer to secondary care. We aimed to identify these topics.

Study design

We performed a survey among GPs who referred children aged 4–18 years with daytime UI to the outpatient clinic of a large teaching hospital in the Netherlands between January 2018 and September 2019. We searched for cases based on Diagnosis Treatment Combination (DBC) codes recorded in secondary care medical charts and reviewed the referral letter to obtain the reason for referral. Children referred for daytime UI, with or without coexisting nocturnal enuresis, were included. Monosymptomatic nocturnal enuresis and referral with urinary tract infections (UTIs) as the only cause for UI were exclusion criteria. General information, such as the child's age and gender, was obtained from the referral letter or medical file. Finally, we invited the GPs of identified patients to participate in this survey.

Survey

We constructed a questionnaire in a multidisciplinary team comprising a GP, a urologist, an epidemiologist and independent researchers, based on clinical experiences, (inter)national guidelines and previous research. To retrieve information on actual cases, the questionnaire included seven patient-specific questions. Additionally, eight general questions about the treatment of daytime UI were used. GPs who referred more than one child were asked to complete the patient-specific information for each child and the general part only once. If a colleague referred a child, we asked the GP who received the questionnaire to respond on their colleague's behalf. The questionnaires were sent a maximum of 1 year after referral, and reminders were sent to GPs who had yet to respond after 2 weeks.

Statistical analysis

Descriptive characteristics are reported for patient demographics and reason for referral, GP referral preferences, experience as a GP, interest in urological complaints, and self-rated skill in treating daytime UI in children. Normality was assessed using the Kolmogorov–Smirnov test. Medians and interquartile ranges (IQR) are reported for non-normally distributed data. Categorical variables are presented as percentages and compared with the chi-square test. We calculated 95% confidence intervals for some categorical variables. Possible correlations between different ordinal variables are shown using the Spearman rank correlation coefficient (r_s), considering a p-value < 0.05 to be statistically significant. Correlation is shown graphically by an interpolation line between the known values, offering a simplified view of the relationship. The data were analysed using IBM SPSS, Version 25.0 (IBM Corp., Armonk, NY).
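Purely as an illustration of the analysis pipeline described above, an equivalent open-source sketch is shown below. The study itself was analysed in SPSS; all variable names and values here are hypothetical stand-ins, not study data.

```python
# Illustrative sketch of the described analyses; hypothetical data only
# (the study itself was analysed in IBM SPSS).
import numpy as np
from scipy import stats

# Hypothetical stand-in for a continuous variable such as years of experience
experience_years = np.array([2, 4, 5, 8, 11, 12, 15, 21, 24, 30])

# Normality check: Kolmogorov-Smirnov test against a fitted normal
z_scores = (experience_years - experience_years.mean()) / experience_years.std(ddof=1)
ks_stat, ks_p = stats.kstest(z_scores, "norm")

# Median and interquartile range for non-normally distributed data
q1, median, q3 = np.percentile(experience_years, [25, 50, 75])
iqr = q3 - q1

# Chi-square test comparing two categorical variables (hypothetical 2x2 table)
contingency = np.array([[20, 12],
                        [15, 25]])
chi2, chi_p, dof, expected = stats.chi2_contingency(contingency)

# 95% confidence interval for a proportion (normal approximation)
k, n = 49, 118  # hypothetical count of cases with a given characteristic
p_hat = k / n
half_width = 1.96 * np.sqrt(p_hat * (1 - p_hat) / n)
ci_95 = (p_hat - half_width, p_hat + half_width)

print(f"KS p = {ks_p:.3f}; median = {median}, IQR = {iqr}")
print(f"chi2 = {chi2:.2f} (p = {chi_p:.3f}); 95% CI = {ci_95}")
```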
Ethics

The Medical Ethical Committee of Isala Zwolle confirmed that formal ethical approval was not necessary under Dutch law.

Participants

Of 201 children referred to urologists and 959 referred to paediatricians, 25 and 219 met the inclusion criteria, respectively. In total, 94 GPs returned 118 questionnaires (1–4 per GP), with the data for 96 cases (81.4%) completed by the referring GP. Seven GPs completed only the patient-specific part without answering the general questions, while five GPs did not respond to the general part. Complete data were available for 72 unique GPs, 40 females (55.6%) and 32 males (44.4%), with a median working experience of 11.5 years (IQR, 13.3 years). The 118 children included 63 (53.4%) males, with a median age of 6 years (IQR, 4 years).
Management of daytime UI

Discussed complaints

Most GPs discussed the coexistence of nocturnal enuresis (73.7%), whether UI was primary or secondary (68.8%), the defaecation pattern (63.6%) and/or micturition habits (61.9%). GPs less frequently asked if coexisting pain (43.2%), mental problems, social problems or UTIs were present. Eight GPs (6.8%) did not discuss any complaints and referred directly to the hospital.

Diagnostics

Overall, 22.9% of GPs performed no diagnostics, 49.2% performed a physical examination and 26.3% inspected the genital area. In 61.0% of cases, urine was checked by dipstick or microscopy, followed by a urine culture in 19.5%. A voiding diary was advocated by 11% of GPs. Reasons for not performing diagnostics were that parents wanted a referral or that diagnostics had already been done at a prior referral or by a physical therapist.

Lifestyle advice

More than half of the GPs (61.9%) gave lifestyle advice, including the need for sufficient fluid intake (34.7%), adequate toilet posture and hygiene (34.7%), a high-fibre diet (29.7%) and having set voiding times (28.0%).

Treatment

Some GPs (17.8%) started pharmaceutical treatment, most commonly laxatives (14.4%). Only one GP started anticholinergics, and two had used desmopressin. Most GPs (80.0%) did not think that treatment with anticholinergic drugs was appropriate for primary care. GPs referred to a pelvic floor physiotherapist in 11.0% of cases.

Referral

The most common reasons for referral were the explicit wish of a parent or patient (44.9%) or the persistence of symptoms despite treatment (39.0%). Most children were referred to paediatricians (83.9%), which most GPs reported as their preference (72.9%). Sometimes there was a desire for a more general approach, especially in cases with coexisting constipation, behavioural problems or other comorbidities. Arguments cited for referral to a urologist were parental request, the presence of an anatomic abnormality and recommendation by a pelvic floor therapist.

Competence and interest

Of the GPs, 41.4% felt incompetent in treating children with UI and 30% felt (totally) competent. More than half (55.7%) stated they wanted a clinical practice guideline for GPs. Some GPs consulted guidelines on nocturnal enuresis or recurrent UTIs (n = 12) or the guideline from the Dutch associations of Urology and Paediatrics (n = 2), but most GPs did not use a guideline. Almost half (47.9%) reported having no professional interest in urological complaints in children. Reported interest in urological complaints and feeling competent in treating children with daytime UI were positively related (r_s = 0.664, p < 0.001), irrespective of the expressed need for a guideline for daytime UI.
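For readers unfamiliar with the statistic behind the reported association, the following minimal sketch shows how a Spearman rank correlation between two ordinal survey items can be computed; the ratings below are invented for illustration and are not the study data.

```python
# Minimal Spearman rank correlation between two ordinal survey items;
# the ratings are invented for illustration, not study data.
from scipy import stats

interest_ratings   = [1, 2, 2, 3, 3, 4, 4, 5, 5, 5]  # hypothetical
competence_ratings = [1, 1, 3, 2, 4, 3, 5, 4, 5, 5]  # hypothetical

r_s, p_value = stats.spearmanr(interest_ratings, competence_ratings)
print(f"r_s = {r_s:.3f}, p = {p_value:.4f}")
```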
Main findings

In this study, Dutch GPs reported a lack of competence in treating children with UI and a wish for a clinical practice guideline specific to their needs. The most common justifications for referring a child with daytime UI were the explicit wishes of the children or parents, persistent symptoms despite treatment, the presence of psychosocial factors or other physical complaints. In most instances, children were referred to a paediatrician. Prior to referral, almost three-quarters of GPs obtained a medical history, but few performed any diagnostics or initiated treatment.

Strengths and limitations

A strength of this study is that the participating GPs had a broad range of experience working in both cities and more rural areas. However, our reliance on data from one hospital in the Netherlands may have generated an unrepresentative sample. The sampling method may have been both a strength and a weakness. On the one hand, it meant that all participating GPs were involved in caring for children with UI. On the other hand, GPs who treat children with UI will refer fewer children, and we do not know the size of this population or the care they received. Notably, outcomes could have been biased due to recall by allowing the use of medical records to answer patient-specific questions. By contrast, questions directed at the GP's general management were unlikely to be affected by recall bias because these reflect their current practice, but could have prompted socially desirable answers. Finally, it is unclear whether the outcomes of this study are generalisable to other European countries with comparable primary care settings, such as Denmark, Norway, England, Italy and Portugal. A study among GPs in Europe showed that GPs in the Netherlands treat most children themselves instead of referring the child to a specialist. However, this was based on all complaints a child can present with and was not specific to UI. Compared with GPs in other countries, Dutch GPs have a broad range of tasks and multiple other responsibilities, which could explain why they lack confidence in treating daytime UI, especially because the prevalence of daytime UI is relatively low. Our study shows that almost half of the GPs did not consider urological complaints an area of interest and felt they lacked the skills to treat these children. Previous studies in New Zealand and Australia support this finding.
The absence of guidelines and uncertainty may explain why GPs refer children with daytime UI directly to hospital for analysis and treatment when some cases could be managed in primary care. We found that the most common reason for referral was the parents' explicit wishes. This is in line with an earlier survey among GPs in Europe showing that Dutch GPs are most influenced by patients to make referrals, with 60% of referrals made at the patient's request in the Netherlands, compared with 30–40% in southern European countries. In case of referral, we found that GPs preferred the more general approach offered by paediatricians when children had problems other than daytime UI, including other physical or psychosocial problems. This is appropriate given the association between such complaints and daytime UI. GPs also likely referred children with multiple complaints to a paediatrician because of their specialist knowledge beyond UI and ability to manage all aspects of their care. We expect this to be comparable to other parts of the world, as paediatricians are sometimes considered the GPs for children. An essential early step in the treatment of UI is to ensure adequate fluid intake and to complete a voiding diary, which were advised by only about one-third and one-tenth of the GPs, respectively. These approaches have considerable potential as cheap and informative diagnostic aids that can be used by GPs. About one-quarter of the GPs obtained only a medical history before referral and performed no diagnostic tests. Although they did report paying attention to toilet position, hygiene, and urinating at set times, they may not have known that these are elements of standard urotherapy. Most GPs indicated that anticholinergics were not suitable for initial therapy, consistent with recent data showing that Australian GPs have poor knowledge of first-line treatments for daytime UI, and compared with GPs in other countries, Dutch GPs are reluctant to prescribe medication.

Implications and further research

If the confidence of GPs in treating daytime UI in children could be improved, many uncomplicated cases could be routinely managed in primary care, thereby reducing healthcare costs and demands on hospital care. To achieve this, we should educate GPs about urotherapy (avoiding holding manoeuvres, proper toilet posture, normalisation of fluid intake and timed voiding). This could be supported by a new GP guideline in which the basic assessment with simple diagnostic tests and standard urotherapy are explained. Special attention should be given to voiding diaries, which can easily reveal the frequency and pattern of UI. GPs could use both aspects to advise parents on how to solve the problem themselves. This also ensures that the therapeutic process begins with the active involvement of children and parents.
This research offers valuable insights into how Dutch GPs assess children with daytime UI. Most GPs do not treat these children but refer them to a paediatrician, and almost half of the GPs feel incompetent to treat children with daytime UI. Developing a GP guideline for this topic could help prevent unnecessary referrals to hospital, supporting the principle of the right care being delivered in the right place.
A Paper-Based Simulation Model for Teaching Inguinal Hernia Anatomy
17f6696e-5214-413c-8897-c038ba204541
10132405
Anatomy[mh]
Hernias of the abdominal wall, defined as the abnormal protrusion of intra-abdominal contents through the containing abdominal wall, are a common surgical pathology. They have a prevalence of about 4% in those over 45 years old. Inguinal hernias represent 75% of all abdominal wall hernias, and their repair remains one of the most common general surgical operations in the UK. However, the complex anatomy of the inguinal canal continues to make the understanding of this disease and its surgical repair challenging for medical students and junior surgical trainees. Traditionally, in the undergraduate curriculum, this topic is delivered using didactic lectures and tutorials or delivered in the operating theatre. These modes have inherent limitations; lectures are inherently descriptive and use 2-dimensional images, whereas intraoperative teaching is opportunistic and unstructured. The COVID-19 pandemic and its subsequent reprioritisation of healthcare resources have consequently had a detrimental effect on the volume and quality of teaching opportunities in surgical training. The pandemic has highlighted the importance of developing surgical training tools which can be complementary to traditional surgical training techniques or be used as effective contingency alternatives where normal workplace surgical training is reduced or suspended. This has led to the development and use of a 3D paper-based model for simulated teaching of inguinal hernia in our department.

Hernia model

A paper-based model was developed comprising four overlapping paper panels simulating the anatomical layers of the inguinal canal and associated structures. These paper panels display key anatomical structures of the inguinal canal in schematic fashion and allow for low-fidelity simulation of open groin hernia procedures. These models can be easily modified using readily available adjunct materials such as surgical gauze, plastic tubing and glove material to simulate normal inguinal canal anatomy, various inguinal hernia pathologies and an open surgical mesh repair of an inguinal hernia.

Learning sessions

The use of these models was incorporated into a timetabled, structured learning session delivered by the authors for 3rd- and 4th-year medical students rotating through their general surgical placement in a single teaching hospital site. Briefly, in these learning sessions, pertinent concepts surrounding the anatomy and pathology of inguinal hernia are discussed, including surface and surgical anatomy, clinical examination, investigations including radiology, different pathological variants and surgical techniques involved in the repair of inguinal hernia. These learning sessions were designed and blueprinted based on Gagne's instructional levels (Supplementary Table 1). Students are then provided with one each of a variety of completed models, each constructed to simulate the normal inguinal canal, various inguinal hernia pathologies or a surgically repaired inguinal hernia. Students are then allowed to make a 'skin incision' on the model and dissect down to the deepest layer, simulating a surgical exposure of the inguinal canal, and to discuss what they find on these models and compare it with the other models. In models with simulated pathology, students can proceed to a repair of the hernia, including dissection of the sac and reducing it, and placing and securing the 'mesh'.
Students' perceptions of their knowledge and understanding of inguinal hernia anatomy and pathology were assessed using anonymised surveys delivered immediately before and repeated immediately after the learning sessions. Additionally, students' perceptions of the usefulness of the models and the sessions were assessed in the post-session questionnaires. These questionnaires incorporated three questions asking the learners to rate their confidence in describing the layers of the inguinal canal, identifying a direct and indirect inguinal hernia and naming the contents of the inguinal canal on a 10-point semantic differential scale. Learners were also asked to rate the usefulness of the session and provide freehand comments.

Ethics

Proportional review has been sought from the University of Glasgow College of Medical, Veterinary and Life Sciences Ethics Committee, who have advised that this research project does not need full ethical review and have waived the need for this. Data used and reported in this study are from routinely collected course evaluation data and do not include any personal identifiable details from students involved in these teaching sessions.
A total of 45 students participated in these sessions over a period of 6 months. Pre-learning session mean ratings for the learners' confidence in their understanding of the layers of the inguinal canal, in identifying indirect and direct inguinal hernias and in naming the contents of the inguinal canal were 2.5, 3.3 and 2.9, while post-learning session mean ratings were 8.0, 9.4 and 8.2, respectively. Paired samples Student's t-tests for all three questions were statistically significant ( p < 0.001) (Fig. ). The mean rating for the usefulness of the session was 9.6/10. Free comments from students emphasised the usefulness of the models as a visual learning aid (Fig. ).

Our results indicate that students found these sessions useful in improving their understanding of inguinal hernia anatomy, pathology and surgical repair. Simulation is increasingly used in surgical training and represents a shift from the traditional 'see one, do one, teach one' paradigm of surgical training . There have been multiple drivers for this paradigm shift, including the increasingly steep learning curves associated with modern surgical techniques, an increased focus on patient safety and the adoption of modern educational pedagogical methods in surgical training . Simulation is pedagogically consistent with the current understanding of surgical skill acquisition and development. Fitts and Posner describe the three-stage theory of skill acquisition as incorporating the distinct stages of cognition, integration and automation, which respectively involve intellectualising the task, translating that understanding into execution of the task, and thereafter developing automation of the task through continued practice . Simulation allows trainees to develop and master the earliest stages of task acquisition in a safe environment away from the patient. The evidence for simulated models of hernia repair and their efficacy is scarce in the literature. Ansaloni et al. and Nazari et al. have both independently described different 3-dimensional models, constructed primarily from a cardboard box and from different fabrics, respectively . Mann et al. describe a full paper model similar to ours, illustrated with realistic anatomy . However, unlike our model, Mann et al.'s model does not allow modification for the simulation of different surgical pathologies using adjunct material . Other ex vivo models, including computer simulation models, have been described but are more cost-intensive and often not available as open-source models which can be reproduced widely by readers and interested trainers .
Our model is also of considerably low fidelity, with its use of paper and schematic anatomy; fidelity, in the context of simulation, is the (multidimensional) level of realism that a particular simulation activity presents to the learners . This design is deliberate. Indeed, current evidence suggests that educational outcomes are similar in high- and low-fidelity models, and some studies suggest low-fidelity models are superior to high-fidelity models . A more unified interpretation of the current evidence may be that training should constitute a range of fidelity levels, and this can be personalised to the individual needs of the learners. When considered within the context of cognitive load theory, low-fidelity, simpler models may minimise the intrinsic and extraneous cognitive loads (intrinsic load being the innate difficulty of the task itself, in this case the hernia repair, and extraneous load being any other external load not related to the subject matter itself, e.g. the learning session and how it is designed) . This can therefore better aid understanding of the key concepts behind the task and thus the acquisition of learning . This suggests that these low-fidelity models are ideally suited to introducing the concept of hernias and hernia repairs to relative novices such as medical students and surgical trainees at the beginning of training. Importantly, it is likely that the best learning programs will employ a mixed and perhaps stepwise manner of increasing fidelity and complexity; our low-fidelity model may therefore be utilised as an introductory-level model to introduce the concept to novices before progressing in a stepwise manner to more complex simulations, for example, computer, 3-dimensional and cadaveric simulations, and finally patients undergoing hernia repairs in the operating theatre .

This work has some limitations. While we have assessed medical students' perceptions of their knowledge and understanding, we have not assessed their actual knowledge and understanding. Future research should assess the knowledge and understanding of students before and after undergoing learning sessions using these models. These models and their efficacy also need to be validated across medical students at different training levels, as well as postgraduate doctors at the early stages of surgical training.

In conclusion, we describe a cost-effective paper-based model for the teaching of inguinal hernia which can be flexibly modified to represent normal anatomy and different surgical pathologies of the inguinal canal. The use of these models within a structured learning session has been associated with improved students' perception of their knowledge and understanding of the anatomy, pathology and management of inguinal hernias. This paper also provides the model as an electronic template (Supplementary File 2) and detailed information on the design and construction of both the model and the associated lesson plans, making this an open-source model which can be evaluated and used by surgical trainers on a global basis.

Below is the link to the electronic supplementary material.
Supplementary file1 (DOCX 16 kb)
Supplementary file2 (PDF 147 kb)
COVID-19 messages targeting young people on social media: content analysis of Australian health authority posts
9e29392a-cbb0-4c05-8599-d895eb99aa95
10132623
Health Communication[mh]
The COVID-19 pandemic has highlighted the importance of communicating reliable health messages to the community during public health emergencies . Given their accessibility and time-sensitivity potential, social media technologies are playing an increasingly significant role in the health communication of governments and public health authorities . During the COVID-19 pandemic, platforms like Facebook, Twitter, Instagram and TikTok emerged as major sources of health information for the public . Health authorities can use these platforms as an effective tool to increase public health awareness through the dissemination of brief messages to general or targeted populations . Studies have already shown that this use of social media can positively influence awareness of public health measures and preventative behaviours . Users of social media were three times more likely to follow the COVID-19 health rules than non-users, and daily social media users were more likely to engage in protective, social distancing behaviours . Young people are a key social media user group, active on an average of five social media platforms . They frequently use social media to find health information and receive the majority of their COVID-19 information from the internet and social media . However, the abundance of unverified health information and misinformation on social media makes it difficult for the general public to identify accurate information, thus impeding governmental health-related communication efforts . The Director-General of the World Health Organization (WHO) termed the COVID-19 misinformation situation an 'infodemic' (i.e. a misinformation epidemic or pandemic) due to the proliferation of conspiracy theories, propaganda and unproven scientific claims regarding the diagnosis, treatment and prevention of the disease . This presence of misinformation can obscure reliable information on social media and make it harder to discern, which is of particular concern for individuals with lower levels of health literacy. Whilst young people may have high levels of digital literacy, they tend to have comparatively low levels of critical health literacy, limiting their ability to critically evaluate online health information . Even university students with well-developed digital health literacy still face difficulties discerning accurate and reliable COVID-19 health information online . Furthermore, young people in priority populations at greater risk of COVID-19 infection and adverse outcomes (e.g. individuals from Aboriginal and Torres Strait Islander and culturally and linguistically diverse communities, and individuals living with chronic health conditions) may face compounded difficulty due to the lack of tailored and targeted COVID-19 messaging from health authorities . These health literacy concerns are further compounded by the fact that the social media platforms young people use frequently host content that discourages vaccination and compliance with government mandates and/or prevention measures. For example, one study identified that over 50% of TikTok videos about COVID-19 vaccination discouraged vaccination through parodies and memes of adverse reactions to the vaccination . Young people were a key interest group for Australian health authorities during the COVID-19 response, as they represented high proportions of COVID-19 cases throughout the pandemic .
Young people were also excluded from the early stages of the Australian COVID-19 vaccination roll-out and received mixed messages around eligibility and the blood clotting risk associated with the AstraZeneca vaccine . Furthermore, young people were found to be more vaccine hesitant and to endorse misbeliefs about COVID-19 more readily than other age cohorts, which can negatively impact other preventative behaviours . These factors motivated health authorities to target this population with communications and interventions during the COVID-19 pandemic. Social media has the potential to be an important tool to communicate reliable and timely health information to young people; thus, it is essential to understand how official social media channels are currently being used for this purpose. An Australian study conducted prior to the COVID-19 pandemic found that public health Facebook accounts utilize social marketing principles, relying on humour and celebrity endorsements for higher engagement with their health messages . However, there has been limited research into the way health authorities target young people with COVID-19 messages and use social media strategies in their messaging. Preliminary research conducted in the first few months of the COVID-19 pandemic in 2020 by Dimanlig-Cruz et al. found that Canadian authorities rarely targeted young people explicitly in their social media posts on COVID-19-related physical distancing. Further research is required to understand how other health authorities used social media during different points of the pandemic to target young people with COVID-19 messages.

Objectives
This study aimed to describe and identify the characteristics of COVID-19 health communication targeting young people on social media shared by Australian health authorities during the Delta outbreak (September 2021) by investigating the following research question: How do Australian health authorities use social media to communicate COVID-19 health messages to young people (16–29 years)?

Study design
We conducted a content analysis of social media posts targeting young people with COVID-19 messages shared by Australian State/Territory government health authorities. Social media posts were extracted in two stages: in Stage 1, all COVID-19-related posts were extracted from the accounts, and subsequently in Stage 2, posts targeting young people were identified from the Stage 1 sample. We identified the social media accounts of eight State/Territory government health authorities across Australia: Australian Capital Territory, Northern Territory, New South Wales, Victoria, Queensland, South Australia, Tasmania and Western Australia . This study focussed on the Facebook, Instagram and TikTok platforms due to their frequent use by young people; users aged 18–24 are the largest user group on Instagram and TikTok, and the second largest on Facebook following users aged 25–35 years . All data used in the study are publicly available.

Australian COVID-19 context
Australia has a federal system of governance; pandemic response responsibilities are split across Commonwealth, State and Territory governments.
The coordination of COVID-19 communication is delegated to the State and Territory level, primarily led by State and Territory government health departments via their official media channels . This study focussed on social media posts shared over the month of September 2021, during the Delta outbreak in the Eastern States of Australia. The Australian Capital Territory, New South Wales and Victoria were under 'stay at home' orders due to high COVID-19 caseload (referred to as the 'outbreak states' henceforth), whilst the Northern Territory, Queensland, South Australia, Tasmania and Western Australia had no major outbreaks (referred to as the 'non-outbreak states'). Health departments across the country were focussed on rapidly increasing vaccination rates; at the beginning of the month, 36% of eligible Australians aged 16 years and over had received two doses of a COVID-19 vaccine, and by the end of September, 54% had received two doses . The COVID-19 vaccination roll-out was staggered by age (with the exception of individuals with chronic conditions or of Aboriginal and Torres Strait Islander background); the 16- to 30-year age group became eligible for the Pfizer COVID-19 vaccine (the recommended vaccine for the age group) at the end of August 2021.

Data collection
Stage 1: extraction—all COVID-19-related posts
All posts related to COVID-19 shared by the identified accounts from 1 to 30 September 2021 were extracted. This included content posted by other accounts and reposted by the identified accounts. We also included duplicate posts that were cross-posted by the departments across their multiple platforms. Extracted posts were saved as screenshots and URLs in a data extraction spreadsheet stored on the University's password-protected cloud server.

Stage 2: extraction—posts targeting young people
From the extracted posts, posts that targeted young people were subsequently identified if they satisfied one of the following selection criteria: (i) specific mention of the terms 'youth', 'teen', 'young person' or an age/range within 16–29 years; (ii) featuring internet trends and memes; (iii) shared via TikTok; (iv) predominately/solely featuring actors or characters in the age range; (v) featuring stylistic elements that appeal to young people (e.g. bright colours, cute illustrations that are appealing in an endearing way) or (vi) mention of themes relevant to the age group (e.g. dating, sexual activity, going to university). Examples are included in . The criteria were developed in consultation with young people; two young people (aged 25 and 26) and a researcher (aged 26) discussed the criteria used in research conducted by Dimanlig-Cruz et al. and adapted them accordingly. Most of the criteria from Dimanlig-Cruz et al. were retained, along with the same youth age range of 16–29. This study added a TikTok criterion, given it was unique for health authorities to use this youth-centric platform at the time. The criteria were purposely generous to ensure any posts the departments had intended to target young people were captured. Criteria were pilot tested with the data extraction method before use. Two coders who were in the youth age range (both aged 26) were trained in content analysis methods and coded the same sample ( n = 106) of randomly selected posts independently to test the criteria before completing the Stage 2 extraction. We assessed the level of agreement between the coders using Cohen's kappa statistic, which indicated high agreement ( k = 0.878).
Coders compared selections and resolved any discrepancies with assistance from other study researchers.

Descriptive data collection
Descriptive data were collected before content analysis was undertaken. We noted post type (e.g. image, video, GIF, etc.), selection criteria satisfied, URL, caption, hashtags, date shared and cross-posts in the data extraction spreadsheet. We recorded the followers of each account and the following engagement metrics for each individual post in the extraction spreadsheet: (i) Facebook—likes/reactions, comments, shares; (ii) Instagram—likes, comments (share data unavailable); (iii) TikTok—likes, comments, shares. All metrics were captured on 1 February 2022 to ensure posts were afforded ample time to receive engagement. Likes/reactions were the only metric consistent across the platforms and the only metric that was never disabled by the social media platforms/accounts.

Data analysis
We conducted a content analysis of the posts, systematically categorizing the posts into a predetermined coding framework outlined in . The framework was developed using previous research focussing on social marketing principles . Modifications were included after iterative testing to ensure relevance to COVID-19 communication to young people (e.g. capturing memes, emojis and responsive posts) and relevant priority groups. Apart from post type, codes were not mutually exclusive; multiple codes could be applied to a single post. To evaluate the reliability of coding, two coders who were in the youth age range (aged 25 and 26) and trained in content analysis methods coded the same sample ( n = 24) of randomly selected posts independently. We assessed the level of agreement between the coders using Cohen's kappa statistic, which indicated high agreement ( k = 0.841). Descriptive statistics were calculated to summarize the frequency of occurrence of each code, and metrics for each account and code.
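The Stage 2 selection can be restated as a simple rule-based filter over the extraction spreadsheet. The sketch below is only a hypothetical restatement of the six criteria: the field names, flags and crude keyword pattern are all invented for illustration, since in the study these judgements were made by trained human coders rather than by code.

```python
import re

# Crude proxy for criterion (i): explicit youth terms or an age 16-29.
YOUTH_TERMS = re.compile(r"\b(youth|teen|young person|1[6-9]|2[0-9])\b", re.IGNORECASE)

def targets_young_people(post: dict) -> bool:
    """True if a post satisfies any one of the six Stage 2 criteria."""
    return any([
        bool(YOUTH_TERMS.search(post.get("caption", ""))),  # (i) explicit mention
        post.get("internet_trend_or_meme", False),          # (ii) trends/memes
        post.get("platform") == "TikTok",                   # (iii) shared via TikTok
        post.get("actors_in_age_range", False),             # (iv) actors aged 16-29
        post.get("youth_stylistic_elements", False),        # (v) stylistic appeal
        post.get("youth_relevant_themes", False),           # (vi) relevant themes
    ])

posts = [
    {"platform": "TikTok", "caption": "#20sectakeaway", "internet_trend_or_meme": True},
    {"platform": "Facebook", "caption": "Daily case update for 14 September"},
]
print([targets_young_people(p) for p in posts])  # [True, False]
```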
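Both reliability checks above are reported as Cohen's kappa ( k = 0.878 for the Stage 2 extraction, k = 0.841 for the coding framework). For readers unfamiliar with the statistic, the following minimal, self-contained sketch computes it from scratch for two coders labelling the same items; the toy data are invented, and values above roughly 0.8 are conventionally read as near-perfect agreement.

```python
from collections import Counter

def cohen_kappa(coder_a, coder_b):
    """Cohen's kappa for two raters over the same items (nominal labels)."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    # Observed agreement: proportion of items both coders labelled identically.
    p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected chance agreement, from each coder's marginal label frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    labels = set(coder_a) | set(coder_b)
    p_e = sum((freq_a[l] / n) * (freq_b[l] / n) for l in labels)
    return (p_o - p_e) / (1 - p_e)

# Toy example: two coders screening ten posts (1 = targets young people).
a = [1, 1, 0, 0, 1, 0, 1, 1, 0, 0]
b = [1, 1, 0, 0, 1, 0, 1, 0, 0, 0]
print(cohen_kappa(a, b))  # 0.8 (one disagreement out of ten)
```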
We identified 1058 posts about COVID-19 shared by the eight State/Territory health departments over the month of September 2021. Of these, fewer than a quarter (238 posts) specifically targeted young people. A breakdown is provided in .

Platform
As seen in , all eight health departments shared Facebook posts, five shared via Instagram and only one shared via TikTok. The departments sometimes shared the same post across their Facebook and Instagram accounts (8.4% of posts ( n = 20) were cross-posted), but TikTok posts were unique to the platform. The overall dataset consisted mostly of Facebook posts (72.3%; ), whereas TikTok and Instagram posts made up 17.6 and 10.1% of the dataset, respectively. The majority (86.6%) of the posts were shared by the departments of the outbreak states and the remaining 13.4% were shared by the non-outbreak states.

Youth criteria
Only 14.7% of posts explicitly targeted young people by including specific mention of age, 'youth' or 'young person'. The majority of posts implicitly targeted young people, included in the dataset because they contained stylistic elements that appealed to young people (71.4%), featured actors or characters of relevant age (37.0%), were shared on TikTok (18.1%), included themes relevant to young people (9.2%) or used youth-centric internet trends or memes (7.1%). Notably, 48% of posts satisfied two or more youth criteria, with posts that satisfied the TikTok criterion always satisfying other criteria.

Format
All posts were accompanied by visuals; 76.5% were still images and 23.5% were moving images like videos or GIFs. Of the still images, 78.6% were illustrations, 18.7% were photos, 10.4% were infographics and 3.3% were graphs. The majority (96.4%) of moving images were videos and the remaining 3.6% were GIFs. Video lengths ranged from 11 s to 4 min, with the majority (66.7%) being under 30 s in length. This is in part due to the high proportion of TikToks from New South Wales Health's daily COVID-19 communication campaign, the '#20sectakeaway'. These TikToks featured a celebrity TV show host presenting COVID-19 information using popular TikTok filters and transitions. Other TikToks included Q&A-style videos with a 'real' young person and people of authority (e.g. a prominent adolescent physician).

COVID-19 issues
There was a range of COVID-19-related issues raised in the communications, with 10% of the posts ( n = 24) covering more than one COVID-19 issue. Issues frequently raised included testing (33.2% of posts), vaccination (32.8%), restrictions (13.4%) and social distancing (6.7%). Misinformation was rarely posted about; the three posts about misinformation involved debunking circulating misinformation or educating the public on how to spot misinformation. The departments of the outbreak states most frequently communicated about testing (41.3% of outbreak state posts) and vaccination (26.7% of outbreak state posts). For example, testing posts included health promotional messages encouraging the public to get tested, updates on testing clinic wait times and daily testing rates. Similarly, vaccination posts included health promotional posts, daily vaccination rates and acknowledgements of important vaccination milestones.
Testing was not posted about in the non-outbreak states; their communication was concentrated predominantly on vaccination (71.8% of non-outbreak state posts) and, to a lesser extent, on masking (25.0% of non-outbreak state posts).

Communication technique
Over half (59.7%) of posts reported COVID-19-related facts such as updates about case and vaccination statistics and restrictions. These posts included daily case numbers or vaccination milestones. About 23% of posts educated the public about COVID-19, the vaccine and other preventative measures (e.g. an infographic outlining vaccine ingredients). Almost one-third (31.9%) of posts featured positive emotional appeal, such as encouraging the public to engage in health-promoting behaviours (e.g. 'we can do this together' or Rosie the Riveter-type imagery with a plaster on the upper arm). Departments also expressed gratitude for the COVID-19 effort by healthcare workers and the public. Rarely did the posts include fear-based techniques; the three posts that did use this approach included ominous and abstract references like 'the virus is ripping families apart' or 'every day you delay getting tested is a risk to you, your family and the community'. Additionally, 62.6% of posts included a specific call to action (e.g. 'go and get vaccinated today'). Nearly one third (31.9%) of the posts utilized social media time sensitivity and were responsive to a COVID-19-related event in the community (e.g. addressing long wait times at testing clinics or addressing negative comments on a previous post).

Social marketing techniques
All the health departments employed social marketing techniques in their communication and used engagement strategies to appeal to young people specifically. Humour was present in 16% of posts and just over 5% (5.5%) of posts were memes. Emojis featured frequently, appearing in the text caption or visuals of 45.4% of posts. Celebrities (generally sportspeople, social media influencers or popular media show hosts) were featured in 14.3% of posts. To a lesser extent (6.7%), posts included other people of authority (including prominent healthcare workers, scientists, politicians or community representatives).

Priority groups
Posts rarely contained information about priority groups that young people may be a part of, such as cultural and ethnic groups or chronic health and disability groups. Only 4.6% of posts targeted Aboriginal and Torres Strait Islander people with specific messaging, these posts being shared by two health departments (New South Wales and Northern Territory), and 1.3% targeted other culturally and linguistically diverse communities, shared by three health departments (Australian Capital Territory, South Australia and Tasmania). These posts delivered health messages in languages other than English and/or promoted the availability of translated health information. Posts targeting young people living with chronic health conditions and disability were also rare, representing only 2.9% of the dataset. Mental health issues in the context of COVID-19 were raised slightly more frequently and featured in 6.7% of the posts.

Engagement metrics
The accounts had large variations in follower counts, ranging from 3769 followers (Northern Territory Health Instagram) to 989 495 (New South Wales Health Facebook). To some extent this was reflective of the population of the state (e.g.
Northern Territory has the lowest population across the States/Territories whereas New South Wales has the greatest), but some accounts from less populous states had a larger social media following (e.g. South Australia Health's Facebook with 420 thousand followers) than those from populous states (e.g. Victoria Department of Health with 338 thousand followers). Detailed metrics data are provided in . The public's engagement with the 238 posts also varied. Overall, a small percentage (1.7%) of posts had 10 or fewer likes, 35.3% had between 11 and 100 likes, 37.0% had between 101 and 1000 likes, 25.6% had between 1001 and 10 000 likes and only 0.4% (1 post) had over 10 000 likes. Posts that performed well on one platform and were cross-posted on other platforms were also likely to perform well on the other platform; 25% of posts appeared twice in the top 10% of performing posts (3 different posts that were cross-posted on the departments' Facebook and Instagram). Some features seemed to receive more likes than others in the sample . For example, posts on TikTok received higher median likes (1086 likes) than posts on Facebook (111.5 likes) and Instagram (566 likes). Posts that included relevant themes for young people and included internet trends were also relatively popular. Videos under 30 seconds received higher median likes than videos 30 seconds and longer (1490.5 vs. 605.5 median likes). In terms of COVID-19 issues, the few posts that addressed misinformation received relatively high median likes (3548 likes) in comparison to posts about testing, which were posted very frequently but received low median likes. Posts containing the social marketing elements of humour, emojis and celebrities received a higher number of likes, whereas testimonials received the least. Additionally, the small number of fear-based posts received high median likes.
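To make the engagement summaries above concrete, the sketch below shows one way the median-likes comparison and the like-count bins could be derived from a per-post spreadsheet using pandas. The column names and rows are hypothetical stand-ins, not the study's actual data schema.

```python
import pandas as pd

# Hypothetical extract: one row per post, mirroring the extraction spreadsheet.
df = pd.DataFrame({
    "platform": ["Facebook", "Facebook", "Instagram", "TikTok", "TikTok"],
    "likes":    [95, 128, 566, 1086, 3548],
})

# Median likes per platform (the cross-platform comparison reported above).
print(df.groupby("platform")["likes"].median())

# Like-count bins used in the Results: <=10, 11-100, 101-1000, 1001-10000, >10000.
bins = [0, 10, 100, 1000, 10_000, float("inf")]
labels = ["<=10", "11-100", "101-1000", "1001-10000", ">10000"]
print(pd.cut(df["likes"], bins=bins, labels=labels).value_counts().sort_index())
```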
This study identified and described the characteristics of social media posts shared by Australian health departments targeting young people with COVID-19-related messages during the Delta outbreak. Notably, there was a lack of posts targeting young people with these messages, which reinforces the need for health departments to increase messaging to young people during health emergencies to ensure this priority group receives clear and accurate health information.
The findings presented in this paper provide public health authorities with current strengths and areas of need so that future health communication via social media to this group can be improved.

Comparisons with prior work
We found that not only were health authorities infrequently targeting young people with COVID-19 messages on social media, but they also infrequently used platforms popular with young people, favouring Facebook instead. Similar trends have been identified in other countries; Canadian health authorities predominantly used Twitter and Facebook to communicate COVID-19 physical distancing messages to young people . However, young people have been shifting towards other platforms like Instagram and TikTok in recent years, whereas Facebook lost 11% of users aged 18–24 from 2017 to 2020 . TikTok in particular saw a major rise in users during the COVID-19 pandemic , yet only one Australian health department shared content on this platform. The underutilization of these platforms with greater youth presence in favour of Facebook may reflect the skills and experience of health department staff, rather than the preferences of the intended consumer. Health authorities should consider platform relevance and appropriateness in future messaging to young people to ensure the greatest health benefit from their health communication efforts . This study also found that health departments in jurisdictions with no major COVID-19 outbreaks rarely shared COVID-19 posts targeting youth. Whilst their communities may not have been experiencing the COVID-19 burden at the time, previous research indicates that health organizations should establish a strong presence on multiple social media platforms as a preparedness measure to improve crisis communications in future emergencies . Furthermore, health authorities can actively cultivate trust and 'brand' awareness in a similar way to private businesses on social media by establishing frequent and consistent interactive communication with the public . By raising their social media profile in an appealing way and building relationships with their consumers, health departments may make young people more engaged with future health messages and help them better understand how health departments work to protect them . However, posts shared by the Australian health authorities were generally in line with best practice communication techniques . For example, all posts were accompanied by visuals rather than being text-only, an important factor in communicating health messages, but also important for social media platform algorithms, which often favour visual over text-based content . Video posts were also generally short in length, as recommended by social media marketers for engagement, especially for younger populations . Posts frequently included calls to action, a strategy other international health authority accounts communicating COVID-19 messages via social media also practiced . Australian health departments also frequently shared responsive posts, capitalizing on social media's unique ability to disseminate information to the community in real time. This reactivity can maintain public trust during emergencies when there are high levels of uncertainty and anxiety , and has led to higher public engagement with health messages during the Ebola crisis and the COVID-19 pandemic . The Australian health authorities rarely used fear appeal in their COVID-19 social media communications, which is notable given that these posts received high engagement from users.
Previous research also found that COVID-19-related TikToks shared by official accounts that conveyed alarm, concern and disease severity received higher user engagement . Interestingly, the health departments' focus on positive emotional appeal when reporting about COVID-19 diverged from that of traditional media, who tended to focus on negative emotions and death associated with the disease . Whilst discussing the negative aspects of disease outbreaks and validating a normal fear response is an important way to engage the public, fear appeals alone are rarely useful in altering behaviour, especially without encouraging self-efficacy or a sense that people can do something to avoid the danger . Additionally, it is typically recommended that negative messages be counterbalanced by a greater number of positive messages, since negative information tends to receive significantly greater weight in an individual's risk perception . Furthermore, including emotional appeals in COVID-19 health communications has been associated with inducing empathy for the people most vulnerable to the virus and with motivating adherence to preventative measures .

Infodemic management
Another key finding was that the health department accounts rarely posted about misinformation, despite these posts receiving high levels of engagement. Similar trends have been identified in analyses of Canadian public health tweets . MacKay et al. also found that less than 1% of Canadian health authority tweets corrected COVID-19 misinformation, despite these tweets receiving higher than average engagement compared with other crisis communication tweets . Similar trends were noted during the Ebola epidemic, with less than 3% of Instagram posts and 1% of tweets by the major health organizations, the Centers for Disease Control and Prevention, the World Health Organization and Médecins Sans Frontières, addressing Ebola-related misinformation . Addressing misinformation is key in infodemic management , especially for young people, who may have lower levels of health literacy and therefore greater vulnerability to misinformation. A recent multidisciplinary Delphi study on ending the COVID-19 public health threat recommended that institutions should proactively monitor false health information and collaborate with trusted community leaders to refute it and enhance trust . Health authorities are uniquely placed to bust these myths circulating on social media and ensure accurate information is predominant during public health emergencies .

Social marketing elements
Whilst we found the health departments frequently used emojis in their communication to young people, other social marketing techniques that appeal to young people, like humour, celebrities and memes, were infrequently used despite receiving high levels of engagement . It is noteworthy, however, that in comparison to a previous content analysis of Australian public health posts on Facebook pre-pandemic by Kite et al. , contemporary health departments leveraged social marketing techniques more frequently. For example, we found at least double the proportion of celebrity and humour posts in comparison to Kite et al., who found 5% of public health posts shared on Facebook featured celebrities and 8% were humorous. As well as attracting higher levels of engagement , social marketing elements like humour and entertainment have been perceived as effective in driving attention to public health communications, with research recommending a divergence from messages that are too didactic .
However, it is still unclear how humour in public health campaigns can translate to meaningful behaviour change . There may also be risks and unanticipated consequences of using social marketing techniques for the health authorities (e.g. going viral for the wrong reasons), or creativity and humour may overpower the main message . Instead of bearing the risk associated with humorous or experimental content, health authorities may benefit from other social marketing techniques, such as sharing popular user-generated content or influencer content that aligns with their aims and values . For example, preliminary research into the role of TikTok influencers and COVID-19-related content found that the most popular US TikTok influencers created content that usually demonstrated adherence to the public health guidance at the time . Since this content reaches millions of users on a platform mainly used by young people, it indicates there are positive implications for influencer marketing for health promotion on the social media platform .

Priority groups
We found limited communication in this study to the priority groups young people are a part of, like cultural and ethnic groups or chronic health and disability groups. Ensuring there is adequate, relevant and appealing communication to these priority groups is key during public health emergencies given the disproportionate impacts of the pandemic on vulnerable groups to date . For example, in Australia, infection rates for people who identify as Aboriginal and/or Torres Strait Islander during the Delta outbreak were twice the rate of non-Aboriginal and/or Torres Strait Islander Australians . COVID-19 age-adjusted mortality was also 2.5 times higher for people born overseas than for those born in Australia . Similarly, communicating preventative measures to individuals with chronic disease is especially important given their increased risk of severe COVID-19 illness and the potential worsening of existing conditions due to COVID-19 disruptions to their care . Including priority groups in official health department social media messaging is important to ensure these groups are not left out of mainstream health communication and to maximize the impact of collaborations.

Strengths and limitations
This study was strengthened by youth consultation to ensure the posts analysed were targeting young people aged 16–29 years. Youth involvement also extended into the data analysis stage to ensure findings were youth-centred. We note that the young people involved in the study were in the older range of the age group, so their views may differ from those of younger adolescents. We also note that because this study focussed on posts shared by the Australian Government State/Territory Health departments, it does not reflect the full breadth of youth health campaigns by other organizations; posts from other accounts were not captured unless reposted by the health departments. Facebook reactions were considered to operate as likes in this study, and since user comments were not analysed we do not know whether the public approved or disapproved of the messaging . Engagement metrics are also not always a reliable way to identify the success of health communication; accounts may disable certain metrics, algorithms can favour certain posts and boost sponsored posts, and posts may go viral for unintended reasons or attract negative comments . Additionally, this study captured content shared in 1 month during the Delta wave of the pandemic in Australia and therefore only reflects content from that period.
Health departments may have removed or deleted posts by the time of data extraction or shared ephemeral content (e.g. Stories) which could not be captured in the study.

Implications
Our findings provide practical implications for the use of social media to communicate COVID-19 and other health messages. Social media health communication to young people should be increased, especially on platforms they use frequently, like Instagram, TikTok and other emerging platforms. This type of communication should be prioritized for health promotion purposes and can be developed into a trusted source of health information in community messaging. Messages should aim to encompass the diverse range of needs of young people, including specific priority populations who may be in greater need of health information, whether by creating specific content or by reposting collaborations with other relevant organizations. Social marketing techniques like utilizing influencer marketing, internet trends and memes can also be explored to increase the engagement of these health messages amongst young people. Ultimately, it is important to understand the perspectives and needs of young people themselves to optimize this health communication during COVID-19 and beyond. Consultation and co-design methods are imperative in ensuring future health communication resonates with young people for the greatest public health benefit. Future research is needed to understand how COVID-19 health communication has changed over time as public health needs progress throughout the course of the pandemic.
Communicating health messages to young people via social media will remain a critical facet of the COVID-19 pandemic response as priorities shift from testing to vaccination and booster roll-out and long-term health impacts. There is a need for Australian health authorities to prioritize the development of social media health communication that engages with young people, particularly those most at-risk.
Young people need easily accessible, reliable and trustworthy health information that is tailored to their changing needs and concerns and combats the misinformation they see online. Health authorities can explore increasing their social media communication to this group and apply the relevant social marketing strategies in consultation with youth advisors to ensure messages are appealing and youth-centred.
A proteo-transcriptomic map of non-alcoholic fatty liver disease signatures
Non-alcoholic fatty liver disease (NAFLD) is a chronic, progressive condition affecting about 25% of the global population that is strongly associated with features of the metabolic syndrome, including obesity and type 2 diabetes mellitus (T2DM). NAFLD is characterized by excessive accumulation of hepatic triglyceride and encompasses a range of disease states: from steatosis (non-alcoholic fatty liver, NAFL) through non-alcoholic steatohepatitis (NASH), defined by the presence of hepatocyte ballooning and lobular inflammation with increasing fibrosis stage, to cirrhosis and hepatocellular carcinoma. Not every patient diagnosed with NAFL will develop NASH or progress to cirrhosis and end-stage liver disease, meaning that there is substantial interindividual variation in disease severity. Patients with greater steatohepatitic disease activity, defined by a histological NAFLD Activity Score (NAS, the sum of steatosis, hepatocyte ballooning and lobular inflammation) of 4 or more together with a fibrosis stage of 2 or more (F ≥ 2), are considered to show 'at-risk NASH', which indicates a high likelihood of progressive disease. Several non-invasive tests have been developed to identify patients with advanced liver fibrosis. These include indirect markers reflecting liver function and biochemical changes, such as the NAFLD Fibrosis Score (NFS) or the FIB-4, and biomarkers that directly measure collagen turnover, including cleaved pro-collagen type 3 peptide or thrombospondin-2. The FibroScan-AST (FAST) score, based on imaging assessment, has proved to be an efficient way to identify NASH patients considered to be at risk of progressive disease. More recently, proteomic approaches have been used to identify classifiers that differentiate advanced from early fibrosis. In contrast, effective biomarkers that identify steatohepatitis and grade its activity remain elusive; the field therefore relies on histological assessment, which is invasive and subject to considerable interobserver variability. In this study, we integrate proteomics and RNA sequencing (RNA-seq) approaches to understand pathophysiological changes associated with NAFLD in humans and to establish whether candidate circulating biomarkers might originate from the liver (Fig. ), a similar approach to that used recently in human alcoholic liver disease and in NAFLD animal models. Our study included 336 samples from patients with histologically characterized NAFLD derived from the European NAFLD Registry. The discovery cohort comprised 191 plasma samples and the independent validation cohorts included 115 serum samples together with 30 liver biopsies. Within the discovery cohort, 38.4% were female, the average age was 55.2 (±11) years, the average body mass index (BMI) was 33.5 (±6.7) and 60.7% had type 2 diabetes (Table and Supplementary Table ). Samples were processed for proteomics using the SomaScan v.4.0 platform, measuring 4,730 unique proteins, with reads corrected for sex, centre and T2DM (Extended Data Fig. ). When stratifying patients on the basis of fibrosis stage (F), ranging from 0 to 4, and comparing advanced (F3–4) with mild (F0–2) fibrosis, we found 117 unique proteins (121 probes) to be differentially expressed (Fig. and Supplementary Table ). Functional annotation enrichment clustered proteins into pathways such as 'cell adhesion', 'inflammatory response' and 'carbohydrate metabolism' (Fig. ).
When stratifying patients on the basis of high disease activity (NAS ≥ 4), we found 52 differentially expressed proteins (53 probes) (Fig. and Supplementary Table ). Enrichment analysis grouped proteins relating to 'lipid metabolism', 'amino-acid biosynthesis' or 'bile acid catabolism' (Fig. ). The two comparisons, advanced fibrosis and NAS ≥ 4, shared 30 differentially expressed proteins (Fig. ). When looking at the top 50 most significant differentially expressed proteins for each of these two comparisons, different dynamic expression patterns were observed as NAFLD progressed (Fig. ). Clear differences were seen between proteins associated with fibrogenesis and those associated with steatohepatitis during the pathogenesis of NAFLD: proteins associated with steatohepatitic activity (NAS ≥ 4) tended to peak in NASH F2–3 and then fall with progression to cirrhosis. By contrast, proteins purely associated with fibrosis increased steadily, peaking in cirrhosis (F4) (Fig. ). To establish that the circulating proteins were of hepatic origin, and to further characterize their cellular origins within the liver, we conducted a two-stage analysis: first, a proteo-transcriptomic comparison in a cohort of matching plasma–liver biopsy samples that were a subset of the discovery cohort, and second, an integrated single-cell RNA-seq and tissue expression analysis using publicly available data. We performed linear correlations between circulating proteins, based on the SomaScan read-out, and hepatic messenger RNA obtained from RNA-seq analysis in a subset of 52 cases from the discovery cohort with matching plasma–liver biopsy samples. Here, 4,584 protein probes, matching 4,292 proteins and/or genes, were identified in the RNA-seq data, of which 194 significantly correlated with each other (Fig. and Supplementary Table ). Within these 194 correlations, 31 proteins had been identified in the two previous comparisons described above (Fig. ). Eight of these 31 signals were associated with both NAS ≥ 4 and advanced fibrosis (F3–4), including THBS2, APOF, ADAMTSL2, CFHR4, TREM2, AKR1B10, SULT2A1 and PTGR1. In addition, 21 positive correlations were uniquely identified in the advanced fibrosis comparison, including GDF15, IGFBP7 and SHBG, while two correlations were from the NAS ≥ 4 comparison (ADSSL1 and ENO3) (Fig. ). GTEx tissue expression analysis indicated that several of the 31 proteins in the signature are enriched in normal human liver, including the markers APOF, CFHR4, PTGR1, SULT2A1 and SHBG (Extended Data Fig. ). Additionally, supervised analysis using bulk RNA-seq data from a large cohort of 206 patients with NAFLD showed that most of our signature changes occur in patients with advanced fibrosis and/or NAS ≥ 4 (Supplementary Table ). Integrated single-cell RNA-seq analysis showed that the 31 signature proteins can be found in different hepatic cell populations (Fig. ). Of the 31 markers, 18 were enriched in epithelial cells, hepatocytes or cholangiocytes (including AKR1B10, CFHR4 and PTGR1) compared with other hepatic cells, while other markers were primarily restricted to fibroblasts (ADAMTSL2, THBS2) or macrophages (CXCL8 and TREM2) (Fig. ). To demonstrate the potential power of our proteo-transcriptomic signature strategy to support the development of new non-invasive diagnostics for fibrosing steatohepatitis, we performed logistic regression analysis to identify patients with 'at-risk NASH', defined as NAS ≥ 4 (with at least one point deriving from each NAS component) plus F ≥ 2 fibrosis.
Backward elimination of variables identified a composite model in the discovery cohort (n = 191) that could classify patients with at-risk NASH with an area under the curve (AUC) of 0.878 (±0.025), including the variables BMI, T2DM and circulating ADAMTSL2, AKR1B10, CFHR4 and TREM2 (Fig. ), independent of any other clinical variables. The classification model had a positive predictive value of 0.79 and a negative predictive value of 0.85 (Supplementary Table ). It significantly outperformed established non-invasive tests, including the FIB-4, NFS and aspartate transaminase (AST) to alanine transaminase (ALT) ratio scores, in the entire discovery cohort of 191 patients, and had a higher AUC than the FAST score, which was available for a subset of 62 patients (Fig. , Extended Data Fig. and Supplementary Table ). These findings were validated in an independent cohort of 115 samples, where the model had an AUC of 0.80 (±0.04) (Fig. , Extended Data Fig. and Supplementary Table ). In this study, we have identified proteo-transcriptomic connections associated with features of progressive NAFLD. While only CFHR4 is uniquely expressed in healthy liver (Extended Data Fig. ), ADAMTSL2, AKR1B10 and TREM2 have previously been reported to play a role in the progression of liver diseases and NAFLD. Single-cell RNA-seq has shown that TREM2-positive macrophages are associated with hepatic portal fibrosis, while ADAMTSL2 reflects a zonal activation of hepatic stellate cells. Soluble ADAMTSL2 has proved to be a good biomarker to identify significant and advanced fibrosis in patients with NAFLD, while circulating TREM2 levels have proved to stratify patients with NASH. Soluble levels of TREM2 are believed to reflect the recruitment and expansion of TREM2-positive macrophages localizing to fibrotic areas in the liver, in a response to resolve steatohepatitis. Using a high-throughput RNA-seq approach in a cohort of 206 NAFLD biopsies to understand the pathogenesis of disease progression, we recently showed that changes in transcription of the epithelial markers AKR1B10 and GDF15 can also lead to altered circulating concentrations of these proteins, which may serve as putative biomarkers for fibrosing steatohepatitis. To support these findings, we performed immunohistochemical staining on a series of 30 NAFLD biopsies. AKR1B10 positivity was more prominent in advanced NAFLD and was observed in ballooned hepatocytes and in hepatocytes neighbouring necro-inflammatory foci and periportal/periseptal areas (Fig. ). This study has some limitations, as we assessed linear associations between protein and hepatic mRNA in a European White cohort only, which does not exclude a potential contribution of other organs to the expression of the circulating proteins, or that other factors contribute in different ethnic groups. We were also limited in our ability to confirm some proteomic findings in hepatic tissue owing to the limited availability of appropriate antibodies. Nevertheless, we have highlighted the complexity of the different liver cell populations and shown that circulating proteins correlating with hepatic mRNA can be used to identify patients with at-risk NASH.

Patient selection

A total of 336 histologically characterized cases were derived from the European NAFLD Registry (NCT04442334); samples were collected as previously described.
European White patients had been treated and diagnosed for NAFLD on the basis of histology at specialized centres including Angers and Paris (France), Mainz (Germany), Turin (Italy), Linköping (Sweden) and Newcastle upon Tyne (UK). The discovery cohort comprised 191 plasma samples and the independent validation cohorts included 115 serum samples and 30 paraffin-embedded liver biopsy sections (Table and Supplementary Table ). A subset of the discovery cohort, comprising 52 of these cases, had frozen liver tissue available for RNA extraction. All liver samples were centrally scored according to the semiquantitative NASH-CRN Scoring System by an expert liver pathologist. Fibrosis stage ranged from F0 to F4 (cirrhosis) and the NAS was defined as the sum of the scores for steatosis, hepatocyte ballooning and lobular inflammation. Alternative diagnoses and aetiologies such as excessive alcohol intake, autoimmune liver diseases, viral hepatitis and steatogenic medication use were excluded. Sex and/or gender of participants was determined on the basis of self-report. This study was approved by the relevant Ethical Committees in the participating centres and all patients provided informed consent.

Proteomics

The proteomic aptamer-based SomaScan Platform (SomaLogic) was used to process 191 plasma and 115 serum human samples (20 μl, 1 in 20 dilution). Slow off-rate modified labelled aptamers were added to each sample to form SOMAmer–protein bead complexes. The beads were captured, and non-specifically bound reagents were subsequently removed. SOMAmers were quantified by hybridization to DNA microarrays. The relative quantity of SOMAmer reagents measured by the SomaScan assay reflects the original protein concentrations (that is, relative fluorescence units, RFUs). Counts were analysed for differential expression using linear models as implemented in the R package limma (https://www.bioconductor.org/), correcting for centre, sex and T2DM. Statistical significance was determined by a corrected P value of less than 0.05 (Benjamini–Hochberg false discovery rate) and a fold change of more than 1.25.

RNA-seq

As previously described, mRNA was extracted from frozen liver biopsy samples, processed with the TruSeq RNA Library Prep Kit v.2 and sequenced on the NextSeq 550 System (Illumina). Data are available in the NCBI GEO repository (GSE135251). Raw sequencing quality assessment and alignment to the reference genome (GRCh38, Ensembl release 76) were performed using FastQC (v.0.11.5) and MultiQC (v.1.2dev), and gene count tables were produced with HTSeq. Counts were normalized using the trimmed mean of M values method and transformed using limma's voom methodology. A correction for centre, sex and batch was implemented. Pearson correlation was used to investigate linearity between hepatic mRNA and circulating proteins; P < 0.01 was considered significant. Tissue expression analysis was conducted using GTEx (https://gtexportal.org/). Supervised analysis was done as previously described. Deconvolution to identify the cell of origin was performed using publicly available single-cell RNA-seq data (GSE115469) from liver samples obtained from neurologically deceased individuals. The transformed, normalized data and cluster identifiers were obtained from the Human Protein Atlas (https://www.proteinatlas.org/). For each marker of interest, the Z score was calculated to visualize expression per cell cluster.
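To illustrate the two statistical steps just described, the following minimal Python sketch applies the stated differential expression thresholds (Benjamini–Hochberg FDR < 0.05, fold change > 1.25) and the protein–mRNA Pearson correlation cut-off (P < 0.01) to generic data frames. All variable and column names (de_results, p_value, log2_fc, protein_rfu, liver_mrna) are illustrative assumptions; the study's actual pipeline used limma in R with centre, sex and T2DM as covariates.

```python
import numpy as np
import pandas as pd
from scipy.stats import pearsonr
from statsmodels.stats.multitest import multipletests

def significant_probes(de_results: pd.DataFrame) -> pd.DataFrame:
    """Keep probes meeting the stated cut-offs: Benjamini-Hochberg
    FDR < 0.05 and an absolute fold change greater than 1.25."""
    fdr = multipletests(de_results["p_value"], method="fdr_bh")[1]
    passes = (fdr < 0.05) & (np.abs(de_results["log2_fc"]) > np.log2(1.25))
    return de_results.loc[passes]

def protein_mrna_correlations(protein_rfu: pd.DataFrame,
                              liver_mrna: pd.DataFrame) -> pd.DataFrame:
    """Per-gene Pearson correlation between circulating protein levels (RFU)
    and hepatic mRNA in matched samples; P < 0.01 is the threshold used above."""
    rows = []
    for gene in protein_rfu.columns.intersection(liver_mrna.columns):
        r, p = pearsonr(protein_rfu[gene], liver_mrna[gene])
        rows.append({"gene": gene, "r": r, "p": p})
    result = pd.DataFrame(rows)
    return result.loc[result["p"] < 0.01]
```

Both functions assume samples are row-aligned across the two data frames, mirroring the matched plasma–biopsy design of the 52-case subset.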
The DAVID annotation tool was used for functional protein pathway enrichment on the basis of UniProtKB keywords against a Homo sapiens background.

Immunohistochemistry

Human formalin-fixed, paraffin-embedded liver biopsies (n = 30) were immunostained with an antibody directed against human AKR1B10 (ab232623, Abcam; EDTA, 1/500). Immunostains were performed manually at room temperature using Envision Flex+ reagent (Dako) as the secondary antibody with 3′,3′-diaminobenzidine visualization. Immunopositive cells were quantified in three different high-power fields (magnification ×400) using a bright-field microscope.

Statistics

The Kolmogorov–Smirnov and Shapiro–Wilk normality tests, one-way analysis of variance with Dunnett's test, Chi-square, Mann–Whitney U-test and Kruskal–Wallis test with Bonferroni correction were performed in IBM SPSS v.27 or GraphPad Prism 9. Binary logistic regression analysis was carried out in IBM SPSS v.27 using a Backward Stepwise Likelihood Ratio model including the clinical parameters sex, age, BMI, ALT, AST, albumin, platelet count and T2DM, plus the uncorrected values of the circulating proteins measured by SomaScan that were identified as hepatic markers associated with F3–4 and NAS ≥ 4. The model identifying patients with NASH + F ≥ 2 + NAS ≥ 4 (with at least one point deriving from each NAS component), and the FIB-4, NFS and FAST scores, were calculated as follows:

Classification model = −6.236112 + (0.082163 × BMI) + (1.110341 × T2DM) + (0.001084 × ADAMTSL2) − (0.000031 × CFHR4) + (0.000060 × TREM2) + (0.000048 × AKR1B10)

FIB-4 = (age (years) × AST (U/l))/(platelets (10⁹ per l) × √ALT (U/l))

NFS = −1.675 + 0.037 × age (years) + 0.094 × BMI (kg/m²) + 1.13 × T2DM + 0.99 × AST:ALT ratio − 0.013 × platelets (10⁹ per l) − 0.66 × albumin (g/dl)

FAST = e^(−1.65 + 1.07 × ln(LSM) + 2.66 × 10⁻⁸ × CAP³ − 63.3 × AST⁻¹)/(1 + e^(−1.65 + 1.07 × ln(LSM) + 2.66 × 10⁻⁸ × CAP³ − 63.3 × AST⁻¹))

Receiver operating characteristic (ROC) analyses and AUC calculations were performed with IBM SPSS v.27, using the paired-sample difference in area under the ROC curve as the statistical test. The binary cut-off for the classification model was set at greater than −0.4491 to rule in patients with NASH + F ≥ 2 + NAS ≥ 4; the FIB-4 cut-off was set at more than 1.3, and the FAST score at more than 0.67 to rule in and at 0.35 or less to rule out. Graphs were generated using R ggplot2, R pheatmap and GraphPad Prism 9. Illustrations within Fig. were created with BioRender.com.

Reporting summary

Further information on research design is available in the Reporting Summary linked to this article.
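For concreteness, here is a small Python sketch of the four score definitions given in the Statistics section. Function and parameter names are illustrative; T2DM is coded 1/0 as in the regression model, the protein inputs are uncorrected SomaScan RFUs, and the cut-offs are those stated above.

```python
import math

def classification_model(bmi, t2dm, adamtsl2, cfhr4, trem2, akr1b10):
    """Composite score; values greater than -0.4491 rule in NASH + F>=2 + NAS>=4."""
    return (-6.236112 + 0.082163 * bmi + 1.110341 * t2dm
            + 0.001084 * adamtsl2 - 0.000031 * cfhr4
            + 0.000060 * trem2 + 0.000048 * akr1b10)

def fib4(age_years, ast, alt, platelets):
    """FIB-4; AST/ALT in U/l, platelets in 10^9 per l. Rule-in cut-off: > 1.3."""
    return (age_years * ast) / (platelets * math.sqrt(alt))

def nfs(age_years, bmi, t2dm, ast, alt, platelets, albumin_g_dl):
    """NAFLD Fibrosis Score; BMI in kg/m^2, albumin in g/dl."""
    return (-1.675 + 0.037 * age_years + 0.094 * bmi + 1.13 * t2dm
            + 0.99 * (ast / alt) - 0.013 * platelets - 0.66 * albumin_g_dl)

def fast(lsm_kpa, cap_db_m, ast):
    """FibroScan-AST score; rule-in > 0.67, rule-out <= 0.35."""
    linear = (-1.65 + 1.07 * math.log(lsm_kpa)
              + 2.66e-8 * cap_db_m ** 3 - 63.3 / ast)
    return math.exp(linear) / (1.0 + math.exp(linear))
```

For example, classification_model(33.5, 1, ...) returns the linear predictor that is compared against the −0.4491 rule-in threshold.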
Diabetic kidney disease (Update 2023)
The course of kidney disease in patients with T1D is less variable than in patients with T2D, and optimal/intensified glycaemic control is here the most important measure for prevention and, in early stages, also for intervention. With optimal control (HbA1c < 7% (53 mmol/mol)), a large intervention trial showed, after 30 years, a 36–76% reduction in microvascular complications compared with the group with an HbA1c of ~9%. The incidence of end-stage renal disease in the intensively treated group was 11/1000 patients. Once either hypertension or albuminuria (from stage A2 onwards) is present, pharmacological blockade of the renin-angiotensin-aldosterone system (RAAS) is an established therapy for nephroprotection (reduction of albuminuria and slowing of GFR decline). The prevalence of T2D in Austria is not precisely known but is around 8% of the adult population. About 25% of these patients also have chronic kidney disease (CKD) stage G3 or higher (eGFR < 60 ml/min/1.73 m²); in the following, these are subsumed under the term diabetic kidney disease (DKD). Recent American data suggest that about 24% of all cases of CKD (i.e. eGFR < 60 ml/min/1.73 m², or albumin/creatinine ratio ≥ 30 mg/g, or both) are caused by diabetes mellitus after correction for demographic factors. Owing to the increased mortality risk of patients with T2D ('competing risk of death'), many die before reaching end-stage renal disease. CKD in T2D is aetiologically more heterogeneous than in patients with T1D, so the course and prognosis are more difficult to estimate. Because of the usually long interval between the onset of disturbed glucose metabolism and the diagnosis of T2D, albuminuria may already be present at the time of diagnosis. Without specific intervention, about 20–40% of patients develop albuminuria stage A2 (see below) and subsequently more pronounced albuminuria or proteinuria (stage A3); overall, however, DKD progresses to end-stage renal disease in only about 20% of these patients within 20 years. The occurrence of albuminuria per se, as well as the presence of CKD, is associated with an increased incidence of cardiovascular morbidity and mortality. In the past, a classic progression through all stages up to end-stage renal disease was assumed, and the value of albuminuria in stage A2 as a parameter for early diagnosis was emphasized. In many diabetic patients with impaired kidney function, however, no albuminuria is found, so that a primarily micro-/macrovascular component in the kidney must be assumed in these cases. In addition, varying albuminuria trajectories, up to and including regression of albuminuria without specific therapy, are observed in patients with diabetes. In the mid-20th century, the term diabetic nephropathy was coined as a clinical syndrome based on intercapillary or nodular glomerulosclerosis (Kimmelstiel-Wilson) in patients with longer diabetes duration, persistent albuminuria, hypertension, retinopathy and progressive loss of kidney function. In recent years, this was complemented by the classic five stages of the natural course of CKD.
Although this model and disease course were primarily based on data from patients with T1D, it was also applied to patients with T2D. It is now clear, however, that in long-term observational studies more than 50% of patients with T2D develop a GFR < 60 ml/min/1.73 m² without preceding albuminuria, and that the course of albuminuria does not always correlate with the loss of kidney function. Similar observations have been made for T1D. Histopathological studies in patients with T2D and CKD point to a broad spectrum of kidney diseases: the proportion with histomorphological diabetic nephropathy is about 30–50%, a further approximately 20–30% have other kidney diseases (e.g. IgA nephropathy, focal segmental glomerulosclerosis (FSGS), etc.), and a mixed picture of true diabetic nephropathy together with another kidney disease is found in about another 20–30%. The extent of albuminuria shows the closest correlation with the presence of histopathological diabetic nephropathy. Although the Renal Pathology Society has developed a classification based on biopsies from patients with T1D and T2D, this classification is not used in routine clinical reporting, neither in general nor specifically in Austria. The staging of DKD follows the classic KDIGO classification of CKD stages: the estimated glomerular filtration rate (eGFR) is divided into stages G1–G5 (Fig. ), with stage G3 subdivided into G3a (eGFR 45–59 ml/min/1.73 m²) and G3b (eGFR 30–44 ml/min/1.73 m²); in addition, albumin excretion in spot urine (albumin/creatinine ratio) is categorized as A1 (< 30 mg/g creatinine), A2 (30–300 mg/g) and A3 (> 300 mg/g). The current classification also uses a colour code to represent the risk of cardiovascular events and of progression of kidney function impairment up to kidney failure (Fig. ). To assess the degree of kidney function impairment, one of the current estimating equations, already implemented in most laboratories, should be used. Measuring serum creatinine alone is often misleading, especially in older people, as it correlates poorly with actual kidney function; moreover, the threshold for a defined kidney disease should presumably be set lower in older than in young people (45 vs. 60 ml/min/1.73 m²). The glomerular filtration rate (eGFR) estimated with the MDRD (Modification of Diet in Renal Disease) formula is validated for the range between 20 and 60 ml/min/1.73 m² in people over 18 years of age. The calculation should be based on a serum creatinine measurement calibrated to the IDMS ('isotope dilution mass spectrometry') gold standard (Tab. ). Most societies currently recommend the CKD-EPI (Chronic Kidney Disease Epidemiology Collaboration) formula as the standard (Tab. ). This formula has repeatedly been shown to be more accurate than the MDRD formula, particularly in CKD stages G2–3, and is therefore better suited for risk stratification. For reasons of resources and practicability, other estimating equations, for example those incorporating cystatin C, are currently of lesser importance in daily practice.
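As a worked illustration of the staging just described, the sketch below implements the 2009 CKD-EPI creatinine equation together with the KDIGO G/A categories. The CKD-EPI coefficients are taken from the published 2009 equation rather than from this guideline text, and newer variants (e.g. the 2021 race-free equation) use different constants; this is a teaching sketch, not a substitute for a laboratory-reported eGFR.

```python
def ckd_epi_2009(scr_mg_dl: float, age: float, female: bool, black: bool = False) -> float:
    """2009 CKD-EPI creatinine equation; returns eGFR in ml/min/1.73 m^2."""
    kappa = 0.7 if female else 0.9      # sex-specific creatinine threshold
    alpha = -0.329 if female else -0.411
    ratio = scr_mg_dl / kappa
    egfr = 141 * min(ratio, 1.0) ** alpha * max(ratio, 1.0) ** -1.209 * 0.993 ** age
    if female:
        egfr *= 1.018
    if black:   # race coefficient of the 2009 equation; dropped in the 2021 revision
        egfr *= 1.159
    return egfr

def kdigo_category(egfr: float, acr_mg_g: float) -> str:
    """Map eGFR and spot-urine albumin/creatinine ratio to the KDIGO G/A grid above."""
    if egfr >= 90:   g = "G1"
    elif egfr >= 60: g = "G2"
    elif egfr >= 45: g = "G3a"
    elif egfr >= 30: g = "G3b"
    elif egfr >= 15: g = "G4"
    else:            g = "G5"
    a = "A1" if acr_mg_g < 30 else ("A2" if acr_mg_g <= 300 else "A3")
    return f"{g}/{a}"
```

For example, kdigo_category(ckd_epi_2009(1.4, 67, female=False), 120) evaluates to "G3a/A2".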
For better general comprehensibility, the societies suggest explaining kidney function to affected patients as a percentage of kidney function, which appears entirely justified given an approximate normal value of about 100 ml/min/1.73 m² (90–120 ml/min/1.73 m²).

Screening for diabetic kidney disease

In T1D, annual screening for albuminuria should begin five years after diagnosis; in T2D it should begin at the time of diagnosis. In general, it is recommended that screening consist only of measuring the albumin/creatinine ratio in spot urine. We recommend that, independently of albuminuria testing, eGFR also be determined regularly, in T1D after at least five years of disease duration and in all patients with T2D from diagnosis onwards. Urinary albumin excretion is classified into stages A1–A3 (see Fig. ). Because of the high individual variability of albumin excretion, the following approach, the '2 out of 3 rule', is recommended for diagnosing albuminuria: if two consecutively analysed urine samples within 3–6 months are concordantly positive (albumin excretion > 30 mg/g creatinine) or negative, albuminuria is confirmed or excluded, respectively. If one urine sample is negative and the other positive, a third urine sample should be tested for albuminuria. Note that positive findings are also possible independently of kidney damage, for example in acute febrile illness, urinary tract infection and arterial hypertension, in heart failure, during menstruation and after physical exertion. Since 24-h urine collection is cumbersome and offers little additional benefit compared with determining the albumin/creatinine ratio in spot urine, the latter method has become established.
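The '2 out of 3 rule' maps naturally onto a small decision helper. A minimal sketch, assuming spot-urine ACR values in mg/g and the > 30 mg/g positivity threshold stated above; the function name and return convention are our own:

```python
from typing import Optional

def albuminuria_confirmed(acr_mg_g: list[float]) -> Optional[bool]:
    """'2 out of 3' rule: two concordant spot-urine results within 3-6 months
    confirm (both > 30 mg/g) or exclude (both <= 30 mg/g) albuminuria;
    discordant results require a third sample (returns None until decidable)."""
    positives = sum(value > 30 for value in acr_mg_g)
    negatives = len(acr_mg_g) - positives
    if positives >= 2:
        return True
    if negatives >= 2:
        return False
    return None  # one positive, one negative so far: test a third sample
```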
In people with diabetes mellitus, a possible other, non-diabetic cause of proteinuria and/or impaired kidney function should always be considered, particularly if at least one of the following criteria is met: diabetes duration under 5 years in T1D; absent (in particular proliferative) or only mild diabetic retinopathy; pathological urinary sediment with microhaematuria (in particular acanthocytes and erythrocyte casts); very rapid increase in albuminuria, defined as a change of albuminuria class (A1 to A2 or A3, or A2 to A3, within a short time); rapid rise in serum creatinine; or abnormalities on kidney ultrasound suggestive of another renal pathology. Kidney diseases frequently to be considered in the differential diagnosis, which may exist instead of or in addition to DKD, are hypertensive or ischaemic nephropathy as a consequence of atherosclerosis/arteriolosclerosis of the renal vessels. In the case of pronounced albuminuria and/or microhaematuria, other renal diseases requiring targeted therapy (including vasculitis, glomerulonephritis and amyloidosis) must be considered in the differential diagnosis, which is why further nephrological work-up is necessary in these situations. Given the variety of other serious kidney diseases, the indication for kidney biopsy, as the diagnostic gold standard, should be made generously in these cases.

Nutrition

Regarding dietary protein intake, the KDIGO and ADA guidelines recommend 0.8 g/kg body weight and advise against exceeding 1.3 g/kg body weight. An even lower protein intake appears to offer no additional benefit. In addition, avoiding industrially processed foods is recommended, and a reduction of salt intake to 5 g/day is proposed. It should be noted that studies in patients with T1D and T2D have also questioned salt restriction critically, and the evidence in this population is weak. Further studies on the therapeutic benefit of salt restriction are therefore needed before solid recommendations can be made. Various diets are being discussed to lower the cardiovascular risk of patients with diabetes mellitus. Diets are inherently difficult to standardize and the study data are heterogeneous. A universally valid dietary recommendation that suits every patient and is practically feasible cannot currently be given. What does seem established, however, is that a generally healthier lifestyle helps prevent or slow the progression of kidney disease in people with diabetes mellitus. In morbid obesity (BMI > 40 kg/m²), weight reduction through bariatric surgery leads to improved metabolic control or even diabetes remission and to a marked reduction in the risks of diabetic comorbidities.

Cardiovascular risk

In people with diabetes mellitus, kidney disease has consistently been associated with a substantial increase in mortality. A large part of the excess mortality is attributable to cardiovascular disease, although non-cardiovascular mortality is also increased. Albuminuria and eGFR are independent and additive risk factors for cardiovascular events, cardiovascular mortality and all-cause mortality.
Diabetes mellitus and CKD each confer incidence rates of cardiovascular events comparable to those of patients with manifest coronary heart disease. This leads to the recommendation that patients with diabetes mellitus, CKD or diabetic kidney disease should be treated preventively with respect to cardiovascular disease as if they had already suffered such an event. These observations imply that treatment strategies should be directed at mitigating the high cardiovascular risk of patients with DKD in order, ultimately, to improve survival. The mechanisms by which DKD influences cardiovascular risk comprise traditional risk factors (hyperglycaemia, hypervolaemia and hypertension, lipoprotein metabolism, systemic inflammation, oxidative stress and endothelial dysfunction) as well as mechanisms specifically related to impaired kidney function (e.g. uraemic toxins, anaemia, and disorders of bone and mineral metabolism). These considerations inform the treatment recommendations given below.

Lipid metabolism

DKD is accompanied by disturbances of lipid metabolism related to declining kidney function, depending on the CKD stage. As already mentioned, the presence of CKD also leads to a marked increase in cardiovascular risk. LDL cholesterol is an established risk factor for cardiovascular disease in the general population. Its prognostic value is limited, however, in people with kidney function impaired by DKD, in whom a quantitative shift in the lipid profile towards elevated triglycerides and low HDL cholesterol, including qualitative changes of HDL cholesterol, as well as elevated levels of oxidized LDL cholesterol, is found. The extent of LDL lowering achieved with statins in the CKD population is comparable to that in people with preserved kidney function. Clinical trials and corresponding meta-analyses in non-dialysis-dependent CKD show that statins, or the combination of a statin with ezetimibe, reduce cardiovascular events and mortality compared with placebo. The beneficial effect does not appear to be modified by diabetes. While the cardiovascular benefit of statins in CKD is thus well documented, statins have no effect in slowing the progression of kidney function decline. Based on the recent KDIGO guidelines, statins are recommended for all diabetic patients with non-dialysis-dependent CKD. Recent data have also shown that PCSK9 inhibitors have lipid-lowering effects and safety profiles in people with CKD stage 3–5 (with and without T2D) comparable to those in patients with an eGFR > 60 ml/min/1.73 m², and they are therefore approved for therapy in this population, as is the small interfering RNA therapy inclisiran (summary of product characteristics 2022). In dialysis-dependent patients, two larger clinical trials (4D and AURORA) failed to show a benefit of statin therapy on the combined cardiovascular endpoint (3-point MACE). In people with diabetes requiring dialysis, there is accordingly no compelling indication for newly initiating statin therapy.

Care of patients with diabetic kidney disease

Nephrological assessment is indicated when the aetiology of the kidney disease is unclear and/or CKD progression is rapid.
In principle, nephrological referral or co-management should also take place, independent of CKD stage, in category A3, and in CKD G3b (at least from A2 onwards) and G4 irrespective of albuminuria. From CKD stage G3 onwards, joint care by diabetologists and nephrologists should be considered, with additional attention paid to possible renal sequelae. From CKD stage G4 onwards, eligibility for kidney transplantation alone, or for combined kidney and pancreas transplantation (preferred in T1D, but also possible in selected cases of T2D), should also be assessed. Pre-emptive transplantation (living or post-mortem donation) is optimal; particularly in patients with T1D, given their excessive cardiovascular risk, timely evaluation for transplantation should be pursued in order to keep the time on haemodialysis or peritoneal dialysis as short as possible.
In people with T1D or T2D, a normoglycemic metabolic state should be aimed for where possible. In primary prevention, lower HbA1c values should be targeted than in advanced stages of CKD and in secondary prevention; in the trials, an HbA1c "target corridor" of 6.5-7.5% proved reasonable. Independently of this, treatment goals should be individualized, particularly in older patients, based on history, comorbidities, propensity to hypoglycemia, and diabetic complications (retinopathy, neuropathy). With declining kidney function, the increased risk of hypoglycemia must be kept in mind in particular. The choice of antidiabetic and other drugs requires heightened attention when kidney function is impaired, as label restrictions and contraindications may apply.
Renoprotective antihyperglycemic agents
Some antihyperglycemic agents have shown direct renal effects that cannot be explained by glucose lowering alone.
SGLT-2 inhibitors
Empagliflozin, a representative of the SGLT-2 inhibitors, showed in the EMPA-REG OUTCOME trial a significant 39% relative risk reduction in the composite endpoint of progression to macroalbuminuria, doubling of serum creatinine, initiation of renal replacement therapy, or death from renal causes. In the CANVAS trial, canagliflozin reduced the risk of a 40% eGFR decline, initiation of renal replacement therapy, or renal death by a relative 40%. A nearly identical endpoint (40% eGFR decline to below 60 ml/min/1.73 m², initiation of renal replacement therapy, and renal death) was reduced by 47% with dapagliflozin in the DECLARE-TIMI 58 trial. Since the above trials were conducted in people with T2D, DAPA-CKD, CREDENCE, and EMPA-KIDNEY examined the effects of dapagliflozin, canagliflozin, and empagliflozin in people with CKD; in each of these populations the primary renal endpoint was significantly reduced. While CREDENCE enrolled only people with T2D, about 68% of participants in DAPA-CKD and 46% in EMPA-KIDNEY had T2D. Even though the antihyperglycemic effects of SGLT-2 inhibitors wane with decreasing eGFR, the cardio-renoprotective effects are preserved down to an eGFR of at least 20 ml/min/1.73 m².
GLP-1 receptor agonists
Among the GLP-1 receptor agonists, liraglutide in the LEADER trial reduced the composite renal endpoint (persistent albuminuria > 300 mg/g, doubling of serum creatinine, end-stage kidney disease, or death due to end-stage kidney disease) by a relative 22% compared with the placebo group (i.e., glucose lowering without a GLP-1 RA), an effect driven primarily by a reduction in new-onset macroalbuminuria. Semaglutide and dulaglutide confirmed this effect in the SUSTAIN-6 and REWIND trials, respectively. Whether this drug class truly has a favorable influence on CKD progression is being examined in an ongoing study (the FLOW trial).
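The trial results above are reported as relative risk reductions (RRR). For orientation only: an RRR follows from the arm-specific event rates as RRR = 1 − (treatment event rate / control event rate). The counts in the sketch below are invented for illustration and are not taken from any of the cited trials.

```python
# Illustrative only: relative risk reduction (RRR) from event rates.
# The event counts are made up and do not correspond to any trial.

def relative_risk_reduction(events_tx: int, n_tx: int,
                            events_ctrl: int, n_ctrl: int) -> float:
    """RRR = 1 - (treatment event rate / control event rate)."""
    return 1.0 - (events_tx / n_tx) / (events_ctrl / n_ctrl)

# Hypothetical example: 61 events in 1000 treated vs. 100 in 1000 controls
rrr = relative_risk_reduction(61, 1000, 100, 1000)
print(f"Relative risk reduction: {rrr:.0%}")  # -> 39%
```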
Special therapeutic considerations with declining kidney function
The range of oral antidiabetic drugs has widened considerably in recent years. Nevertheless, oral antidiabetic therapy is more difficult in patients with impaired kidney function than in diabetic patients with normal kidney function, and attention must also be paid to the increased propensity to hypoglycemia in this context. The main agents and drug classes are listed below.
Metformin, with a plasma elimination half-life of 4.0-8.7 h and complete renal elimination, was long considered contraindicated in moderate to severe renal impairment because of the risk of lactic acidosis. This changed over recent years, however, because evidence from clinical practice did not support the concern. Metformin is contraindicated at an eGFR < 30 ml/min/1.73 m²; below an eGFR of 45 ml/min/1.73 m², metformin should not be newly started, the dose of existing therapy should be limited to 1000 mg per day, and the eGFR should be monitored more closely. Studies on metformin use in impaired kidney function support this approach and recommend splitting the maximum daily dose of 1000 mg into 500 mg twice daily in stage G3b.
SGLT-2 inhibitors (glucose-lowering indication): since, as already mentioned, the glucose-lowering effect of this class declines with falling eGFR, little additional glucose-lowering potency should be expected below an eGFR of 45 ml/min/1.73 m². However, SGLT-2 inhibitor therapy should be continued at an eGFR < 45 ml/min/1.73 m², because the favorable renal effects persist down to an eGFR of at least 20 ml/min/1.73 m². Initiation appears reasonable at an eGFR > 20 ml/min/1.73 m², and continuation of existing therapy is possible up to the start of renal replacement therapy.
For GLP-1 receptor agonists: dulaglutide, liraglutide, and semaglutide can be used without dose adjustment down to an eGFR of 15 ml/min/1.73 m²; sufficient data for end-stage kidney disease are not yet available. For lixisenatide, no dose adjustment is needed down to an eGFR of 30 ml/min/1.73 m² (current summaries of product characteristics). On current evidence, once-weekly exenatide should not be used in patients with an eGFR < 30 ml/min/1.73 m².
For DPP-4 inhibitors: linagliptin can be given without dose adjustment at all stages, as it is eliminated primarily via the hepatobiliary route. For other DPP-4 inhibitors, such as sitagliptin, vildagliptin, saxagliptin, and alogliptin, dose adjustments are required from stage G3 onward.
Sulfonylureas (SUs) are not the optimal oral antidiabetic drugs in patients with CKD because of the risk of hypoglycemia, and there are considerable differences between the individual agents. Gliclazide should be started at a low dose in CKD and titrated every 4 weeks. Glimepiride can be given at the usual dose in CKD stages G1-3 and at a reduced dose (1 mg/day) in stage G4, and should be avoided in stage G5. The hypoglycemia risk appears lowest with gliclazide, followed by glipizide and glimepiride. Overall, however, the hypoglycemia risk with SUs is 10-fold that with metformin and 4- to 5-fold that with pioglitazone.
Glibenclamide, which is predominantly renally eliminated, should be avoided (it is rarely used nowadays) because of the risk of accumulation with a tendency toward severe and protracted hypoglycemia. Repaglinide can be used without dose reduction up to CKD stage G4; there are no data for repaglinide in CKD stage G5. Pioglitazone, the only remaining representative of the thiazolidinediones, does not require dose reduction and, according to its summary of product characteristics, can be used at a creatinine clearance > 4 ml/min. However, the use of pioglitazone is limited by the increased risk of heart failure due to volume retention and the increased risk of peripheral fractures. With insulins, possible dose reduction depending on the degree of renal impairment must be considered, as insulin is partly degraded renally and the risk of hypoglycemia is generally significantly increased in advanced CKD.
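Purely as an illustration of the eGFR thresholds quoted above for metformin and SGLT-2 inhibitors, the rules can be expressed as a small lookup. This is a sketch of the text for readability, not a prescribing tool; the cut-offs are the ones stated above, and the product information remains authoritative.

```python
# Illustrative sketch of the eGFR rules quoted in the text for two drug
# classes (not a prescribing tool; always consult the product information).

def metformin_advice(egfr: float, on_metformin: bool) -> str:
    if egfr < 30:
        return "contraindicated"
    if egfr < 45:
        return ("limit to 1000 mg/day, monitor eGFR closely"
                if on_metformin else "do not newly start")
    return "no eGFR-based restriction stated"

def sglt2_advice(egfr: float, on_sglt2: bool) -> str:
    if on_sglt2:
        return "continue until start of renal replacement therapy"
    return "initiation reasonable" if egfr > 20 else "do not initiate"

print(metformin_advice(38, on_metformin=True))   # limit to 1000 mg/day, ...
print(metformin_advice(38, on_metformin=False))  # do not newly start
print(sglt2_advice(25, on_sglt2=False))          # initiation reasonable
```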
Antihypertensive treatment of patients with diabetes aims to prevent the onset and progression of DKD as well as macrovascular complications and premature cardiovascular death. This yields the following treatment goals: regression or stabilization of albuminuria; preservation of kidney function; prevention of end-stage kidney disease; and reduction of cardiovascular morbidity and mortality. Blood pressure treatment recommendations for children are discussed in the pediatric chapter. The target blood pressure in diabetic kidney disease is given as < 140/90 mm Hg, in order to reduce cardiovascular mortality and CKD progression. In addition, KDIGO proposes a target blood pressure of < 130/80 mm Hg when albuminuria exceeds 30 mg/g.
Support for these target values comes from a limited number of randomized trials that included patients with diabetes mellitus and focused on cardiovascular events. However, no randomized trials of blood pressure targets have addressed renal events. Data showing a delay of CKD progression come exclusively from three randomized trials in patients without DKD, comprising African Americans with hypertensive nephropathy, patients with IgA nephropathy, and patients with CKD without a specific diagnosis. Clinical trials have also raised a safety signal that diastolic blood pressure values < 70 mm Hg, and particularly < 60 mm Hg, may be problematic in older patients. Data from patients with CKD stage G3 or higher showed that a diastolic blood pressure < 60 mm Hg was associated with an increased incidence of end-stage kidney disease, while other studies in patients without CKD found diastolic values < 65 mm Hg to be associated with worse cardiovascular outcomes.
A therapeutic benefit of RAAS blockade, whether with an ACE inhibitor or an angiotensin receptor blocker, is demonstrated by a wealth of clinical data, particularly regarding the reduction of renal events in patients at CKD stage G3 or higher and in those with albuminuria, hypertension, and diabetes mellitus. These agents therefore constitute first-line antihypertensive therapy, even though there is evidence that other antihypertensives may be equivalent with respect to hard cardiovascular endpoints and the occurrence of end-stage kidney disease. Contrary to the hypothesis that dual RAAS blockade would be clinically beneficial, clinical trials of this approach had to be stopped early because of higher rates of hyperkalemia and/or acute kidney injury and a lack of efficacy.
Two classes of mineralocorticoid receptor antagonists are currently available: steroidal and non-steroidal. Recently, the FIDELIO-DKD trial examined the renal effects of finerenone in people with diabetic kidney disease. The primary endpoint (kidney failure, a decline in eGFR of > 40% sustained for at least 4 weeks, or renal death) was significantly reduced by a relative 18%. These renal results were confirmed by a second cardiovascular outcome trial (FIGARO), in which the combined primary cardiovascular endpoint was reduced by a significant relative 24%. The risk of acute kidney injury did not differ significantly between the finerenone and placebo groups, and the frequency of hyperkalemia leading to study discontinuation was 0.6% with placebo and 1.7% with finerenone in the two trials combined. Finerenone therapy can be used in people with diabetes mellitus and CKD G3-4 who, despite at least four weeks of therapy with an ACE inhibitor or angiotensin receptor blocker, have persistent albuminuria (A2-3) and normal potassium values. No meaningful data are yet available on a possible synergistic effect of SGLT-2 inhibitors and finerenone, as only few patients received this combination in the trials.
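To make the finerenone eligibility criteria in the last paragraph concrete, the sketch below checks them as stated (CKD G3-4, persistent A2-3 albuminuria despite at least four weeks of ACE inhibitor/ARB therapy, normal potassium). The function name and parameter encoding are illustrative assumptions, not part of the guideline.

```python
# Illustrative check of the finerenone eligibility criteria stated in the
# text; parameter names are hypothetical and chosen for readability.

def finerenone_candidate(ckd_g_stage: str, albuminuria_stage: str,
                         weeks_on_acei_or_arb: int,
                         potassium_normal: bool) -> bool:
    return (ckd_g_stage in ("G3a", "G3b", "G4")     # CKD G3-4
            and albuminuria_stage in ("A2", "A3")   # persistent albuminuria A2-3
            and weeks_on_acei_or_arb >= 4           # >= 4 weeks of RAAS blockade
            and potassium_normal)                   # normal potassium values

print(finerenone_candidate("G3b", "A2", 8, True))   # True
print(finerenone_candidate("G4", "A1", 12, True))   # False (no albuminuria)
```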
Target values and measures in diabetic kidney disease:
Blood pressure
- BP < 140/90 mm Hg
- BP < 130/80 mm Hg with albuminuria (stages A2 and A3)
- Diastolic BP > 60 mm Hg
HbA1c targets
- HbA1c "target corridor" usually 6.5-7.5% (48-58 mmol/mol) (in advanced CKD)
- HbA1c "target corridor" in dialysis patients 7-8.0% (53-64 mmol/mol); to be individualized according to age and comorbidities
LDL cholesterol target
- In diabetes mellitus with albuminuria, CKD G3 or G4: < 55 mg/dl
Further aspects
- Hemoglobin 9-11 g/dl (CKD stages G4-5)
- Electrolytes within the normal range
- Normalization of protein intake to 0.8-1.3 g/kg body weight daily
- Antiplatelet agents (individual weighing of the potential cardiovascular benefit against the bleeding risk)
- No smoking
- Careful benefit-risk assessment before using potentially nephrotoxic drugs (e.g., nonsteroidal anti-inflammatory drugs, certain antibiotics)
- Protective measures when administering radiographic contrast media because of the increased risk of acute kidney injury (contrast-enhanced CT: at eGFR < 30 ml/min/1.73 m²; arterial angiography: at eGFR < 45 ml/min/1.73 m²): ensure adequate hydration
- Attention to possible accumulation of concomitant medications
- Attention to the increased cardiovascular risk, with screening for angiopathy
- Attention to urinary tract infections (residual urine?)
Monitoring of patients with diabetic kidney disease
Depending on CKD stage and progression, checks at least 2 to 4 times per year:
- HbA1c, lipids
- Albuminuria or albumin-creatinine ratio
- Retention parameters and serum electrolytes (creatinine, urea or BUN, potassium)
- eGFR
- Home blood pressure measurement with documentation; ambulatory 24-h blood pressure measurement recommended
At eGFR < 60 ml/min/1.73 m², additionally (frequency depending on CKD stage):
- Blood count
- Iron status with ferritin, transferrin, transferrin saturation, serum iron
- Serum phosphate, serum calcium
- Parathyroid hormone, 25-OH vitamin D
- Venous blood gases, particularly at eGFR < 30 ml/min
- Serum potassium (especially when using RAAS-blocking antihypertensives and mineralocorticoid receptor antagonists)
- Consider interdisciplinary diabetological-nephrological care from eGFR < 60 ml/min (stage G3) (see details in the text above)
- Hepatitis B virus vaccination
If acute kidney injury occurs, or if a non-diabetic kidney disease is suspected (significant proteinuria), prompt nephrological assessment of the patient should be arranged. A kidney biopsy is often indicated to confirm the diagnosis and optimize treatment recommendations; this approach is discussed with the patient by the nephrologist on a case-by-case basis.
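As a toy illustration of how a few of the numeric targets in this list could be screened in aggregate, the sketch below checks them for a hypothetical patient record. The dictionary keys, units, and threshold encoding are assumptions made for the example, not part of the guideline.

```python
# Toy screening of a few numeric targets from the list above.
# Assumed units: BP in mm Hg, HbA1c in %, LDL cholesterol in mg/dl.

def check_targets(p: dict) -> list:
    issues = []
    bp_goal = (130, 80) if p["albuminuria_stage"] in ("A2", "A3") else (140, 90)
    if p["sbp"] >= bp_goal[0] or p["dbp"] >= bp_goal[1]:
        issues.append(f"BP {p['sbp']}/{p['dbp']} not below {bp_goal[0]}/{bp_goal[1]} mm Hg")
    if p["dbp"] <= 60:
        issues.append("diastolic BP not above 60 mm Hg")
    if not 6.5 <= p["hba1c"] <= 7.5:
        issues.append("HbA1c outside the 6.5-7.5% corridor")
    if p["ldl"] >= 55:
        issues.append("LDL cholesterol not below 55 mg/dl")
    return issues

patient = {"albuminuria_stage": "A2", "sbp": 134, "dbp": 78,
           "hba1c": 7.9, "ldl": 70}
print(check_targets(patient))  # three issues for this hypothetical record
```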
Health Promotion Interventions in Occupational Settings: Fact-Finding Survey among Italian Occupational Physicians
dd723872-6818-47dc-830a-4b950c55e1ca
10133770
Preventive Medicine[mh]
In 1946, health was defined by the World Health Organization (WHO) as "A state of complete physical, mental and social well-being and not merely the absence of disease or infirmity". Forty years later, in 1986, the WHO Ottawa Charter for Health Promotion stated that "To reach a state of complete physical, mental and social well-being, an individual or group must be able to identify and to realize aspirations, to satisfy needs, and to change or cope with the environment". In this view, each individual should be able to fulfill his/her aspirations and needs in every field of life, including home, community, and workplaces. From this perspective, it is evident that workplace safety and health efforts should focus not only on prevention of and protection from occupational risks, but also on promoting the physical and mental health and the well-being of the workforce through a holistic "Total Worker Health® (TWH)" approach, as first proposed by the US National Institute for Occupational Safety and Health in 2011. This includes policies, programs, and practices that foster safer and healthier workplaces by addressing work organization, employment and supervisory practices, and workplace culture, while also taking into account the possible synergy between occupational risks, environment, lifestyles, and personal conditions. Thus, the TWH approach inevitably includes workplace health promotion (WHP) strategies to advance workers' well-being. In Italy, the first WHP model was conceived and applied in the Lombardia Region in 2013 and reached around 600 participating companies throughout the Region in 2020. It was based on the WHO model and aimed to introduce organizational changes in workplaces to make them favorable environments for the conscious adoption and diffusion of healthy lifestyles, contributing to the prevention of chronic diseases. More recently, the Italian Ministry of Health included the TWH approach in one of the intervention lines of the National Prevention Plan (NPP) for the years 2020-2025: "Activation of technical tables for the strengthening of the overall health of the worker according to the Total Worker Health approach". In agreement with the TWH principles, the NPP pointed out that achieving health-friendly workplaces requires the involvement of all the preventive figures engaged in occupational health. From this perspective, the crucial role of occupational physicians (OPs) in the design, implementation, and monitoring of TWH and HP interventions clearly emerges. This has also been underlined by article 25 of the Italian Legislative Decree 81/2008, which established the role of the OP in collaborating in the implementation and valorization of voluntary programs of HP, according to the principles of social responsibility. The OP's expertise in understanding the possible health implications of exposure to occupational risks, together with a strong relationship with workers and deep knowledge of their health conditions, makes the OP a key figure in promoting the health and well-being of the workforce in individual companies. However, although OPs are recognized as an integral part of HP policies and programs in the workplace, their knowledge and perceptions regarding HP remain an under-researched topic. Therefore, the present study aimed to address issues related to the approach, experience, strategies, and needs of OPs with respect to HP plans.
This may be helpful to extrapolate insights that may assist OPs to more effectively generate interest and action to integrate occupational preventive and protective actions with improving employee health outcomes. This may strongly support workplaces in becoming safe, healthy, and sustainable, with overall benefits for workers, employers, and the community. This report summarizes the survey's main results, whereas additional details are provided in the Italian version of the report, which can be accessed as supplementary material including more numerous and detailed tables.
2.1. The Investigated Population and Data Collection
A cross-sectional HP survey was conducted between September and December 2022. Italian OPs attending the 84th National Congress of the Italian Society of Occupational Medicine (SIML), held in Genova, Liguria Region, from the 28th to the 30th of September 2022, were asked to participate in the survey by completing the specifically targeted questionnaire. Additionally, OPs listed in the database of the SIML were contacted by email and asked to respond to the same questionnaire via a Google form. In both cases, voluntary and anonymous participation was assured by all the members of the SIML Working Group promoting the research program. Only those OPs actively involved in occupational health activities in private or public enterprises, as stated by article 25 of the Italian Legislative Decree 81/2008, were included in the study. No other exclusion criteria relating to socio-demographic or occupational features were applied.
2.2. Health Promotion Questionnaire
An exploratory questionnaire was developed by the members of the SIML HP Working Group to collect information concerning Italian OPs' knowledge of HP and the initiatives implemented to support the health and well-being of the workforce in different settings. It consisted of 28 items, divided into multiple-choice and open questions, and required at least 15 minutes to complete. The questionnaire included a first section focused on the OP's socio-demographic data, i.e., age, region of work, and the type of activity performed. This section was aimed at exploring the OP's private or public operating sector, as well as the number and features of the enterprises in which they worked (i.e., economic sector, number of workers employed, occupational risks present). HP knowledge was explored through questions concerning the individual OP's experience with HP plans in companies, also with respect to national and regional initiatives, the role these programs should have within the occupational health and safety system, and the relevance of employers as well as of additional healthcare professionals and preventive figures in organizing and implementing such programs. The final section of the questionnaire investigated the engagement of OPs in HP plans and their characteristics in terms of intervention targets, length of the programs, effectiveness, and collaboration with other professionals involved in health and safety at work, as well as the formative needs for a more widespread development and implementation of HP plans.
2.3. Statistical Analyses
Data are presented as frequencies (percentages). The chi-square test for parametric distributions or Fisher's test for non-parametric distributions, as appropriate, were used to test for differences among the specified groups in the questionnaire's responses. All analyses were performed using the statistical software R, version 4.0.3.
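The study ran its analyses in R 4.0.3; purely as an illustration of the same test choice (chi-square when the contingency table is adequately filled, Fisher's exact test otherwise), a Python/SciPy sketch is shown below. The 2×2 counts are invented for the example.

```python
# Illustration of the test choice described above (the study itself used R).
# Chi-square when expected cell counts are adequate, Fisher's exact otherwise.
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact

table = np.array([[30, 12],   # invented 2x2 counts, e.g. gender x response
                  [25, 28]])

chi2, p_chi2, dof, expected = chi2_contingency(table)
if (expected >= 5).all():     # common rule of thumb for chi-square validity
    print(f"chi-square: chi2={chi2:.2f}, p={p_chi2:.3f}")
else:
    odds_ratio, p_fisher = fisher_exact(table)
    print(f"Fisher exact: OR={odds_ratio:.2f}, p={p_fisher:.3f}")
```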
3.1. Investigated Population
A total of 380 participants were enrolled; 164 OPs were recruited during the national congress days, while the other 216 participated in the online survey. This appears to be a consistent sample with respect to the total number of SIML members (1900) and the number of members who declared being directly employed as company OPs. The general characteristics of the investigated population are summarized in the corresponding table.
Males represented most of the sample (65%), and 64% of the participants were older than 50 years, with a different distribution of male and female subjects across the age groups (shown in the supplementary material, p < 0.001): greater percentages of female OPs were ≤ 49 years old (52%), while most of the male participants were in the ≥ 60 age group (53%). Of the 380 respondents, 336 (88.4%) declared direct engagement in companies as OPs and completed the questionnaire. This number represents more than half (57%) of the SIML OPs and 7% of the Italian OPs (4652) who, in 2022, transmitted to the competent local services the aggregated health and risk data of the workers subjected to health surveillance, according to article 40 of the Legislative Decree 81/2008, Annex 3B. Gender differences emerged concerning the professional activity performed (p = 0.006): a greater portion of female professionals (18%) declared not having been directly engaged in companies as OPs, compared with 8.1% of males. About half of the respondents were from Northern Italy, about 30% from Central Italy, and the remaining 20% from Southern Italy. Regions of residence included Lombardia (14.8%), Toscana (11.1%), Piemonte (10.3%), Lazio (9.3%), Campania, and Emilia Romagna (both 7.4%).
3.2. Professional Activity Characteristics
The professional features of the investigated population are reported in the corresponding table. Most participants (two-thirds) started their OP profession before 2005, while smaller portions started in the 2006-2015 period and after 2015. Freelancers made up the majority of the enrolled population (66.87%). In line with the residence data, about half of the OPs performed their professional activity in Northern Italy. Concerning the number of companies followed, more than 40% performed their professional activity in fewer than 10 enterprises; more limited percentages were engaged with a greater number of companies, and 27.4% were involved in more than 50 enterprises. A significantly different gender-related distribution (p = 0.003) was determined with respect to the number of companies in which the OPs performed their professional activity: a greater percentage of female OPs were engaged with fewer than 10 enterprises (57%) compared with male OPs (37%), while a lower percentage of females (6.5%) worked in 26-50 enterprises compared with males (16%). In general, the companies where OPs worked were small (32.0% with 11-49 employees) or large (35.9% with > 249 employees). More than half of the recruited OPs followed > 1000 workers. The most represented sectors were manufacturing activities, health and social work, water supply, sewerage and waste management, accommodation and food service activities, and construction. The occupational risk factors were primarily the use of video display terminals, manual handling of loads, biomechanical overload of the upper extremities, chemical risk factors, and night shift work.
3.3. Health Promotion Approach
Occupational physicians were first asked their opinion on the role of HP plans in occupational settings. They indicated that HP programs represent a social investment (34.5%) and a responsibility shared with all the figures involved in companies' preventive actions (30.2%). More limited percentages of the respondents declared that HP was an added value for occupational health (19.2%), a moral duty towards the workforce (12.5%), or a regulatory obligation (3.7%).
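As a transparency check only, the 7% coverage figure quoted above follows directly from the stated counts (336 engaged respondents out of 4652 Italian OPs reporting Annex 3B data); a one-line computation is shown below.

```python
# Quick check of the coverage figure quoted in the text.
respondents_engaged = 336       # OPs directly engaged in companies
italian_ops_reporting = 4652    # Italian OPs transmitting Annex 3B data in 2022
print(f"{respondents_engaged / italian_ops_reporting:.0%}")  # -> 7%
```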
Concerning OPs' knowledge of international, national, or regional Italian initiatives on HP, about half of the respondents declared knowing the NPP of the Ministry of Health 2020-2025 (55.9%) and the Regional Prevention Plans 2020-2025 (48.3%). The TWH® approach proposed by NIOSH and the Healthy Workplaces initiative promoted by the WHO were known by 40.2% and 35.7% of the OPs, respectively. However, only 33.4% had been involved in one of these interventions, without significant gender- or age-based differences. Among the OP's mandatory duties (article 25), the Italian Legislative Decree no. 81 of 2008 stated that the "OP collaborates in the implementation and valorization of voluntary programs of HP, according to the principles of social responsibility". In this perspective, the questionnaire included an item on the participants' agreement concerning a possible increase of HP programs following the issuance of the Decree mentioned above. Moderate agreement about the effectiveness of this legislative intervention in increasing HP initiatives at the workplace was expressed by 48.7% of the respondents, with no differences related to gender, the number of workers employed in the enterprises where the OPs performed their professional activity (≤ 49 vs. > 49 employees), the number of followed workers (≤ 500 vs. > 500 workers), or involvement in organizing or collaborating in HP programs. Only 7.5% did not agree at all with this statement. Additionally, almost all participants agreed that occupational HP programs should be intended as an integral part of the preventive and protective system aimed at ensuring the psycho-physical health and safety of workers, with no differences between males and females or depending on the number of workers employed in the enterprises where the OPs performed their professional activity, the total number of followed workers, or involvement as organizers or collaborators of HP initiatives. These plans should be supported by other healthcare professionals, such as general practitioners and specialists in other disciplines, as strongly agreed by 40% of the total respondents, without significant differences related to gender, the size of the companies, the number of followed workers, or direct engagement in HP plans as organizers or collaborators. OPs were also asked about the interest expressed by employers, which was reported as insufficient by 41.8% of respondents, while most of the group (58.2%) considered it sufficient up to high. This response was distributed differently between male and female professionals, as more female OPs (41%) reported sufficient employers' interest compared with 28% of the male subjects. Moreover, the responses were also distributed differently according to involvement as organizers of HP plans (p < 0.001): among OPs directly engaged in organizing HP initiatives, the percentage reporting at least sufficient interest among employers was greater than among those not involved in such activity (64.2% vs. 50.7%, respectively). No significant differences were found for involvement in HP collaborations, the size of the enterprises, or the number of followed workers.
3.4. Participation in Health Promotion Plans
As regards the involvement of OPs in HP programs at the workplace during the previous 5 years, 57.3% and 54.8% declared having been involved as organizers or collaborators, respectively.
A significantly different age-related distribution was demonstrated for this item: the greatest percentages of OPs involved as organizers of such initiatives were in the 40-49 (21%) and 50-59 (29%) age groups, compared with 13% and 20%, respectively, among those of comparable age who did not organize HP strategies. Such differences did not emerge when HP collaborative efforts were explored, and gender-related discrepancies in organizing or collaborating in HP strategies were not demonstrated. When the organization of HP plans was analyzed according to the characteristics of the OP's activity, i.e., the number and size of the companies in which they worked and the number of supervised workers, significantly different distributions of respondents were found: a greater proportion of professionals engaged in fewer than 25 companies, in companies of medium to large size (> 49 workers), and following more than 500 workers were engaged in HP organization. Comparable results were obtained with respect to collaboration in HP plans, with significant results for OPs engaged in larger enterprises and with a greater number of followed workers. In general, the organization of HP plans was supported by the employers (62.0%), the company's preventive and protective service (60.5%), the human resources staff (45.1%), the workers' representative for safety (43.2%), the workers themselves (34.2%), and the operators of the prevention departments of the local health authorities (20.7%). No significant differences emerged in this regard with respect to having organized or collaborated in HP plans, the size of the companies where the OP activity was performed, or the number of followed workers. As declared by most of the participants (88.6%), the areas of intervention were oriented towards the promotion of healthy lifestyles, such as good nutrition, avoidance of voluptuary habits, promotion of physical activity, and sleep hygiene. Lower percentages of OPs were engaged in programs aimed at promoting workers' psychological well-being (37.1%), a comfortable working environment (24.2%), or a better home-work relationship (11.7%). The HP interventions lasted from one day or less up to a few days in 48.3% of cases and some months in 24.3%; programs lasting some years were still ongoing at the time of the survey in 22.9% of cases or had been interrupted in 4.5%. In most cases (75.8%), OPs reported sufficient or good voluntary participation of the workforce, without gender-related differences; participation was described as insufficient by only a limited percentage (2.2%) of the participants. When OPs were asked to indicate the percentage of the workforce that participated in such HP plans, 54.3% declared that more than half of the company employees chose to take part in such interventions. As concerns the effectiveness of these initiatives, they were reported as quite, very, or completely effective in 65.4%, 14.7%, and 2.6% of the responses, respectively, while a more limited percentage of responses described them as not very effective (17.3%). Efficacy indicators were adopted in 66.7% of cases. Another key issue explored by the questionnaire regarded the OPs' perceived needs concerning the aspects that may be useful for implementing HP strategies in occupational settings.
Among these, collaboration between different healthcare disciplines was the most frequently reported (72.1%), followed by specific training of OPs on HP procedures (63.3%) and the adoption of suitable methods for evaluating the effectiveness of HP programs (56.5%). Additionally, adequate funding (52.0%), appropriate information on the target population (43.2%), and suitable quality assessment of the programs (39.9%) were also indicated as useful means to promote their wider application. From the perspective of the OPs involved in such initiatives, having more time available (39.6%) and a contractual provision for financial recognition of the effort HP requires (33.8%) could also represent possible incentives to disseminate HP interventions.
Programs lasting several years and still ongoing at the time of the survey accounted for 22.9% of cases, while 4.5% had been interrupted. In most cases (75.8%), OPs reported sufficient or good voluntary participation of the workforce, without gender-related differences; participation was described as insufficient by only a limited percentage (2.2%) of the participants. When OPs were asked to indicate the percentage of the workforce that participated in such HP plans, 54.3% declared that more than half of the company employees chose to take part in the interventions. As concerns the effectiveness of these initiatives, they were reported as quite, very, or completely effective in 65.4%, 14.7%, and 2.6% of the responses, respectively; a more limited percentage of responses (17.3%) described the interventions as not very effective. Efficacy indicators were adopted in 66.7% of cases. Another key issue explored by the questionnaire was the OPs' perceived needs concerning aspects that may help implement HP strategies in occupational settings. Among these, collaboration between different healthcare disciplines was the most frequently reported (72.1%), followed by specific training of OPs on HP procedures (63.3%) and the adoption of suitable methods for evaluating the effectiveness of HP programs (56.5%). Adequate funding (52.0%), appropriate information on the target population (43.2%), and suitable quality assessment of the programs (39.9%) were also indicated as useful means to promote their wider application. From the perspective of the OPs involved in such initiatives, having more time available (39.6%) and a contractual provision for financial recognition of the effort HP requires (33.8%) could also be incentives to disseminate HP interventions. A healthy, safe, and productive working life is the essence of a modern and sustainable workplace. In this view, the key elements are improving the working environment and adopting workplace HP initiatives to ensure employees' well-being. The WHO prioritizes the workplace for promoting health and well-being. Workplaces appear ideal for this purpose, providing access to a sizable segment of the adult population who spend many waking hours at work. In the United States, the Total Worker Health® program of the NIOSH sought to improve the workforce's well-being by protecting their safety and enhancing their health, motivation, and productivity. Although, in this scenario, "occupational health and safety," codified in regulations, encompasses efforts to prevent injury or illness due to workplace-specific risk factors through safety training, environmental modification, and the provision and use of collective and personal protective equipment, "health and wellbeing in the workplace" can be viewed as a broad concept comprising personal satisfaction, work-life satisfaction, and general health. Many stakeholders share an interest in HP in occupational settings, including employers and employees, OPs, various government departments, trade unions, universities, and organizations with a health-promoting focus. However, although OPs are essential to HP, their position and needs have not been fully explored. In this perspective, the present study represents the first attempt to investigate the perceptions of a representative sample of Italian OPs concerning HP.
Notably, while the retrieved findings are most applicable to the Italian context, they may also be relevant for international settings, given the general applicability of HP and the growing trend towards implementing health and wellbeing programs in the workplace. In general, one-third of the investigated OP population regarded HP as a social investment in workplaces, in line with the idea of the workplace as an optimal setting to promote the health of a large proportion of the working population and with the reported effectiveness of such initiatives at the community level. HP plans have been shown to be effective in preventing and controlling chronic diseases, reducing exit from the workforce and health care costs, increasing workplace productivity, and promoting active aging of employees. Almost all the OPs agreed that HP programs should be considered an integral part of workplace health and safety preventive and protective systems. In this view, a third of the respondents saw HP as a shared responsibility of all the preventive figures in the workplace. In some cases, the employers' interest was reported as insufficient, which may be because, while the employer's responsibility regarding occupational health and safety is of evident importance and often legislated, the boundaries of HP are somewhat blurred and the activities covered under the broader topic of health and wellbeing are discretionary. However, it seems important to note that the OPs reporting at least sufficient employer interest in HP plans were also those most frequently engaged in organizing such initiatives, supporting the key role of collaboration among all the workplace preventive figures in successful HP strategies. In this view, it cannot be excluded that the OPs reporting insufficient interest from employers were those working in micro and small companies, where carrying out HP plans is more challenging because of limited resources, higher numbers of casual/part-time workers, and small numbers of permanent employees. In this setting, the contribution of social partners and trade unions would be desirable to overcome such difficulties and favor a wide diffusion of HP policies and programs. Establishing collaborations with neighboring businesses and developing HP plans with the support of local health authorities may be effective ways to create or implement joint HP programs, particularly in small and medium enterprises. Additionally, applying for grants or funding opportunities sponsored by charities or governmental organizations may help small companies implement HP initiatives. The respondents strongly agreed on an interdisciplinary approach to HP, as this may help achieve a comprehensive approach to health and wellbeing targets focused on healthy lifestyles and risk factors requiring expertise in different medical disciplines. Concerted action between different types of healthcare professionals, general practitioners, and hospital services is important to achieve effective HP interventions that rely on existing resources, such as local health clinics, to provide health education and screenings that may positively impact the occupational and general health of the workforce.
Concerning practical engagement in organizing or collaborating in HP plans, about half of our sample reported having been directly involved, although a greater proportion of OPs aged 40-59 years reported contributing to the organization of such programs. Interestingly, following a more limited number of companies, mainly of medium-to-large size, and following more than 500 workers were positively associated with a greater percentage of OPs participating in HP plans, likely owing to the cultural and economic difficulties micro and small enterprises encounter in implementing such activities, as detailed above. This further underlines the relevance of the contribution of all the preventive actors in the workplace, even in small enterprises, in creating suitable settings for HP, as also suggested by the figures indicated as supporters of HP plans by the interviewed OPs. Generally, the promotion of healthy lifestyles was the target of HP interventions. Evidence exists that health risk behaviors, including smoking and alcohol use, have been reduced through HP activities at work, while physical activity and healthy eating have improved. In addition, HP has positively influenced business outcomes, including reduced staff turnover and absenteeism. Other potential intervention targets, such as the psychological well-being of the workforce, a comfortable occupational environment, and a better home-work interface, were less frequently addressed. These issues should be the focus of future research aimed at assembling a series of multi-targeted activities that can be specifically adapted to different occupational realities according to the specific working conditions, the occupational risk factors experienced, and the characteristics of the employees. Different workplace circumstances must be considered when designing initiatives and interventions. In this perspective, although our OP sample reported generally good participation of workers in HP plans, such enlarged proposals should offer HP interventions to the entire company workforce, thus assuring social inclusion and equal access to the decision to participate in such activities. To further increase employees' participation in HP, it could be helpful to use social media and other smart communication strategies to promote healthy behaviors and to offer incentives for workers who attend health education events or engage in healthy behaviors. Workplaces could host health fairs or other community events promoting healthy behaviors and lifestyles to reach both the community and the workforce. Several factors influencing the implementation of HP programs have been identified. First, multiple contextual levels can determine OP participation in HP plans, from political to intra-personal, via inter-personal, institutional, and community/social factors. In exploring these levels, our survey pointed out that interdisciplinary collaboration, adequate training on HP procedures, and appropriate information on the targeted population are essential for OPs to engage in HP effectively. In this view, it might be essential to include information and training on HP early in the professional career of OPs, so that they can adequately develop an HP culture to spread and share in the occupational settings where they will operate, and to train existing occupational medicine staff to become health ambassadors who can provide basic health information to their peers.
A suitable assessment of the quality and effectiveness of HP programs may provide further incentives to implement such strategies. A strategic HP initiative should be intended as a systematic process of needs analysis, priority setting, planning, implementation, and evaluation. To this latter aim, it appears necessary to define health, psychological, social, administrative, and economic indicators of the effectiveness of HP activities, which would allow critical aspects to be identified and the benefits obtained to be followed up. Additionally, funding sources can support the implementation of HP, but the OPs' prospect of remuneration for HP work should also be considered as a motivating factor. Moreover, while financial resources are often considered in HP program design and implementation, the time OPs must devote to scoping, planning, implementing, and participating is frequently ignored; it should be considered more explicitly and thoughtfully when engaging OPs in such strategies. Future research could be directed toward testing and quantifying these themes to advance understanding of the pathway to successful workplace health and wellbeing initiatives, programs, and policies. This would help improve the capacity of workplaces wanting to implement healthy changes effectively and generate information that more clearly explicates the drivers of this type of change. Overall, this seems in line with the strategic role of the OPs as recipients of the TWH approach and key figures in HP, as pointed out by the NPP 2020-2025. In this regard, formative initiatives should be specifically targeted to OPs, as SIML intends to do by organizing a special session on HP at its 85th National Congress. This may help inform OPs better, providing them with updated knowledge to become more confident in the HP procedures and models to be applied in different occupational settings. Even if preliminary, the results obtained are relevant, as they concern a significant portion of Italian OPs. Although the participants were enrolled among the members of a scientific society, which may introduce a recruitment bias, the large number of respondents among the SIML members engaged in OP activities allowed us to point out issues that may be considered representative of the overall scenario of Italian OPs. Moreover, the findings provide an initial picture of the approach, opinions, and needs of OPs concerning HP in the workplace. It may be interesting to complement this initial cross-sectional analysis with future follow-up investigations to assess the influence of formative interventions, governmental proposals for HP, and longer occupational medicine experience with HP on OP feedback. The results of this study support the general interest of Italian OPs in HP in workplaces. However, several issues still need to be addressed to assess the appropriateness of ongoing health and wellbeing initiatives and to understand how best to encourage successful OP participation. In this view, a multifaceted approach involving education about what workplace health and wellbeing encapsulates is warranted. Further, information on the potential benefits of promoting workplace health and well-being, aligned with OP perceptions and needs, seems necessary to successfully implement HP interventions.
The Impact of Cancer Relapse and Poor Patient Outcomes on Health Care Providers Practicing in the Oncology Field
793b2632-8871-4f84-a214-ac2ce1612be7
10134170
Internal Medicine[mh]
Although survival rates after cancer remission have significantly improved over the years, cancer relapse remains a major concern for patients and health care providers (HCPs). Devastating cancer-related events, such as cancer relapse or recurrence, are not an uncommon experience for patients, and HCPs remain at the forefront of delivering such difficult news to patients and their families. As a result, HCPs may experience increased emotional distress and feelings of fear, stress, and anxiety. Devastating news that HCPs may have to convey includes irreversible side effects of treatments and cancer relapse or progression; HCPs may also have to discuss hospice care, palliative care, and even resuscitation options when no other therapies are available. In such situations, it becomes extremely challenging for HCPs to separate their emotions from these difficult consequences. Dealing with such circumstances requires HCPs to manage the stress associated with the uncertainty of patient response and to be highly competent in managing their emotional anticipations. Several studies have investigated the impact of traumatic events on HCPs. Brown et al used a simulated patient consultation approach among 24 physicians, experts (≥4 years of experience) and novices (1-3 years of experience), and evaluated their stress response and communication performance when delivering bad versus good news. The results showed no difference in communication performance, depression anxiety stress score (DAS), or fatigue between the two groups at the beginning of the study; however, a significant increase in stress, reflected by an elevated heart rate (HR), was associated with delivering bad news compared with delivering good news in both groups. This increase in stress was related to fatigue and inexperience with delivering bad news. Additionally, poor communication performance was related to burnout and fatigue levels. Another study, by Cohen et al, examined the impact of delivering good or bad news on medical students by evaluating their stress levels through blood pressure (BP) and HR monitoring during a simulated physician–patient scenario. Although systolic BP and HR during preparation for delivering the news did not significantly differ between the good news group and the control group, they were significantly higher in the bad news group than in the control group. Moreover, those in the bad news group had significantly higher stress levels and more disrupted mood compared with the other groups. Furthermore, the study by Dulmen et al on medical students revealed that anticipation of delivering bad news increased cortisol levels, cardiovascular activity (BP and HR), and anxiety. Interestingly, Shaw et al reported similar apparent stress levels in both junior and senior physicians, indicating that medical experience was a negligible factor in reducing the stress of delivering difficult news over time. Nevertheless, senior physicians reported using problem-focused coping (PFC) strategies, such as selecting patients with a better prognosis or an optimal environment for such disclosures, to reduce the impact of the difficult news and emotional interaction. Despite the physiological and psychological impact of delivering bad news on physicians highlighted by previous studies, limited research has examined the impact of bad news on their emotions, perceived clinical performance, and the quality of care subsequently provided to patients.
Therefore, our study aimed to assess the impact of delivering bad news on the emotions, perceived performance, and quality of care provided by clinicians practicing in the field of oncology. A cross-sectional, online survey was conducted between January and March 2022 among HCPs practicing in the oncology field in Saudi Arabia. The HCPs comprised physicians, pharmacists, and nurses. Participants were recruited through social media such as WhatsApp and were asked to complete the questionnaire only once to avoid response duplication. Those expressing willingness to complete the survey were included in the study. Since no such study had been conducted in the oncology field before, the questionnaire was developed after a review of the literature, and a convenience sampling approach was adopted. Participants were assured that the shared information would be used solely for research purposes and kept private during the investigation. Participants who did not meet the inclusion criteria were excluded from the study. Ethical approval was obtained from the Institutional Review Board (reference number: E-22-6611). The questionnaire was divided into 3 sections. The first section included 11 questions that identified participants emotionally affected by poor outcomes and assessed their perspective on the failure of cancer therapy. The second and third sections included 4-7 statements each that assessed the perspective of HCPs on the reasons for and consequences of poor performance; these were initially measured on a five-level Likert scale ranging from strongly agree to strongly disagree, later modified to a three-level scale (Agree, Disagree, and Neutral). The Arabic questionnaire was subjected to a pilot study among randomly selected participants (n = 7) to examine its readability and ease of administration before the actual study; the pilot results are not included in the main study. The Cronbach's alpha of the questionnaire was estimated at 0.847, indicating acceptable internal consistency and that the questionnaire was suitable for use in the study. Statistical Analysis A descriptive analysis was used to assess the responses of HCPs, and the Chi-square test was used for the analysis of categorical variables. A P-value <0.05 was considered statistically significant, and the data were analyzed using Statistical Package for Social Sciences version 26.0 (SPSS Inc., Chicago, IL, USA). shows the details of the responses and design. The questionnaire was distributed among 250 HCPs practicing in the oncology field in several hospitals in Saudi Arabia. The HCPs comprised physicians (i.e., medical oncologists, hematologist-oncologists, surgical oncologists, radiation oncologists, and pediatric oncologists), pharmacists (clinical oncology pharmacists and oncology pharmacy practitioners), and nurses. Eighty participants (32%) completed the questionnaire. Of the respondents, 61.3% were males and the majority (two-thirds) were physicians. Seventy-one percent were ≥36 years of age, and approximately 59% had ≥6 years of experience in the oncology field.
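For illustration, the internal-consistency check described above can be reproduced with the standard Cronbach's alpha formula. The following minimal Python sketch is not the authors' code: the item-response matrix is hypothetical (the raw pilot data are not reported), and only the formula itself reflects the method named in the text.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x n_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of respondents' summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses: 7 pilot respondents x 5 five-level Likert items.
# The paper reports alpha = 0.847 for its own (unpublished) item set.
pilot = np.array([
    [4, 5, 4, 4, 5],
    [3, 3, 4, 3, 3],
    [5, 5, 5, 4, 5],
    [2, 3, 2, 3, 2],
    [4, 4, 5, 4, 4],
    [3, 4, 3, 3, 4],
    [5, 4, 5, 5, 5],
])
print(f"Cronbach's alpha = {cronbach_alpha(pilot):.3f}")
```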
The baseline characteristics of the participants are shown in . The majority (91%) of the HCPs (those who responded yes or maybe) tended to be emotionally affected by poor clinical outcomes of patients treated for cancer. This was not influenced by sex, age, or occupation (Pearson Chi-square: P = 0.568, P = 0.884, and P = 0.468, respectively). However, the length of experience in the oncology field tended to have a positive association with emotions (P = 0.071): 70.2% (33 of 47) of those with ≥6 years of experience and 67% (22 of 33) of those with <6 years of experience answered "Yes" to being emotionally influenced. On the other hand, 4.3% of those with ≥6 years of experience responded "No" and confirmed being emotionally unaffected by poor clinical outcomes, compared with 15.2% of those with <6 years of experience. Further details are provided in . The emotions and their impact on HCPs were investigated further. Of those who tended to be emotionally affected by poor clinical outcomes, 74% confirmed being impacted by both cancer relapse and mortality, as shown in . Although 37% of respondents perceived these influences positively, through multidisciplinary discussions on identifying and adopting the best alternative therapies, 45% perceived them negatively and believed cancer relapse would occur regardless of available therapies, especially in advanced-stage cases. To understand whether baseline characteristics contributed to these variations, age, sex, occupation, and length of experience were included in the analysis. However, only age tended to affect whether these emotional influences were perceived positively or negatively (P = 0.069). Half of the participants (10 of 20) in the 26 to <36 years age group and approximately half (13 of 27) of those aged ≥46 years perceived these influences positively, while 61.5% (16 of 26) of those aged 36 to <46 years perceived them negatively. Additional analysis was conducted on the perspective of HCPs toward the failure of cancer therapies. Approximately half confirmed that delivering difficult news, such as cancer relapse and poor prognosis, had an impact on their general provision of patient care, which affected even patients who had not relapsed. Moreover, approximately 60% denied that low quality of care, work burnout, missteps in treatment protocols, or inappropriateness of treatment were reasons for cancer relapse or patient mortality. However, the majority (≥75%) of the respondents questioned the efficacy of treatment modalities in their patients and felt the urge to conduct research to test efficacy; some participants stated that these medications were tested on Western populations and might produce different effects in their patients because of genetic differences. Further analysis involving baseline characteristics was conducted to understand whether they contributed to these variations; occupation and sex showed significant associations. For example, 52.4% of pharmacists considered low quality of care at their practice site a reason for the failure of cancer therapy, whereas 71.4% of physicians (35 of 49) and 66.7% of nurses (2 of 3) denied this (P = 0.014). Additionally, more females (22.2%; 6 of 27) than males (8.7%; 4 of 46) considered low quality of care a reason for the failure of cancer therapy, while this was denied by 69.6% (32 of 46) of males and 40.7% (11 of 27) of females (P = 0.047).
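The sex-by-response comparison above (P = 0.047) can be reconstructed as a chi-square test of independence. In the sketch below, the "yes" and "no" counts come from the text, while the "maybe" counts (10 per group) are an assumption, inferred as the remainders of the reported group totals (46 males, 27 females); under these assumed counts the test reproduces the reported p-value.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Sex x response ("yes" / "no" / "maybe") for "low quality of care as a
# reason for failure of cancer therapy". Yes/no counts are reported in the
# text; the "maybe" column is assumed from the row-total remainders.
table = np.array([
    [4, 32, 10],   # males   (n = 46)
    [6, 11, 10],   # females (n = 27)
])

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")  # ~ chi2 = 6.13, dof = 2, p = 0.047
```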
With respect to following protocols correctly, 67.3% of physicians denied this as a reason for the failure of cancer therapy, while 66.7% of pharmacists and nurses answered "maybe" (P = 0.005). The perception of HCPs regarding the need to conduct research and test the appropriateness of treatment protocols in their patients was significantly affected by occupation (P = 0.022): 71.4%, 100%, and 66.7% answered "yes," while 28.6%, 0%, and 33.3% answered "no," among physicians, pharmacists, and nurses, respectively. Further details on the justifications of those who considered inappropriateness of treatment or missteps in treatment protocols to be reasons for cancer relapse or patient mortality are provided in Supplementary Table 1. To gain a comprehensive understanding of the perspectives of all HCPs on the reasons for and consequences of poor clinical outcomes, several statements were presented to them for agreement or disagreement. In terms of reasons, slightly more than half agreed that lack of concentration and attentiveness, problems in personal and coworker relationships, underappreciation, lack of patience, and proneness to anger and frustration could contribute to poor clinical outcomes. Interestingly, although 60% of emotionally influenced HCPs rejected work burnout as a leading cause of the failure of cancer therapy, the majority (74%; 59 of 80) of the respondents agreed with the opposite viewpoint, that constant work and low social involvement were strong reasons for poor clinical outcomes. With respect to consequences, 70% of respondents agreed that chronic stress and psychological, emotional, and physical issues are consequences of poor clinical outcomes, while more than half believed that clinicians would undergo self-isolation (58%), suffer loss of self-esteem and a feeling of entrapment (55%), think of themselves as unfit for the job, and consume alcohol and drugs as a consequence of poor clinical outcomes. The present study examined whether devastating clinical outcomes observed in cancer patients provoked emotional responses from HCPs in the oncology field and whether these responses affected the performance and quality of care delivered to patients diagnosed with cancer. Around 91% of HCPs tended to be emotionally influenced by poor clinical outcomes, and these influences were often perceived negatively. This is concerning, since negative emotions have been related to a decline in the overall performance of HCPs, both personally and professionally, and linked with suboptimal quality of care delivered to patients. Although results related to the impact on performance are still to be investigated in our setting, our findings resonate with these reports, as approximately half of the HCPs who were emotionally influenced confirmed that the failure of cancer therapies affected the quality of care provided to their patients, including those who had not relapsed or had disease progression. Several factors, including sex, age, occupation, and length of experience in the oncology field, could influence the emotional response of HCPs to poor outcomes and their perceptions of these emotions and of the failure of cancer therapies. In this study, age, sex, and occupation had no effect on the emotional response of the respondents.
With respect to the length of experience, it was generally assumed that, with repeated exposure to cases over time, the emotions of HCPs would acclimatize; however, our results revealed that longer experience in the oncology field tended to be positively associated with being emotionally influenced (P = .071). Interestingly, a study by Shaw et al showed that longer experience in the medical field had no influence on participants' stress levels when delivering bad news and on emotional interaction compared with less experience. However, their findings revealed that more strategies to reduce emotional interactions were used by the more experienced group, suggesting that emotional interactions would increase if these strategies were not adopted. Another study reported no impact of experience on participants' stress level, fatigue, or communication performance. Despite variation in results, there seems to be a consensus that experience does not reduce the stressfulness of, or the emotional response to, bad news or poor outcomes observed in cancer patients. Our study revealed that the majority of HCPs questioned the efficacy of the cancer therapies used to treat their patients and highlighted the need to conduct further research to test these treatments on local patients before adopting them in treatment protocols. This is partly attributed to genetic disparities that are believed to be associated with variable responses among patients of different ethnicities. Accordingly, the published literature also states that race and genetic differences have long been known to affect the incidence of and survival with various cancers, as well as patient response to treatments. Therefore, when conducting scientific research, it is important to account for such genetic disparities, in addition to other factors including, but not limited to, sex, age, and ethnicity, as they may significantly influence study results and affect generalization. Although diverse inclusion of participants in a study might maximize generalizability, shortcomings arise and not all populations would be adequately represented. Thus, further research is warranted on the efficacy and overall clinical outcomes related to the use of cancer therapies (brand-name or generic chemotherapies and biological therapies, including biosimilars) in different populations. Clinical trials conducted in Saudi Arabia are necessary to develop tailored, population-centered protocols based on such genetic variations. Finally, the delivery of bad news is an extremely burdensome task for clinicians and often occurs in oncology settings. To adequately support professionals in the workplace and assist them in overcoming such a difficult task, a tailored, problem-solving approach to the delivery of difficult news must be adopted. One such approach is the consensus method, in which a panel of oncologists, surgeons, general practitioners, nurses, human rights representatives, and other HCPs establishes a joint decision on the presented case based on the available data. Offering stress-management interventions such as counseling, workshops, and training programs has also been shown to be beneficial in reducing occupational stress among HCPs. Overall, this would potentially ameliorate the impact of delivering difficult news on emotions, clinical performance, and quality of care. This study has some limitations.
First, this was a descriptive study that focused on assessing the perspectives and beliefs of HCPs in the field of oncology and the impact of poor outcomes from different points of view. Second, the study lacked assessment tools that could have been used to objectively determine physiological, psychological, and emotional outcomes (e.g., simulated case scenarios, depression/anxiety assessment scales, and BP and HR monitoring) and the consequent impact on clinical performance. Despite this, the study remains crucial in unveiling the existence of these issues, and further investigations can be conducted to determine the clinical impact of these events and to identify strategies to address them. Third, the participation rate was low with respect to the whole sample and extremely low among nurse practitioners, which limits the generalizability of the results. Despite these limitations, to the best of our knowledge, this study is the first to establish a foundation for research on this issue in the oncology field, and it can potentially be replicated in other clinical fields. Poor clinical outcomes are emotional triggers for HCPs, particularly those practicing in the field of oncology. These outcomes can be perceived negatively and potentially lead to a reduction in the quality of care provided to patients with cancer. Work burnout, among other factors, was considered a reason for poor clinical outcomes, whereas psychological and physical stress and self-isolation could result from these outcomes. Length of experience and age of HCPs tended to affect the emotional response and the perception of these emotions. Occupation and sex had a significant influence on the perspective toward the reasons for cancer therapy failure and the necessity of conducting research on the treatment protocols used for local patients. Supplemental Material for The Impact of Cancer Relapse and Poor Patient Outcomes on Health Care Providers Practicing in the Oncology Field by Abdulrahman Alwhaibi, Miteb Alenazi, Bana Almadi, Nora aljabali, Sahar Alkhalifah, Wajid Syed, Reem Alsaif, Salmeen D Bablghaith, and Mohammed N Al-Arifi in Cancer Control