Differential Regulation of miRNA and Protein Profiles in Human Plasma-Derived Extracellular Vesicles via Continuous Aerobic and High-Intensity Interval Training

1. Introduction

Exercise has long been recommended as a fundamental strategy for improving physical and mental fitness, both of which play pivotal roles in the prevention and treatment of various diseases, including cardiometabolic diseases, neurological diseases, sarcopenia, and cancer. Exercise can be broadly categorized into aerobic and anaerobic modalities. Aerobic exercise, which includes continuous aerobic training (CAT), involves sustained moderate-intensity activity and typically improves cardiorespiratory fitness. In contrast, anaerobic exercise, which includes sprinting, high-intensity interval training (HIIT), and power-lifting, involves short bursts of high-intensity activity and primarily promotes strength, power, and speed. Despite their distinct features, aerobic and anaerobic exercise confer similar benefits, producing comparable improvements in glycemic control and mitochondrial function. In general, exercise is believed to pose a major challenge to cellular, tissue, and whole-body homeostasis, and a myriad of epigenetic, metabolic, and transcriptional regulatory events participate in the adaptive responses to exercise. The intensity, frequency, and duration of exercise determine the overall metabolic and molecular responses. In particular, aerobic and anaerobic exercise represent distinct ends of the exercise continuum, relying primarily on oxidative phosphorylation and substrate-level phosphorylation, respectively. Interestingly, even a single bout of exercise elicits acute adaptive responses, while regular exercise promotes long-term adaptation. It has been suggested that the beneficial effects of exercise are at least partially attributable to tissue crosstalk. Indeed, exercise stimulates numerous cells and tissues, including immune cells, skeletal muscle, liver, adipose tissue, brain, and bone, to secrete bioactive molecules into the circulation, which in turn act in an autocrine, paracrine, or endocrine manner to promote positive outcomes. However, the molecular mechanisms underlying exercise-mediated tissue crosstalk and its potential effects remain largely unexplored. Extracellular vesicles (EVs), a diverse group of lipid bilayer vesicles secreted by almost all cells, contain bioactive molecules, including proteins, nucleic acids, and lipids, and play essential roles in intercellular and interorgan communication. Numerous studies have reported that a single bout of exercise rapidly triggers a significant increase in circulating EVs in both humans and rodents, indicating that exercise may stimulate the release of EVs from various tissues into the circulation. Furthermore, circulating EV content can be altered by exercise, highlighting the critical roles of EVs in mediating tissue crosstalk during exercise. For instance, a significant number of circulating EV proteins are regulated by exercise and participate extensively in diverse biological processes, such as glycolysis and immune regulation. MiRNAs, a class of non-coding RNAs of about 20–25 nucleotides, are among the most important active components of EVs.
MiRNAs post-transcriptionally regulate more than 60% of protein-coding genes in mammals and play essential roles in various physiological and pathological processes. Interestingly, exercise-regulated circulating EV miRNAs have been demonstrated to mediate health-promoting processes such as cardiovascular protection and white adipose tissue browning. Although great efforts have been made to elucidate the roles of exercise-induced EVs (EX-EVs), studies to date have examined only a single exercise mode with a single type of omics analysis, limiting our current understanding. A comprehensive comparison of different exercise modalities with multi-omics integration analysis of EX-EVs is therefore needed. In this study, we analyzed and compared the systemic effects of a single bout of CAT and HIIT by performing an integrated analysis of differentially regulated proteins and miRNAs within circulating EVs. Our aim was to elucidate the molecular mechanisms underlying the roles of EVs in mediating organ crosstalk and health promotion under different exercise modes.
2. Results

2.1. General Characterizations of Exercise Participants and Their Plasma EVs

Five healthy individuals participated in the study, and their clinical characteristics are provided in . All participants underwent CAT and HIIT at an interval of 7 days. Blood samples were collected at rest or immediately after each training session, and plasma EVs were extracted and subjected to proteomic and miRNA profile analysis ( A). As shown in B, the target heart rate was maintained at around 60–80% of HRmax during CAT, while HIIT consisted of intervals at 85% of HRmax interspersed with 2 min of active recovery at 70% of HRmax. To characterize the isolated plasma EVs, we evaluated the presence of commonly used EV markers using WB, morphology using TEM, and size distribution using NTA, as recommended. Typical EV markers, such as TSG101, CD9, and CD81, were detected, while the negative EV marker Calnexin and the plasma marker apolipoprotein AI were undetectable in the EV samples ( C), indicating that the obtained plasma EVs were free of blood cell and plasma contamination. TEM revealed the typical saucer-like morphology of EVs in all groups ( D). Furthermore, plasma EVs isolated from each group displayed comparable size distributions, with mean sizes of 156.1 ± 2.1 nm for the REST group, 155.2 ± 3.6 nm for the CAT group, and 161.5 ± 3.8 nm for the HIIT group ( D,E). A trend toward an increased concentration of plasma EVs was observed in the HIIT group compared to the REST group, although the difference was not statistically significant ( E).

2.2. Effects of CAT and HIIT on miRNA Profiles in Human Plasma-Derived EVs

We analyzed and compared the miRNA profiles of plasma EVs obtained from the REST, CAT, and HIIT groups ( and ). To determine the correlations among samples, principal component analysis (PCA) and correlation matrix analysis were performed. Distinct miRNA expression profiles were observed not only between the exercise and control groups but also between the CAT and HIIT groups ( A). A total of 67 DE miRNAs (22 upregulated and 45 downregulated; SI-DE miRNAs) were identified in the CAT group compared with the REST group, while 13 DE miRNAs (7 upregulated and 6 downregulated) were identified in the HIIT group compared with the REST group ( B,C). The top 10 most upregulated and downregulated miRNAs in each pairwise comparison are listed in D. Next, the potential target genes of the identified DE miRNAs were predicted. A total of 874,698 and 390,841 candidate targets were predicted by RNAhybrid and miRanda, respectively, among which 94,674 overlapped, as shown in the Venn diagram . To elucidate the possible molecular mechanisms connecting EV miRNA content to the health benefits of the two types of exercise, GO and KEGG pathway enrichment analyses were conducted. Target genes of the DE miRNAs in both the CAT and HIIT groups (vs. the REST group) were mainly enriched in “nuclear chromatin” among the GO cellular component (CC) terms and “DNA-binding transcription factor activity” among the GO molecular function (MF) terms , suggesting that the DE miRNAs in both groups are largely involved in the regulation of transcription factors.
Among these transcription factors, NEUROG1, SOX12, and SOX13 are important for neuronal development; RUNX3 plays key roles in the immune system; TEAD3 and TEAD4 are involved in cell proliferation and differentiation; and KLF11, KLF15, and PPARD are responsible for the regulation of metabolism. GO analysis further revealed that target genes of the DE miRNAs in the CAT and HIIT groups (vs. the REST group) were commonly enriched, to a similar extent, in biological processes such as neuronal signal transduction, autophagy, and cell fate regulation (especially of neurons and cardiomyocytes) ( E). Furthermore, target genes of the DE miRNAs in the CAT vs. REST comparison were more specifically enriched in cognitive function and substrate metabolism, while target genes of the DE miRNAs in the HIIT vs. REST comparison were more specifically enriched in organ growth, cardiac muscle function, and the insulin signaling pathway ( F). Additionally, KEGG enrichment analysis demonstrated that the most significantly enriched pathways in both the CAT and HIIT groups were commonly associated with autophagy and neuronal signal transduction, while pathways specific to the CAT and HIIT groups were associated with substrate metabolism and signal transduction in cardiomyocytes, respectively , consistent with the GO analysis.

2.3. Identification of the Possible Tissue Origin of DE Plasma EV miRNAs

To assess the contributions of various tissues to the circulating EV miRNA profile in response to different types of exercise, tissue-specific enrichment analysis was performed on the DE plasma EV miRNAs using the Tissue Atlas. Sankey network diagrams were used to visualize the tissue origin of the most significantly altered EV miRNAs. Multiple tissues contributed to the altered expression of EV miRNAs in response to CAT or HIIT. Remarkably, the DE EV miRNAs in both the CAT and HIIT groups (vs. the REST group) were enriched in the nervous system ( A–D), highlighting the involvement of the nervous system during exercise. Interestingly, the upregulated EV miRNAs in the CAT group (vs. the REST group) were associated with multiple metabolic tissues, including the liver, pancreas, muscle, and adipocytes, while the upregulated EV miRNAs in the HIIT group (vs. the REST group) were associated with the immune system, including the spleen and lymph nodes ( A,C).

2.4. Identification of the miR-379 Cluster and miR-154 Family Among HIIT-Regulated Plasma EV miRNAs

To investigate whether any miRNA cluster in plasma EVs could be regulated by CAT or HIIT, we analyzed the DE EV miRNAs identified in the CAT and HIIT groups (vs. the REST group) using the TAM 2.0 database. Interestingly, the miR-379 cluster, located at the chr14q32.2 genomic locus, was specifically regulated by HIIT (FDR < 0.05). Eleven miR-379 cluster members, including miR-299, miR-412, miR-496, miR-376c, miR-329-1, miR-329-2, miR-1197, miR-382, miR-323b, miR-654, and miR-379, were significantly downregulated by HIIT, among which miR-379, miR-382, miR-323b, and miR-496 are also members of the miR-154 miRNA family ( A,B).
To determine the biological roles of the HIIT-regulated miR-379 cluster, the potential target genes of the DE miR-379 cluster members were analyzed via GO enrichment analysis, which implicated biological processes including glucose homeostasis, the innate immune response in mucosa, monocyte differentiation, respiratory burst, and the regulation of blood pressure and appetite ( C). Furthermore, the STRING database was used to predict interactions among the target genes of the DE miR-379 cluster members. A protein–protein interaction (PPI) network of 14 proteins was identified, in which the CHRNG, GLP1R, TACR1, and POMC genes were enriched in “neuroactive ligand–receptor interaction” (RF = 4.64, p = 6.46 × 10⁻⁴), with POMC identified as the hub gene of the network ( D).

2.5. Effects of CAT and HIIT on Proteomic Profiles of Human Plasma-Derived EVs

To better understand the biological roles of plasma EVs during the two types of exercise, we also analyzed and compared the proteomic profiles of plasma EVs obtained from the REST, CAT, and HIIT groups. A total of 990 EV proteins were quantified. PCA and correlation matrix analysis revealed distinct segregation among the three study groups ( A, and ). As expected, several EV marker proteins were identified ( B). A total of 55 DE proteins (11 upregulated and 44 downregulated; SI-DE proteins) were identified in the CAT group compared with the REST group, while 70 DE proteins (56 upregulated and 14 downregulated) were identified in the HIIT group compared with the REST group ( C,D). The top 10 most upregulated and downregulated proteins in each pairwise comparison are listed in E. A signal peptide is a short amino acid sequence, approximately 13 to 36 residues long, located at the N-terminus of a protein. It directs the localization of the protein and is typically cleaved after the protein is transported to its site of function or assumes its structural role within a membrane region . Notably, the majority of the DE EV proteins (61 of 78 in total: 11 of 12 in the CAT group and 50 of 66 in the HIIT group) did not carry signal peptides, leaving only 17 of 78 with a predicted signal peptide ( F). GO pathway enrichment analysis was subsequently conducted to elucidate the potential biological roles of the DE EV proteins. In contrast to the DE EV miRNAs, the DE EV proteins in the CAT and HIIT groups (vs. the REST group) were commonly involved in vesicle secretion, transport, localization, and immune processes ( H). Furthermore, CAT-regulated EV proteins were more specifically enriched in substrate metabolism, while HIIT-regulated EV proteins were more specifically enriched in cell death and survival ( I), in line with the results from the DE EV miRNAs. Hum-mPLoc3 was used to predict the subcellular localization of the DE EV proteins, and most were assigned to the extracellular region, plasma membrane, and cytoplasm ( G), which was further supported by the GO-CC and GO-MF results . Moreover, KEGG analysis indicated that the CAT-regulated EV proteins were predominantly associated with hormone synthesis and metabolic pathways, while the HIIT-regulated EV proteins were more strongly associated with immune-related pathways .
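As an illustration of how the signal-peptide proportions reported above can be tallied, the short R sketch below reproduces those counts from a per-protein table; the data frame and column names are hypothetical stand-ins for parsed SignalP 5.0 output, and the counts simply mirror the numbers stated in Section 2.5.

```r
# Hypothetical per-protein table built from SignalP 5.0 predictions:
# one row per DE EV protein, with its comparison group and whether a
# signal peptide was predicted. Counts mirror those reported above.
sp <- data.frame(
  group  = c(rep("CAT", 12), rep("HIIT", 66)),
  has_sp = c(rep(FALSE, 11), TRUE, rep(FALSE, 50), rep(TRUE, 16))
)

table(sp$group, sp$has_sp)  # 11/12 (CAT) and 50/66 (HIIT) lack a signal peptide
sum(!sp$has_sp)             # 61 of 78 DE proteins in total without one
```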
2.6. Identification of the Possible Tissue Origin of DE Plasma EV Proteins

The Human Protein Atlas was utilized to evaluate the contributions of various tissues to the circulating EV protein profile in response to different types of exercise. As with the tissue origin of the DE EV miRNAs, numerous tissues contributed to the altered expression of EV proteins in response to CAT or HIIT. Specifically, the DE EV proteins in both the CAT and HIIT groups (vs. the REST group) were enriched in the nervous system , consistent with the results from the DE EV miRNAs . Furthermore, the CAT-upregulated EV proteins were largely enriched in different brain regions, including the cerebral cortex, midbrain, cerebellum, caudate, hippocampus, and amygdala ( A), while the HIIT-upregulated EV proteins were largely enriched in the immune system, including the bone marrow, spleen, lymph nodes, thymus, tonsils, appendix, and small intestine ( C).

2.7. Integrated Analysis of the DE EV miRNAs and the DE EV Proteins

The roles of EV miRNAs and EV proteins are relatively independent once EVs are released into the extracellular space; nevertheless, EV miRNAs can interplay with EV proteins by regulating their targets in recipient cells. Therefore, a multivariate Venn diagram was used to overlap the GO terms of the DE EV proteins and the target genes of the DE EV miRNAs in both the CAT and HIIT groups. As shown in and , four pathways were co-regulated by EV miRNAs and EV proteins in both the CAT and HIIT groups, primarily involving autophagy, cell proliferation, and differentiation. In the CAT group, 23 pathways were co-regulated by EV miRNAs and EV proteins, mainly involving lipid and sterol metabolism and the maintenance of cellular homeostasis. In the HIIT group, 29 pathways were co-regulated by EV miRNAs and EV proteins, primarily associated with phospholipid metabolism, insulin secretion, and cellular physiological functions. These data further confirm the overlapping and distinct biological roles of CAT and HIIT.
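To sketch how such a GO-term overlap can be computed, the base-R snippet below intersects enriched term IDs from the two omics layers; the four input vectors are illustrative placeholders (not the paper's results) standing in for per-comparison enrichment output, e.g. the ID column of a clusterProfiler result.

```r
# Hypothetical enriched GO term IDs per comparison (placeholders only);
# in practice e.g. as.data.frame(ego)$ID from a clusterProfiler result.
go_mirna_cat  <- c("GO:0006914", "GO:0008283", "GO:0006629")  # miRNA targets, CAT
go_prot_cat   <- c("GO:0006914", "GO:0006629", "GO:0016192")  # proteins, CAT
go_mirna_hiit <- c("GO:0006914", "GO:0008283", "GO:0030073")  # miRNA targets, HIIT
go_prot_hiit  <- c("GO:0006914", "GO:0008219", "GO:0030073")  # proteins, HIIT

co_cat  <- intersect(go_mirna_cat,  go_prot_cat)   # co-regulated terms, CAT vs. REST
co_hiit <- intersect(go_mirna_hiit, go_prot_hiit)  # co-regulated terms, HIIT vs. REST
co_both <- intersect(co_cat, co_hiit)              # terms shared by both exercise modes

lengths(list(CAT = co_cat, HIIT = co_hiit, both = co_both))
```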
3. Discussion

Exercise triggers the rapid release of EVs into the circulation in both humans and rodents . In our study, there was a trend toward increased EV concentrations after exercise. Previous studies, especially human studies, have reported inconsistent results regarding whether the amount of circulating EVs increases after exercise. The discrepancy in total particle number may be partly explained by circulating plasma lipoproteins, which cannot be distinguished from EVs by NTA . Nevertheless, EV size remained unchanged regardless of exercise mode, isolation method, or measurement technique. In our study, the contents (miRNAs and proteins) of EVs varied between exercise types. As the concept of “responders” and “non-responders” has been proposed in exercise physiology, the degree of responsiveness of individual organs varies with exercise intensity . Given the regulatory roles of blood flow and organ activation in EV release, exercise intensity may play a pivotal role in determining not only the amount but also the content of circulating EVs . In fact, the differential impacts of low-, moderate-, and high-intensity exercise on the quantity and content of circulating EVs have been demonstrated in rodents . Therefore, examination of plasma EVs, which are integral constituents of liquid biopsies, may help identify genetic and epigenetic biomarkers in exercise physiology. The physiological impacts of exercise may be partially achieved by circulating EVs through the bioactive molecules they contain. During exercise, organ crosstalk can be facilitated by the release of EVs, which are transported through the circulatory system and delivered to other tissues . Both CAT and HIIT were implicated in neuronal signal transduction, autophagy, and cell death and survival, in line with previous animal and clinical studies . Indeed, both aerobic and resistance exercise have been shown to improve spatial learning and memory in humans and rodents . Previous studies have also demonstrated that exercise-induced upregulation of autophagy occurs in a number of tissues, driving the beneficial effects of exercise on the cardiovascular system, hepatic metabolism , and aging . Moreover, the common roles of CAT and HIIT in cardioprotection and neuroprotection have been widely reported, highlighting their roles in promoting the survival of cardiomyocytes and neurons . In addition to these common biological roles, distinct roles of CAT- and HIIT-regulated EV miRNAs were also identified in our study. The CAT-regulated EV miRNAs were involved in synaptic plasticity, memory, and substrate metabolism, providing molecular details of the protective roles of CAT against neurological and metabolic disorders . Meanwhile, the HIIT-regulated EV miRNAs were more strongly associated with vascular endothelial growth, muscle function, and insulin signaling, which may help explain certain reported benefits of HIIT. Callahan et al. showed that HIIT contributes to increased muscle protein synthesis and muscle fiber size . Furthermore, emerging evidence from human studies shows that high-intensity exercise improves insulin resistance and glucose homeostasis . A large proportion of miRNAs are clustered in the genome; such miRNAs can be commonly regulated and exhibit similar expression patterns.
It has been suggested that members of miRNA clusters can share the same target genes or regulate genes involved in a specific pathway . Interestingly, 11 members of the miR-379 cluster within plasma EVs were downregulated by HIIT; these spatially neighboring miRNAs share the same promoter and collaborate in the regulation of specific cellular processes . Indeed, the miR-379 cluster is known for its impacts on neurodevelopment, tumor metastasis, hyper-glucocorticoidemia, and obesity . In addition, Okamoto et al. reported that upregulated miR-379 is strongly associated with non-alcoholic fatty liver disease . Our results show that the target genes of these DE miR-379 cluster members could interact with each other via multiple pathways, particularly the neuroactive ligand–receptor interaction pathway. Among the target genes of the DE miR-379 cluster members, POMC and GLP1R are particularly relevant to neural function and systemic energy metabolism . GO pathway enrichment analysis also indicated that the HIIT-regulated miR-379 cluster may affect biological processes such as glucose homeostasis and immunologic function. In this study, we also performed a comprehensive analysis of the DE EV proteins. Notably, most exercise-regulated circulating EV proteins lack a predicted signal peptide sequence and are therefore presumably not classically secreted proteins . Exercise may thus serve as a driving force for protein encapsulation into EVs and the release of EVs containing non-secreted proteins. Interestingly, a portion of the CAT- and HIIT-regulated EV proteins were commonly associated with vesicle secretion, transport, and localization, which is strikingly different from the common biological roles of the CAT- and HIIT-regulated EV miRNAs. However, exercise-regulated EV proteins and EV miRNAs were also involved in overlapping biological roles. For instance, CAT-regulated EV proteins were associated with organic substance and macromolecule metabolic processes involving carbohydrate and lipid metabolism, while HIIT-regulated EV proteins were implicated in cell death and survival, consistent with our results from the DE EV miRNAs. Taken together, these data suggest that CAT-induced plasma EVs contribute to carbohydrate and lipid metabolism, while HIIT-induced plasma EVs are involved in cell death and survival, in line with clinical evidence showing that CAT promotes fat oxidation and insulin sensitivity , while HIIT prevents the apoptosis of skeletal muscle cells . The choice of exercise type may depend on individual preferences, time availability, and physical fitness levels; however, certain recommendations can be drawn from the current study. Given the common beneficial roles of CAT and HIIT in cardiomyocytes, individuals engaging in either type of exercise may achieve better cardiovascular function and a reduced risk of cardiovascular events. Individuals at risk for neurodegenerative diseases (e.g., Alzheimer’s disease) and those with metabolic diseases (e.g., non-alcoholic steatohepatitis [NASH] or diabetes) may benefit more from CAT, given its potential beneficial roles in neuronal function and metabolism.
On the other hand, HIIT is recommended for individuals who seek to increase muscle mass and strength, as well as for those who aim to promote injury repair and enhance resilience to stress, considering its potential beneficial roles in muscle function and cell viability. Since a “humoral” factor with hypoglycemic properties was first found to be released from skeletal muscle in response to exercise , the role of skeletal muscle as a secretory organ in mediating exercise-induced organ crosstalk has been heavily investigated. Growing evidence suggests that EVs containing bioactive molecules can be released from various tissues and play essential roles in tissue/organ crosstalk during exercise , opening a new avenue for studies of exercise-induced organ crosstalk. We therefore aimed to identify the tissue origin of the DE plasma EV miRNAs and EV proteins. We found that numerous tissues contributed to exercise-induced alterations in EV content. Surprisingly, a large portion of the DE EV miRNAs and EV proteins were enriched in the nervous system. For instance, the top CAT-upregulated EV miRNA, miR-124-5p, is uniquely expressed in the nervous system. MiR-124 has been suggested to play critical roles in neuronal development and function, and its dysregulation is associated with various neurological disorders, including Alzheimer’s disease, Parkinson’s disease, hypoxic–ischemic encephalopathy, Huntington’s disease, and ischemic stroke . The top CAT-upregulated EV protein is associated with the myelin sheath, the crucial insulating membrane layer that envelops myelinated axons in vertebrates and plays an important role in neural transmission . Furthermore, the top HIIT-upregulated EV miRNA (miR-6511a-3p) and the top HIIT-downregulated EV miRNA (miR-137) were both enriched in the nervous system. These data highlight the potential roles of nervous system-derived EVs in exercise-induced organ crosstalk. Interestingly, CAT-regulated EV miRNAs were enriched in multiple metabolic tissues, including the liver, pancreas, muscle, and adipocytes, supporting our observations on the roles of CAT in substrate metabolism. Additionally, both HIIT-upregulated EV miRNAs and EV proteins were largely enriched in the immune system, in line with the reported immune-regulatory roles of HIIT . Admittedly, our study has some limitations. First, the number and diversity of participants were comparatively limited, which not only reduces the statistical power of the findings but also limits their generalizability. Second, this study involved two types of exercise, and the varying workloads associated with different exercise modalities may confound the results. Third, the plasma EV samples used in this study were largely heterogeneous owing to the limitations of current EV isolation and analysis methods. Finally, the exact contributions of each tissue to the DE EV contents, and the underlying mechanisms, could not be elucidated due to the lack of relevant methodologies. To further validate our results, future research should include volunteers from diverse demographics, such as females, older adults, and individuals with various health conditions. Additional exercise types with different workloads should also be considered. Meanwhile, circulating EVs provide direct and rapid responses to exercise, making them suitable for initial experimental exploration.
Non-invasive samples such as urine, sweat, and tears could also serve as excellent alternatives in future research, offering additional molecular insights into the effects of exercise. In summary, we provide molecular details of the systemic effects of CAT and HIIT by analyzing circulating EV contents. To our knowledge, no previous study has compared different types of exercise using a multi-omics integration analysis of circulating EVs. We showed that CAT and HIIT play common roles in neuronal signal transduction, autophagy, and cell fate regulation. CAT additionally plays distinct roles in cognitive function and substrate metabolism, while HIIT is strongly associated with muscle performance, insulin signaling, and positive regulation of overall cell function. We postulate that EX-EVs likely originate from various tissues, including metabolic tissues, the immune system, and the largely neglected nervous system. This study provides a basis for a better understanding of exercise-mediated organ crosstalk and its potential health-promoting roles.
4. Materials and Methods

4.1. Study Design and Participants

In total, five healthy male volunteers were enrolled in the study. The inclusion criteria were as follows: (1) 18–65 years of age; (2) body mass index (BMI) between 18 and 28 kg/m²; (3) more than 3 h of physical activity per week; (4) acknowledgment of informed consent. The exclusion criteria were as follows: (1) smoking; (2) body weight change >5 kg within the previous 6 months; (3) unsuitability for physical training (heart disease, respiratory disorders, or any condition that could be aggravated by exercise); (4) current use or a history of medications such as steroids, beta-blockers, or anticoagulants. Before the formal experiment, the volunteers underwent a thorough physical examination (height, weight, body fat percentage, heart rate, and blood analysis) and adaptive training for the experimental protocol to ensure that they were able to complete the experiment. The participants refrained from exercise for 24 h prior to each test to ensure the integrity and accuracy of the results. All volunteers had the same breakfast, and all tests started at 9:00 a.m. The volunteers completed both HIIT and CAT under the supervision of a professional coach, with an interval of 7 days between the two sessions. The two exercise types had the same total duration. Each training session began with a brief 2 min dynamic stretching warm-up, followed by 20 min of cycling consisting of 2 min periods at 80–95% of maximal heart rate (HRmax) separated by 2 min of active recovery for the HIIT session, or 20 min of cycling at 60–80% of HRmax for the CAT session. A real-time heart rate monitoring system was used continuously during each training session. HRmax was estimated using the age-predicted equation (220 − age) . The participants were instructed to continue their regular physical activity and eating habits throughout the intervention period ( A). Blood samples were collected at rest or immediately after each training session for further analysis. Informed consent was obtained from all participants, and the experimental procedures were approved by the Ethics Committee of the West China Hospital of Sichuan University (approval No. 2022629).

4.2. Plasma EV Isolation

Blood was collected in heparin-coated blood collection tubes (avoiding excessive agitation) and immediately centrifuged at 1600× g for 10 min at RT. Afterward, the supernatant was carefully collected from the top down with a pipette, ensuring that a specified amount of supernatant was left on top of the pellet . Two milliliters of collected supernatant was centrifuged at 10,000× g for 30 min at 4 °C in a fixed-angle rotor (model 220.78, Hermle, Wehingen, Germany), followed by two washes with ice-cold PBS to eliminate soluble proteins. The obtained pellet was resuspended in 1.5 mL of ice-cold PBS and filtered through 0.2 μm syringe filters (Millex-GP; Merck Millipore, Darmstadt, Germany). The filtrate was then topped up with ice-cold PBS to a final volume of 1.5 mL prior to centrifugation at 47,000 rpm [RCF (average) 98,963× g, RCF (maximum) 130,000× g, k-factor 90.4] for 2 h at 4 °C in a Beckman TLA-55 rotor (Beckman Coulter, Krefeld, Germany). Finally, the pellets were resuspended in ice-cold PBS, aliquoted into Eppendorf Polyallomer tubes, and stored at −80 °C, with care taken to avoid repeated freeze–thaw cycles during analysis.
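For readers converting between rotor speed and g-force, the rpm and RCF values above follow the standard relationship RCF = 1.118 × 10⁻⁵ × r(cm) × rpm². The R sketch below back-calculates the effective rotor radii implied by the stated values as a consistency check; the radii are derived from the numbers in the protocol, not taken from a rotor datasheet.

```r
# RCF (x g) from rotor speed (rpm) and radius (cm); 1.118e-5 is the
# standard textbook constant for this conversion.
rcf <- function(rpm, r_cm) 1.118e-5 * r_cm * rpm^2

# Effective radii implied by the stated spin: 47,000 rpm,
# RCF(average) = 98,963 x g, RCF(maximum) = 130,000 x g
r_avg <- 98963  / (1.118e-5 * 47000^2)   # ~4.0 cm
r_max <- 130000 / (1.118e-5 * 47000^2)   # ~5.3 cm

rcf(47000, r_avg)   # recovers ~98,963 x g
```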
4.3. Characterization of EVs

Specific EV markers and the appropriate controls were analyzed via Western blotting, as previously described . TSG101 (Cell Signaling Technology, #72312, 1:1000), CD9 (Abcam, ab307085, 1:1000), and CD81 (Abcam, ab79559, 1:1000) were chosen as EV-positive markers. Calnexin (Cell Signaling Technology, #2433, 1:1000) was chosen as an EV-negative marker, and apolipoprotein AI (Abcam, ab7613, 1:1000) was chosen as a positive marker of plasma. The size distribution and particle concentration of the EVs were analyzed using a ZetaView PMX120 nanoparticle tracking analysis (NTA) instrument (Particle Metrix, Inning am Ammersee, Germany). For each measurement, five consecutive NTA videos were captured across all 11 positions at room temperature. The analysis parameters were set as follows: sensitivity = 75, shutter speed = 75, minimum brightness = 20, and minimum detectable particle size = 5 nm. The morphology of the EVs was examined by transmission electron microscopy (TEM, model HT7800, Hitachi, Ltd., Tokyo, Japan). Specifically, 10 μL of diluted EV solution was applied to a carbon-coated copper grid and negatively stained with 2% phosphotungstic acid solution. Following air drying at ambient temperature, the grids were imaged by TEM at 80 kV .

4.4. EV RNA Extraction and miRNA Sequencing

Total RNA, including small RNAs, was extracted from EVs using the miRNeasy kit (Qiagen, Hilden, Germany) according to the manufacturer’s instructions . The miRNA library was constructed using the NEBNext Multiplex Small RNA Library Prep Set for Illumina (catalog #E730, New England Biolabs) according to the manufacturer’s instructions. Unique molecular identifiers provided by Seqhealth Technology Co., Ltd. were used to label the pre-amplified small RNA molecules. The RNA library was purified by 6% polyacrylamide gel electrophoresis. Library quantification was performed using a Qubit 3 fluorometer (Invitrogen, catalog# Q33216) with the Qubit dsDNA HS Assay Kit (Invitrogen, catalog# Q32854). Library quality was assessed using a Qsep100 bio-fragment analyzer (Bioptic Inc., New Taipei City, Taiwan). The library was sequenced on a NovaSeq 6000 sequencer (Illumina) in PE150 mode. The raw sequencing data were filtered to remove low-quality reads using the FASTX-Toolkit (version 0.0.13.2), and adaptor sequences were trimmed using cutadapt (version 1.15). Processed reads were then deduplicated to minimize duplication bias. For the miRNA sequencing data analysis, the clean reads were aligned against the Silva, GtRNAdb, Rfam, and Repbase databases using Bowtie. This step filtered out ribosomal RNA (rRNA), transfer RNA (tRNA), small nuclear RNA (snRNA), small nucleolar RNA (snoRNA), other non-coding RNAs (ncRNAs), and repeats. The remaining sequences were then compared against known miRNAs from miRBase and the human genome (GRCh38) to identify both known and novel predicted miRNAs. Read counts for each miRNA were extracted from the mapping results, and transcripts per million (TPM) values were calculated. Comparisons between the two sets of replicate samples were conducted using the limma R package .
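A minimal sketch of this differential-expression step with limma is shown below, assuming a paired design (the same five subjects at REST and after one session) and the screening thresholds given in Section 4.6; the object names, simulated input matrix, and design are illustrative, not the authors' exact script.

```r
library(limma)

# Placeholder TPM matrix (simulated, for illustration only):
# rows = miRNAs, columns = 10 samples (5 subjects at REST, same 5 after CAT).
set.seed(1)
tpm <- matrix(2^rnorm(200 * 10, mean = 6), nrow = 200,
              dimnames = list(paste0("miR_", 1:200), NULL))

subject   <- factor(rep(1:5, times = 2))
condition <- factor(rep(c("REST", "CAT"), each = 5), levels = c("REST", "CAT"))
design    <- model.matrix(~ subject + condition)  # paired comparison

fit <- lmFit(log2(tpm + 1), design)
fit <- eBayes(fit, trend = TRUE)   # limma-trend, suited to log-abundance data
res <- topTable(fit, coef = "conditionCAT", number = Inf)

# Apply the screening criteria from Section 4.6
de_mirnas <- subset(res, abs(logFC) >= 0.5 & P.Value <= 0.05)
```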
4.5. Protein Extraction and Proteomic Profiling of EVs

EV samples were lysed using RIPA buffer (catalog# 89901, Thermo Scientific, Waltham, MA, USA) supplemented with Halt™ protease inhibitor cocktail (catalog# 87785, Thermo Scientific, Waltham, MA, USA), followed by extensive sonication in an ice bath. The lysate was then centrifuged at 20,000× g for 20 min at 4 °C, and the supernatant was carefully collected and transferred to a sterile EP tube. Next, the samples were reduced with 10 mM DTT for 1 h at 56 °C and subsequently alkylated with iodoacetamide for 1 h at room temperature in the dark. The samples were then mixed with 4 volumes of acetone and incubated at −20 °C for 2 h. After centrifugation, the resulting pellet was washed with cold acetone and solubilized in 0.1 M TEAB containing 6 M urea. Protein concentration was determined using the Pierce™ BCA Protein Assay Kit (catalog# 23225, Thermo Scientific, Waltham, MA, USA). For LC-MS/MS analysis, the lyophilized samples were dissolved in 0.1% formic acid (solvent A) and injected into a C18 Nano-Trap column. Peptides were separated on an analytical column using a mobile phase of 0.1% formic acid in 80% acetonitrile (solvent B), with the concentration of solvent B increasing gradually from 6% to 100% over 60 min at a constant flow rate of 600 nL/min. The separated peptides were then introduced through a Nanospray Flex ESI source at a spray voltage of 2.3 kV and analyzed on an Orbitrap Exploris 480 mass spectrometer (Thermo Fisher, Waltham, MA, USA). The raw mass spectrometry data were searched against the UniProt database. In this search, carbamidomethylation of cysteine was set as a fixed modification, while methionine oxidation (M) and N-terminal acetylation were set as variable modifications. Label-free protein quantification was performed using Proteome Discoverer software version 2.2.

4.6. Bioinformatic Analysis

The screening criteria for the DE miRNAs and proteins were |log2(FC)| ≥ 0.5 and p value ≤ 0.05. The miRNA targets were predicted using RNAhybrid and miRanda . A Venn diagram was generated to visualize the overlapping target genes. The protein–protein interaction (PPI) network was constructed from the STRING database, which includes predicted and experimentally verified protein interactions. Subsequent analysis was performed using Cytoscape, which also facilitated the identification of hub proteins within the network. The TAM 2.0 database was used to identify miRNA clusters . Signal peptides were identified using SignalP 5.0 , and protein subcellular localization was predicted using Hum-mPLoc 3.0 . Tissue enrichment analysis of miRNAs and proteins was performed using the Tissue Atlas and the Human Protein Atlas, respectively. The significance A/B method was used to calculate the significance of differences between samples . All bioinformatic calculations were processed in R (version 4.3.0). Repeated samples between groups were analyzed using the limma R package. Heatmaps were created using the ComplexHeatmap package (v. 2.14.0), correlations were calculated with the corrplot package (v. 0.95), volcano plots were created using the ggplot2 package (v. 3.5.0), and Sankey diagrams were created using the ggalluvial package (v. 0.12.5). All analysis tools used in this study are summarized in .

4.7. GO and KEGG Pathway Enrichment Analysis

Functional annotation and pathway enrichment analyses were conducted using the clusterProfiler package, with Gene Ontology (GO) terms and Kyoto Encyclopedia of Genes and Genomes (KEGG) pathways as reference datasets.
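A minimal clusterProfiler sketch for this step is given below; the input gene vector (a few genes named in this paper, used purely for illustration), the ID conversion, and the cutoffs are assumptions rather than the authors' exact parameters, and enrichKEGG requires internet access.

```r
library(clusterProfiler)
library(org.Hs.eg.db)

# Illustrative input: a handful of gene symbols mentioned in this study,
# standing in for the full list of predicted DE miRNA targets.
target_genes <- c("POMC", "GLP1R", "TACR1", "CHRNG", "KLF15", "PPARD")

ids <- bitr(target_genes, fromType = "SYMBOL", toType = "ENTREZID",
            OrgDb = org.Hs.eg.db)

# GO biological-process enrichment
ego <- enrichGO(gene = ids$ENTREZID, OrgDb = org.Hs.eg.db, ont = "BP",
                pAdjustMethod = "BH", qvalueCutoff = 0.05, readable = TRUE)

# KEGG pathway enrichment (human, "hsa")
ekegg <- enrichKEGG(gene = ids$ENTREZID, organism = "hsa")

head(as.data.frame(ego))   # inspect the top enriched GO terms
```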
4.8. Statistical Analysis

Statistical analyses were conducted with GraphPad Prism 8.0 (GraphPad Software, San Diego, CA, USA). Each EV characterization experiment was replicated at least three times. Results are presented as mean ± standard error of the mean (SEM) and were compared using Student’s t-test; p < 0.05 was considered statistically significant.
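The comparisons above were run in GraphPad Prism; an equivalent paired analysis in R, with the SEM summary used throughout the paper, would look like the sketch below (the two concentration vectors are illustrative placeholders, not measured data).

```r
# Standard error of the mean, as reported throughout the paper
sem <- function(x) sd(x) / sqrt(length(x))

# Illustrative placeholder values (not measured data): EV particle
# concentrations for the same five participants at REST and after HIIT.
rest_conc <- c(1.0, 1.2, 0.9, 1.1, 1.0) * 1e10   # particles/mL
hiit_conc <- c(1.3, 1.5, 1.1, 1.4, 1.2) * 1e10

sem(rest_conc)                                   # SEM of the REST group
res <- t.test(hiit_conc, rest_conc, paired = TRUE)
res$p.value < 0.05   # the significance threshold used in the study
```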
In total, five healthy male volunteers were enrolled in the study. The inclusion criteria were as follows: (1) 18–65 years of age; (2) body mass index (BMI) values between 18 and 28 kg/m 2 ; (3) more than 3 h of physical activity per week; (4) acknowledgment of informed consent. The exclusion criteria were as follows: (1) smokers; (2) body weight change >5 kg in 6 months; (3) unsuitable for physical training (heart disease, respiratory disorders, or any conditions that could be aggravated by exercise); (4) currently taking medication or having a history of medication such as steroids, beta-blockers, or anticoagulants. Before the formal experiment, the volunteers underwent a thorough physical examination (height, weight, body fat percentage, heart rate, and blood analysis) and proper adaptive training for the experimental protocol to ensure that they were able to complete the experiment. The participants refrained from exercise 24 h prior to the test to ensure the integrity and accuracy of the results. All volunteers had the same breakfast, and all tests started at 9:00 a.m. The volunteers successively completed both HIIT and CAT under the supervision of a professional coach, and the interval between each type of exercise was 7 days. Two exercise types have the same total time. Each training session was initiated with a brief 2 min of dynamic stretching to warm-up, followed by 20 min of cycling consisting of periods of 2 min at 80–95% maximal heart rate (HRmax) separated by 2 min of active recovery for the HIIT group or 20 min of cycling at 60–80% of HRmax for the CAT group. A real-time heart rate monitoring system was continuously used during each training session. An HRmax was estimated using the age-predicted equation of 220–age . The participants were instructed to continue their regular physical activities and eating habits throughout the intervention period ( A). Blood samples were collected at rest or immediately after each training session for further analysis. Informed consents were obtained from all participants, and the experimental procedures were approved by the Ethics Committee of the West China Hospital of Sichuan University (approval No. 2022629).
Blood was collected in heparin-coated blood collection tubes (avoiding excessive agitation) and immediately centrifuged at 1600× g for 10 min at RT. Afterward, the supernatant was carefully collected from the top down with a pipette, ensuring that a specified amount of the supernatant was left on top of the pellet . Two milliliters of collected supernatant was centrifuged at 10,000× g for 30 min at 4 °C in a fixed-angle rotor (model 220.78, Hermle, Wehingen, Germany), followed by two washes with iced PBS to eliminate soluble proteins. The obtained pellet was resuspended in 1.5 mL of iced PBS and filtered through 0.2 μm syringe filters (Millex-GP; Merck Millipore, Darmstadt, Germany). Then, the final volume of the filtrate was top-up with iced PBS to 1.5 mL prior to centrifugation at 47,000 rpm [RCF (average) 98,963, RCF (maximum) 130,000, k-factor 90.4] for 2 h at 4 °C in a Beckman TLA-55 rotor (Beckman Coulter, Krefeld, Germany). Finally, the pellets were resuspended in iced PBS, aliquoted in Eppendorf Polyallomer tubes, and stored in a −80 °C freezer, with care taken to avoid repeated freeze–thaw cycles during analysis.
Specific EV markers and the proper controls were analyzed via Western blotting, as previously described . TSG101 (Cell Signaling Technology, #72312, 1:1000), CD9 (Abcam, ab307085, 1:1000), and CD81 (Abcam, ab79559, 1:1000) were chosen as EV-positive markers. Calnexin (Cell Signaling Technology, #2433, 1:1000) was chosen as an EV-negative marker, and apolipoprotein AI (Abcam, ab7613, 1:1000) was chosen as a positive marker of plasma. The size distribution and particle concentration of the EVs were analyzed using nanoparticle tracking analysis (NTA) instrument ZetaView PMX120 (Particle Metrix, Inning am Ammersee, Germany). For each measurement, five consecutive NTA videos were captured across all 11 positions at room temperature. The analysis parameters were set as follows: sensitivity = 75, shutter speed = 75, minimum brightness = 20, and minimum detectable particle size = 5 nm. The morphology of the EVs was examined by transmission electron microscopy (TEM, model HT7800, Hitachi, Ltd., Tokyo, Japan). Specifically, 10 μL of a diluted EV solution was applied to a carbon-supported copper grid and subsequently subjected to negative staining with a 2% phosphotungstic acid solution. Following air drying at ambient temperature, the grids were analyzed using TEM at a voltage of 80 kV .
Total RNA, including small RNAs, was extracted from EVs using the miRNeasy kit (Qiagen, Hilden, Germany) according to the manufacturer’s instructions . The miRNA library was constructed using the NEBNext Multiplex Small RNA Library Prep Set for Illumina (catalog #E730, New England Biolabs), according to the manufacturer’s instructions. A unique molecular identifier provided by Seqhealth Technology Co., LTD was utilized to label the pre-amplified small RNA molecules. The RNA library was purified through 6% polyacrylamide gel electrophoresis. Library quantification was performed using a QubitTM3 fluorometer (Invitrogen, catalog# Q33216) along with the Qubit dsDNA HS Assay Kit (Invitrogen, catalog # Q32854). The quality of the library was assessed using the Qsep100TM bio-fragment analyzer (Bioptic Inc., New Taipei City, Taiwan, Changzhou, China). The RNA library was sequenced on a Novaseq 6000 sequencer (Illumina) with a PE150 model. The raw sequencing data were filtered to remove low-quality reads using the FASTX-Toolkit (version 0.0.13.2), and the adaptor sequences were trimmed using cutadapt (version 1.15). Processed reads were then treated to minimize duplication bias. For the miRNA sequencing data analysis, the clean read sequences were aligned against the Silva, GtRNAdb, Rfam, and Repbase databases using Bowtie software. This process served to filter out ribosomal RNA (rRNA), transfer RNA (tRNA), small nuclear RNA (snRNA), small nucleolar RNA (snoRNA), and other non-coding RNAs (ncRNAs), as well as repeats. The sequences that remained after filtering were then compared to known miRNAs from miRbase and the Human Genome (GRCh38) to identify both known and predicted novel miRNAs. Read counts for each miRNA were extracted from the mapping results, and transcripts per million (TPM) were calculated. Comparison between the two sets of replicate samples was conducted using the limma R package .
EV samples were lysed using RIPA buffer (catalog# 89901, Thermo Scientific, Waltham, MA, USA) supplemented with Halt™ protease inhibitor mixture (catalog# 87785, Thermo Scientific, Waltham, MA, USA), followed by extensive sonication in an ice bath. The lysate was then centrifuged at 20,000× g for 20 min at 4 °C, and the supernatant was carefully collected and transferred to a sterile EP tube. Next, the samples were reduced with 10 mM DTT for 1 h at 56 °C and subsequently alkylated with iodoacetamide for 1 h at room temperature in the dark. The samples were then mixed with 4 volumes of acetone and incubated at −20 °C for 2 h. After centrifugation, the resulting pellet was washed with cold acetone and solubilized in 0.1 M TEAB containing 6 M urea. The protein concentration of the samples was determined using the Pierce™ BCA Protein Assay Kit (catalog# 23225, Thermo Scientific, Waltham, MA, USA). For LC-MS/MS analysis, the lyophilized samples were dissolved in a 0.1% formic acid solution (solvent A). The dissolved samples were then injected onto a C18 Nano-Trap column. Peptide separation occurred within an analytical column using a mobile phase consisting of 0.1% formic acid in 80% acetonitrile (solvent B). The elution process involved gradually increasing the concentration of solvent B from 6% to 100% over a 60 min period while maintaining a constant flow rate of 600 nL/min. The separated peptides were subsequently injected into a Nanospray Flex ESI source with a spray voltage of 2.3 kV and analyzed using an Orbitrap Exploris 480 mass spectrometer (Thermo Fisher, Waltham, MA, USA). The raw data from the mass spectrometry assays were searched against the UniProt database. In this search, carbamidomethylation of cysteine was set as a fixed modification, while methionine oxidation (M) and N-terminal acetylation were designated as variable modifications. Label-free protein quantification was performed using Proteome Discoverer software version 2.2.
The screening criteria for the DE miRNAs and proteins were |log2(FC)| ≥ 0.5 and p-value ≤ 0.05 (a toy filtering sketch follows this paragraph). The miRNA targets were predicted using RNAhybrid and miRanda . A Venn diagram was generated to visualize the overlapping target genes. The protein–protein interaction (PPI) network was constructed based on the STRING database, which includes predicted and experimentally verified protein interactions. Subsequent analysis was performed using Cytoscape software, which also facilitated the identification of hub proteins within the network. The TAM 2.0 database was used to determine the miRNA clusters . Signal peptides of proteins were identified using SignalP 5.0 , and protein subcellular localization was predicted with Hum-mPLoc 3.0 . Tissue enrichment analyses of miRNAs and proteins were performed using the Tissue Atlas and the Human Protein Atlas, respectively. The significance A/B method was used to calculate the significance of differences between samples . All bioinformatics calculations were further processed using R software (version 4.3.0). Repeated samples between groups were analyzed using the limma R package. Heatmaps were created using the ComplexHeatmap package (v. 2.14.0), correlations were calculated with the corrplot package (v. 0.95), volcano plots were created using the ggplot2 package (v. 3.5.0), and Sankey diagrams were created using the ggalluvial package (v. 0.12.5). All analysis tools used in this study are summarized in .
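A minimal sketch of the screening step (the column names here are assumptions for illustration, not the actual limma or Proteome Discoverer output schema):

```python
import pandas as pd

def screen_de(results: pd.DataFrame,
              lfc_col: str = "log2FC", p_col: str = "p_value",
              lfc_cut: float = 0.5, p_cut: float = 0.05) -> pd.DataFrame:
    """Keep features satisfying |log2(FC)| >= 0.5 and p-value <= 0.05,
    and annotate the direction of regulation."""
    hits = results[(results[lfc_col].abs() >= lfc_cut)
                   & (results[p_col] <= p_cut)].copy()
    hits["direction"] = hits[lfc_col].apply(lambda x: "up" if x > 0 else "down")
    return hits
```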
Functional annotation and pathway enrichment analyses were conducted using the clusterProfiler package, with Gene Ontology (GO) terms and Kyoto Encyclopedia of Genes and Genomes (KEGG) pathways as reference datasets.
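For intuition, the over-representation test underlying such GO/KEGG analyses is, at its core, a hypergeometric test; a minimal sketch with toy numbers (not values from this study):

```python
from scipy.stats import hypergeom

# Toy numbers: N background genes, K of them annotated to a given term,
# n genes in the DE list, k of those annotated to the term.
N, K, n, k = 20_000, 150, 300, 12
p = hypergeom.sf(k - 1, N, K, n)  # P(X >= k), i.e., enrichment p-value
print(f"enrichment p-value = {p:.3e}")
```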
Statistical analyses were conducted with GraphPad Prism 8.0 (GraphPad Software, San Diego, CA, USA). Each EV characterization experiment was replicated a minimum of three times. The results are presented as the mean ± standard error of the mean (SEM) and were compared using Student's t-test; p < 0.05 was considered statistically significant.
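For readers reproducing the comparison outside Prism, the same summary statistics and test are available in SciPy; a minimal sketch with hypothetical replicate values (not data from this study):

```python
import numpy as np
from scipy import stats

cat = np.array([2.1e10, 2.4e10, 1.9e10])    # hypothetical replicates, group 1
hiit = np.array([3.0e10, 3.4e10, 2.8e10])   # hypothetical replicates, group 2

print(f"CAT: mean = {cat.mean():.2e}, SEM = {stats.sem(cat):.2e}")
t, p = stats.ttest_ind(cat, hiit)           # two-sided Student's t-test
print(f"t = {t:.2f}, p = {p:.3f}")
```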
Causal inference from observational data in neurosurgical studies: a mini-review and tutorial

Driven by evidence-based medicine, clinical neurosurgical research has increasingly focused on investigating causation, not just association. Despite the widely held belief that Randomized Controlled Trials (RCTs) represent the gold standard for inferring causal effects, recent studies have emphasized their limitations in neurosurgical research, such as ethical issues and high cost. These issues highlight the importance of, and need for, a paradigm shift toward investigating causality from non-randomized and observational data. For instance, studies have explored the causal effects of early surgical interventions for spinal cord injuries and of secondary glioblastoma multiforme resection in non-randomized cohorts. A common challenge in these observational studies is to address selection and confounding biases, which can distort the observed causal relationship between treatment and outcome. Causal inference aims to account for these biases and disentangle causation from association. A historical milestone occurred when Sir Austin Bradford Hill, in an early effort to formalize an understanding of causal relationships, proposed nine criteria for identifying causal effects (Table ). Rather than imposing a rigid set of guidelines, Hill's criteria were intended to serve as a flexible framework to aid in the discovery of causal effects, an intellectual game. However, concerns have been raised about the implausibility, vagueness, and lack of a strict definition of causal effects in Hill's criteria. These criticisms cast doubt on Hill's criteria, suggesting that, alternatively, more modern approaches could be used. Since Hill's criteria, the field of causal inference has seen numerous developments, largely driven by the potential outcome framework, also known as the Rubin Causal Model. This framework defined causal effects as the expected difference between potential outcomes for each treatment assignment. However, only one potential outcome can exist and be observed for each patient; for example, if a patient undergoes resection, the outcome of non-resection for the same patient under the same conditions (called, in epidemiological terms, the counterfactual) is purely hypothetical. The fundamental challenge of causal inference, from this perspective, is the absence of outcomes for the unreceived treatment, which must be rigorously addressed through quantitative techniques. To address this challenge, as well as common confounding and selection bias, this paper aims to provide researchers with a guide for applying causal inference techniques in neurosurgery with observational data, thereby enhancing the quality of causality-based decision-making.

To properly employ observational data for causality investigation, it is crucial to identify a specific causal question. Understanding the causal effects of treatments on the targeted outcome can greatly aid clinical decision-making. For instance, neurosurgeons may want to investigate whether early surgery will reduce the risk of long-term complications after spinal cord injury, or the causal effects of a second resection on survival probability in recurrent glioblastoma. In addition to treatment effects, researchers may also wish to investigate the causal effects of multiple factors, such as pre-injury, injury-related, and clinical variables in traumatic brain injury.
In this case, Pirracchio et al. identified two significant factors, a history of hepatic disease and a history of psychiatric disease, both of which are associated with a poorer functional outcome. Furthermore, causal questions can be extended to uncover causal relationships among closely related factors. Moreover, exploring effect modification, whereby treatment effects differ across subpopulations, is of paramount importance, especially when anticipating population heterogeneity. The causal questions will further guide the selection of covariates and the determination of causal methods. In this study, we concentrate on estimating treatment effects due to its prevalence in the scientific literature and its potential for future studies.

The target trial emulation principle in causal inference is to simulate a hypothetical RCT when an actual RCT is not feasible due to ethical, logistical, or cost issues. After specifying the clinical question, as in an RCT, defining a targeted trial involves specifying eligibility criteria for participants, treatment definitions, causal estimands, follow-up duration, baseline time point, and the outcome of interest. The observational data are then used to emulate the targeted trial by finding eligible individuals, assuming randomization, following them from trial start to end, and conducting an analysis adjusted for confounding and selection bias. The treatment assignment strategy is crucial, as randomness is virtually impossible in observational studies given that control over treatment assignments is limited. To improve this situation, causal inference techniques can be utilized to derive the desired causal effects. The success of the emulation relies on assessing the plausibility of causal assumptions and examining the sensitivity of the results to potential biases or unmeasured confounders.

The Bradford Hill criteria, consisting of nine criteria (Table ), have been widely employed to evaluate the evidence for causal relationships between a presumed exposure and an outcome of interest. However, advances in the biological, etiological, and statistical domains necessitate re-evaluating these criteria to enable causal inferences. As Fedak et al. stated, statistical developments allow investigators to test the strength of association from both the magnitude and the statistical significance perspectives. Additionally, the suitability and interpretability of the analytical model employed should be given greater consideration. Despite the Bradford Hill criteria providing a systematic framework, they may fall short in addressing issues such as confounding and selection bias, thereby limiting their applicability in current real-world clinical settings (Table ). Aside from the Bradford Hill criteria, other causal inference frameworks have been proposed, as evidenced in the essay by Olsen and Jensen, who argue in favor of a so-called "consequence criterion": implementing a vaccine for a deadly disease may be warranted even in the presence of less-than-ideal evidence. Shepherd's criteria can help evaluate whether an exposure is teratogenic or not. Also in this case a consequence criterion applies: even if the evidence is limited, if the evidence is correct then the consequences are grave. In response to the limitations of the Bradford Hill criteria, the Potential Outcome (PO) framework and Directed Acyclic Graphs (DAGs) have emerged as valuable tools for establishing causal relationships.
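Before turning to estimation, it helps to state the estimand explicitly. Under the PO framework, the average treatment effect (ATE) and its identification under the standard assumptions take the following conventional form (a textbook formulation, added here for reference rather than quoted from this tutorial):

```latex
% Average treatment effect over the population:
\mathrm{ATE} = \mathbb{E}\big[\,Y(1) - Y(0)\,\big]
% Identified under consistency, conditional exchangeability
% (Y(t) \perp T \mid X) and positivity (0 < P(T = 1 \mid X) < 1) as:
\mathrm{ATE} = \mathbb{E}_{X}\big[\,\mathbb{E}[Y \mid T = 1, X] - \mathbb{E}[Y \mid T = 0, X]\,\big]
```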
The PO framework and DAGs, as complementary approaches, offer a more comprehensive understanding of causation by visualizing causal pathways and accounting for confounding variables. Although other causal inference methods exist, such as do-calculus and Bayesian decision analysis, we outline the PO framework and DAG in the supplementary material. To produce reliable ATE estimates, various methods such as standardization, propensity score matching, and targeted maximum likelihood estimation can be employed, as detailed in this section. These methods aim to establish comparability between the exposure and non-exposure groups, thereby reducing the impact of confounding or other biases. It is important to note that these methods still rely on corresponding assumptions (Table ), and violating these assumptions can result in biased estimates.

Standardization

Standardization involves calculating expected outcomes across strata divided by confounding variables, with the total dataset as a reference, assuming the dataset's distribution is consistent with the actual population. When the number of confounders and their categories is manageable, and the sample size is sufficiently large, non-parametric standardization can be useful. This involves computing mean outcomes in each subgroup defined by the confounders—a form of "within-stratum" adjustment. The overall treatment effect can then be obtained using methods such as a weighted average based on the frequency of each confounder subgroup, or meta-analysis. When there are many confounders, or some have multiple levels (or some are continuous variables), parametric standardization using techniques such as regression should be considered.

Propensity score-based methods

The propensity score (S), defined as the conditional probability of assignment to a particular treatment given a group of observed variables, i.e., S_i = P(T_i | X_i), is a classical tool to address selection bias. In practice, true propensity scores are rarely known outside of randomized experiments and thus need to be estimated by models such as logistic regression and generalized boosted models. The main applications of the propensity score include model adjustment, stratification, weighting, and matching. The propensity score can be directly added to the outcome prediction model to adjust for confounders, or used to stratify the data for subgroup analysis. The other two applications, weighting and matching, are explained below. Propensity score weighting assigns each individual a weight based on their propensity score, and the ATE can be estimated using statistical methods like weighted regression. The idea is to create a "pseudo-population" in which the distribution of covariates is balanced between treated and control groups, mimicking the conditions of a randomized controlled trial. There are two main approaches to constructing weights based on propensity scores: (a) inverse probability of treatment weighting (IPTW), where individuals are weighted by 1/S if they received treatment, and by 1/(1 − S) if they did not; (b) overlap weighting, where individuals are weighted by 1 − S if they received treatment, and directly by S if they did not. The latter puts more emphasis on individuals with propensity scores close to 0.5, where the treated and control groups overlap the most.
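As an illustration of the weighting idea, a minimal IPTW sketch in Python is shown below (illustrative assumptions: a plain logistic model for the propensity score and Hájek-style weighted means; this is not the tutorial's supplementary code):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def iptw_ate(X, t, y):
    """Inverse probability of treatment weighting. X: confounder matrix,
    t: binary treatment indicator, y: outcome. Treated units receive
    weight 1/S and controls 1/(1 - S), where S is the estimated
    propensity score."""
    s = LogisticRegression(max_iter=1000).fit(X, t).predict_proba(X)[:, 1]
    w_treated = t / s
    w_control = (1 - t) / (1 - s)
    mu1 = np.sum(w_treated * y) / np.sum(w_treated)
    mu0 = np.sum(w_control * y) / np.sum(w_control)
    return mu1 - mu0
```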
Propensity score matching, as one of the matching methods, relies on matching treated and control units with similar propensity scores (a toy sketch of nearest-neighbor matching follows the examples below). This balances the distribution of confounders between the treatment and control groups, removing the effects of treatment assignment on covariates. To perform matching based on the propensity score, there are several methods, such as nearest-neighbor matching and caliper matching, which differ in how they prioritize balance between the groups versus overall sample size. Nevertheless, in certain cases unobserved confounders may become imbalanced when matching on observed confounders, which makes propensity score matching only as powerful as the dataset and the confounders it provides.

Targeted maximum likelihood estimation (TMLE)

Targeted Maximum Likelihood Estimation (TMLE) is a semi-parametric method used in causal inference to estimate causal effects in the presence of confounding variables. This method is particularly suitable for complex observational data, since it combines the strengths of machine learning and statistical techniques to provide robust and efficient estimates of treatment effects. TMLE is a doubly robust, maximum-likelihood–based approach, where the robustness is achieved by modelling both the treatment assignment mechanism (e.g., propensity scores) and the outcome mechanism (e.g., regression of the outcome on the treatment and covariates). Initially, these two models are built separately. Then a "clever covariate" is constructed to capture the relationship between these two mechanisms, upon which the initial outcome regression model is updated to a targeted version. This method uniquely combines machine learning and logistic regression to create a more robust confounder adjustment.

Examples in neurosurgical studies

Propensity score matching is a commonly used approach in neurosurgical studies. Balas et al. applied this approach to estimate the causal effects of early surgery on the risk of complications for acute traumatic thoracolumbar spinal cord injury and for complete cervical spinal cord injury, respectively. Fariña Nuñez et al. also conducted propensity score matching and validated the matching results using t-Distributed Stochastic Neighbor Embedding (t-SNE). Based on the matched cohort, a Cox proportional-hazards regression model with re-resection as a time-dependent covariate was computed. Koenecke et al. applied both propensity score trimming (excluding individuals with extreme propensity values) in combination with IPTW, and propensity score matching, to investigate the causal effects of alpha-1 adrenergic receptor (⍺1-AR) antagonists on preventing hyperinflammation and death in patients with acute respiratory distress and pneumonia. TMLE and its extension, collaborative TMLE (cTMLE), were applied by Pirracchio et al. to compute the causal importance ranking of variables towards the disability score at three months post-injury.
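The nearest-neighbor idea referenced above can be sketched as follows (a greedy toy version that matches with replacement, yielding an ATT-style contrast; it is not the optimal full matching used later in this tutorial):

```python
import numpy as np

def nn_match_effect(ps, t, y, caliper=0.05):
    """Match each treated unit to the control with the closest propensity
    score; drop pairs farther apart than the caliper; average the
    within-pair outcome differences. Matching is with replacement, so
    a control unit may be reused."""
    treated = np.flatnonzero(t == 1)
    controls = np.flatnonzero(t == 0)
    diffs = []
    for i in treated:
        j = controls[np.argmin(np.abs(ps[controls] - ps[i]))]
        if abs(ps[j] - ps[i]) <= caliper:  # caliper may discard some pairs
            diffs.append(y[i] - y[j])
    return float(np.mean(diffs))
```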
Despite addressing potential confounding and selection biases through the methods introduced above, the causal estimate may still be inaccurate due to factors such as unmeasured confounders or measurement errors. Therefore, it is essential to consider the limitations and potential sources of bias when interpreting causal conclusions drawn from an observational study. Multiple statistical methods should be conducted to evaluate the robustness of the results. Furthermore, incorporating relevant biological and etiological knowledge, conducting replication studies, and performing sensitivity analysis can provide additional evidence for or against causal conclusions, ultimately strengthening the validity of the findings.

To demonstrate the applications of causal inference methods, we designed a simulation setting that satisfied the assumptions of consistency, conditional exchangeability, and positivity. The setting included a binary outcome (Y), a binary treatment (T), two confounders (binary L1 and continuous L2) influencing both the treatment and the outcome, and outcome-related covariates (W1, W2, and W3) independent of treatment assignment; when conditioning on W1, W3 and Y are independent. Figure illustrates the relationship between the variables and the outcome. We adjusted for the confounders L1 and L2 to ensure the causal relationship between treatment T and outcome Y, and included the covariates W1 and W2 for modeling. W3 was excluded due to its conditional independence from Y in the presence of W1.
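A self-contained sketch of such a data-generating process, together with the naive contrast and parametric standardization, is given below (all coefficients are illustrative assumptions rather than the tutorial's actual simulation parameters, so the implied true ATE differs from the one reported next):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(42)
n = 10_000

L1 = rng.binomial(1, 0.4, n)             # binary confounder
L2 = rng.normal(0.0, 1.0, n)             # continuous confounder
W1 = rng.normal(0.0, 1.0, n)             # outcome-related covariates,
W2 = rng.binomial(1, 0.5, n)             # independent of treatment
W3 = 0.8 * W1 + rng.normal(0.0, 1.0, n)  # tied to Y only through W1

# Treatment depends on the confounders only
T = rng.binomial(1, sigmoid(-0.5 + 0.8 * L1 + 0.6 * L2))
# Outcome depends on treatment, confounders, and W1/W2 (not W3 directly)
Y = rng.binomial(1, sigmoid(-1.0 + 0.9 * T + 0.7 * L1 + 0.5 * L2
                            + 0.4 * W1 + 0.3 * W2))

# Naive contrast: difference in mean outcomes, ignoring confounding
naive = Y[T == 1].mean() - Y[T == 0].mean()

# Parametric standardization (g-formula): model Y on (T, L1, L2, W1, W2),
# then average the predictions with T set to 1 versus 0 for everyone;
# W3 is deliberately excluded from the adjustment model.
X = np.column_stack([T, L1, L2, W1, W2])
model = LogisticRegression(max_iter=1000).fit(X, Y)
X1 = X.copy(); X1[:, 0] = 1
X0 = X.copy(); X0[:, 0] = 0
standardized = (model.predict_proba(X1)[:, 1]
                - model.predict_proba(X0)[:, 1]).mean()

print(f"naive = {naive:.3f}, standardized = {standardized:.3f}")
```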
It is crucial to approach the selection of covariates and the identification of confounding factors with the utmost caution in practice. We applied the aforementioned methods to estimate the ATE in our analysis: (1) standardization—specifically, parametric standardization, as there was a continuous confounder; (2) propensity score weighting via IPTW; (3) propensity score matching via optimal full matching, which supports the calculation of the ATE; and (4) TMLE. The baseline method for comparison is the "naive method", which directly computes the difference in the average outcomes between the two treatment subgroups. The objective was to obtain the ATE of treatment T on outcome Y. The implementation codes can be found in the supplementary material. The results for the estimated ATE by each method are presented in Fig. . The true ATE in this setting was 0.168. The "naive method" exhibited an overestimation of the ATE, whereas parametric standardization, the propensity score-based methods (weighting and matching), and TMLE yielded similar estimates, with standardization providing the most accurate estimate.

In this tutorial, we provide a concise yet insightful guide for researchers to identify causal relationships through observational studies. Researchers are encouraged to begin by clearly defining the specific causal questions they intend to investigate, conceptualizing a hypothetical randomized trial that aligns with their research question, and then designing the observational study with the target trial emulation approach. On top of Bradford Hill's criteria, we highlighted the potential outcome framework and DAGs as fundamental pillars of causal inference and introduced several commonly used causal inference methods. These statistical approaches can address various sources of bias, particularly confounding and selection bias, enabling researchers to obtain more robust and reliable causal conclusions to guide neurosurgical practice.

Thinking causal inference as grounds before study inception

When utilizing observational data for causality investigation, it is crucial to approach it with the same rigor as conducting an RCT, starting from the study setup phase. The core idea is to make the data from observational studies resemble data from a randomized experiment as closely as possible, as emphasized in Sect. . Some of the limitations of observational data can be solved by taking a proactive stance from the beginning. In observational studies, the ambiguity regarding treatment timing can be resolved by accurately recording the timing of treatment decisions. The issue of design and analysis mingling, which involves simultaneous access to covariates, treatment, and outcome, can be mitigated by isolating the outcome from the analysis until the treatment groups are sufficiently balanced. Another common limitation of observational studies is the lack of a pre-specified protocol for analysis. This can be addressed by establishing a rigorous protocol in advance and specifying the planned statistical methods and analysis steps. By implementing these proactive strategies, researchers can enhance the validity and reliability of causal inference from observational data, minimizing potential biases and limitations.

Choices along the causal inference process

To draw valid causal conclusions, it is essential to choose appropriate causal inference methods, considering the nature of the data, the research questions of interest, and the required assumptions of each method.
Particularly, in neurosurgical studies with rare diseases and multiple covariates, propensity score-based methods are often preferable to (parametric) standardization. In addition, methods based on the propensity score offer advantages in identifying and addressing positivity assumption violations. For example, observations with extreme scores may be excluded from the analysis to obtain more efficient estimates. However, discarding data may result in a loss of power and in differences between the sample and the target population. When researchers have greater confidence in correctly specifying the outcome model rather than the propensity model, standardization can be a superior choice. Conversely, if there is limited information about both the outcome model and the propensity model, TMLE can provide a robust estimation of causal quantities. Variable selection in both the propensity and outcome models is crucial in causal inference. Researchers should rely on both neurosurgical domain knowledge and statistical considerations to effectively identify potential confounding variables that require adjustment, and colliders that should not be adjusted for. It is advised to include all variables believed to be correlated with the outcome in the propensity model, while excluding variables related only to the exposure. For parametric standardization, advanced techniques, such as random forests, the Super Learner, and Gradient Boosting Machines (GBM), offer more accurate estimates of potential outcomes by effectively selecting important variables and capturing complex relationships and interactions among them. Missing data problems also threaten the validity of causal inference. The missing-data mechanisms, as described by Little and Rubin, are related to the degree of exposure and outcome dependency. When data are missing completely at random (MCAR), i.e., the missingness depends neither on the exposure nor on the outcome, a complete-case analysis that simply drops missing values can yield unbiased results. When the missingness depends on the exposure but not the outcome, defined as missing at random (MAR), multiple imputation can provide an unbiased causal estimate that accounts for the uncertainty introduced by imputation. Inverse probability weighting (IPW) can also address the missingness under the MAR assumption. Furthermore, sensitivity analysis is needed to evaluate the robustness of conclusions under each missing-data mechanism.

Causal inference in current neurosurgery

Despite being considered in some neurosurgical studies, as mentioned in Sect. , causal inference remains underutilized in this field. In addition to the investigation of treatment effects, which is the main focus of this study, causal inference can also contribute to identifying risk factors associated with neurosurgical conditions or surgical outcomes, providing researchers with insights into the modifiable risk factors that can be targeted to improve patient outcomes. Furthermore, mediation analysis in causal inference can be applied to investigate the mechanisms by which neurosurgical treatments or risk factors influence outcomes.

Future directions

While our primary focus is on estimating the Average Treatment Effect (ATE), it is imperative to acknowledge other estimands of interest. For instance, the Average Treatment Effect on the Treated (ATT) becomes relevant when evaluating the treatment's impact exclusively on those who received it. Notably, this review focused on fixed treatment comparisons without additional adjustments.
However, real clinical practice often involves individualizing and adjusting treatments based on patients' characteristics and evolving disease status when dealing with relapsing and chronic diseases. Dynamic treatment regimens (DTRs) are sequences of decision rules that can formalize adaptive disease management plans and guide healthcare providers on which treatment should be given to which subgroup of patients. Although the sequential multiple assignment randomized trial (SMART) serves as the gold standard for constructing optimal DTRs, longitudinal observational data can be leveraged in situations where randomization is not feasible, allowing for the evaluation of DTRs at a lower cost. However, when using observational data to estimate the treatment effects of each DTR, it is crucial to pay meticulous attention to time-varying confounders, as treatment assignment may vary based on patients' intermediate disease status and characteristics. Mahar et al. provided a comprehensive overview of relevant statistical methods, such as IPW, G-estimation, and Q-learning, for constructing optimal DTRs using observational longitudinal data. Readers can refer to Hernan and Robins for estimating various causal estimands and the corresponding statistical techniques in different contexts.
In this study, we emphasized the critical importance of causation in clinical research and provided a concise guide to identifying causal relationships through observational datasets. We strongly encourage clinical researchers to delve into the field of causal inference and appropriately apply these causal methods to enhance the quality of evidence and improve patient care.

Supplementary Material 1 (DOCX 18.0 KB)
Conflicts of Interest Among Cardiology Clinical Practice Guideline Authors in Japan

This analysis of publicly available payment data disclosed by pharmaceutical companies found that 94.4% of Japanese cardiology clinical guideline authors received personal payments from pharmaceutical companies, totaling >US $70.8 million from 2016 to 2020. Leading authors of these cardiology clinical guidelines received larger payments than nonleading authors. More stringent and transparent conflict of interest management strategies are needed for authors of cardiology clinical guidelines in Japan.

All data used in this study are available from the Yen For Docs database, managed by the Medical Governance Research Institute ( https://yenfordocs.jp/ ), and from each pharmaceutical company belonging to the Japan Pharmaceutical Manufacturers Association. Due to privacy restrictions on payment recipients, the data sets collected and analyzed during the current study are available from the corresponding author upon reasonable request. As this study was a retrospective analysis of publicly available data and met the definition of nonhuman subjects research, no institutional board review and approval were required. This study followed the Strengthening the Reporting of Observational Studies in Epidemiology guideline.

Using a publicly accessible payment database, this study assessed the extent of financial relationships between the pharmaceutical industry and authors of clinical practice guidelines (CPGs) for cardiovascular diseases in Japan. To comprehensively capture the financial relationships between cardiology CPG authors and pharmaceutical companies, this study included all authors of CPGs developed and published by the Japanese Circulation Society (JCS) from January 2015 to December 2022. This encompassed CPG chairs, development committee authors, review committee authors, systematic review committee authors, and writing supporting authors. We collected data on all authors of CPGs published from 2015 to the latest fiscal year (2022), as the payment database contained data on personal payments starting in 2016. Additionally, several international COI policies recommend that CPG authors should abstain from providing speaking and lecturing services sponsored by pharmaceutical companies for several years following CPG publication. Established in 1935, the JCS represents the preeminent professional organization for cardiologists and cardiovascular researchers in Japan, comprising >32 000 society members. The JCS is responsible for producing CPGs for cardiovascular diseases and board-certifying cardiologists in Japan. Subsequently, we extracted data on personal payments made by pharmaceutical companies to the CPG authors for the period between 2016 and 2020, using a publicly accessible payment database, consistent with methodologies used in previous studies. The database included information on speaking fees, consultancy payments, and writing compensation, which pharmaceutical companies provided to individual health care providers from 2016 to 2020, aligning with the data collection methods of prior studies. All pharmaceutical firms and their subsidiaries affiliated with the Japan Pharmaceutical Manufacturers Association (JPMA), the foremost trade organization in Japan's pharmaceutical sector, were mandated to disclose payments to individual health care providers with the providers' names.
These payments are for activities such as delivering lectures at industry-sponsored events, offering consultancy services, and creating manuscripts and pamphlets. However, other forms of payments, including royalties, ownership interests, travel and lodging fees, food and beverage fees, and grant and research payments, are not mandated to be disclosed in Japan and have not been disclosed by the companies at the individual provider level. This is in contrast to the United States, where the Physician Payments Sunshine Act mandates the disclosure of nearly all nonresearch, research, and ownership payments, making them accessible for review. The disclosed payment data, available on the companies' respective websites, have been voluntarily collated into a searchable database by an independent research organization since 2016. This database, in its most recent iteration, encompasses payment records from 2016 through 2020. We conducted a descriptive analysis of the extracted payment data, including the proportions of CPG authors who received payments and the median and mean of the payments. Furthermore, in line with international COI policies advocating that CPG chairs should be devoid of any financial COIs, we separately analyzed the payments made to CPG chairs. We examined the disparity in payment amounts between chair authors and nonchair authors using the Mann–Whitney U test, as the payments per author were not normally distributed. All statistical analyses were conducted using Python 3.9.12 (Python Software Foundation, Beaverton, OR) and Stata version 17.0 (StataCorp, College Station, TX). Given that this study involved a retrospective analysis of publicly available data and was designed as a nonhuman subjects study, institutional board review and informed consent were not required in Japan.

In our analysis, we identified 929 unique authors from 37 JCS CPGs that were eligible for the study. Among these authors, 275 (29.0%) contributed to the development of 2 or more CPGs. Notably, 877 authors (94.4%) received 1 or more personal payments from pharmaceutical companies between 2016 and 2020 (Table ). The total cumulative payments amounted to US $70 895 253, distributed across 67 618 individual payment transactions. The mean payment per author over this 5-year period was US $76 314 (SD: US $138 663), and the median payment was US $20 792 (interquartile range [IQR]: US $4262–US $76 998). Of the total US $58.3 million in payments, 85.2% (amounting to US $50.7 million) were for lecture compensations, and 10.5% (US $6.2 million) for consulting services. The annual payments varied, ranging from US $14.4 million to US $15.6 million between 2016 and 2019, but showed a decrease to US $11.5 million in 2020. Furthermore, we identified 44 CPG authors who served in the capacity of chair or vice chair for CPG development. All 44 of these chairs received personal payments from pharmaceutical companies within the same study period. The median payments to these chairs were significantly higher than those made to nonchair authors, with amounts of US $42 126 versus US $18 978 (P = 0.002 in the Mann–Whitney U test). The breakdown of payments to authors by specific guidelines is presented in Table . In all 37 eligible JCS CPGs, >80% of authors received personal payments over the 5 years.
The proportion of authors receiving personal payments ranged from 84% in the CPG on perioperative cardiovascular assessment and management for noncardiac surgery (published in 2022) to 100% in 13 different CPGs. The CPG on revascularization of stable coronary artery disease recorded the highest median payment amount at US $234 552 (interquartile range: US $63 801–US $309 956). This was followed by the CPGs on nonpharmacotherapy of cardiac arrhythmias (US $144 736), indication and management of pregnancy and delivery in women with heart disease (US $136 398), and management of peripheral arterial disease (US $104 253).

This analysis of personal payments to Japanese cardiology CPG authors, as disclosed by pharmaceutical companies, demonstrated that >US $70.8 million was paid to 94% of all Japanese cardiology CPG authors from 2016 to 2020. Notably, all CPG chairs and vice chairs received significantly higher payments than other authors. Moreover, each of the 37 eligible CPGs included in this analysis had >80% of its authors receiving personal payments, offering critical insights into the financial relationships between cardiology CPG authors and the pharmaceutical industry. First, our findings indicate that the total personal payments to all JCS CPG authors exceeded US $70.8 million over 5 years, ranging from US $11.5 million to US $15.6 million annually. These payments constituted >50% of the personal payments made to all board-certified cardiologists in Japan. A preliminary study reported that annual personal payments to all 15 048 cardiologists board certified by the JCS ranged from US $27.4 million to US $28.8 million between 2016 and 2019. Additionally, Purkayastha et al. assessed nonresearch payments (eg, speaking fees, consulting fees) to authors of CPGs developed by the American Heart Association/American College of Cardiology (AHA/ACC) from 2014 to 2020, finding that only 29% (169 out of 578 authors) received a total of US $16.8 million over 7 years. Hence, given that our study included only payments for speaking, consulting, and writing services, our analysis indicates that JCS CPG authors received at least 4.2 times more in nonresearch payments than AHA/ACC CPG authors, suggesting stronger financial ties of Japanese cardiology CPG authors to the pharmaceutical industry compared with their US counterparts. Second, this study found that >94% of CPG authors had financial relationships with the pharmaceutical industry, a higher percentage than reported in other countries. For instance, Dudum et al. noted that 80% of authors of AHA/ACC CPGs published between 2016 and 2017 received either research or nonresearch payments. Additionally, Purkayastha et al. reported that the majority of authors of AHA/ACC CPGs published after 2018 did not receive nonresearch payments from health care companies. The fact that the majority of authors received personal payments from the pharmaceutical industry in all JCS CPGs, and that all CPG chairs had substantial financial ties, clearly deviates from current international COI management policies for CPG development. The COI policy from the US National Academy of Medicine, which sets widely recognized COI management strategies for CPG development globally, recommends that health care organizations producing CPGs should predominantly assign experts free from COIs as authors and that CPG chairs should have no financial COIs with the health care industry.
Although some CPGs developed in the 2010s did not meet these recommendations, many recent studies have shown improvements in COI management strategies in CPGs developed after the late 2010s. Our study, however, has repeatedly demonstrated that nearly all Japanese CPG authors across specialties received significant payments for activities like delivering lectures and consulting services, leading to direct income. Although the authors acknowledge the importance of collaboration between physicians and the health care industry in improving patient care, it is imperative to develop trustworthy and evidence-based CPGs without establishing a group where >94% of the experts have substantial financial ties to the pharmaceutical industry.

Implications for Future Research and Policy Interventions

This study has illuminated the extensive financial relationships between CPG authors and the pharmaceutical industry in Japan. However, the effects of COIs on CPG recommendations, physicians' clinical practices, patient outcomes, and the trust and adherence of physicians and patients to these recommendations remain underexplored. Although individual COIs of CPG authors are documented, less attention has been given to institutional COIs at professional medical societies, universities, departments, and hospitals affiliated with CPG authors. A recent study reported that the JCS received a total of US $10.2 million in sponsorship, donations, and advertising fees from pharmaceutical companies affiliated with the JPMA from 2017 to 2021. Future research should investigate the impact of COIs, both individual and institutional, on clinical practice and patient outcomes. Furthermore, our findings suggest that COI management in Japanese cardiology CPGs does not meet international standards in terms of transparency and rigor. Despite some organizations having stringent COI policies for CPG development, instances of underdeclaration and inaccurate disclosure of COIs by some CPG authors have been reported in previous research. This raises questions about the effectiveness of even the most rigorous and transparent COI policies without a mechanism to verify the accuracy of self-declared COIs. In the United States, the Physician Payments Sunshine Act mandates that all pharmaceutical and medical device companies report any payments to physicians, both nonresearch and research-related, on the Open Payments Database. This legislation ensures that manufacturers report all research payments to physicians through third parties such as universities and teaching hospitals. Some professional medical societies in the United States, including the American Society of Clinical Oncology and the American Gastroenterological Association, use this database to verify the accuracy of COI declarations by CPG authors. To enhance COI management globally, regulatory agencies, industry, and professional organizations should consider developing and using similar databases to verify COI declarations outside the United States. A more comprehensive and comparative analysis of COI policies across medical specialties and countries is necessary. Such research could offer detailed insights into effective COI management in CPG development and identify strategies to improve the quality and trustworthiness of CPG recommendations.

Limitations

This study has several important limitations.
First, the publicly accessible payment database developed by the Medical Governance Research Institute includes only nonresearch payments for speaking, consulting, and writing compensations from pharmaceutical companies affiliated with the JPMA. Additionally, due to the absence of a uniform system for payment disclosure, and the fact that transparency initiatives are not enforced as rigorously for medical device companies as they are for pharmaceutical companies in Japan, this study was unable to account for payments to CPG authors from medical device companies. Consequently, this study may have underreported the magnitude and extent of financial relationships between CPG authors and the entire health care industry, including payments from companies not affiliated with the JPMA. However, of the 104 pharmaceutical companies manufacturing prescription drugs, 73 (70.2%) were affiliated with the JPMA and disclosed their payment data as of 2020. Furthermore, 94% of total prescription drug sales in Japan in 2020 (US $101.0 billion out of US $107.4 billion) were of drugs manufactured by the 73 JPMA-affiliated companies. These figures support the validity of examining payments from the JPMA-affiliated companies to the cardiology CPG authors in our study. Second, the JPMA requires its member companies to disclose payments for speaking, consulting, and writing compensations at the individual level, whereas other types of payments, including those for research, royalties, and ownership interests, were not available for this study. However, because compensation payments are generally paid directly to, and can be a direct source of income for, individual health care providers, examining the size and fraction of these compensations to the CPG authors is paramount in evaluating the extent of the financial relationships between the CPG authors and the pharmaceutical industry for nonresearch purposes. Third, although the JCS is responsible for developing and issuing CPGs for most cardiovascular diseases in Japan, our study findings, based on sampling from a single CPG-developing society, might not generalize to other disease areas, specialties, and regions.
In conclusion, our finding that at least 94% of cardiology CPG authors in Japan had financial relationships with the pharmaceutical industry for nonresearch purposes, including all chairs of the JCS CPGs published between 2015 and 2022, highlights several deviations from international standards for proper COI management policies. The profound influence of CPG recommendations on physician practice and patient care necessitates the development of trustworthy CPGs that mitigate financial relationships with the pharmaceutical industry. It is crucial for the JCS to implement more transparent and stringent COI management strategies, in line with the strong recommendations in current international COI policies for CPG development. A more thorough and comparative analysis of COI policies across various medical specialties and countries is warranted. Such research would identify specific strategies to enhance the quality and trustworthiness of CPG recommendations globally, extending beyond the field of cardiology. Future studies should also explore the impact of both individual and institutional COIs on CPG development, as well as their implications for clinical practice and patient outcomes.
Molecular typing and antimicrobial susceptibility profiles of Campylobacter jejuni and Campylobacter coli isolates from patients and raw meat in Huzhou
Campylobacter, a zoonotic pathogen, can cause symptoms such as diarrhea, fever, and abdominal pain. It can also lead to severe complications such as Guillain-Barré syndrome and reactive arthritis, posing a significant threat to human health. In developed countries, Campylobacter infection has become more prevalent than infections caused by pathogens such as Salmonella, E. coli and Vibrio parahaemolyticus, establishing it as the leading cause of bacterial diarrhea worldwide. Campylobacter jejuni (C. jejuni) and Campylobacter coli (C. coli) are the main Campylobacter species that cause gastroenteritis in humans and are responsible for approximately 95% of all Campylobacter infections in developing countries. It has been reported that, globally, C. jejuni causes 400 to 500 million cases of diarrhea annually, making it a serious public health concern. Campylobacter widely inhabits the intestines of humans and animals. Poultry is a significant source of contamination in the human food chain. During poultry farming, poultry infected with Campylobacter do not exhibit any clinical symptoms but can continuously shed the bacteria into the environment and carry them for life. This can easily lead to cross-contamination between poultry and livestock and their products during the slaughter, processing, and retail stages. Epidemiological studies have shown that up to 30% of human Campylobacter infections are caused by handling, preparing, and consuming raw or undercooked poultry. Poultry meat, especially chicken, is the most common source of human infection, along with other insufficiently heated meat, raw milk, and contaminated water. With increasing awareness of the major public health importance of Campylobacter, studies of the prevalence of Campylobacter isolated from clinical cases in China have been carried out in recent years. However, only a few studies have investigated the isolation rate and molecular characterization of Campylobacter spp. from both food and human clinical sources in China, and the link between foodborne and human clinical isolates of Campylobacter has remained largely uncharacterized. The aim of this study was to investigate the molecular typing and antimicrobial susceptibility profiles of C. jejuni and C. coli isolates from patients and raw meat in Huzhou and to evaluate the phylogenetic relationships of Campylobacter strains from human patients and raw meat products using PFGE and MLST methods.
Ethics statement
The protocol was approved by the ethics committee of Huzhou Center for Disease Control and Prevention (approval number: HZ2021005). The only human material used in this study was fecal specimens collected from outpatients with acute diarrhea for the local foodborne disease surveillance project; data records and collected clinical specimens were deidentified and anonymous. Patient consent was not required because the research results will not be used as a basis for any auxiliary diagnosis or for any commercial purposes. Furthermore, any identifiers related to participants were removed from the research results to ensure that personal privacy is not compromised. Therefore, there was no objective risk to the participants.
Sample collection and strain isolation
According to the guidelines of the local foodborne disease surveillance project in Huzhou, from September 1, 2021 to December 31, 2022, a total of 342 fecal specimens from outpatients with acute diarrhea at the sentinel hospital and 168 samples of raw meat products (50 samples of livestock meat products and 118 samples of raw poultry products) purchased from farmers' markets and supermarkets were subjected to Campylobacter isolation. The 168 raw meat products comprised fresh poultry, frozen poultry, and fresh livestock meat, including 25 portions of raw pork, 25 portions of raw beef, 30 ducks, and 88 chickens. For Campylobacter spp. isolation, raw meat was first placed in sterile self-sealing bags containing 500 ml of BPW culture medium, followed by vigorous rubbing for 5 minutes. A Campylobacter isolation kit incorporating a membrane filter method (ZC-CAMPY-001 for specimens and ZC-CAMPY-002 for meat, Qingdao Sinova Biotechnology Co., Ltd., Qingdao, China) was then used to isolate Campylobacter. Briefly, 2 mL of meat suspension or a suitable amount of fecal specimen was transferred to 4 mL of the growth-promoting enrichment Preston broth provided in the kit. The enrichment broth was then incubated at 42°C under microaerobic conditions (5% O2, 10% CO2, and 85% N2) for 24 hours. A 300-μL drop of the enrichment broth was applied to a 0.45-μm pore-size filter placed on the surface of Karmali and Columbia blood agar plates. After 30 minutes, the filters were removed, and the plates were further incubated at 42°C under microaerobic conditions. Suspicious colonies were subcultured and identified using matrix-assisted laser desorption/ionization time-of-flight (MALDI-TOF) mass spectrometry (VITEK MS).
PFGE molecular typing
Pulsed-field gel electrophoresis (PFGE) molecular typing was performed according to the PulseNet standardized protocol for C. jejuni (available online: https://www.cdc.gov/pulsenet/PDF/campylobacter-pfge-protocol-508c.pdf ). In brief, genomic DNA was digested with SmaI (Takara, Dalian, China) and run on a CHEF Mapper PFGE system (Bio-Rad Laboratories, Hercules, CA) for 16 h on SeaKem Gold agarose (Lonza, Rockland, MD, USA) in 0.5× Tris-borate-EDTA. XbaI-digested (Takara) DNA from Salmonella enterica serovar Braenderup H9812 was used as the molecular size standard. The gel images were stored electronically as TIFF files, and bands were analyzed using BioNumerics software v. 7.6 (Applied Maths, Kortrijk, Belgium). The similarity between chromosomal fingerprints was scored using the Dice coefficient.
MLST molecular typing
Multilocus sequence typing (MLST) was performed by sequencing seven housekeeping loci (aspA, glnA, gltA, glyA, pgm, tkt, and uncA) using previously described primers for C. jejuni and C. coli ( https://pubmlst.org/organisms/campylobacter-jejunicoli/primers ). The nucleotide sequences of the amplicons were submitted to the pubMLST database ( https://pubmlst.org/ ) for online analysis, yielding the corresponding sequence types (STs) and clonal complexes (CCs). For STs not found in the database, new ST designations were applied for on the basis of the strain sequences using BLAST. A minimum spanning tree (MST) and a dendrogram of the MLST data were created using BioNumerics v. 7.6 (Applied Maths, Kortrijk, Belgium).
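The Dice coefficient used to score similarity between PFGE fingerprints is a simple set-overlap measure. The following is a minimal sketch in Python; the band positions are hypothetical, and real band matching (e.g., in BioNumerics) applies a position tolerance that this set-based simplification omits.

```python
# Minimal sketch: Dice similarity between two PFGE band patterns.
# Bands are represented as sets of fragment sizes (kb); hypothetical values.

def dice_similarity(bands_a: set, bands_b: set) -> float:
    """Dice coefficient: 2 * |shared bands| / (|A| + |B|)."""
    if not bands_a and not bands_b:
        return 1.0
    shared = len(bands_a & bands_b)
    return 2 * shared / (len(bands_a) + len(bands_b))

# Two hypothetical SmaI profiles sharing 6 bands:
profile_1 = {48, 78, 97, 140, 190, 245, 310, 400}
profile_2 = {48, 78, 97, 140, 190, 245, 360}
print(f"Dice similarity: {dice_similarity(profile_1, profile_2):.3f}")  # 0.800
```

Under the review's convention, two such profiles (80.0% similarity) would not be grouped together, since grouping requires more than 85% similarity.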
Antibiotic susceptibility testing
Antibiotic susceptibility testing was conducted using the agar dilution method recommended by the Clinical and Laboratory Standards Institute (CLSI) with a commercial kit (ZC-AST-001, Zhongchuang Biotechnology Ltd. Corp., Qingdao, China). The antibiotics tested were macrolides: erythromycin (ERY) and azithromycin (AZI); quinolones and fluoroquinolones: nalidixic acid (NAL) and ciprofloxacin (CIP); aminoglycosides: gentamicin (GEN) and streptomycin (STR); phenicols: chloramphenicol (CHL) and florfenicol (FLO); tetracyclines: tetracycline (TET); ketolides: telithromycin (TEL); and lincosamides: clindamycin (CLI). MICs were interpreted in accordance with the standard of the National Antimicrobial Resistance Monitoring System (NARMS-2014). The breakpoints for resistance were as follows: ERY ≥ 32 μg/mL, AZI ≥ 1 μg/mL, NAL ≥ 32 μg/mL, CIP ≥ 4 μg/mL, GEN ≥ 4 μg/mL, STR ≥ 16 μg/mL, CHL ≥ 32 μg/mL, FLO ≥ 8 μg/mL, TET ≥ 16 μg/mL, TEL ≥ 8 μg/mL, CLI ≥ 1 μg/mL. Quality control was performed with C. jejuni ATCC 33560. The multiple antibiotic resistance index (MARI) was used to quantify the multi-resistance of Campylobacter isolates: MAR index = a/b, where "a" is the number of antibiotics to which the isolate was resistant and "b" is the total number of antibiotics against which the isolate was tested. Multi-drug resistance (MDR) was defined as resistance to three or more classes of antimicrobials.
Statistical analysis
Statistical analysis was performed using SPSS 19.0 software. The χ2 test was employed, and significance was set at P < 0.05.
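To make the MARI formula and the MDR definition concrete, the following is a minimal sketch; the isolate profile is hypothetical, and grouping CHL and FLO into a single phenicol class simply follows the class listing above.

```python
# Minimal sketch of the MAR index (a/b) and MDR classification described above.
# With 11 antibiotics tested, b = 11, consistent with the reported MARI range
# (e.g., 0.09 = 1/11 and 0.91 = 10/11).

CLASS_OF = {
    "ERY": "macrolides", "AZI": "macrolides",
    "NAL": "(fluoro)quinolones", "CIP": "(fluoro)quinolones",
    "GEN": "aminoglycosides", "STR": "aminoglycosides",
    "CHL": "phenicols", "FLO": "phenicols",
    "TET": "tetracyclines", "TEL": "ketolides", "CLI": "lincosamides",
}

def mar_index(resistant: set, tested: int = 11) -> float:
    """MAR index = a/b: a = antibiotics resisted, b = antibiotics tested."""
    return len(resistant) / tested

def is_mdr(resistant: set) -> bool:
    """MDR = resistance to three or more antimicrobial classes."""
    return len({CLASS_OF[ab] for ab in resistant}) >= 3

profile = {"NAL", "CIP", "TET", "ERY"}       # hypothetical isolate
print(f"MARI: {mar_index(profile):.2f}")     # 4/11 = 0.36
print(f"MDR:  {is_mdr(profile)}")            # 3 classes -> True
```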
Results
Prevalence of Campylobacter spp. in Huzhou
We collected 342 fecal specimens from diarrheal patients and 168 samples of raw meat products from farmers' markets and supermarkets during 2021 and 2022. Seventy-eight Campylobacter isolates were recovered, comprising 58 isolates of C. jejuni (74.36%, 58/78) and 20 isolates of C. coli (25.64%, 20/78). The isolation rate of Campylobacter in diarrhea patients was 11.70% (40/342), with detection rates of C. jejuni and C. coli of 9.94% (34/342) and 1.75% (6/342), respectively. The isolation rate of Campylobacter in raw meat products was 22.62% (38/168), with all strains isolated from poultry meat (5 from duck and 33 from chicken). A significantly higher percentage of Campylobacter spp. isolates was observed in chicken samples (37.50%, 33/88) than in duck samples (16.67%, 5/30) (χ2 = 4.4476, P = 0.035). The detection rates in raw meat were 14.29% (24/168) for C. jejuni and 8.33% (14/168) for C. coli. The comparison of detection rates between C. jejuni and C. coli in samples from diarrhea patients showed a statistically significant difference (χ2 = 20.81, P < 0.0001).
PFGE clustering
A total of 78 Campylobacter strains were subjected to PFGE after digestion with the restriction enzyme SmaI, and 73 valid profiles were obtained, comprising 54 C. jejuni strains (34 from patients and 20 from food samples) and 19 C. coli strains (6 from patients and 13 from food samples). Cluster analysis revealed band-pattern similarities of 26.8% to 100% among the 73 Campylobacter isolates. Among the 54 C. jejuni strains, 45 band patterns were obtained, with 15 groups of closely related isolates (greater than 85.0% similarity in banding patterns), which were assigned profile group numbers (J1–J15). The most common profile group was J1, which included 6 isolates from diarrhea patients. Four groups contained both patient and food isolates (J2, J4, J6, J9), and in only one group (J9) did a patient isolate (ID 2022636) and a food isolate (ID 2022756) have identical profiles. Seventeen band patterns were obtained from the 19 C. coli isolates by SmaI digestion analysis. Five groups whose isolates shared over 85% similarity with each other were identified and designated C1–C5. The most common profile group was C3, which included 6 food isolates. Only one group (C2) comprised both patient and food isolates, which shared 85.7% similarity. Among the C. coli strains, no isolates from patients had PFGE patterns identical to those from chicken or duck.
MLST typing
Fifty C. jejuni strains (34 patient isolates and 16 food isolates) and 18 C. coli strains (6 patient isolates and 12 food isolates) were selected for MLST molecular typing. We identified 37 STs belonging to 12 CCs among the 50 C. jejuni isolates; 5 STs (ST-11775, ST-11822, ST-12371, ST-12391, ST-12392) were newly designated in this study, and 13 STs from 17 isolates were not assigned to any known CC. The most common CC was CC-21 (22.00%, 11/50), followed by CC-353 (6.00%, 3/50), CC-464 (6.00%, 3/50) and CC-574 (6.00%, 3/50). The most common STs in patients and food were ST-298 (4 strains) and ST-2328 (3 strains), respectively.
STs that overlapped between patient and food isolates were ST-6500, ST-464, ST-9621 and ST-2328. MLST analysis of the 18 C. coli isolates yielded 11 STs, of which 1 (ST-12390) was new. Except for 3 unclassified STs (ST-1145, ST-11932 and ST-12337), all STs belonged to the same clonal complex, CC-828. The ST that overlapped between patient and food isolates of C. coli was ST-825. An MLST data summary of the 50 C. jejuni strains and 18 C. coli strains in this study is presented in S1 Table. The minimum spanning trees based on the MLST data revealed close genetic relationships among strains within the same CC. For instance, ST-11822, ST-298, and ST-6500 from CC-21 were positioned on the same small branch. Similarly, within CC-828, ST-829, ST-825, ST-1563 and ST-5511 displayed close genetic relationships. Strains from different CCs exhibited relatively distant genetic relationships. C. jejuni and C. coli from diarrheal patients and raw meat products showed a dispersed distribution, with no significant clustering observed. Comparison of genetic relatedness between PFGE and MLST revealed that not all clustered PFGE types with 100.0% similarity in banding pattern had identical STs, and not all isolates with the same ST shared 100.0% similarity in PFGE profile type. Two groups of C. jejuni strains originating from humans and chickens (PFGE J2-ST464 and PFGE J9-ST-2328) were confirmed to be clonally related by comparing the PFGE and MLST results.
Antibacterial susceptibility of C. jejuni and C. coli isolates
Antibiotic susceptibility testing was conducted on 73 Campylobacter isolates, including 54 C. jejuni strains (34 patient isolates and 20 food isolates) and 19 C. coli strains (6 patient isolates and 13 food isolates). C. jejuni showed resistance most frequently to NAL (94.44%), followed by TET (88.89%), CIP (87.04%), CLI (16.67%), ERY/GEN/FLO (12.96% each), AZI/TEL (11.11% each), STR (9.26%) and CHL (5.56%), whereas C. coli displayed the highest resistance rates to NAL/CIP (94.74% each), followed by TET (84.21%), ERY (63.16%), AZI (52.63%), TEL/CLI (42.11% each), GEN (31.58%), STR (26.32%), FLO (10.53%) and CHL (5.26%). Compared with C. jejuni, statistically higher resistance rates were observed in C. coli for ERY (χ2 = 18.39, p < 0.0001), AZI (χ2 = 14.16, p = 0.0001) and TEL (χ2 = 8.71, p = 0.003). In particular, ERY resistance was much more prevalent in C. coli (63.16%) than in C. jejuni (12.96%). A total of 18 different antibiotic resistance patterns, with MARIs ranging from 0.18 to 0.91, were observed among the 54 C. jejuni isolates, and 12 different antibiotic resistance patterns, with MARIs ranging from 0.09 to 0.73, were observed among the 19 C. coli isolates. The MDR rates were 29.63% (16/54) for C. jejuni and 89.47% (17/19) for C. coli, and this difference was statistically significant (χ2 = 20.321, P < 0.01).
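As a check on the reported statistics, the 2×2 chi-square comparisons above can be reproduced with SciPy. Note that the reported values match only with correction=False (Pearson's chi-square without Yates' continuity correction); the text does not state this, so it is an inference from the reported values.

```python
# Reproducing the reported 2x2 chi-square comparisons.
from scipy.stats import chi2_contingency

def pearson_chi2(pos_a, n_a, pos_b, n_b):
    """Pearson chi-square (no continuity correction) for a 2x2 table."""
    table = [[pos_a, n_a - pos_a], [pos_b, n_b - pos_b]]
    chi2, p, _, _ = chi2_contingency(table, correction=False)
    return chi2, p

# Campylobacter-positive chicken vs duck samples (33/88 vs 5/30):
print("chicken vs duck: chi2=%.4f, p=%.3f" % pearson_chi2(33, 88, 5, 30))
# -> chi2=4.4477, p=0.035 (reported: chi2 = 4.4476, P = 0.035)

# MDR in C. jejuni vs C. coli (16/54 vs 17/19):
print("MDR jejuni vs coli: chi2=%.3f" % pearson_chi2(16, 54, 17, 19)[0])
# -> chi2=20.32 (reported: chi2 = 20.321)
```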
Discussion
Bacterial diarrhea poses a serious global public health challenge, and Campylobacter infection is considered one of its primary causes. In recent years, Campylobacter infections have been on the rise worldwide. Conventional methods for the isolation and identification of Campylobacter include enrichment culturing and selective isolation. Enrichment with the double membrane filtration method has been recognized as a more effective isolation method for Campylobacter, and several studies of Campylobacter prevalence in diarrhea cases have previously been conducted using this method, with isolation rates of 7.0%–12.1%. Zhang et al. reported a Campylobacter isolation rate of 7.81% among diarrhea patients in Beijing. Another study of patients with diarrhea in Wenzhou (Southeast China) found a Campylobacter prevalence of 10.5%. Yan et al. reported a C. jejuni prevalence of 4.0% in children and 5.8% in adults with diarrhea in Shenzhen (South China).
The discrepancy in prevalence may be caused by regional differences or variations in sample size. In addition, picking out suspected Campylobacter colonies on the selective medium depends on laboratory experience, which might be another reason for the variation in prevalence reported for diarrheal patients. In this study, the double membrane filtration method was employed for the first time, starting in September 2021, to detect Campylobacter in fecal specimens from diarrhea patients in Huzhou. The detection rate reached 11.70% among 342 diarrhea samples, surpassing the detection rates of other causative agents in recent years, such as enteropathogenic Escherichia coli (6.49%), Vibrio parahaemolyticus (5.36%), and Salmonella (2.85%). Campylobacter has thus emerged as a predominant foodborne pathogen in the local region. Notably, the detection rate of C. jejuni in diarrheal patients was significantly higher than that of C. coli (χ2 = 20.81, P < 0.0001), consistent with previous studies in other regions of China. Poultry has long been identified as the primary vehicle for sporadic campylobacteriosis and the most common cause of Campylobacter outbreaks. A meta-analysis by Zbrunab et al. on the global prevalence of Campylobacter in animal products also highlighted chickens as the primary source of Campylobacter transmission. Similar results were found in this study: the double membrane filtration method was applied to 168 raw meat products (50 samples of livestock meat products and 118 samples of raw poultry products), and all 38 Campylobacter isolates detected originated from poultry meat, with chicken being the major source of infection (86.84%, 33/38). MLST and other genotyping approaches, including PFGE, show that Campylobacter is not a genetically monomorphic organism but comprises highly diverse assemblies with an array of different phenotypes. Consistent with previous reports, both the PFGE and MLST data confirmed that the Campylobacter strains circulating in Huzhou are genetically diverse, with C. jejuni isolates being more diverse than C. coli isolates on MLST analysis. PFGE typing revealed 45 band patterns among the 54 C. jejuni strains and 17 band patterns among the 19 C. coli strains. Our findings showed that the 50 C. jejuni strains from different sources were classified into 37 STs, showing a dispersed distribution encompassing more than 12 CCs. The distribution of STs among the 18 C. coli strains was relatively concentrated, with 83.33% (15/18) of isolates belonging to CC-828, consistent with previous reports. We also identified seven C. jejuni strains and one C. coli strain with novel STs, enriching the global MLST database. Numerous studies have reported varying major CCs of C. jejuni in different countries and regions, but CC-21, CC-45, CC-353, and CC-574 are consistently the predominant CCs in many investigations, with CC-21 considered closely associated with human infections and representing 17.9% of all C. jejuni strains submitted to the PubMLST database. Our study reveals that the most prevalent CC among the different sources of C. jejuni was CC-21 (22.00%, 11/50), with 9 strains from diarrhea patients and 2 strains from raw meat products. CC-464, reported as a dominant clone in domestic settings, was detected in 2 patient isolates and 1 food isolate.
Recent phylogenetic studies using the relatedness between PFGE and MLST have shown that the two methods have effective discriminatory power for evaluating the genetic homology among Campylobacter strains. In this study, two groups of C. jejuni strains (PFGE J2-ST464 and PFGE J9-ST-2328) originating from humans and chickens showed high genetic homology when the PFGE and MLST results were compared. However, some disagreement between PFGE and MLST was observed for certain STs, indicating a weak correlation between the two methods for certain Campylobacter strains. As sequencing costs continue to decrease, next-generation sequencing (NGS), which offers high throughput, high precision, and rich genetic information, may be more suitable for evaluating the genetic homology among Campylobacter strains from different sources. In recent years, the widespread and sometimes inappropriate use, even misuse, of antibiotics in clinical practice and the extensive long-term use of antibiotics in animal husbandry have led to the emergence of antibiotic-resistant Campylobacter strains. According to the literature, Campylobacter exhibits high resistance to quinolone and tetracycline antibiotics, with fluoroquinolone resistance rates ranging from 75% to 90% among Campylobacter strains from different countries. The use of fluoroquinolones in food-producing animals has resulted in fluoroquinolone-resistant Campylobacter strains worldwide. In this study, C. jejuni showed resistance most frequently to NAL (94.44%), followed by TET (88.89%), CIP (87.04%) and CLI (16.67%), while C. coli displayed the highest resistance rates to NAL/CIP (94.74%), followed by TET (84.21%). Overall, C. coli showed higher resistance rates than C. jejuni to all antimicrobials except CHL, FLO and TET, with an MDR rate significantly higher than that of C. jejuni. Similar findings have been reported in Southeast and North China, as well as in other countries. Additionally, we observed that ERY resistance was much more prevalent in C. coli than in C. jejuni (63.16% vs 12.96%), in accordance with other studies. The widely observed higher rate of macrolide resistance in C. coli than in C. jejuni may be associated with the fitness costs of certain antibiotic-resistant mutants, and the underlying mechanisms remain to be elucidated. In addition, over 90% of the C. jejuni and C. coli clinical isolates were susceptible to chloramphenicol (CHL), indicating that chloramphenicol remains effective for the treatment of C. jejuni and C. coli infection in the Huzhou area. Furthermore, it is worth mentioning that research on zoonotic diseases often focuses on infections that animals transmit to humans. However, an increasing number of reports indicate that bacteria resistant to critically important antimicrobials were likely introduced along pathways involving reverse zoonosis (human-to-animal transmission). Examples include the emergence of the human pandemic O25:H4-ST131 CTX-M-15 extended-spectrum-beta-lactamase-producing Escherichia coli among companion animals and of community-associated methicillin-resistant Staphylococcus aureus in dairy cows. Recent reports from New Zealand demonstrated that the fluoroquinolone resistance detected there among poultry was attributable to the emergence of a new clone of C.
jejuni (ST6964), and it has been hypothesized that this clone was potentially introduced via exposure to other species (humans or other livestock) because fluoroquinolones are not registered for use in poultry in New Zealand. Therefore, the risk of antibiotic-resistant Campylobacter being transmitted from humans, including raw meat handlers, to poultry should not be overlooked. In conclusion, this study provides a preliminary understanding of the molecular genetic features and antibiotic resistance characteristics of Campylobacter spp. from raw meat products and diarrhea cases in the Huzhou area. Campylobacter is an important foodborne pathogen in both diarrheal patients and raw meat products in Huzhou City, exhibiting multiple antibiotic resistance and a high level of genetic diversity. Two groups of C. jejuni strains originating from humans and chickens were confirmed to be clonally related by comparing the PFGE and MLST results. A more comprehensive study of the genetic correlation between isolates from humans and food animals is needed to prevent and control the diseases they cause. Considering the advantages of NGS, future work is warranted to integrate NGS-based typing methods into routine foodborne pathogen surveillance to elucidate the molecular characteristics of Campylobacter spp. isolates.
S1 Table. MLST data summary of the 50 C. jejuni strains and 18 C. coli strains in this study. (DOCX)
The (cost‐)effectiveness of preventive, integrated care for community‐dwelling frail older people: A systematic review
Integrated care is perceived as a promising solution for frail older people with complex problems to "age in place". Despite the high expectations, a (recent) systematic review on the (cost‐)effectiveness of preventive, integrated care interventions for community‐dwelling frail older people is lacking. The evidence for the (cost‐)effectiveness of preventive, integrated care is limited since the majority of reported outcomes show no effect, and the evidence is fragmented because populations, interventions and evaluation studies differ substantially. No clear relationship exists between (cost‐)effectiveness and specific preventive, integrated care elements or levels of integration. Researchers in integrated care should be more aware of the underlying principles of integrated care: they should integrate their research, consider continuity and differentiate between frail older people.
INTRODUCTION
Integrated care is increasingly promoted as an effective way to organise care for community‐dwelling frail older people. Societal developments such as population ageing and rising care costs have led to more frail older people with complex problems to "age in place" (Wiles, Leibing, Guberman, Reeve, & Allen, ). Their complex problems in the physical, psychological or social domain cannot be adequately addressed by a single primary care professional and require co‐ordination and multidisciplinary collaboration. A solution is found in integrated care, which is defined as an organisational process of co‐ordination that seeks to achieve seamless and continuous care, tailored to the patient's needs and based on a holistic view of the patient (Mur‐Veeman, Hardy, Steenbergen, & Wistow, ). Integrated care is proclaimed to pursue a wide range of aims such as improving the quality of care and consumer satisfaction, enhancing clinical results, quality of life, system efficiency and cost‐effectiveness (Kodner & Spreeuwenberg, ). Professionals, policy makers and researchers consider integrated care as a complex phenomenon and a promising solution. In the literature, conceptual frameworks have been developed to enhance the understanding of integrated care (Valentijn, Schepman, Opheij, & Bruijnzeels, ). Several integrated care interventions for frail older people have been developed (Oliver, Foot, & Humphries, ) and much effort has been put into evaluating the effectiveness of these interventions (Evers & Paulus, ). Despite the widespread interest in integrated care, a systematic review of integrated care interventions for community‐dwelling frail older people is lacking. Previous reviews have concentrated on specific interventions such as home‐visiting programmes (Elkan et al., ; Stuck, Egger, Hammer, Minder, & Beck, ) and case management (Stokes et al., ; You, Dunt, Doyle, & Hsueh, ) or have focused on other target groups such as older patients with chronic diseases (Ouwens, Wollersheim, Hermens, Hulscher, & Grol, ) and older people in general (Johri, Beland, & Bergman, ). Our aim is to systematically review the empirical evidence on the effectiveness and cost‐effectiveness of preventive, integrated care for frail older people in the community. Hence, our study makes five main contributions. First, we focus explicitly on integrated care for community‐dwelling frail older people.
Frailty is a specific condition that differs from chronic diseases (Fried et al., ) and chronological age (Slaets, ). Frailty refers to a dynamic state affecting an individual who experiences loss in one or more domains of human functioning (physical, psychological, social). This loss is influenced by a range of variables that increase the risk of adverse outcomes (Gobbens, Luijkx, Wijnen‐Sponselee, & Schols, ; Lacas & Rockwood, ). Other reviews focused on frail older people, but their eligibility criteria were based on chronological age (Eklund & Wilhelmson, ; Johri et al., ). Focusing on community‐dwelling frail older people implies that the integrated care interventions are based in primary care, which provides integrated, accessible healthcare services by clinicians who are accountable for addressing a large majority of personal healthcare needs, developing a sustained partnership with patients, and practicing in the context of family and community (Vanselow, Donaldson, & Yordy, ). Second, our review provides insight into the value of prevention in integrated care interventions for frail older people, whereas previous systematic reviews have not paid explicit attention to the preventive component in integrated care (Eklund & Wilhelmson, ). Frailty should be prevented in order to reduce the risk of adverse outcomes such as health problems and disability (Fried et al., ), poor quality of life (Gobbens & van Assen, ) and crisis situations (Vedel et al., ). Prevention of frailty is also important to avoid or delay institutionalisation, thereby fulfilling an essential aim of national health policies. Therefore, it is important to incorporate prevention into integrated care interventions, including screening for frailty and comprehensive geriatric assessments (Oliver et al., ). Third, our systematic review includes all quantitative designs with a control group and is not limited to randomised controlled trials. Although randomised controlled trials are known to provide strong evidence, their use is questioned for complex interventions (Clark, ). Integrated care interventions in primary care particularly illustrate the difficulties with randomised controlled trials because randomisation of participants to a general practitioner (GP) is almost impossible. Fourth, our review incorporates economic evaluations of integrated care interventions for frail older people. Cost‐effectiveness is an important aim of integrated care (Kodner & Spreeuwenberg, ) and economic evaluations of integrated care for frail older people have recently generated considerable research interest (Evers & Paulus, ). Due to budget constraints and population ageing, health and social care expenditures are under pressure. Therefore, it is relevant to explore whether integrated care with a preventive component can put the available resources to optimal use. Finally, we relate the effectiveness and cost‐effectiveness to the specific content of the preventive, integrated care interventions. In the current fragmented healthcare systems, achieving seamless and continuous care tailored to the needs of frail older people is complex. Integration could be pursued at different levels and with different strategies such as comprehensive geriatric assessments, multidisciplinary teams or organisational integration (Kodner & Spreeuwenberg, ; Valentijn et al., ).
The assumption is that a higher level of integration leads to better outcomes (Kodner & Spreeuwenberg, ); however, it remains unclear which specific bundles of integrated care lead to specific outcomes (Eklund & Wilhelmson, ; Kodner, ). Therefore, the preventive, integrated care interventions will be analysed following the taxonomy of the Rainbow Model of Integrated Care, a conceptual framework for integrated care from a primary care perspective (Valentijn et al., ).
METHODS
The methods and results of this systematic review are reported according to PRISMA guidelines (Moher, Liberati, Tetzlaff, & Altman, ).
2.1 Search strategy
We searched nine databases: Embase, Medline (Ovid), Web‐of‐Science, CINAHL (EBSCO), PsycINFO (Ovid), Cochrane, PubMed publisher, ProQuest (ABI Inform, Dissertations) and Google Scholar. The search terms were discussed with a medical librarian who is a specialist in conducting and designing searches for systematic reviews (Bramer, Giustini, Kramer, & Anderson, ). The main search terms were "integrated health care system," "frail older people" and "primary care." The complete Embase search strategy is presented in Appendix . Besides the Boolean operators AND and OR, we used the proximity operators NEAR and NEXT so that terms within a certain distance of each other were also detected in the search. The search was done in August 2015 and updated in May 2016.
2.2 Eligibility criteria
Box 1 presents the eligibility criteria of our systematic review.
Box 1. Eligibility criteria.
2.3 Study selection
After removing duplicates, one reviewer screened the titles of all articles. Then two reviewers independently screened the remaining abstracts according to the inclusion and exclusion criteria. Any disagreements over abstracts were discussed until the reviewers reached a consensus. The remaining full texts were assessed for eligibility by one reviewer. All full texts that met the inclusion criteria or where doubts arose were discussed with the second reviewer. A reference check was performed on all included full texts.
2.4 Data extraction
All included full texts were summarised, focusing on the study methods, the intervention and its outcomes. The methods of each study were described according to inclusion criteria (definition of frailty), study design, types of outcomes, sample size and country. The interventions are presented following the taxonomy of the Rainbow Model of Integrated Care (Valentijn et al., ). The elements of each intervention are distinguished according to the micro, meso and macro levels of integration described by Valentijn. The micro level consists of service integration, in which the following elements are distinguished: assessment; care plan; follow‐up; and single entry point. The meso level includes professional integration (with four elements: the focal organisation of the intervention; the role of the GP; team composition; and education of professionals) and organisational integration. The macro level consists of financial integration. These three levels are connected by normative integration and functional integration (with two elements: co‐ordination and information system). Additional information is provided about the role of the informal caregiver and prevention in the interventions. Five outcome categories are presented in subsequent tables: health outcomes, outcomes regarding informal caregivers and professionals, process outcomes, healthcare utilisation and cost‐effectiveness.
The results for the outcomes are presented as follows: (+: significant outcome in favour of the intervention; 0: no significant outcome; −: significant outcome in favour of the control group; +/−: significant outcomes both in favour of the intervention and of the control group within one category; NS: outcome not tested for significance). Outcomes are presented at the level of the intervention, so the results of studies reporting on the same intervention are combined. The number of statistically significant results has been counted.
2.5 Quality assessment
The quality of the included studies was assessed with the Effective Practice and Organization of Care (EPOC) risk‐of‐bias tool for studies with a separate control group (EPOC, ). This quality assessment tool is the most suitable for assessing the included studies because our systematic review was not restricted to randomised controlled trials. The EPOC comprises nine standard criteria, including generation and concealment of allocation, similarity of outcome and baseline measures, adequacy of addressing missing outcome data, prevention of knowledge of the allocated intervention, protection against contamination, selective outcome reporting and other risks of bias. The nine criteria are assessed in three categories: low risk (1 point), high risk (0 points) and unclear risk (0 points), and the total quality score ranges from 0 to 9. Two reviewers separately assessed the risk of bias; any disagreements over criteria were discussed until the two reviewers reached a consensus.
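As an illustration of the scoring just described, the following is a minimal sketch of the 0–9 EPOC total; the criterion labels paraphrase the items listed above, and the example judgements are invented.

```python
# Minimal sketch of the EPOC risk-of-bias total: nine criteria, each judged
# 'low' risk (1 point) or 'high'/'unclear' risk (0 points), summed to 0-9.

EPOC_CRITERIA = [
    "random sequence generation", "allocation concealment",
    "similar baseline outcome measures", "similar baseline characteristics",
    "incomplete outcome data addressed", "knowledge of allocation prevented",
    "protection against contamination", "free of selective reporting",
    "free of other bias",
]

def epoc_score(judgements: dict) -> int:
    """Total score: 'low' risk scores 1; 'high' and 'unclear' score 0."""
    assert set(judgements) == set(EPOC_CRITERIA)
    return sum(1 for j in judgements.values() if j == "low")

# Hypothetical study: two criteria not at low risk -> total of 7.
example = dict.fromkeys(EPOC_CRITERIA, "low")
example["allocation concealment"] = "unclear"
example["protection against contamination"] = "high"
print(epoc_score(example))  # 7
```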
RESULTS
Figure presents the PRISMA flow chart. Our review included 46 studies regarding a total of 29 separate interventions. The 29 interventions were carried out in 10 countries (see Table ): Canada (n = 8), the United States (n = 7), the Netherlands (n = 6), Sweden (n = 2), and Australia, Finland, France, Hong Kong, Japan and New Zealand (n = 1 each). Most studies were randomised controlled trials (n = 18). Other types were controlled before‐and‐after studies (n = 6), cluster‐randomised controlled trials (n = 3), and a case–control study and a stepped‐wedge cluster‐randomised controlled trial (n = 1 each). Of the 46 included studies, 36 reported the effectiveness and 10 the cost‐effectiveness of an integrated care intervention. The total number of participants ranged from 36 to 3,689. The follow‐up period varied from 3 to 48 months. Overall, the quality of the evidence was moderate, ranging from 2 to 9 on the EPOC risk‐of‐bias scale with an average score of 5.3 (see also Table ). Our results revealed that each intervention defined frailty differently.
RESULTS
Figure presents the PRISMA flow chart. Our review included 46 studies regarding a total of 29 separate interventions. The 29 interventions were carried out in 10 countries (see Table ): Canada (n = 8), the United States (n = 7), the Netherlands (n = 6), Sweden (n = 2), and Australia, Finland, France, Hong Kong, Japan and New Zealand (n = 1 each). Most studies were randomised controlled trials (n = 18); other designs were controlled before-and-after studies (n = 6), cluster-randomised controlled trials (n = 3), and a case–control study and a stepped-wedge cluster-randomised controlled trial (n = 1 each). Of the 46 included studies, 36 reported the effectiveness and 10 the cost-effectiveness of an integrated care intervention. The total number of participants ranged from 36 to 3,689. The follow-up period varied from 3 to 48 months. Overall, the quality of the evidence was moderate, ranging from 2 to 9 on the EPOC risk-of-bias scale with an average score of 5.3 (see also Table ). Our results revealed that each intervention defined frailty differently. All interventions used different tools and inclusion criteria, and the dimensions of frailty differed considerably between interventions. Of the 29 interventions, 13 incorporated the physical dimension of frailty in their inclusion criteria. Five interventions combined the physical and psychological dimensions of frailty and two focused on the physical and social dimensions. Eight interventions adopted a broader approach to frailty, including the physical, psychological and social domains of functioning. Additionally, researchers used different age criteria, ranging from 50 years and older to 75 years and older; most interventions adopted the criterion of 65 years and older.

3.1 Interventions
The 29 interventions were arranged according to the Valentijn framework (Valentijn et al., ; see Table ). The level of integration of the interventions was high at the micro level but generally low at the meso and macro levels. Service integration was substantially high in all 29 interventions. All interventions used assessment tools, mostly a comprehensive geriatric assessment, which the majority of interventions used to develop a care plan. Occasionally, the frail older person and their informal caregiver were also involved in developing the care plan. The assessments and care plans revealed the preventive character of the integrated interventions. The assessments detected a wide range of problems that might not have been recognised in usual care. The care plan addressed a selection of these problems; however, the articles provided limited insight into how the assessments resulted in a care plan. Despite the similarities in assessments and care plans, follow-up differed between interventions, particularly in the role of prevention. Case management was predominantly an important part of the follow-up, which involved executing the care plan, monitoring the frail older people, advocacy by arranging admission to services and updating other professionals. Follow-up could also include home visits or specific interventions aimed at fall prevention or activation. The degree of standardisation of follow-up fluctuated: some interventions developed protocols so that follow-up took place each month, whereas other interventions were more flexible, responding to the needs of the frail older people. Remarkably, the role of prevention in the follow-up was generally limited and differed between interventions. A few interventions (n = 9) paid explicit attention to health education, health promotion or adopting an active lifestyle and coping. Professional integration varied between interventions. Different professionals were responsible for follow-up: (practice) nurses, social workers, physiotherapists, geriatricians or a multidisciplinary team of professionals. The involved professionals and organisations differed between interventions. Physicians and nurses were involved most frequently, but collaboration with geriatricians in secondary care and with social workers was also common (both n = 13). Some interventions were situated in a clear focal organisation, such as a primary care or community practice, home-care organisation, Geriatric Evaluation and Management outpatient clinic, physiotherapist or rehabilitation centre, whereas other interventions were situated in a network of organisations.
The level of involvement of the GP varied between the interventions; the GP was at the core of some interventions, whereas occasionally the GP had no role at all and the integrated care intervention co-existed alongside usual care. Finally, intervention-specific education of professionals was sparse and concentrated mostly on very specific elements of the interventions, such as assessment instruments or protocols. Organisational integration was modest in the preventive, integrated care interventions. A few cases created a network of organisations: five set up a Joint Governing Board and two built a new consortium. Financial integration was even less frequent: two interventions had partial financial integration, and one was fully integrated financially and its teams controlled their own budget. Functional integration was limited; a few interventions (n = 9) used a shared information system or developed multidisciplinary protocols (n = 6) on specific themes such as urinary incontinence or falls. In addition, the level of normative integration was negligible (n = 4) according to the intervention descriptions. Workshops and training courses focused on the following topics: collaboration of the practice nurse and GP; goals and responsibilities of collaborative care teams; team development; and client-centredness and interdisciplinary collaboration. Informal caregivers of the frail older people were not always considered active participants by the professionals in the interventions. Sporadically (n = 2), the caregiver burden was included in the comprehensive assessment, and occasionally (n = 6) the follow-up was also aimed at the informal caregivers. At times (n = 5), the professionals actively involved informal caregivers in the care process, by validating the care plan with them or involving them in the actual decision-making process.

3.2 Health outcomes
There was generally limited evidence of an effect of integrated care interventions on the health outcomes of frail older people. No clear pattern emerged in the elements or level of integration of the interventions that did generate significant effects. An extensive range of health outcomes was considered (see Table ). The outcomes reported most often were activities of daily living (ADL)/instrumental activities of daily living (IADL) (n = 18), mortality (n = 15) and physical functioning (n = 13). Less frequently used outcomes were social support (n = 3), vitality (n = 3) and desire for institutionalisation and frailty (n = 1 for both). In terms of effectiveness, four outcomes were most promising: well-being, life satisfaction, frailty and desire for institutionalisation. The majority of the interventions reporting these specific outcomes found a positive effect for the intervention. However, these outcomes were reported less frequently, especially desire for institutionalisation and frailty. For other outcomes, positive effects were reported occasionally; for instance, depression (n = 4 out of 10) and cognitive functioning (n = 3 out of 8). Four outcome measures did not reach significance in any of the interventions: pain, role, social support and health-related quality of life. We found an effect in favour of the control group only twice: reported morbidities (Burns, Nichols, Graney, & Cloar, ) and life satisfaction (Kono et al., ). The differences in outcomes could not be explained by the elements and level of integration of the interventions.
This is shown, for example, by the 18 interventions that reported ADL and IADL as an outcome: four interventions that showed positive effects had a multidisciplinary team, whereas the two other interventions with positive effects had no multidisciplinary team. The same mixed pattern was found in the 12 interventions that reported no effects on ADL and IADL. For some outcomes, better results tended to be accompanied by a lower level of integration. The studies that showed an effect on mortality in favour of the intervention were not integrated normatively, organisationally or financially. The interventions that reported a positive effect on mental health were not integrated functionally, normatively or organisationally. Two remarkably effective interventions showed similar effects for life satisfaction, well-being, depression and social functioning. One intervention (Shapiro & Taylor, ) also found significant effects on mortality, whereas the other also reported effects on perceived health, cognitive functioning and IADL (Burns, Nichols, Martindale-Adams, & Graney, ; Burns et al., ). These results highlighted the limited effect in the physical domain of functioning. Both interventions showed a low level of integration at the meso and macro levels, since neither had functional, organisational or financial integration.

3.3 Outcomes for informal caregivers and professionals
Our results show a considerable lack of emphasis on outcomes regarding informal caregivers and professionals. Consequently, evidence of effects on these outcomes was negligible. Nine of the 29 interventions reported on the following outcomes: caregiver's satisfaction with care, caregiver's desire for institutionalisation, caregiver's subjective and objective burden and professional satisfaction with care (Table ). The effect on caregiver's satisfaction with care was most convincing, since it was effective in one of the two studies reporting this outcome. Caregiver satisfaction improved in an intervention that encouraged family participation in care and decision-making and in which professionals also intervened with caregivers (Beland et al., 2006). No effect was found in another intervention in which no specific attention was paid to the informal caregiver (Montgomery & Fallis, ). Caregiver's desire for institutionalisation did not show any significant effect. The effects on caregiver subjective burden were rather inconsistent. Four studies reported this outcome, all using the same measurement instrument, but the results were mixed: an effect in favour of the intervention (Tourigny, Durand, Bonin, Hebert, & Rochette, ), an effect in favour of the control group (Hébert et al., ) or no effect at all (Béland et al., ; Montgomery & Fallis, ). These results were unrelated to the role of the informal caregiver in the intervention, since informal caregivers were the least involved in the care process in the most effective intervention. The objective burden of informal caregivers was not affected by preventive, integrated care interventions. The objective burden (time spent on informal care) was considered from a societal perspective in five cost-effectiveness analyses, and one intervention found an effect in favour of the caregivers in the intervention group. Time spent on IADL by the caregivers decreased in this intervention, which aimed specifically at improving the functional status of frail older people (Sandberg et al., ).
Professional satisfaction was the only outcome regarding professionals, and it was considered by just a single study (Morishita, Boult, Boult, Smith, & Pacala, ). However, this study did not apply significance testing. The professionals indicated that the intervention was appropriate and helpful, for both their patients and themselves, in the ongoing care of their patients.

3.4 Process outcomes
Process outcomes of integrated care interventions generated little interest, but the effects were beneficial, particularly for the care process. Five types of outcomes fit into this category: goal attainment, empowerment, satisfaction with care, care process and rate of implementation (Table ). For three types of outcomes, most effects were in favour of the intervention group: goal attainment, empowerment and care process. Goal attainment was reported for only one intervention, as the primary outcome measure (Rockwood et al., ), in which an effect in favour of the intervention was generated. Empowerment had a positive effect in two of four interventions. The definition of empowerment was aligned with the focus of the intervention studies: it related either to patient involvement in the care process or to empowerment in terms of activities of daily life. Both definitions showed a significant effect once. The care process improved in all five integrated, preventive care interventions in which it was considered an outcome measure. These five interventions were not integrated normatively, organisationally or financially. The operationalisation of the care process differed between studies and was closely aligned to the specific interventions. For example, the Rubenstein intervention focused on five geriatric target conditions and referrals; the researchers operationalised the care process by evaluating documentation and assessing the target conditions and referrals (Rubenstein et al., ). Evidence for the most common outcome in this category, satisfaction with care, was not convincing. Of the 10 interventions reporting on this outcome, three found an increase in satisfaction with preventive, integrated care. No clear pattern emerged that could explain the differences in effects. Two Outpatient Geriatric Evaluation and Management interventions in the United States reported higher satisfaction with care (Engelhardt, Toseland, & O'Donnell, ; Morishita et al., ; Toseland, O'Donnell, & Engelhardt, ), but a very comparable intervention, also using a similar measurement instrument, did not result in higher satisfaction (Reuben, Frank, Hirsch, McGuigan, & Maly, ). PRISMA resulted in higher satisfaction with care after 4 years (Hébert et al., ), but this effect was not yet established after 1 year (Hébert, Dubois, Raiche, Dubuc, & Group, ). Interventions comparable to PRISMA with a high level of professional integration (Kerse et al., ) and organisational integration (Béland et al., ; Gagnon, Schein, McVey, & Bergman, ; Looman et al., ) found no effect in shorter follow-up periods (3–36 months).

3.5 Healthcare utilisation
Healthcare utilisation did not differ substantially between frail older people receiving care as usual and those receiving preventive, integrated care. Nonetheless, we observed both decreases and increases in utilisation. Healthcare utilisation was the most reported outcome (n = 27; Table ). The focus was mainly on secondary care, since the most frequently reported outcomes were hospital length of stay (n = 19), hospital admission (n = 18) and nursing home admission (n = 18).
Far less attention was paid to social care utilisation, such as psychosocial care (n = 4) or meals on wheels (n = 5). The least reported outcomes were diagnostics (n = 4) and equipment (n = 3). The majority of the interventions reported no significant increase or decrease in healthcare utilisation in any outcome category. Despite the limited effects, some patterns in healthcare utilisation could be discerned. Three types of healthcare utilisation were not affected at all by integrated care: use of equipment, psychosocial care and day surgery. The effects of integrated care interventions on hospital care tended to be positive; slightly more interventions showed a decrease than an increase in hospital care utilisation by the frail older people. This applied to four types of hospital care: admission to the emergency department, length of stay in hospital, admission to the hospital and contact with physicians in outpatient care. On the other hand, more increases than decreases in utilisation were reported for other types of care. Primary care use increased for almost half of the interventions reporting this outcome. For paramedical care, day care, diagnostics and meals on wheels, only increases in utilisation were observed, although these effects were predominantly non-significant. The effect on nursing home admissions was ambiguous, since 14 interventions found no effects, two showed a decrease in admissions (Montgomery & Fallis, ; Shapiro & Taylor, ) and two an increase (Kerse et al., ; Kono et al., ). In 14 interventions, the healthcare utilisation outcomes were converted into costs. The effects were sparse; 11 interventions found no significant effect, due mostly to the wide variation in costs. At the intervention level, six interventions reported no significant effects at all for healthcare utilisation. Moreover, a substantial number of interventions (n = 12) reported more increases in healthcare utilisation than decreases. Remarkably, the PRISMA intervention reported increases in six types of healthcare utilisation in the first year of follow-up (Hébert et al., ), but these increases disappeared (i.e. became non-significant) over the 4-year follow-up period (Hébert et al., ). The differences in healthcare utilisation outcomes could not be fully explained by differences in the components or level of integration of the interventions. The results indicated that a higher level of integration did not result in better outcomes. For instance, for hospital length of stay, there was no organisational and financial integration in the interventions that generated a decrease in length of stay, whereas the interventions that showed an increase in length of stay were integrated organisationally and financially. The one intervention that resulted in a decrease in primary care utilisation had no functional, organisational or financial integration, whereas these elements were both present and absent in interventions that found no effect or an increase in primary care utilisation.

3.6 Cost-effectiveness
Our systematic review showed limited evidence for the cost-effectiveness of preventive, integrated care interventions for frail older people. Cost-effectiveness was determined for nine interventions, of which three were reported to be cost-effective (Table ). Generally, we observed no significant differences in total cost between the preventive, integrated care interventions and care as usual.
The total costs of two interventions were higher than care as usual (Gray, Armstrong, Dahrouge, Hogg, & Zhang, ; Kehusmaa, Autti‐Rämö, Valaste, Hinkka, & Rissanen, ), due mostly to high intervention costs rather than any increase in healthcare utilisation. Besides the limited cost savings, the effects of the interventions were also modest, particularly in terms of quality-adjusted life years (QALYs). Seven studies chose the QALY as an effect measure and one study adopted another measure of health-related quality of life. None of these interventions found an effect in favour of the intervention. Two significant effects were established: quality of care for APTcare (Gray et al., ) and frailty for FIT (Fairhall et al., ). These effect measures were more properly aligned to the two interventions. APTcare, for instance, was a disease management programme and quality of care was determined by specific performance measures for each chronic disease. FIT strongly focused on frailty by assessing specific frailty characteristics and implementing specific interventions for each frailty condition. Due to their modest effects, the majority of interventions were not cost-effective. Three interventions had a high probability of being cost-effective: 75% at a willingness to pay of 20,000 euros (Drubbel et al., ), 95% at 34,000 euros (Melis et al., ) and 80% at 50,000 dollars (Fairhall et al., ). These three interventions had some features in common: the absence of case management, a single entry point, a shared information system, and organisational and financial integration. These elements were both present and absent in the seven interventions that were not cost-effective.
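The probabilities just cited are the kind produced by a cost-effectiveness acceptability analysis. As a purely illustrative aside, and not a reanalysis of any included study, the sketch below estimates the probability that an intervention is cost-effective at a given willingness to pay from bootstrapped incremental costs and effects via the net monetary benefit; all values and variable names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Hypothetical bootstrap replicates of the incremental cost (in euros) and
# incremental effect (in QALYs) of an intervention versus care as usual.
n_boot = 5000
delta_cost = rng.normal(loc=500.0, scale=1500.0, size=n_boot)
delta_qaly = rng.normal(loc=0.03, scale=0.05, size=n_boot)

def prob_cost_effective(d_cost, d_qaly, wtp):
    """Share of replicates with a positive net monetary benefit,
    NMB = wtp * incremental effect - incremental cost."""
    nmb = wtp * d_qaly - d_cost
    return float(np.mean(nmb > 0))

# Evaluating this at a grid of willingness-to-pay thresholds traces a
# cost-effectiveness acceptability curve (CEAC).
for wtp in (20_000, 34_000, 50_000):
    print(f"WTP {wtp} per QALY: P(cost-effective) = "
          f"{prob_cost_effective(delta_cost, delta_qaly, wtp):.2f}")
```

A study is then typically labelled cost-effective when this probability is high at a policy-relevant threshold, as in the three interventions above.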
DISCUSSION
The widespread interest in preventive, integrated care has generated high expectations for improving the organisation of care for community-dwelling frail older people. The aim of this study was to systematically review the empirical evidence for its effectiveness and cost-effectiveness in order to test these expectations. Our results showed that the fragmented evidence is not compelling. Preventive, integrated care is unlikely to be effective, since the majority of the reported outcomes showed no effect. Less frequently reported outcomes, such as care process, well-being and life satisfaction, were the most promising, as were outcomes closely aligned to the aims of the interventions, such as frailty and fall prevention. However, when interventions were specifically aimed at ADL, IADL and physical functioning, effects were less likely to be substantiated.
The evidence for healthcare utilisation was mixed, but preventive, integrated care did not lead to clear cost reductions or substitution of healthcare, and cost-effectiveness was limited. Our review showed no clear relation between (cost-)effectiveness and specific preventive, integrated care elements or levels of integration. The more integrated interventions, particularly in terms of functional, normative, organisational and financial integration, tended not to be more effective. Differences in outcomes could not be explained by the quality of the studies, the sample size or the follow-up period. Another important result of our systematic review was that populations, interventions and outcomes differed substantially, which made it extremely difficult to compare both interventions and evaluation studies. First, fragmentation was caused by the heterogeneity of the target population of the interventions. No consensus existed on the definition of frailty, since the inclusion criteria for participants were formulated differently in virtually all studies. Frailty was mostly related to the physical domain of functioning, but the psychological and social domains were gradually incorporated as well. In the inclusion criteria, the physical domain was very frequently translated into dependency in ADL or IADL, whereas previous research has shown that frailty is a condition distinct from disability (Fried, Ferrucci, Darer, Williamson, & Anderson, ; Lutomski et al., ). Second, the interventions were built up differently in terms of elements and level of integration. Some common elements could be identified, such as assessments and care plans, but their follow-up varied between interventions and was not clearly described in the intervention descriptions. The role of prevention also differed between interventions. Secondary prevention was part of all interventions through the comprehensive geriatric assessment and care plans; nevertheless, screening the older population for frailty was less common. Only a few interventions paid explicit attention to self-management, health education and empowerment in the follow-up of frail older people; thus, tertiary prevention was limited. Besides the differences in elements, the level of integration of the interventions also varied: some interventions were organisationally integrated but not normatively and functionally integrated, and vice versa. Third, the fragmentation of the evaluation research was caused predominantly by the extensive variation in outcome measures. Some main categories that were nearly always considered to determine the (cost-)effectiveness of preventive, integrated care can be distinguished: ADL and IADL, hospitalisation and nursing home admission. Beyond these commonalities, however, the outcomes were dispersed, ranging from vitality to desire for institutionalisation, for both frail older people and caregivers. Many different measurement instruments were used for these outcomes, which fragmented the evidence even more and made comparisons more difficult. Although the measurement of healthcare utilisation was consistent, by self-report or from registrations, the outcomes typically focused on healthcare rather than social care and were distinctive for each intervention. These differences also implied that the cost of preventive, integrated care was calculated differently for each intervention.

4.1 Interpretation of results in the context of other studies
Our results added nuance to the high expectations for integrated care in the literature.
Some theoretical studies on (general) integrated care state that it could pursue a wide range of aims (Kodner & Spreeuwenberg, ). However, our results were in line with other empirical reviews of integrated care interventions for older people. Previous research also emphasised the unconvincing effects on health outcomes (Eklund & Wilhelmson, ; Johri et al., ; Low, Yap, & Brodaty, ; Stokes et al., ; You et al., ). The positive effect on well-being was confirmed in a systematic review on case management for frail older people and people with dementia (You et al., ). Our results confirmed the lack of emphasis on outcomes for informal caregivers and professionals in particular (Eklund & Wilhelmson, ; Johri et al., ; Stokes et al., ; You et al., ). Previous research showed similar results for the care process, but this outcome was considered far less often than health outcomes and healthcare utilisation. Integrated care for patients with chronic diseases also resulted in improvements in the quality of care (Ouwens et al., ), and case management for older people resulted in fewer unmet service needs (You et al., ). However, our review did not show encouraging effects on satisfaction with care, in contrast to case management interventions (Stokes et al., ). Our results temper the reported effects of integrated care on healthcare utilisation. Two previous reviews showed a decrease in hospitalisation and institutionalisation (Eklund & Wilhelmson, ; Johri et al., ). Our results were less conclusive when more types of health and social care utilisation were considered. Indeed, there was an indication that hospital care might decrease because of integrated care interventions, but the effect on institutionalisation was inconsistent in our review. Our broader range of outcomes also showed increases in healthcare utilisation, mostly for primary care.

4.2 Strengths and limitations
The strength of this systematic review is the comprehensive overview it provides in terms of both interventions and outcomes. Analysing the interventions with the Valentijn theoretical framework, with an additional focus on prevention, provided useful insights into the various components of integrated care and the different levels of integration in relation to the wide range of outcomes. Besides the included articles, we also considered the corresponding study protocols in order to provide all available information on the interventions. Furthermore, we considered all types of outcomes, divided into five categories, one of which was cost-effectiveness, for which systematic evidence is scarce (Ouwens et al., ; Stokes et al., ). The first limitation of our systematic review is that we did not perform a meta-analysis. We were not able to do so because of the substantial differences in populations, interventions and research designs, and the wide range of outcomes measured with different instruments. Our aim was to present the bigger picture rather than limiting ourselves to a selection of the more common outcome categories. The most common outcomes were ADL/IADL, physical functioning, mortality, hospital admissions, home care and institutionalisation; however, restricting the review to these would have been too narrow to fully explore the potential effectiveness of preventive, integrated care. Our research showed that effects can be observed in other outcomes, such as care process or well-being. In providing this broad overview, we had to categorise the outcome measures, which is the second limitation of our study.
Many different operationalisations of outcomes could be distinguished, especially for ADL/IADL, physical functioning, hospital admissions and well-being. A concrete example is the category of hospitalisation, which includes not only actual hospitalisation but also the number of multiple, acute, subacute, planned and total hospitalisations. Another example is physical functioning, for which the following measurements were used within a single intervention: physical functioning, number of restricted-activity days, number of bed days, physical performance test, NIA battery score and physical health summary scale (Reuben et al., ). In these cases, we adopted an optimistic approach: if one of the outcomes within a category had a positive effect, we reported it as a positive outcome for that category. The last limitation is the moderate state of the empirical evidence, the risk of bias and the quality of the studies. This was partly due to our inclusion criterion on controlled designs, which meant that non-randomised trials were also included, increasing the risk of bias. Yet a more important contributor to the moderate risk of bias was the lack of information in the evaluation studies: the number of EPOC criteria we rated as "unclear risk" was approximately equivalent to the number rated as "high risk."

4.3 Implications for research, policy and practice
The first implication is that the heterogeneity of frail older people in the community should be further explored. The population of the interventions differed substantially between and within interventions. Several studies adopted a narrow definition of frailty, focusing on the physical domain, but more recent studies also considered the psychological and social domains. Still, there is no consensus on the definition and measurement of frailty (Dent, Kowal, & Hoogendijk, ), and thereby on identifying which community-dwelling older people would benefit most from preventive, integrated care interventions (Collard, Boter, Schoevers, & Oude Voshaar, ). Researchers have become increasingly aware of the complexity and heterogeneity of frailty (see also Eklund & Wilhelmson, ) and have recently distinguished subpopulations of physically frail older people (Lafortune, Béland, Bergman, & Ankri, ; Liu, ). These subpopulations could further unravel frailty and support professionals in daily practice. However, in evaluation studies of preventive, integrated care, the population of frail older people is still considered a single group and no distinction is made between the characteristics of the frail older people. When the population of an intervention is more heterogeneous, it might be harder to achieve effectiveness (Almeida Mello et al., ; Ferrucci et al., ; Lette, Baan, van den Berg, & de Bruin, ). Accordingly, a possible explanation for the limited effectiveness of integrated care might be that it is more beneficial for certain subpopulations of frail older people; this hypothesis should be explored further. The second implication is that further research should provide better insight into the term "effectiveness" for community-dwelling frail older people before extensive (and expensive) preventive, integrated care interventions are designed, implemented and evaluated. It is crucial to explore which specific outcomes can be influenced for frail older people, who are deteriorating in multiple domains of functioning, and for their informal caregivers.
Likewise, it is fundamental to formulate realistic expectations of what preventive, integrated care can achieve. Our systematic review challenges the important role that the physical domain of functioning plays in preventive, integrated care for frail older people and in its evaluation research. Many professionals involved in integrated care aim specifically at improving ADL/IADL or at preventing functional decline, with limited effectiveness. An important question for practice, policy and research is whether we can expect a positive effect on ADL/IADL in preventive, integrated care at all. In fact, a recent systematic review showed that it is very difficult to influence ADL limitations in the older population (van Vorst et al., ). The QALY is another outcome that might be less suitable for determining cost-effectiveness for the community-dwelling frail older population. This outcome is widely used in the curative sector and is known for its comparability across populations and interventions (Drummond, Sculpher, Claxton, Stoddart, & Torrance, ). None of the interventions found an effect on health-related quality of life, and previous research has also suggested that it might be less appropriate for frail older people (Comans, Peel, Gray, & Scuffham, ; Makai, ). Our systematic review provides useful support for a shift from (physical) functioning to well-being in preventive, integrated care and, correspondingly, in its evaluation research. The well-being of informal caregivers should also be considered, since the role of informal caregivers has become more prominent in the care for frail older people (Grootegoed & Van Dijk, ). Primary care professionals are traditionally trained to adopt a monodisciplinary, disease-specific approach (Lette et al., ), but preventive, integrated care requires a more holistic approach, including an important role for well-being (Schuurmans, ; Valentijn et al., ). Previous research has identified dimensions of well-being for frail older people, such as affection and doing things that make you feel valued (Coast et al., ; Schuurmans, ), but more research is required, also on the well-being of informal caregivers. Our systematic review indicates that we possibly need to shift our focus from effectiveness in terms of clinical outcomes to the process of integrated care. Integration implies "bringing together or merging the elements or components that were formerly separate" (Kodner & Spreeuwenberg, ), and integrated care is one strategy designed to solve the fragmentation of care and the lack of continuity and co-ordination (Fabbricotti, ; Kodner, ). However, our review shows that the focus of research is mainly on health and healthcare utilisation outcomes rather than on the care process. The evidence thus far on care process outcomes is rather promising. Consequently, professionals, researchers and policy makers might need to shift their expectations of the influence of integrated care from health outcomes to achieving organisational aims such as maintaining continuity and integrating health, social and informal care. This requires further empirical work on valid measurement instruments for the care process (see also Bautista, Nurjono, Lim, Dessers, & Vrijhoef, ), as well as on outcomes for informal caregivers and professionals. Future research should also provide recommendations on the specific cost drivers of preventive, integrated care for frail older people. Researchers considered various types of costs to determine the cost-effectiveness of preventive, integrated interventions.
There seems to be some consensus on the consideration of hospital care, nursing home admissions, home care and primary care, but until now other types of care, such as paramedical care and different forms of social care (psychosocial care, meals on wheels, day care), have often been neglected. A final implication is that researchers might want to adopt a less static approach to research, since both integration and frailty are dynamic, complex processes. The evaluations are summative; researchers have taken two to four quantitative snapshots in time. However, it might be useful to monitor both the frail older people and the integration process more closely and continuously. Integration is very complex since it involves overcoming several barriers to integration (Kodner, ; Valentijn et al., ). Close continuous monitoring would also lead to more transparency on the specific contents of the interventions, particularly the follow-up, since the description of the interventions in the current type of evaluation research is limited (see also Eklund & Wilhelmson, ). Action research, which integrates research and practice in close co-operation, could be a future direction of study in order to improve daily care practice (Meyer, ).
Researchers have become increasingly aware of the complexity and heterogeneity of frailty (see also Eklund & Wilhelmson, ) and have recently distinguished subpopulations of physically frail older people (Lafortune, Béland, Bergman, & Ankri, ; Liu, ). These subpopulations could further unravel frailty and support professionals in daily practice. However, in evaluations of studies into preventive, integrated care, the population of frail older people is still considered a single group and no distinction is made between the characteristics of the frail older people. When the population of the intervention is more heterogeneous, it might be harder to achieve effectiveness (Almeida Mello et al., ; Ferrucci et al., ; Lette, Baan, van den Berg, & de Bruin, ). Accordingly, a possible explanation for the limited effectiveness of integrated care might be that it is more beneficial for certain subpopulations of frail older people; this hypothesis should be explored further. The second implication is that further research should provide better insight into the term “effectiveness” for community‐dwelling frail older people before extensive (expensive) preventive, integrated care interventions are designed, implemented and evaluated. It is crucial to explore what specific outcomes can be influenced for the frail older people—who are deteriorating in multiple domains of functioning—and their informal caregivers. Likewise, it is fundamental to formulate realistic expectations for what preventive, integrated care can achieve. Our systematic review challenges the important role that the physical domain of functioning plays in preventive, integrated care for frail older people and its evaluation research. Many professionals involved in integrated care aim specifically at improving ADL/IADL or at preventing functional decline, with limited effectiveness. An important question for practice, policy and research is whether we can expect a positive effect for ADL/IADL in preventive, integrated care at all. In fact, a recent systematic review showed that it is very difficult to influence ADL limitations for the older population (van Vorst et al., ). The QALY is another outcome that might be less suitable for determining cost‐effectiveness for the community‐dwelling frail older population. This outcome is widely used in the curative sector and is known for its comparability across populations and interventions (Drummond, Sculpher, Claxton, Stoddart, & Torrance, ). None of the interventions found an effect on health‐related quality of life, and previous research has also confirmed that it might be less appropriate for frail older people (Comans, Peel, Gray, & Scuffham, ; Makai, ). Our systematic review provides useful support for a shift from (physical) functioning to well‐being in preventive, integrated care and, correspondingly, its evaluation research. The well‐being of informal caregivers should also be considered since the role of informal caregivers has become more prominent in the care for frail older people (Grootegoed & Van Dijk, ). Primary care professionals are originally trained to adopt a monodisciplinary, disease‐specific approach (Lette et al., ) but preventive, integrated care requires a more holistic approach, including an important role for well‐being (Schuurmans, ; Valentijn et al., ).
Previous research has shown dimensions of well‐being for frail older people such as affection and doing things that make you feel valued (Coast et al., ; Schuurmans, ), but more research is required, including on the well‐being of informal caregivers. Our systematic review indicates that we possibly need to shift our focus from effectiveness in terms of clinical outcomes to the process of integrated care. Integration implies “bringing together or merging the elements or components that were formerly separate” (Kodner & Spreeuwenberg, ) and integrated care is one strategy designed to solve the fragmentation of care, lack of continuity and co‐ordination (Fabbricotti, ; Kodner, ). However, our review shows that the focus of research is mainly on health and healthcare utilisation outcomes rather than on the care process. The evidence thus far on care process outcomes is rather promising. Consequently, professionals, researchers and policy makers might need to shift their expectations of the influence of integrated care from health outcomes to achieving organisational aims such as maintaining continuity and integrating health, social and informal care. This requires further empirical work on valid measurement instruments for the care process (see also Bautista, Nurjono, Lim, Dessers, & Vrijhoef, ), as well as on outcomes for the informal caregivers and professionals. Future research should provide recommendations on specific cost drivers of preventive, integrated care for frail older people. Researchers considered various types of costs to determine the cost‐effectiveness of preventive, integrated interventions. There seems to be some consensus on the consideration of hospital care, nursing home admissions, home care and primary care, but until now other types of care such as paramedical care and different forms of social care (psychosocial care, meals on wheels, day care) have often been neglected. A final implication is that researchers might want to adopt a less static approach to research since both integration and frailty are dynamic, complex processes. The evaluations are summative; researchers have taken two to four quantitative snapshots in time. However, it might be useful to monitor both the frail older people and the integration process more closely and continuously. Integration is very complex since it involves overcoming several barriers to integration (Kodner, ; Valentijn et al., ). Close continuous monitoring would also lead to more transparency on the specific contents of the interventions, particularly the follow‐up, since the description of the interventions in the current type of evaluation research is limited (see also Eklund & Wilhelmson, ). Action research, which integrates research and practice in close co‐operation, could be a future direction of study in order to improve daily care practice (Meyer, ).
CONCLUSION
The diverse and high expectations for preventive, integrated care for community‐dwelling frail older people in research, policy and practice should be tempered slightly. Our systematic review does not provide a solid base of evidence, particularly for important policy aims such as preventing functional decline and institutionalisation. Effectiveness may be pursued in other outcomes, such as well‐being and care processes. The level of integration is not decisive since a higher level of integration does not seem to lead to better outcomes. More attention should be devoted to exploring effectiveness for subgroups of frail older people.
Researchers in integrated care should be more aware of the underlying principles of integrated care: they should integrate their research, consider continuity and differentiate between frail older people.
The authors thank Wichor Bramer, librarian from the Erasmus Medical Centre Rotterdam, for the help with the search terms and for designing the search syntaxes for all nine databases. WL screened the abstracts, reviewed the full texts, assessed the risk of bias, extracted the data and wrote the paper. IF screened the abstracts and reviewed full texts that met the inclusion criteria or where doubts arose. RH assessed the risk of bias of the included studies. IF and RH critically reviewed the content of the paper and contributed to revising it. All authors have approved the submitted version of the manuscript. |
Exploring Haemodialysis Nurses' Perceptions on Kidney Replacement Therapy Modality Education: A Framework Analysis | 56522d92-6954-4355-8b01-f44d80cd2120 | 11771708 | Patient Education as Topic[mh] | Introduction The choice of a treatment modality option for patients with kidney failure should be patient‐centred and incorporate patient values, clinical appropriateness, and the availability of treatment options (Rivara and Mehrotra ). It is essential to engage patients and their families to make shared decisions about modality selection. Treatment modality education typically occurs during the pre‐dialysis period as individuals approach kidney failure or need to start unplanned dialysis (Machowska et al. ). At this time, some patients find decision‐making too complex or cannot engage in education due to illness or emotional distress associated with the reality of reaching kidney failure (Combes, Sein, and Allen ). Choosing a dialysis modality requires much thought and consideration to maintain an acceptable quality of life for both the patient and their loved ones. Unfortunately, dialysis modality options are rarely revisited once patients start dialysis, although patients may wish to review their preferred treatment options if their social or clinical circumstances change. Friberg et al. highlighted the importance of providing repeated, comprehensive, high‐quality information to patients as they navigate the progression of their illness. Furthermore, Combes, Sein, and Allen stipulated that pre‐dialysis treatment decisions are considered temporary and should be revisited. Unfortunately, this ongoing education is inconsistent in practice, which could be contributing to patients remaining on the initial treatment modality they encounter in a hospital setting, which is usually in‐centre haemodialysis (ICHD) (Canadian Agency for Drugs and Technologies in Health ). Of all members of the care team, ICHD nurses spend the most time with patients receiving haemodialysis. These nurses are in a unique position to provide this education to patients. 1.1 Literature Review There have been many studies exploring patients' perceptions of modality education (Balzer et al. ; Cassidy et al. ; Dahlerus et al. ; Finderup et al. ; Friberg et al. ; Jennette et al. ; Landreneau and Ward‐Smith ; Morton et al. , ; Van Biesen et al. ). However, the literature on nurses' perspectives of ICHD and modality education is limited. Studies demonstrated that ICHD nurses have preferences about modality selection associated with their area of expertise (Tennankore et al. ). Firanek et al. demonstrated that ICHD nurses are partial to ICHD treatment because they perceive haemodialysis as a complex procedure, not a responsibility that a lay person can safely undertake at home. These perceptions contribute to patients starting and remaining on ICHD despite the significant benefits of home dialysis therapies. The quality of education that haemodialysis nurses receive may contribute to their perceptions of the different dialysis treatment options. Researchers indicated that ICHD nurses were in favour of receiving continuing education in home therapies (Ding ; Lauder et al. ; Phillips et al. ; Poinen et al. ; Tennankore ). Haemodialysis nurses may be more motivated to provide modality education to help patients make an informed modality choice (Phillips et al. ; Poinen et al. ; Schreiber, Chatoth, and Salenger ). An informed modality choice through shared decision‐making (Meijers et al. 
) could lead to patients switching modalities to more suitable options for their lifestyle. By understanding nurses' perceptions of modality education, it is possible to create interventions to support high‐quality modality education for patients.
Aim
We explored the research question: What are the perceptions of ICHD nurses on providing dialysis modality education to patients receiving ICHD in Alberta? The aim of this study was to examine in‐centre haemodialysis nurses' perceptions of modality education for patients receiving in‐centre haemodialysis, using the COM‐B model of behaviour change.
Methods
We completed this qualitative study using framework analysis (Ritchie and Spencer ) to analyse semi‐structured interviews with ICHD nurses about modality education.
3.1 Framework Analysis
Framework analysis is a qualitative methodology that provides a systematic and transparent approach to analysing data (Srivastava and Thomson ). According to Ritchie and Spencer , framework analysis can support researchers in describing and interpreting what is happening in a particular setting, using an established theory to support interpretation. Framework analysis is a form of thematic analysis that uses an organized structure to carry out a cross‐sectional analysis using a combination of data description and abstraction (Ritchie and Spencer ; Smith and Firth ). Although Ritchie and Spencer originally designed the approach with five distinct phases, Gale et al. expanded them to seven to ease application. Framework analysis enabled us to explore these study data while maintaining a clear audit trail, enhancing the rigour of the analytic process and the credibility of the study results (Ritchie and Lewis ).
3.2 Theoretical Framework: The COM‐B Model
In framework analysis, researchers use an analytical framework to systematically organize qualitative data to support answering the research question (Gale et al. ). We utilized the COM‐B model (Michie, van Stralen, and West ), a behaviour change theory, as the theoretical framework to guide the analytical process. In the COM‐B model, three components interact to influence behaviour (B): capability (C), opportunity (O) and motivation (M) (Michie, van Stralen, and West ). Changing behaviour involves changing one or more of these components in such a way as to change outcomes. Several studies utilizing the COM‐B model have demonstrated its value in assessing and implementing behaviour change interventions in healthcare settings (Byrne‐Davis et al. ; Handley et al. ; Moore et al. ; Virtanen et al. ). We chose the COM‐B model as our theoretical framework (Table ) because of its comprehensiveness as a behaviour change model to interpret nurses' perceptions of modality education.
3.3 Study Setting
We conducted this research in one Canadian province, with a single healthcare system and a population of about four million people. All dialysis care is publicly funded and available at no cost to the patient.
3.4 Sampling and Recruitment
We used purposive sampling (Guarte and Barrios ) to access nurses with direct experience of ICHD and modality education. We recruited 13 registered nurses (RNs) and licensed practical nurses (LPNs) who worked full‐time or part‐time in any urban or rural ICHD unit in the region. The number of recruited participants was guided by evidence‐based recommendations for sample sizes for interviews (Guest, Bunce, and Johnson ). All practicing professional nurses (RNs and LPNs) working full‐time or part‐time in ICHD units with at least 2 years of experience in the role were eligible to participate in the study. We excluded nurses who worked as casual staff or had less than 2 years of experience in this area, as well as students. Potential participants received an email via the local public health authority's information technology team, with consent from operational managers. Potential participants could then email the researchers to demonstrate their interest in participating. We did not contact participants directly until they expressed their interest.
We sent participants a consent form, which they could sign and return to participate in the study. After receiving the consent form, we scheduled participants for an interview.
3.5 Data Collection
Interviews are a common method of data collection when using framework analysis and/or the COM‐B model (Michie, Atkins, and West ; Ritchie and Spencer ). We conducted individual, semi‐structured Zoom interviews with participants. The interviews were digitally recorded using Zoom and professionally transcribed. The interviews took 30–60 min, beginning with the collection of demographic information, followed by open‐ended questions we created using the COM‐B model.
3.6 Data Analysis
Knowledge creation in framework analysis incorporates both deductive and inductive approaches to analysis (Gale et al. ). We employed a deductive approach to analyse these data based on the COM‐B model of behaviour change. Despite keeping an open mind for codes and themes that did not necessarily fall within the thematic framework, we did not identify any inductive themes during our coding process. We applied the stages of framework analysis proposed by Gale et al. to systematically guide data analysis and interpretation. We utilized NVivo as our data management software to facilitate data analysis. In a reflexive journal, we noted interpretations, ideas and concepts that came up through the data analysis process.
3.7 Ethical Considerations
Participants signed consent forms before completing interviews. We stored all the signed consent forms in a password‐protected cloud drive. We informed the participants that they could withdraw their consent to participate at any point throughout the study, up to 2 weeks after their interview. As a token of appreciation for their time and contribution to our work, we provided a $10 gift card to all participants via email. The Research Ethics Board at the University of Calgary reviewed and approved the study (REB23‐0345).
Results
In this section, we detail (a) the participants' characteristics, (b) the overarching deductive themes from elements of the COM‐B model and (c) the associated subthemes expanding on each element of the COM‐B model. We did not identify any inductive themes outside the COM‐B model. A total of 13 nurses participated in this research study, with their demographic data presented in Table . Table illustrates the components and subcomponents of the COM‐B model used as themes and subthemes in this study.
4.1 Capability
In the context of our study, physical and psychological abilities refer to physical stamina, physical environment, nurses' training, skills, knowledge and experiences that influenced ICHD nurses' delivery of personalized modality education.
4.1.1 Physical Capability
Participants identified privacy issues as impacting their physical ability to provide modality education in the ICHD setting. The layout of a dialysis unit provided easy access to patients in case of emergencies during haemodialysis, but it also made it hard for nurses and patients to discuss modality choices privately. A participant observed that ‘… it's not necessarily the most private discussion, but sometimes people do like privacy when they're discussing the reasons why certain options might not work for them’ (Participant 12). Privacy would allow individuals to speak more freely about their modality decisions, including aspects such as housing issues, medical conditions, or cost implications. Because these circumstances could be sensitive, the lack of privacy constrained ICHD nurses' ability to provide modality education.
4.1.2 Psychological Capability
The participants reported psychological capability as (a) nurses' need for modality education, (b) nurses' experiences and confidence in providing modality education and (c) nurses' bias towards ICHD. The perceived lack of adequate education on dialysis modalities was the participants' most cited constraint relating to modality education. Participants reported that they did not have adequate knowledge about modalities to provide patients with modality education. A participant expressed what she wanted in order to become more proficient at modality education: Have someone come speak to the staff. If there was an education day that was just for nurses to understand the different modalities because not everyone's worked in other fields or other parts of dialysis. Then, [ICHD nurses] would be able to present that to the patient better because they've had a day [of training] to understand the different modalities and be able to articulate it to the patient but in layman's terms so they'd understand. (Participant 13) This quote illustrates that ICHD nurses are cognisant of the need to be educated about other modalities to provide patients with high‐quality modality education. Participants also expressed a desire to receive experiential knowledge where they could learn about other dialysis treatment options by spending time in these practice areas. One participant expressed their experience working in home dialysis after ICHD and recommended: I think [ICHD nurses] need to know what's done in the [home dialysis] units and the benefits to the patients of PD and home hemo. I kind of understood until I went to home hemo and I was like “Wow, this can be amazing for people.” But until then, it's kind of an abstract thought until you actually see it in practice and in real life.
(Participant 9) This participant reported having a deeper understanding of home dialysis modalities and fully appreciated the benefits to patients after learning about the modalities directly. Providing these opportunities to more nurses could enhance their knowledge of modality options for patients. Participants reported their confidence increased with additional experience working in other areas of nephrology or with increased time spent as a renal nurse. A participant reported that ‘…some of the senior staff, they have more confidence, and they have sat down and actually elaborated more on how [a patient] can go for [different kinds of dialysis]’ (Participant 2). This participant observed that nurses with more experience were confident to initiate discussions about modalities with patients. Participants worried they would harm their relationships with clients if they did not have all the answers about modality education, because patients might lose trust in nurses. Participants prioritized keeping the nurse–client relationship intact rather than taking a risk by providing modality education when they did not feel comfortable answering patients' questions. Participants in this study acknowledged that they had some bias towards ICHD. A participant expressed this insight by stating, ‘I think to have in‐centre nurses do it, it's also—it's kind of biased too. We're skewed to our [modality] option’ (Participant 13). This quote represents a prevalent sentiment among ICHD nurses, suggesting a preference for their own modality. This bias may influence nurses' capability in deciding to provide modality education to patients about home dialysis options.
4.2 Opportunity
Participants identified aspects of the physical and social environment that influenced whether modality education could occur within the ICHD setting.
4.2.1 Physical Opportunity
Participants revealed that the physical opportunities that influenced modality education in the ICHD setting were (a) time, (b) staffing issues and (c) the availability of clinical resources. Participants from urban ICHD settings primarily reported that modality education was not a priority for their daily work. They reported prioritizing the patient's dialysis process during their 4‐h visit, meaning they lacked the time for other activities like modality education. One participant outlined her daily priorities as such: What was the priority list? It went [assessing and getting patients on haemodialysis treatment], medications, of course, drawing bloodwork, following up on bloodwork, and then dressings and doublechecks, and then the [list of other scheduled tasks to be completed]. So, if we get to the checklist and we're still working on iron protocols or anaemia protocols, potassium protocols, or giving antibiotics, modality education just has to wait until the next run. So, [modality education] doesn't make it on the priority list unless all those other things are done first. (Participant 13) Participants reported that they addressed patients' dialysis treatment needs before other needs. Therefore, this participant assessed modality education as a ‘nice to have’ rather than an ‘essential to have’ activity. Inadequate staffing was highlighted by several participants from urban areas as a barrier to providing modality education. Participants felt they had to ration their work to meet patients' basic haemodialysis care needs. Participants articulated that it could be beneficial to assign a nurse with more training to provide modality education.
A participant suggested: Maybe have a modality super user or champion, if you will, on the unit, who can be a resource for the staff or can go to the patient and sit down (Participant 12). The participant also noted that a modality champion would save patients' time by having this appointment during a dialysis session. A lack of supporting resources was identified by participants as another barrier to providing modality education. Teaching resources for modality education could include items such as pamphlets, patient testimonials or dialysis equipment to demonstrate. Participants said that they were not aware of teaching resources or that they were not readily available. You can find it, but I honestly don't think, for the new nurses coming in, or for people who haven't worked in other areas, that they would really know how to get that information. (Participant 12) This participant's response indicated that resources are not easily accessible for nurses to support modality teaching. Participants understood that teaching resources for modality education were useful, but they did not know how to find them for patients.
4.2.2 Social Opportunity
The social opportunities nurses identified as impacting modality education were the increased involvement of physicians and families. Many participants felt that the nephrologist should be involved in modality decisions for patients, as the nephrologist would ultimately be prescribing the other therapies. A participant explained what they told their patients who might express interest in a different modality: ‘…It might be worth mentioning with your primary nephrologist. And they actually do. That's how I get them to switch’ (Participant 10). It is possible that participants did not feel qualified to take on modality discussions on their own with the patients because these discussions were not perceived as part of a nursing scope of practice. Participants suggested that peer support groups could be a great way for patients to learn about different modalities. Patients could learn about the potential impact of a different modality for their specific life situation from another patient who faces similar issues. ‘For the patients if they had some, patient support groups, that we would roll out that would provide advice to other patients or suggestions or share their experiences’ (Participant 12). This participant illustrated the potential benefits of peer support when it came to modality education.
4.3 Motivation
Participants identified aspects of motivation that were both conscious intentions (reflective) and emotional reactions (automatic).
4.3.1 Reflective Motivation
Participants demonstrated two areas of reflective motivation that impacted their delivery of modality education: (a) participants' perceived roles in modality education and (b) participants' perceived need for modality education in ICHD. Participants agreed that they should take part in modality education, but there was no consensus on what nurses' roles should be. The participants who had experience in other modalities reported that nurses should take an active role in modality education to encourage home modalities. These participants regularly assessed patients' need for modality education and provided the education within a care context.
One participant explained how they approached modality education in the ICHD unit: I see our role is to pick up on these clues and just assess patient knowledge and what their lifestyle goals are, what their knowledge base is, what fits with their current life, and then helping support them as they need it. (Participant 6) This participant evaluated the patient's suitability for ICHD by assessing the fit between the modality and the patient's lifestyle and preference. The participant's approach to modality education, tailored to the patient's specific needs, was likely to keep the patient engaged. The participant asked relevant questions as part of their patient assessment, which went beyond immediate HD needs. This participant recognized that modality choices can change depending on the patients' circumstances. Another participant thought modality education should not be the role of the ICHD nurse because of the barriers present. This participant reported that it would be a disservice to patients to attempt modality education: To be honest with you, no. I don't think that in‐centre nurses should do the teaching for modalities because, one, there isn't much time to do that education. That's something that—this could be life changing. It's something that they should be able to sit down with a nurse individually without distraction to get that education, so they can get informed information about what their options are…we can do a quick run‐through, but I don't think it's fair to the patient unless they get all the options and all the pros and cons. (Participant 13) This participant's lack of motivation stemmed from a lack of time and the risk of providing partial or inaccurate information to patients. The participant considered modality education to be an important conversation that was best suited for a different context. These views contrasted with those of participants with experience in other dialysis modalities, those who practiced at rural sites, and those with more years of experience, who did think there was a need for modality education in the ICHD setting.
4.3.2 Automatic Motivation
Nurses articulated two main aspects that represented automatic motivation. These were (a) modality education as a mandatory task to be completed and (b) nurse‐perceived patient barriers. Participants' reports of their workplace expectations around modality education varied. Some participants stated that they were expected to review modalities with patients every 8–16 weeks. Others reported that they did not have to review modalities with patients at all. A participant observed that staff would be more motivated to perform modality education if it were on a list of activities to be completed by nurses. ‘I think a checklist of tasks that have to be done for their job to be complete is a huge motivating factor for nurses. If it is a task, we tend to get it done’ (Participant 12). Although the practice of mandatory modality education varied among sites, this participant's statement suggested that nurses' motivation to engage in modality discussions could be prompted by external factors like a checklist. Many nurses reported being demotivated to talk about modality options with patients because of some patient‐related factors. Some participants admitted to not wanting to attempt modality education because they were worried that the patient would not understand the information provided.
Because for me, too, it's culture and language and their background, and if this is an English‐speaking person, then I can go at [modality education] really hard. But if it's not an English‐speaking person, then my hands are really tied, especially at the beginning because with so much other things to do, to get an interpreter to help with that too, if the family is not there is almost impossible. (Participant 1) This participant cited language barriers as a deterrent to providing modality education. The participant did not want to confuse or burden the patient with incomplete information, which limited the nurse's motivation to provide modality education.
4.4 Behaviour
In this section, we highlight the types of behaviours that participants reported. We noted three distinct types of behaviours: (a) nurses providing comprehensive modality education, (b) nurses presenting modality options with or without context and (c) nurses not providing modality education. Some participants with experience in other modalities were found to be more prepared, and even enthusiastic, to provide modality education. They possessed the capability for modality education, mostly from experiential knowledge, and took opportunities throughout the day to speak with patients about their modality options. These participants were motivated to discuss modalities to provide patients with the information that they needed to make an informed modality choice. One participant explained how she approached modality education in the ICHD unit: So, I think the nurse's role is to support our patients in whatever choices in renal replacement options to make sure that they receive the education that they need to make informed decisions regarding their modality, and make sure that any questions that they have are answered in a non‐biased way. (Participant 6) This participant perceived modality education as an integral part of their job because they were aware of the benefits of choosing an optimal modality option for patients' way of life. Other participants recognized the need for modality education but wanted to provide options for patients outside of their haemodialysis treatment times. These participants were motivated to improve patients' quality of life and health outcomes and to make more beds available in‐centre. One participant reported that: I don't get into great detail about any of [the modalities]. We just give the real basics because, honestly, I don't really know a lot of detail, because I've only ever been in in‐centre hemo. So I'm just like, “If this is something that will maybe work for you, let's get you in touch with some resources or some people.” (Participant 4) Nurses who approached modality education in this manner reported they were doing the best they could with the resources available to them, recognizing that there were other resources available. The nurses found opportunities to speak with patients about their modality options, even if they could not provide comprehensive education at the bedside. Some participants avoided modality education altogether. These participants perceived that ICHD was the patients' choice and that it was not the role of the nurse to try and change their minds. They did not consider the HD unit as a place where modality education should be happening, nor did they think that modality education was in their scope of practice. A participant explained that ‘Any [ICHD patient] who is on dialysis, it is his choice. He just came for dialysis…he chose this [modality]’ (Participant 3).
Another participant directed patients to their nephrologist if they showed interest in another modality: ‘Talk to your doctor about it. Let your doctor know. That's basically all I can do’ (Participant 1). These participants reported that they wanted to support their patients' choices; they did not see the ICHD unit as a conducive environment for modality education. Instead, they saw modality education as the nephrologist's role and prioritized preserving therapeutic relationships, focusing on providing haemodialysis.
Capability In the context of our study, physical and psychological abilities refer to physical stamina, physical environment, nurses' training, skills, knowledge and experiences that influenced ICHD nurses' delivery of personalized modality education. 4.1.1 Physical Capability Participants identified privacy issues as impacting their physical ability to provide modality education in the ICHD setting. The layout of a dialysis unit provided easy access to patients in case of emergencies during haemodialysis, but it also made it hard for nurses and patients to discuss modality choices privately. A participant observed that ‘… it's not necessarily the most private discussion, but sometimes people do like privacy when they're discussing the reasons why certain options might not work for them’ (Participant 12). Privacy would allow individuals to speak more freely about their modality decisions, including aspects such as housing issues, medical conditions, or cost implications. These circumstances could be sensitive, the lack of privacy constrain ICHD nurses' ability to provide modality education. 4.1.2 Psychological Capability The participants reported psychological capability as (a) nurses' need for modality education, (b) nurses' experiences and confidence in providing modality education and (c) nurses' bias towards ICHD. The perceived lack of adequate education on dialysis modalities was the participants' most cited constraint relating to modality education. Participants reported that they did not have adequate knowledge about modalities to provide patients with modality education. A participant expressed what she wanted to be more proficient at modality education: Have someone come speak to the staff. If there was an education day that was just for nurses to understand the different modalities because not everyone's worked in other fields or other parts of dialysis. Then, [ICHD nurses] would be able to present that to the patient better because they've had a day [of training] to understand the different modalities and be able to articulate it to the patient but in layman's terms so they'd understand. (Participant 13) This quote illustrates that ICHD nurses are cognisant of the need to be educated about other modalities to provide patients with high‐quality modality education. Participants also expressed a desire to receive experiential knowledge where they could learn about other dialysis treatment options by spending time in these practice areas. One participant expressed their experience working in home dialysis after ICHD and recommended: I think [ICHD nurses] need to know what's done in the [home dialysis] units and the benefits to the patients of PD and home hemo. I kind of understood until I went to home hemo and I was like “Wow, this can be amazing for people.” But until then, it's kind of an abstract thought until you actually see it in practice and in real life. (Participant 9) This participant reported having a deeper understanding of home dialysis modalities and fully appreciated the benefits to patients after learning about the modalities directly. Providing these opportunities to more nurses could enhance their knowledge of modality options for patients. Participants reported their confidence increased with additional experience working in other areas of nephrology or with increased time spent as a renal nurse. 
A participant reported that ‘…some of the senior staff, they have more confidence, and they have sat down and actually elaborated more on how [a patient] can go for [different kinds of dialysis]’ (Participant 2). This participant observed that nurses with more experience were confident to initiate discussions about modalities with patients. Participants worried they would harm their relationships with clients if they did not have all the answers about modality education, because the patient may lose trust in nurses. Participants were privileged to keep the nurse–client relationship intact rather than taking a risk to provide modality education when the nurses did not feel comfortable answering patients' questions. Participants in this study acknowledged that they have some bias towards ICHD. A participant expressed this insight by stating, ‘I think to have in‐centre nurses do it, it's also—it's kind of biased too. We're skewed to our [modality] option’ (Participant 13). This quote represents a prevalent sentiment among ICHD nurses, suggesting a preference for their modality. This bias may influence nurses' capability in deciding to provide modality education to patients about home dialysis options.
Physical Capability Participants identified privacy issues as impacting their physical ability to provide modality education in the ICHD setting. The layout of a dialysis unit provided easy access to patients in case of emergencies during haemodialysis, but it also made it hard for nurses and patients to discuss modality choices privately. A participant observed that ‘… it's not necessarily the most private discussion, but sometimes people do like privacy when they're discussing the reasons why certain options might not work for them’ (Participant 12). Privacy would allow individuals to speak more freely about their modality decisions, including aspects such as housing issues, medical conditions, or cost implications. These circumstances could be sensitive, the lack of privacy constrain ICHD nurses' ability to provide modality education.
Psychological Capability The participants reported psychological capability as (a) nurses' need for modality education, (b) nurses' experiences and confidence in providing modality education and (c) nurses' bias towards ICHD. The perceived lack of adequate education on dialysis modalities was the participants' most cited constraint relating to modality education. Participants reported that they did not have adequate knowledge about modalities to provide patients with modality education. A participant expressed what she wanted to be more proficient at modality education: Have someone come speak to the staff. If there was an education day that was just for nurses to understand the different modalities because not everyone's worked in other fields or other parts of dialysis. Then, [ICHD nurses] would be able to present that to the patient better because they've had a day [of training] to understand the different modalities and be able to articulate it to the patient but in layman's terms so they'd understand. (Participant 13) This quote illustrates that ICHD nurses are cognisant of the need to be educated about other modalities to provide patients with high‐quality modality education. Participants also expressed a desire to receive experiential knowledge where they could learn about other dialysis treatment options by spending time in these practice areas. One participant expressed their experience working in home dialysis after ICHD and recommended: I think [ICHD nurses] need to know what's done in the [home dialysis] units and the benefits to the patients of PD and home hemo. I kind of understood until I went to home hemo and I was like “Wow, this can be amazing for people.” But until then, it's kind of an abstract thought until you actually see it in practice and in real life. (Participant 9) This participant reported having a deeper understanding of home dialysis modalities and fully appreciated the benefits to patients after learning about the modalities directly. Providing these opportunities to more nurses could enhance their knowledge of modality options for patients. Participants reported their confidence increased with additional experience working in other areas of nephrology or with increased time spent as a renal nurse. A participant reported that ‘…some of the senior staff, they have more confidence, and they have sat down and actually elaborated more on how [a patient] can go for [different kinds of dialysis]’ (Participant 2). This participant observed that nurses with more experience were confident to initiate discussions about modalities with patients. Participants worried they would harm their relationships with clients if they did not have all the answers about modality education, because the patient may lose trust in nurses. Participants were privileged to keep the nurse–client relationship intact rather than taking a risk to provide modality education when the nurses did not feel comfortable answering patients' questions. Participants in this study acknowledged that they have some bias towards ICHD. A participant expressed this insight by stating, ‘I think to have in‐centre nurses do it, it's also—it's kind of biased too. We're skewed to our [modality] option’ (Participant 13). This quote represents a prevalent sentiment among ICHD nurses, suggesting a preference for their modality. This bias may influence nurses' capability in deciding to provide modality education to patients about home dialysis options.
Opportunity Participants identified opportunity as impacting a conducive physical and social environment for modality education to occur within the ICHD setting. 4.2.1 Physical Opportunity Participants revealed that the physical opportunities that influenced modality education in the ICHD setting were (a) time, (b) staffing issues and (c) the availability of clinical resources. Participants from urban ICHD settings primarily reported that modality education was not a priority for their daily work. They reported prioritizing the patient's dialysis process during their 4‐h visit, meaning they lacked the time for other activities like modality education. One participant outlined her daily priorities as such: What was the priority list? It went [assessing and getting patients on haemodialysis treatment], medications, of course, drawing bloodwork, following up on bloodwork, and then dressings and doublechecks, and then the [list of other scheduled tasks to be completed]. So, if we get to the checklist and we're still working on iron protocols or anaemia protocols, potassium protocols, or giving antibiotics, modality education just has to wait until the next run. So, [modality education] doesn't make it on the priority list unless all those other things are done first. (Participant 13) Participants reported that they addressed patients' dialysis treatment needs before other needs. Therefore, this participant assessed modality education as a ‘nice to have’ rather than an ‘essential to have’ activity. Inadequate staffing was highlighted by several participants from urban areas as a barrier to providing modality education. Participants felt they had to ration their work to meet patients' basic haemodialysis care needs. Participants articulated that it could be beneficial to assign a nurse with more training to provide modality education. A participant suggested: Maybe have a modality super user or champion, if you will, on the unit, who can be a resource for the staff or can go to the patient and sit down (Participant 12). The participant also identified that a modality champion would also save patients' time by having this appointment during a dialysis session. A lack of supporting resources were identified by participants as another barrier to providing modality education. Teaching resources for modality education could include items such as pamphlets, patient testimonials or dialysis equipment to demonstrate. Participants said that they were not aware of teaching resources or that they were not readily available. You can find it, but I honestly don't think, for the new nurses coming in, or for people who haven't worked in other areas, that they would really know how to get that information. (Participant 12) This participant's response indicated that resources are not easily accessible for nurses to support modality teaching. Participants understood that teaching resources for modality education were useful, but they did not know how to find them for patients. 4.2.2 Social Opportunity The social opportunities nurses identified as impacting modality education were the increased involvement of physicians and families. Many participants felt that the nephrologist should be involved in modality decisions for patients as the nephrologist would ultimately be prescribing the other therapies. A participant explained what they told their patients who might express interest in a different modality: ‘…It might be worth mentioning with your primary nephrologist. And they actually do. 
That's how I get them to switch’ (Participant 10). It is possible that participants did not feel qualified to take on modality discussions on their own with the patients because these discussions were not perceived as part of a nursing scope of practice. Participants suggested that peer support groups could be a great way for patients to learn about different modalities. Patients could learn about the potential impact of a different modality for their specific life situation from another patient who faces similar issues. ‘For the patients if they had some, patient support groups, that we would roll out that would provide advice to other patients or suggestions or share their experiences’ (Participant 12). This participant illustrated the potential benefits of peer support when it came to modality education.
Physical Opportunity Participants revealed that the physical opportunities that influenced modality education in the ICHD setting were (a) time, (b) staffing issues and (c) the availability of clinical resources. Participants from urban ICHD settings primarily reported that modality education was not a priority for their daily work. They reported prioritizing the patient's dialysis process during their 4‐h visit, meaning they lacked the time for other activities like modality education. One participant outlined her daily priorities as such: What was the priority list? It went [assessing and getting patients on haemodialysis treatment], medications, of course, drawing bloodwork, following up on bloodwork, and then dressings and doublechecks, and then the [list of other scheduled tasks to be completed]. So, if we get to the checklist and we're still working on iron protocols or anaemia protocols, potassium protocols, or giving antibiotics, modality education just has to wait until the next run. So, [modality education] doesn't make it on the priority list unless all those other things are done first. (Participant 13) Participants reported that they addressed patients' dialysis treatment needs before other needs. Therefore, this participant assessed modality education as a ‘nice to have’ rather than an ‘essential to have’ activity. Inadequate staffing was highlighted by several participants from urban areas as a barrier to providing modality education. Participants felt they had to ration their work to meet patients' basic haemodialysis care needs. Participants articulated that it could be beneficial to assign a nurse with more training to provide modality education. A participant suggested: Maybe have a modality super user or champion, if you will, on the unit, who can be a resource for the staff or can go to the patient and sit down (Participant 12). The participant also identified that a modality champion would also save patients' time by having this appointment during a dialysis session. A lack of supporting resources were identified by participants as another barrier to providing modality education. Teaching resources for modality education could include items such as pamphlets, patient testimonials or dialysis equipment to demonstrate. Participants said that they were not aware of teaching resources or that they were not readily available. You can find it, but I honestly don't think, for the new nurses coming in, or for people who haven't worked in other areas, that they would really know how to get that information. (Participant 12) This participant's response indicated that resources are not easily accessible for nurses to support modality teaching. Participants understood that teaching resources for modality education were useful, but they did not know how to find them for patients.
Social Opportunity The social opportunities nurses identified as impacting modality education were the increased involvement of physicians and families. Many participants felt that the nephrologist should be involved in modality decisions for patients as the nephrologist would ultimately be prescribing the other therapies. A participant explained what they told their patients who might express interest in a different modality: ‘…It might be worth mentioning with your primary nephrologist. And they actually do. That's how I get them to switch’ (Participant 10). It is possible that participants did not feel qualified to take on modality discussions on their own with the patients because these discussions were not perceived as part of a nursing scope of practice. Participants suggested that peer support groups could be a great way for patients to learn about different modalities. Patients could learn about the potential impact of a different modality for their specific life situation from another patient who faces similar issues. ‘For the patients if they had some, patient support groups, that we would roll out that would provide advice to other patients or suggestions or share their experiences’ (Participant 12). This participant illustrated the potential benefits of peer support when it came to modality education.
Motivation Participants identified aspects of motivation that were both self‐conscious intentions (reflective) and those that were emotional reactions (automatic).
Reflective Motivation Participants demonstrated two areas of reflective motivation that impacted their delivery of modality education: (a) participants' perceived roles in modality education and (b) participants' perceived need for modality education in ICHD. Participants agreed that they should take part in modality education, but there was no consensus on what nurses' roles should be. The participants who had experience in other modalities reported that nurses should take an active role in modality education to encourage home modalities. These participants regularly assessed patients' need for modality education and provided the education within a care context. One participant explained how they approached modality education in the ICHD unit: I see our role is to pick up on these clues and just assess patient knowledge and what their lifestyle goals are, what their knowledge base is, what fits with their current life, and then helping support them as they need it. (Participant 6) This participant assessed the patient's suitability for ICHD by assessing the fit between modality and the patient's lifestyle and preference. The participant's approach to modality education, tailored to the patient's specific needs was likely to keep the patient engaged. The participant asked relevant questions as part of their patient assessment, which went beyond immediate HD needs. This participant recognized that modality choices can change depending on the patients' circumstances. Another participant thought modality education should not be the role of the ICHD nurse because of the barriers present. This participant reported that it would be a disservice to patients to attempt modality education: To be honest with you, no. I don't think that in‐centre nurses should do the teaching for modalities because, one, there isn't much time to do that education. That's something that—this could be life changing. It's something that they should be able to sit down with a nurse individually without distraction to get that education, so they can get informed information about what their options are…we can do a quick run‐through, but I don't think it's fair to the patient unless they get all the options and all the pros and cons. (Participant 13) This participant's lack of motivation was because of a lack of time and the risk of providing partial or inaccurate information to patients. The participant considered modality education to be an important conversation that was best suited for a different context. These views contrasted with participants with experience in other dialysis modalities, who practiced at rural sites, and those with more years of experience, who did think there was a need for modality education in the ICHD setting.
Automatic Motivation Nurses articulated two main aspects that represented automatic motivation. These were (a) modality education as a mandatory task to be completed and (b) nurse‐perceived patient barriers. Participants' reports of their workplace expectations around modality education varied. Some participants stated that they were expected to review modalities with patients every 8–16 weeks. Others reported that they did not have to review modalities with patients at all. A participant observed that staff would be more motivated to perform modality education if it were on a list of activities to be completed by nurses. ‘I think a checklist of tasks that have to be done for their job to be complete is a huge motivating factor for nurses. If it is a task, we tend to get it done’ (Participant 12). Although the practice of mandatory modality education varied among sites, this participant's statement suggested that nurses' motivation to engage in modality discussions could be prompted by external factors like a checklist. Many nurses reported being demotivated to talk about modality options with patients because of some patient‐related factors. Some participants admitted to not wanting to attempt modality education because they were worried that the patient would not understand the information provided. Because for me, too, it's culture and language and their background, and if this is an English‐speaking person, then I can go at [modality education] really hard. But if it's not an English‐speaking person, then my hands are really tied, especially at the beginning because with so much other things to do, to get an interpreter to help with that too, if the family is not there is almost impossible. (Participant 1). This participant cited language barriers as a deterrent to providing modality education. The participant did not want to confuse or burden the patient with incomplete information, which limited the nurse's motivation to provide modality education.
Behaviour In this section, we highlight the types of behaviours that participants reported. We noted three distinct types of behaviours: (a) nurses providing comprehensive modality education, (b) nurses presenting modality options with or without context and (c) nurses not providing modality education. Some participants with experience in other modalities were found to be more prepared, even enthusiastic to provide modality education. They possessed the capability for modality education, mostly from experiential knowledge, and took opportunities throughout the day to speak with patients about their modality options. These participants were motivated to discuss modalities to provide patients with the information that they need to make an informed modality choice. One participant explained how she approaches modality education in the ICHD unit: So, I think the nurse's role is to support our patients in whatever choices in renal replacement options to make sure that they receive the education that they need to make informed decisions regarding their modality, and make sure that any questions that they have are answered in a non‐biased way. (Participant 6) This participant perceived modality education as an integral part of their job because they had an awareness of the benefits of choosing an optimal modality option for patients' way of life. Other participants recognized the need for modality education but wanted to provide options for patients outside of their haemodialysis treatment times. These participants were motivated to improve patients' quality of life and health outcomes and to make more beds available in‐centre. One participant reported that: I don't get into great detail about any of [the modalities]. We just give the real basics because, honestly, I don't really know a lot of detail, because I've only ever been in in‐centre hemo. So I'm just like, “If this is something that will maybe work for you, let's get you in touch with some resources or some people.” (Participant 4) Nurses who approached modality education in this manner reported they were doing the best they could with the resources available to them, recognizing that there were other resources available. The nurses found opportunities to speak with patients about their modality options, even if they could not provide comprehensive education at the bedside. Some participants avoided modality education altogether. These participants perceived that ICHD was the patients' choice and that it was not the role of the nurse to try and change their minds. They did not consider the HD unit as a place where modality education should be happening, nor did they think that modality education was in their scope of practice. A participant explained that ‘Any [ICHD patient] who is on dialysis, it is his choice. He just came for dialysis…he chose this [modality]’ (Participant 3). Another participant directed patients to their nephrologist if they showed interest in another modality ‘Talk to your doctor about it. Let your doctor know. That's basically all I can do’ (Participant 1). These participants reported that they wanted to support their patients' choices; they did not see the ICHD unit as a conducive environment for modality education. Instead, these participants saw modality education as the nephrologist's role. These participants prioritized preserving therapeutic relationships, focusing on providing haemodialysis.
Discussion The categories in the COM‐B model (Michie, Atkins, and West) were a helpful organizing framework to understand nurses' experiences. Participants confirmed that ICHD nurses require more education about dialysis modalities other than their own to be able to educate patients confidently and effectively. This finding is consistent with other studies. Phillips et al. supported this finding, indicating that when nurses are educated about other modalities, they feel better prepared to address the topic with patients. Cassidy et al. identified the availability of resources as an influencing factor in dialysis modality decision‐making. Structured educational programmes have been shown to increase knowledge about dialysis modalities and impact modality choice selection in patients (Schanz et al.). Little is known regarding specific attributes of educational programmes that are most successful, as no single approach has emerged as a best practice (St. Clair Russell and Boulware). Teaching resources could incorporate some structure in the content of information delivered to patients. In our study, ICHD nurses had varied perceptions about their role in modality education, depending on their comfort level and experience. Poinen et al. stipulated that the focus of the ICHD has not been patient education, but rather the management of the nurses' own dialysis modality. Many other studies demonstrated that ICHD nurses often lacked the confidence and knowledge necessary to offer effective guidance to patients on choosing the appropriate modality (Firanek et al.; Lauder et al.; Phillips et al.; Poinen et al.; Tennankore et al.). Some participants saw modality education as the role of the nephrologists. Although nephrologists have a role in modality education, it still falls within nurses' scope of practice (Canadian Council for Practical Nurse Regulators; College of Registered Nurses of Alberta). Patient education on dialysis modalities is part of the ICHD nurses' role in holistic care provision, as it offers patients other treatment options that they may find more appropriate in coping with kidney failure. Participants highlighted a lack of time for modality education. Poinen et al. affirmed that ICHD nurses faced significant time constraints with their workload, such as rotating patients and shortened patient interactions, which required nurses to prioritize immediate dialysis concerns. Studies have supported the use of dedicated modality educators or training champions within renal programmes and have demonstrated increased uptake of home dialysis modalities (Fortnum and Ludlow; Wilson, Crandall, and Harwood). Using these champions would allow for high‐quality and consistent education to be delivered by appropriately trained staff who have enough time and the necessary resources to provide the education. In our study, some participants expressed that ICHD nurses do not see the need for modality education. Other studies showed similar findings, demonstrating that nurses specializing in ICHD tend to believe ICHD is best for patients (Tennankore et al.). Firanek et al. also illustrated that ICHD nurses tend to favour ICHD treatment due to their perception that haemodialysis is a complex procedure with associated risks, making it unsuitable for laypersons to manage safely at home. In turn, nurses may communicate misperceptions regarding these modalities to patients (Firanek et al.).
ICHD nurses' biases against other dialysis modalities could be a contributing factor to why nurses do not provide modality education.
Implications for Practice The COM‐B model could inform approaches to modality education interventions in the ICHD setting. Based on the study findings, practitioners could consider improving capability by providing ICHD nurses with education about other modalities, interprofessional discussions about roles in modality education, and providing ICHD nurses with opportunities to observe other dialysis modalities. To strengthen opportunity, practitioners could have a clear process map for patient modality education referrals, have printed resources about modalities readily available, and conduct and document family conversations about patients' modality preferences. Practitioners could also support motivation by providing education through the lens of improving patients' clinical and lifestyle outcomes and by strengthening therapeutic relationships between nurses and patients.
Limitations This study is specific to the research context and needs to be applied with caution in other practice areas. However, renal programmes that find similar issues around modality education may benefit from the study findings and apply them to their practice. We did not have any nurses with 2–5 years of experience sign up for the study. Although the more experienced nurses talked about the perspective of nurses with limited experience, it would have been more informative to hear directly from nurses with less than 6 years of experience. The study had 13 participants. Future studies may expand on this number to gain richer insights. Additionally, shared decision‐making is an essential aspect of dialysis modality selection. Studying this concept was beyond the scope of this study but could be explored in future studies.
Conclusion Nurses perceived that they could have a role in modality education, but had different views on what this role should be. ICHD nurses face some barriers that hinder engagement in modality education such as knowledge deficits, a lack of experience with home modalities and limited patient teaching resources. Factors that favoured modality education were strong nurse–patient relationships and previous experience with other modalities. The COM‐B model was a fit with ICHD nurses' perceptions and could help guide interventions.
Elke Jaibeeh Barah: conceptualization, methodology, validation, formal analysis, investigation, writing–original draft and editing and review, project administration, visualization, funding acquisition. Jennifer Jackson: conceptualization, writing–original draft and editing and review, resources, supervision, project administration.
The authors declare no conflicts of interest.
Applying an equity lens to pharmacogenetic research and translation to under‐represented populations | 86d57c6a-daac-40ab-a6e7-288b66174810 | 8604241 | Pharmacology[mh] | Translation of genetic research into clinical practice is currently being implemented as precision health, while the race toward full clinical implementation across practice settings is expanding beyond academic‐based health institutions. When available, genetic information is being used as an accepted, evidence‐based biomarker to optimize care that is quickly gaining support and interest from patients, providers, and payers. When genetic information is not available, race, ethnicity, and family history serve as clinical proxies. In this review, we use the term “European” to represent the biogeographical ancestry group that includes populations primarily of European descent, including European Americans and that could be referenced elsewhere as “White” or “Caucasian.” Due to the inherent bias and complexities of overgeneralization of racial categorization, and the intricacies of using the terms “race,” “ethnicity,” and “genetic ancestry, we use the terms “genetic ancestry” and “genetic ancestry group” to describe the population from which the individual’s recent biological ancestors originated. , Genetics influence an individual’s susceptibility to certain disease states but can also contribute to the wide variability observed with medication response. Pharmacogenetics (PGx) uses information about genes that encode proteins involved in pharmacokinetics, pharmacodynamics, and hypersensitivity reactions to guide clinical decision making to optimize medication therapy selection. Using PGx information to guide clinical decisions parallels the use of other clinical information, similar to how liver and kidney function guides medication therapy decisions. For example, knowing the presence of a CYP2C19 loss of function allele, such as *2 or *3, can help guide antiplatelet therapy decisions as the prodrug clopidogrel requires bioactivation to the active metabolite predominantly by CYP2C19. However, response variability observed from medications due to underlying genetic differences can vary between genetic ancestry groups. Allele frequencies in pharmacogenes differ across genetic ancestry groups and can even differ between subgroups of a specific population, as evidenced by variation in the CYP2C19 *2 allele which varies in frequency from 5.7% to 49.4% within Asian ancestry. With allele frequencies differing across genetic ancestry groups, identification of variants in pharmacogenes that are clinically relevant for that population presents challenges. Despite the relative infancy of the genomic era in healthcare, the outsized influence of European‐based research is apparent. A recent review of genomewide association studies (GWAS) found that 78% of individuals included were of European descent with a small percentage representing Asian, African, and Hispanics, and less than 1% representing all other ethnicities. This review concluded that the bias of European‐based genetic research translated to a non‐European population can result in heterogeneous treatment outcomes. 
Several reviews have discussed the lack of under‐represented populations in PGx studies, all of which note an overwhelming lack of genetic diversity that will impede equitable clinical implementation, both through inappropriate application of gene‐based dosing algorithms and through missed opportunities for identification of population‐specific single nucleotide variants and alleles. Systematic reviews in Africans, North American Indigenous populations, US Hispanics, Asians, Mexicans, and Brazilians share several common themes: there are existing differences in allele frequencies across races in common pharmacogenes; application of European‐based PGx to other races may unintentionally result in heterogeneous clinical outcomes; and non‐European ethnicities must be represented in PGx studies, both for discovery of novel variants and to guide clinical implementation through population‐specific PGx tests and dosing algorithms. As precision health is translated from research into clinical practice, the question is no longer if using genetic information will become standard of care, but rather who it will be the standard of care for. As PGx implementation progresses, non‐European populations are being left behind, exacerbating existing disparity gaps. This review uses an equity lens, a process to analyze the impact of design on underserved populations, to identify and mitigate barriers specific to PGx studies. In doing so, this review explores the challenges of studying PGx in under‐represented populations, highlights successful PGx studies conducted in non‐European populations, and proposes a path forward for equitable PGx research and clinical translation.
Several challenges exist in conducting PGx research and implementation in broad populations, including how diversity is defined, how to achieve collaborative involvement and participatory research in populations rarely included in clinical research, and the use, in non‐European populations, of PGx panels developed from variants/alleles identified in Europeans. To assess under‐representation in PGx studies, it is important to scrutinize the definition of under‐representation as it applies to genetic ancestry. The National Institutes of Health (NIH) defines Blacks or African Americans, Hispanics or Latinos, American Indians or Alaska Natives, and Native Hawaiians and other Pacific Islanders as under‐represented in health‐related sciences, while also acknowledging that under‐representation will vary depending on the setting. Defining under‐representation simply as non‐European populations is insufficient, as self‐identified ancestry has questionable reliability and the categories presented to patients and research participants are heterogeneous and inconsistent. Even within the NIH category of “White,” there are more than a dozen subgroup ethnic categories that show comparatively different allele frequencies for certain genes, as described by the Clinical Pharmacogenetics Implementation Consortium. Previously, self‐reported ancestry was thought to be an accurate representation of genetic ancestry; however, it has recently become apparent that there is discordance between genetic ancestry and self‐reported ancestry. In a previous study of over 3500 participants, 0.14% showed genetic ancestry differing from self‐reported ancestry. However, the four categories used were generic representations of genetic ancestry, and the specificity of self‐reported ancestry may be enhanced when participants are faced with a greater number of subgroups to choose from. Additionally, studies have shown that self‐reported African American, European American, and Latino populations can have different genetic ancestry, especially in admixed populations. In addition to self‐reported ancestry being a controversial marker for genetic ancestry and a proxy for clinical decisions, the problem is further complicated by the number of choices or categories for genetic ancestry used by researchers and presented to research participants or patients. A recent study by Zhang et al. revealed a dearth of international standardization in the race and ethnicity categories used in research to classify race, ethnicity, and genetic ancestry. For example, they found that Malaysia used 24 different categories to classify the category “Asian,” whereas the United States used only three. Different research settings may or may not have standards set by regulatory agencies, which can further compound the complexity of standardizing categorization. For example, in the United States, the NIH‐defined standards contain five racial categories (American Indian or Alaska Native, Asian, Black or African American, Native Hawaiian or Other Pacific Islander, and White) and two ethnicity categories (Hispanic or Latino and Not Hispanic or Latino) to be used in clinical and medical research. The use of broad categories to capture genetic ancestry could lead to overgeneralization of subgroups, resulting in inaccurate translation into clinical care.
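To make the categorization problem concrete, the short sketch below illustrates one harmonization step: mapping heterogeneous, site‐specific labels onto the NIH categories listed above. The site labels and the mapping are hypothetical examples rather than a validated crosswalk, and the final case shows how subgroup detail is lost when a specific ethnic group is forced into a broad category.

```python
# Illustrative harmonization of free-text race/ethnicity labels to NIH categories.
# SITE_TO_NIH is a hypothetical mapping; a real crosswalk would be built with
# community input and would retain subgroup detail alongside any broad category.
NIH_RACE = {
    "american indian or alaska native", "asian", "black or african american",
    "native hawaiian or other pacific islander", "white",
}

SITE_TO_NIH = {  # hypothetical labels collected at one study site
    "african american": "black or african american",
    "caucasian": "white",
    "pacific islander": "native hawaiian or other pacific islander",
}

def to_nih(label: str) -> str:
    """Map a raw label to an NIH race category, flagging unmapped subgroups."""
    key = label.strip().lower()
    if key in NIH_RACE:
        return key
    return SITE_TO_NIH.get(key, f"unmapped subgroup: '{label}' (retain as-is)")

print(to_nih("Caucasian"))  # -> white
print(to_nih("Xhosa"))      # -> unmapped subgroup: 'Xhosa' (retain as-is)
```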
Similarly, oversimplifying genetic ancestry in studies that show differences in efficacy or adverse events, without additional investigation of PGx contributions, could be clinically detrimental when translated into practice. If the clinical outcome observed is due to a phenotype (i.e., a poor or ultra‐rapid metabolizer), and that phenotype is concentrated in a subgroup that is driving the difference observed in the outcome, the finding may be true only for that specific subgroup, and overgeneralization of this outcome to the overall racial category could lead to inappropriate clinical decisions. Clinically, this contributes to the disparity gap, as medications are selected, or avoided, based inappropriately on genetic ancestry rather than PGx phenotype. This disparity gap is exacerbated further when findings from a majority‐European PGx study are applied clinically to a non‐European population, because alleles in linkage disequilibrium with the alleles causing the clinical impact, though not tested for, can also differ across populations. However, in some cases, without broader categorization, substantially increased population sizes are needed to attain the power to detect differences. This creates challenges in balancing the need for statistical power and replication studies with accuracy in self‐reported racial categorization. The second barrier to enhancing inclusion in PGx studies is creating a sustainable, collaborative environment. Challenges in creating a collaborative research environment in any population include participant mistrust, lack of comfort with the research process, lack of information, time and resource constraints, and lack of awareness. A recent study evaluating the reasons for enrollment refusal among African Americans revealed that mistrust of genetic research, a commonly cited barrier to research involvement, was cited only about 5% of the time, ranking below lack of interest in the research and convenience factors such as the time involved and the site being too far for travel. Another study noted differences in willingness to participate across races but showed that when genetic health‐related or genetic ancestry results were returned and discrimination issues (life and health insurance costs and employment) were addressed, those differences were alleviated. Last, a clinical barrier is the application of PGx panels, developed largely from European data, in non‐European populations. Analysis of GWAS studies informing PGx variants showed that the majority of studies (52%) were conducted in European populations. A similar analysis of PGx studies showed that the majority (53%, n = 102) were conducted in North America, but only five were conducted with American Indian or Alaska Native populations and only six with Hispanic/Latino populations. When a general PGx panel covering the most commonly described variants and alleles is used in a largely unstudied population, differences in allele frequencies can be revealed; however, this is a missed opportunity for identification of novel alleles with clinical impact. A review of CYP2C9, CYP2C19, CYP2D6, and other CYPs in African populations revealed that of 74 PGx studies, only 16% (n = 12) used methodology capable of detecting novel variants. To highlight PGx studies in understudied populations and their methods, we briefly describe two well‐conducted studies in under‐represented populations. Additional PGx studies in under‐represented populations are included as a table in the supplementary material (Table S1).
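Before turning to those two studies, the sketch below makes the power trade‐off noted above concrete: detecting a modest allele‐frequency difference between finely resolved subgroups requires substantially more observations than many PGx studies enroll. The frequencies are hypothetical placeholders, not values from any cited study, and the statsmodels package is assumed to be available.

```python
# A minimal power sketch under hypothetical allele frequencies: observations per
# subgroup needed for a two-sample test of proportions (alleles are counted, so
# each diploid participant contributes two observations).
from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

freq_a, freq_b = 0.10, 0.20  # hypothetical allele frequencies in subgroups A and B
h = abs(proportion_effectsize(freq_a, freq_b))  # Cohen's h effect size

n_per_group = NormalIndPower().solve_power(
    effect_size=h, alpha=0.05, power=0.80, ratio=1.0, alternative="two-sided"
)
print(f"Cohen's h = {h:.3f}; ~{n_per_group:.0f} alleles per subgroup")  # ~98
```

Because the required sample scales with the inverse square of the effect size, halving the frequency gap roughly quadruples the needed sample, which is why finely resolved subgroup analyses quickly outstrip typical PGx cohort sizes.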
CYP2D6 ELUCIDATION IN THE AMERICAN INDIAN POPULATION Fohner et al. used a community‐based participatory research (CBPR) approach to develop a partnership with the Confederated Salish and Kootenai Tribes (CSKT) community, and through this prioritized optimization of anticancer agents through PGx testing, focusing on CYP2D6 and tamoxifen. One hundred eighty‐seven CSKT participants underwent CYP2D6 sequencing, resulting in 67 CYP2D6 variants identified, including nine novel variants. Novel variants were also described in CYP3A4, CYP3A5, and CYP2C9. Allele frequencies were similar to those observed in European populations, with the exception of CYP3A4, and differed from other North American Indigenous populations. This study was designed to investigate CYP2D6 variation in an Indigenous population previously under‐represented in PGx studies. In doing so, the investigators collaborated with the CSKT community to prioritize a PGx research track with meaningful impact on tamoxifen optimization within the community.
CYP2D6 ELUCIDATION IN THE XHOSA POPULATION Wright et al. conducted a PGx study in the Xhosa population in South Africa to optimize medication therapy for the treatment of schizophrenia. The Xhosa people are under‐represented in research despite accounting for a large proportion of the South African population. The study recruited individuals of Xhosa ethnicity, with written informed consent and institutional approval, to elucidate variation in CYP2D6 within the Xhosa people and thereby better guide the use of CYP2D6‐substrate medications for schizophrenia, including the antipsychotics risperidone, aripiprazole, brexpiprazole, clozapine, perphenazine, thioridazine, and paliperidone. The study used two methods for CYP2D6 investigation: sequencing in one subgroup and a genotyping panel in another. CYP2D6 was sequenced in 15 individuals, and CYP2D6 was genotyped for over 25 alleles in controls and individuals with schizophrenia using long‐range polymerase chain reaction (PCR), DNA sequencing, and single‐nucleotide primer extension analysis. In total, 56 CYP2D6 variants were identified, with allele frequencies unique to the Xhosa population, including higher frequencies of *5 and *40 that differed from another South African population. Sequencing revealed two novel alleles in this population, *73 and *74. Notably, 12.5% of participants were either poor or ultrarapid CYP2D6 metabolizers. Overall, this study was designed to detect novel variants and establish CYP2D6 allele frequencies in a diverse South African population. The clinical impact of this CYP2D6 investigation is important for the treatment of schizophrenia with CYP2D6 substrates within the Xhosa people.
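Both studies ultimately feed into phenotype assignment of the kind mentioned above (poor vs. ultrarapid metabolizers). The sketch below shows a CPIC‐style activity‐score translation from a CYP2D6 diplotype to a metabolizer phenotype; the activity values and cut‐offs follow published CPIC conventions as a general illustration, not either study's pipeline, and newly discovered alleles such as *73 and *74 could not be scored this way until they are functionally characterized.

```python
# CPIC-style sketch: translate a CYP2D6 diplotype into a metabolizer phenotype.
# Activity values shown are for common alleles only; novel alleles would need
# functional characterization before they could be assigned a score.
ALLELE_ACTIVITY = {
    "*1": 1.0,    # normal function
    "*2": 1.0,    # normal function
    "*4": 0.0,    # no function
    "*5": 0.0,    # whole-gene deletion, no function
    "*10": 0.25,  # decreased function
    "*17": 0.5,   # decreased function
    "*40": 0.0,   # no function
    "*41": 0.5,   # decreased function
}

def cyp2d6_phenotype(allele_1: str, allele_2: str, copies_1: int = 1) -> str:
    """Sum activity scores across the diplotype (copy-number aware)."""
    score = ALLELE_ACTIVITY[allele_1] * copies_1 + ALLELE_ACTIVITY[allele_2]
    if score == 0:
        return "poor metabolizer"
    if score < 1.25:
        return "intermediate metabolizer"
    if score <= 2.25:
        return "normal metabolizer"
    return "ultrarapid metabolizer"

print(cyp2d6_phenotype("*5", "*40"))             # -> poor metabolizer
print(cyp2d6_phenotype("*1", "*2", copies_1=3))  # -> ultrarapid metabolizer
```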
The studies showcased above highlight two common themes for successful PGx research in under‐represented populations. One of the most resounding themes is establishing a collaborative environment for research in the population. Both studies worked with under‐represented populations to expand knowledge of a relevant pharmacogene, but, importantly, sought to optimize therapy for related medications and indications to improve outcomes that were meaningful to the population. Fohner et al. describe eliciting community buy‐in via the development of a Tribal Health and community advisory board that helped facilitate discussions with the community. Although not explicitly described in the study highlighted, more in‐depth references on their collaborative partnership with the CSKT community are available and highlight the community as a stakeholder in the oversight of the project, with research objectives that focus on community health needs, bidirectional learning and communication, and cultural competency training. The second common theme of these studies was the sequencing of the pharmacogenes under review. Both studies identified novel variants within CYP2D6 and revealed a unique allele frequency distribution for the population when compared to other populations.
Designing PGx studies with an equity lens starts with acknowledging structural inequities and with CBPR efforts undertaken with the specific population. CBPR is defined as “a collaborative, action‐oriented research approach that seeks to address health disparities through aligning community members' insider knowledge of their communities with academic researchers' methodological expertise,” and should be used to establish meaningful and lasting relationships within communities. With CBPR, mistrust within the community, as well as other reasons for not participating in research, such as general interest, knowledge, and convenience factors, can be addressed and overcome as barriers to participation in PGx research. Engaging a community in research in a positive manner may also inspire those within the community to pursue research as a career. Increased representation of these individuals could result in a positive feedback loop that further strengthens participation in research. The relationship established should also be sustained beyond a single study to continue to provide benefits for both the researcher and the population; there are several examples of CBPR being used successfully within American Indian/Alaskan Native communities. Carroll et al. provide eight recommendations based on CBPR principles as a framework for increasing American Indian representation in PGx research. Importantly, they highlight building trust within the community, practicing cultural humility, providing resources and support within the community, and finding a balance between the realistic benefits to the community and the knowledge gained in pursuit of research. Although their recommendations focus on the American Indian community, this framework can be adapted for use within other under‐represented and more heterogeneous populations. While establishing a research relationship within a community, resources should be addressed. PGx studies done with predefined genotyping panels, while an attractive option due to widespread availability and lower cost, can provide useful information about how common pharmacogene allele frequencies differ between populations; however, they do not provide insight into new variants unique to that population. The greatest amount of genetic diversity is found outside of European ancestry, and using a PGx panel defined mostly by European‐based research is a missed opportunity. Thus, partnerships should be sought with research groups that can provide the technology to sequence. Sequencing allows identification and categorization on a genetic basis rather than by race or ethnicity, and may enhance clinical practice by further expanding the PGx panels offered. Notably, there are groups working to design population‐specific genotyping arrays for under‐represented populations, including the Multi‐Ethnic Global Array, the Global Screening Array, and the H3Africa Array. The PGx studies highlighted in this review were done with relatively homogeneous communities found in a local geographical region. When applying these approaches to under‐represented populations that are more heterogeneous and geographically unrestricted, sequencing and the use of genetic ancestry groups will be paramount. As efforts increase to include under‐represented populations in PGx research, it will be important that this research is translated into clinical practice.
The recent decision that manufacturers of clopidogrel “engaged in unfair and deceptive business practices” resulted in a ruling of over $800 million in penalties, as the state of Hawaii claimed that the manufacturers knew that clopidogrel could have diminished or no effect in people of ancestries with higher frequencies of CYP2C19 loss‐of‐function alleles. This ruling could have a tremendous impact on the clinical PGx community, and it is imperative that translation of PGx into clinical practice is done thoughtfully and equitably. An additional consideration is standardization of how individuals are categorized by race and ethnicity. Zhang et al. showed that race and ethnicity are complex and that allele frequencies are heterogeneous across subgroups of ethnicities; thus, applying genetic frequency assumptions of a group to a subgroup may be clinically inappropriate. Efforts to address these inconsistencies include using biogeographical groups based on the geographical distribution of genetic ancestry. Beyond the scope of this review are larger efforts to enroll under‐represented populations in genetic studies. Efforts to increase diversity include RIBEF, the 1000 Genomes Project, All of Us, and the African Genome Project. In particular, the 1000 Genomes Project sequenced over 2000 people across 26 populations and aims to ensure access and usability of the data while continuing to collect from populations not included in the original project. The All of Us Research Program aims to generate genomic data from its participants across the United States and has a core value devoted to diversity and inclusion.
Genetic variation is linked to medication response variation and is used as an evidence‐based tool in clinical care to optimize medication therapies. Implementation of PGx as it is translated into clinical care from research is increasing; however, the heavy influence of European ancestry genetics in PGx studies is exacerbating the existing healthcare disparity gap, creating a growing need for PGx studies to be done in under‐represented populations so that the promising translation of PGx into clinical care can be implemented equitably. Research within under‐represented populations should begin by addressing structural inequities and social determinants of health with a CBPR approach, as PGx research may not be the highest priority for under‐represented populations.
All authors declared no competing interests for this work.
Table S1: Additional PGx studies in under‐represented populations (supplementary data file).
Comparative assessment of the stability of buccal shelf mini-screws with and without pre-drilling- a split-mouth, randomized controlled trial | 4d3af07c-ce1e-4d16-8fa8-e777ecd38978 | 11452459 | Dentistry[mh] | Anchorage, which is the resistance to unwanted tooth movement is an essential component of the orthodontic treatment of dental and skeletal malocclusions. Temporary anchorage devices (TADs) have a plethora of applications in clinical orthodontics including anchorage conservation and are an important part of a clinician’s armamentarium. In recent years, orthodontics has seen significant advancements in TADs with the introduction of infra-zygomatic crest (IZC), and buccal shelf (BS) orthodontic bone screws, leading to a transformation in the field. These innovations have redefined the concept of absolute anchorage, providing orthodontists with versatile tools to tackle complex cases without resorting to surgical interventions. Buccal shelf screws are indicated for complete arch distalization of the mandibular dentition to conceal a Class III malocclusion as well as for distalization of arches in re-treatment cases of anchorage loss, which are difficult to treat with a standard micro-implant elsewhere . The most preferable location for bone screw insertion in the mandible is the buccal shelf area, which is situated lateral to and just below the region of the second molar . Two types of protocols can be used for insertion: a self-drilling (SD) protocol or a pre-drilling (PD) protocol. In a self-drilling protocol, the bone screw is placed without a punch cut or a pilot drill. The cutting edges of the implant tip are sharp and provide the necessary force for the implant placement. This method may lead to very high insertion torque and in turn an increased resistance to implant placement which in turn can cause a higher amount of bone compression during implant placement. An alternative method for placing the orthodontic bone screws is with a pre-drilling protocol. This involves making a small punch cut using a tissue punch device, followed by a pilot hole at the desired position and thereafter implant insertion. This reduces bone compression during implant placement and decreases the incidence of implant slippage during the placement protocol. Several anatomical factors affect both, the primary and secondary stability of the implant - the type of soft tissue (mucosa vs. attached gingiva, tissue thickness, mobility, and proximity to the frenum), the type of bone (bone density, bone depth, cortical bone thickness), and the proximity to particular anatomical structures (roots, nerves, vessels, sinus/nasal cavities) . Primary stability is a mechanical phenomenon and is reliant on the type and quantity of bone present in the area as well as the implant type and placement technique . The formation of new bone and remodelling at the interface between the implant and tissue, as well as in the surrounding bone, are responsible for secondary stability . A meta-analysis carried out by Hong showed that the stability of TADs placed in the mandible is significantly lower than that of the maxilla by 2.23 times. This can be attributed to peri-implant inflammation and irritation from chewing as well as bone characteristics. The dense, thick cortical bone also significantly increases the risk of implant fracture and insertion torque which negatively affects the stability . 
Clinicians are therefore often faced with the dilemma of choosing the insertion protocol that ensures maximum stability and minimises the risk of fracture. A literature search revealed that the stability of buccal shelf bone screws placed using the SD and PD protocols has not been previously studied. This trial was designed to assess and compare the outcomes of these two methods, which would help orthodontists make informed decisions. There are various methods and devices to assess implant stability, such as clinical measurement of cutting resistance during implant placement, the reverse torque test, and the Periotest. These techniques are not reproducible and are often cumbersome to perform. In this study, we decided to employ a newer approach, Resonance Frequency Analysis (RFA), to test for stability. The device consists of a transducer, a metallic rod with a magnet on top, which is attached onto an implant or abutment. A magnetic pulse with a duration of 1 ms, emitted through a wireless probe, excites the magnet. After excitation, the peg vibrates freely, and the magnet induces an electric voltage in the probe coil, which is measured by the resonance frequency analyzer. The readings recorded indicate the implant stability quotient (ISQ) value. This technique is sensitive, easily reproducible, and non-invasive. Objectives To assess and compare the stability of buccal shelf bone screws placed with and without pre-drilling, using Resonance Frequency Analysis (RFA).
The study was carried out at the Department of Orthodontics after obtaining approval from the Institutional Ethics Committee (Protocol Ref. no 19099), in accordance with the guidelines laid down by the 1964 Declaration of Helsinki. It was a prospective, split-mouth randomized controlled trial, and informed consent was obtained from all participants before recruitment. Assuming stability of buccal shelf screws of 85% with pre-drilling and 10% without pre-drilling, with 80% power, a 95% confidence interval, and 1:1 allocation, a sample size of 14 was arrived at. The inclusion criteria were patients aged 18–30 years, requiring fixed orthodontic treatment with the application of buccal shelf screws, and with buccal shelf bone thickness of at least 5 mm. Patients presenting with bone pathologies or systemic conditions like diabetes, and those with a history of smoking, were excluded. The recruitment and flow of patients followed the CONSORT guidelines. The implant used was the A-1P-212,012 (stainless steel, 2 mm diameter, 12 mm length) manufactured by A1 Bio-Ray™ Biotech Instrument Co., Ltd. In one quadrant of the mandible, the screw was placed by a self-drilling protocol, and on the other side, the screw was placed after pre-drilling. A 3 mm punch cut was done before pre-drilling to remove the tissue tag. A standardized force of 400 g was applied to the bone screws with the help of E-chains (3M Alastik) to aid in the orthodontic procedure. Only one person was responsible for inserting the screws, to eliminate inter-operator variability, and force was checked using a Dontrix gauge to ensure uniformity. Two E-chains were loaded onto the implants, one placed mesial to the canine and the other placed mesial to the premolars. Implant stability was checked in both the buccolingual and mesiodistal directions at the time of loading and at 1, 2, 3, and 4 months after implant placement using RFA. The ISQ readings were entered into the specific patient data collection form. The commercially available device was originally designed to assess the stability of endosseous implants and came with SmartPegs™ for this purpose. As orthodontic buccal shelf implants possess a smaller dimension and a different design than endosseous implants, a customized smart peg was fabricated to assess their stability. The custom-fabricated magnetic attachment (SmartPeg) was attached to the buccal shelf implant. It was square and made of aluminium, which is also the core material used in the manufacture of the commercially available SmartPeg™. Aluminium is a good electrical and thermal conductor, nontoxic, corrosion resistant, and easily formable. A zinc-coated magnet was placed on top of it. The attachment was screwed onto the spherical head of the buccal shelf implant with the help of a small external screw. When the magnet was brought close to the transducer probe, it was excited by the magnetic pulse that the probe created. An audible sound was produced by the instrument upon capturing the response signal from the probe. The ISQ value, which ranges from 1 to 100, was then displayed, with 100 denoting maximum stability. On the Osstell™ ISQ scale, readings of less than 60 indicate low stability, 60–69 medium stability, and more than 70 high stability.
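The stability bands just described lend themselves to a direct classification rule. The snippet below is a small illustration of those cut-offs (<60 low, 60–69 medium, ≥70 high) applied to hypothetical buccolingual and mesiodistal readings for one screw; the readings are invented for demonstration and are not study data.

```python
# Classify ISQ readings (1-100) into the stability bands used in this study.
def isq_band(isq: float) -> str:
    if isq < 60:
        return "low stability"
    if isq < 70:
        return "medium stability"
    return "high stability"

# hypothetical paired readings recorded at one visit
for direction, reading in {"buccolingual": 72, "mesiodistal": 67}.items():
    print(f"{direction}: ISQ {reading} -> {isq_band(reading)}")
```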
Statistical analysis Version 20 of the Statistical Package for the Social Sciences (SPSS) was used to compile and analyze the data. The results were expressed as proportions and summary measures (median with IQR) using the relevant tables and figures. Friedman's two-way analysis of variance was used to assess changes across time points, and the Wilcoxon signed-rank test was used for the intergroup comparison. A p-value of less than 0.05 was considered statistically significant. Pearson correlation analysis was used to compare the ISQ readings between the two groups.
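To illustrate the analysis pipeline just described, the sketch below reproduces the same test sequence in Python on simulated paired ISQ data (7 screws per protocol, five time points), since the original analysis was run in SPSS. SciPy's friedmanchisquare, wilcoxon, and pearsonr stand in for the corresponding SPSS procedures; the numbers generated are random and are not the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# simulated ISQ matrices: rows = screws (n = 7), columns = T0..T4
pre_drill  = np.clip(rng.normal(71, 2, (7, 5)), 60, 80)
self_drill = np.clip(rng.normal(67, 2, (7, 5)), 60, 80)

# within-group change over time (Friedman's two-way ANOVA by ranks)
chi2, p_time = stats.friedmanchisquare(*pre_drill.T)
print(f"Friedman (pre-drilling, T0-T4): chi2={chi2:.2f}, p={p_time:.3f}")

# between-group comparison at T0 (Wilcoxon signed-rank on paired split-mouth data)
w, p_group = stats.wilcoxon(pre_drill[:, 0], self_drill[:, 0])
print(f"Wilcoxon (pre vs self at T0): W={w:.1f}, p={p_group:.3f}")

# summary measure used in the paper: median with IQR
q1, med, q3 = np.percentile(pre_drill[:, 0], [25, 50, 75])
print(f"Pre-drilling T0 median (IQR): {med:.1f} ({q1:.1f}-{q3:.1f})")

# consistency of group means across the five time points (Pearson correlation)
r, p_r = stats.pearsonr(pre_drill.mean(axis=0), self_drill.mean(axis=0))
print(f"Pearson r across time points: {r:.2f} (p={p_r:.3f})")
```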
The total sample of 14 buccal shelf implants was divided into two subgroups of 7 each: seven buccal shelf bone screws were placed using the pre-drilling protocol (Group 1) and the other seven using the self-drilling protocol (Group 2) in a split-mouth design. The median ISQ readings in Group 1 were 72.0 (T1), 70.0 (T2), 70.0 (T3), and 69.0 (T4). At T0, i.e., at the time of screw placement, the median ISQ value was 72.0 for the pre-drilling group and 67.0 for the self-drilling group, a statistically significant difference indicating higher primary stability in the pre-drilling group (p = 0.018). From T1 to T4, the median ISQ value was 70.0 for the pre-drilling group and 67.0 for the self-drilling group, again a statistically significant difference indicating higher secondary stability in the pre-drilling group (p = 0.018). Regarding the change in ISQ readings across the periods assessed, there was a statistically significant change in the pre-drilling group (Group 1) from the time of placement to the first month (T0 to T1), but the ISQ readings remained stable thereafter in both groups. There was no significant difference in the ISQ readings between the time of placement and the fourth month (T0 and T4). Pearson's correlation test was used to compare the intergroup ISQ readings at different time intervals. The results indicated a high degree of consistency in the measurements across time points for both groups, with generally very strong correlations and most p-values indicating statistical significance, highlighting the reliability of the measurements over time.
In the current era, temporary anchorage devices have gained tremendous attention and popularity due to their versatile applications in clinical orthodontics. Several sites have been used for temporary anchorage device insertion, such as the palatal bone, the infrazygomatic crest area, the buccal cortical plate, the mandibular retromolar area, and the posterior alveolar process of the palate. The mandibular buccal shelf has lately been proposed as a viable location for the insertion of extra-alveolar bone screws. It is located in front of the oblique line of the mandibular ramus, buccal to the roots of the first and second molars, bilaterally in the posterior region of the mandibular body. When compared to traditional interradicular mini-screw insertion sites, this position provides significant clinical advantages. The orthodontist can position the screws parallel to the long axes of the molar roots, since the buccal shelf extends buccally with a substantial amount of bone, which minimizes the chance of screw-to-root contact during anterior dental movements. Another significant benefit is a decreased chance of screw-to-root contact during insertion, which is one of the most frequent reasons for implant failure. Chang et al. in 2015 reported reduced failure rates in buccal shelf bone screws as compared to interradicular screws. Sufficient stability, both primary and secondary, is essential for the longevity of an implant and successful conservation of anchorage. Numerous factors influence the stability of orthodontic bone screws, including angulation of the implant to the bone, site of implantation, degree of implant-to-bone contact, thickness and mobility of soft tissues, insertion and removal torque, implant site preparation, and the quality and quantity of cortical bone. According to Baumgaertel et al., the most important factor determining implant stability is insertion torque. It is the amount of torque applied during the insertion of an implant and is indicative of the resistance the implant encounters, which is directly proportional to the amount of bone compression during placement. It increases with greater cortical bone thickness and serves as an indirect measure of the primary stability of the mini-implant. They suggested that, for maximum stability, the insertion torque should ideally fall between 5 and 10 Ncm. This is achievable with a cortical bone thickness of roughly 0.5 to 1 mm, according to a study done by Wilmes et al. It has been observed that the insertion torque is considerably higher in areas with thick, dense cortical bone like the mandibular buccal shelf, the median upper alveolar process, and around the midpalatal suture. This negatively impacts bone remodelling and results in a lack of secondary stability, bringing the overall success rate of the implant down to 60.9%, along with an increased risk of implant fracture. Reduced initial insertion torque during bone screw installation is therefore the goal. The technique used for implant placement significantly affects bone compression and insertion torque. Two protocols are widely used for the placement of buccal shelf bone screws. The first is a self-drilling protocol, wherein the screw is placed with the aid of its own sharp cutting edges, without a punch cut or a pilot hole. The second approach is a pre-drilling technique, wherein an initial tissue punch cut is made, followed by a pilot hole at the site of implant placement.
This weakens the adjacent cortical bone and reduces the resistance experienced during placement. Clinicians are often faced with the dilemma of which protocol to use to ensure maximum stability and successful treatment outcomes. Keeping this in mind, the present study was designed to evaluate and compare the stability of buccal shelf screws inserted using both of the above-mentioned protocols. There are various methods used to assess implant stability, such as the pull-out test, insertion torque analysis, and removal torque assessment. The disadvantage of these techniques is that they are only valid at the time of placement and are not reproducible. A newer technique known as Resonance Frequency Analysis, which is based on the tuning fork principle, was employed in the present study with the help of the Osstell Beacon™ device. The device came with attachments compatible with conventional prosthetic implants. However, orthodontic implants differ significantly from dental implants in terms of their design, surface characteristics and size. To ensure a strong connection between the buccal shelf implant and the transducer, a specially designed smart peg was created, with properties similar to those of the commercially available Smartpeg™. Each implant was fitted with a sterile, disposable Smartpeg, and the frequency with the strongest vibration was recorded as the resonance frequency. This measurement is non-invasive, reproducible and swift. The outcome is displayed as a value in the range of 1 to 100; >70 ISQ is regarded as high stability, 60–69 as medium stability, and <60 ISQ as low stability. In the present study, we assessed and compared the stability of the bone screws at four different periods between the two placement protocols: pre-drilling and self-drilling. Excellent stability was observed in both groups across the time periods assessed. However, a higher Implant Stability Quotient (ISQ) was observed in the pre-drilling group as compared to the self-drilling group. This can be attributed to the fact that an initial pre-drill weakens the adjacent cortical bone, thereby reducing the insertion torque, which results in increased stability. According to the current investigation, buccal shelf bone screw implantation resulted in maximum stability at the time of placement, after which stability declined. The relaxing of the surrounding hard tissue brought on by bone resorption as a result of osteoclast activity during the early healing phase may explain this phenomenon. This corroborates the theory that, as previously demonstrated for conventional dental implants, primary stability is greatest just after placement and diminishes over time. The physiological processes taking place in and around the implant can explain this. Two hours after implant placement, neutrophils, macrophages, and erythrocytes combine to form a fibrin network. On the fourth day following implant placement, osteoclasts and mesenchymal cells start to emerge and begin to eliminate any damaged bone. As seen in the current study, this causes the stability to decline after the first month. Deguchi et al. conducted another study, in 2009, that evaluated the histological healing of the osseous tissue surrounding mini-screws used as orthodontic anchorage, as well as the alteration in cortical bone thickness at 3, 6, and 12 weeks after the screws were inserted. After three weeks, it was found that, in comparison to the control, there was less cortical bone thickness in all regions of the jaw.
In the mandible, bone-implant contact revealed less bone in all areas surrounding the implant when compared to the maxilla, showing that the mandible did not recover its cortical bone thickness even after three weeks of healing. This is consistent with the current study’s findings, in which ISQ measurements dropped after the first month. This study shows that both protocols result in a high degree of implant stability, with a marginally higher level of stability over time with the pre-drilling method. This outcome will help clinicians make informed decisions when choosing the best technique to place buccal shelf screws. Given the novel design of the customized Smartpeg attachment, further studies with a larger sample size can be done to evaluate its efficacy for measuring the stability of buccal shelf bone screws.
|
Nephrology providers’ perspective and use of mortality prognostic tools in dialysis patients | 930c0c92-0c57-4057-8496-68f83677f15f | 11590527 | Internal Medicine[mh] | Kidney disease is common and highly morbid, with over 3 million people worldwide receiving dialysis. The mortality rate among patients receiving maintenance dialysis is a staggering 60% at 5 years. However, much heterogeneity exists, making it difficult to predict patients’ outcomes, particularly in older adults. Accurately predicting mortality is essential for prognostication, and honest conversations may enhance advance care planning. In fact, studies have shown that patients with chronic and end-stage kidney disease desire this prognostic information in shared decision making (SDM). In addition, the ASN Choosing Wisely Campaign, the RPA Clinical Practice Guidelines, and the KDIGO 2012 CKD guidelines support the inclusion of individualized prognostic information in the decision to initiate dialysis. Because prognostication is challenging, several prognostic tools have been developed to help make an accurate prognosis that can be used in these conversations. However, a recent study of Canadian nephrology providers found that > 80% of providers use clinical gestalt to prognosticate and 70% never or rarely use clinical prediction tools. To our knowledge, there is little research focusing on Nephrology providers’ perspectives on and real-world use of these tools. This study aimed to elucidate Nephrology providers’ attitudes about and practice patterns of using mortality prognostic tools in their care of patients on dialysis. This study also aimed to discover whether their perspectives and use of these tools changed after they were presented with data on how these tools performed in their own patients and the patients in their state. Study setting and participants This study was conducted at the University of Vermont Medical Center (UVMMC), located in Burlington, Vermont. UVMMC is Vermont’s only academic medical center and serves over 1 million patients in Vermont and northern New York. There are six UVMMC-affiliated, non-profit dialysis units. All Nephrology providers (8 physicians and 2 nurse practitioners) caring for patients receiving maintenance dialysis were eligible to participate. Qualitative study methods Semi-structured interviews were conducted via Zoom (Zoom Video Communications, Inc., San Jose, CA) by the first author (JB) in May 2020. Two of the Nephrology providers had worked with JB (a medicine resident) before as the attending on Nephrology consults. All the providers knew JB and knew that she was doing this project to support her application to nephrology fellowship. Providers were asked about their knowledge of and experience with mortality prognostic tools for patients receiving dialysis (see interview guide—Supplement 1). The interviews were 20 min ± 10 min. No field notes were made. The interviews were recorded and transcribed verbatim by CM and JB. The transcripts were not returned to the participants for comment or correction. Qualitative study analysis Two members of the study team, the principal investigator (JB) and a medical student who did not know any of the providers (CM), performed a thematic analysis for content using the transcripts. The backbone of the code tree was created using the questions from the semi-structured interview guide, but the data for each question were analyzed using grounded theory.
The initial codes were generated independently and then reviewed together for each interview, and themes were identified. Disagreements about themes, the coding tree, and final coding were resolved by discussion. Mortality prognostic tool selection Three mortality prognostic tools commonly reported in the literature and available without cost were selected (see Table ). Cohen et al.’s 2010 model was derived from 514 prevalent hemodialysis patients in New England using age, albumin, dementia, peripheral vascular disease and the surprise question: “Would I be surprised if this patient died in the next six months?” Charlson et al.’s “Charlson Comorbidity Index” (CCI) was derived from 559 medical patients in the US using age and 16 comorbidities. Couchoud et al.’s algorithm, in 2015, was derived from 24,348 incident elderly ESKD patients over 75 years old in France using age, gender, albumin, five comorbidities, and mobility. The three prognostic tools were chosen because they focus on different aspects of prognostication: Cohen’s tool includes provider gestalt with use of the surprise question, Charlson is heavily weighted by comorbidities and is the most commonly used prognostic tool, and the Couchoud tool includes mobility and was designed to be used in older adults. Mortality prognostication and measurement In April 2020, 279 prevalent dialysis patients cared for by these Nephrology providers were identified and prospectively followed for six months. All patients receiving maintenance dialysis were included. Data were extracted through chart review of the dialysis electronic medical record (CyberRen) and UVMMC’s EMR (EPIC) in April 2020. Most patients had data in both EMRs. A standardized approach to identify comorbid conditions from the EMRs was used. To capture the most complete assessment of the burden of comorbid conditions, a patient was considered to have a comorbidity if it was listed in at least one of the EMRs (as problem list completeness in EMRs varies anywhere from 60–99%). A patient was considered to have the more severe disease stage if the stages differed in the two EMRs. The most recent serum albumin result before May 1st, 2020 was chosen. The providers were given a list of their patients and asked to answer the surprise question for each. The responses and patient characteristics were used in the corresponding online calculators for the prognostic tools. Each patient had a score calculated for each of the three tools (Cohen’s result was a percentage from 0 to 100, Charlson’s was a score from 0 to 37, and Couchoud’s was a score from 0 to 28). At six-month follow-up, EMR review was used to identify patients who had died. The C statistic, or discrimination, for each tool was calculated via logistic regression and subsequent receiver operating characteristic (ROC) analysis using Stata (Stata 16.1, Stata Corp, LLC. College Station, TX). A C statistic of 0.5 is no better than flipping a coin, 0.7 is considered a good model, and a C statistic of 0.8 is considered a “strong” or “excellent” model. Brief intervention and follow up interviews A similar process of email invitation, semi-structured interview (Supplement 2), transcription, and coding was used for the follow-up interviews. Providers received the results of the prognostic tools, patients’ outcomes at the time of the email invitation, and the percentage of their own accuracy with the surprise question (Supplement 3).
Results were also reviewed with the providers at the beginning of the interview before the follow-up questions were asked. The follow-up interviews were shorter, on average 10 min ± 5 min.
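As a minimal sketch of the discrimination analysis described above — which the study performed in Stata — the C statistic for one tool can be obtained by regressing the six-month outcome on the tool's score and taking the area under the ROC curve. The code below is illustrative: the simulated scores and outcomes are assumptions, not the study's data.

```python
# Minimal sketch of the C statistic (ROC area) calculation described above.
# Assumptions: simulated scores/outcomes stand in for the 279-patient cohort;
# the study itself used Stata 16.1.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
score = rng.uniform(0, 28, size=279)           # e.g., a Couchoud-style score
p_death = 1 / (1 + np.exp(-(score - 22) / 3))  # simulated risk rising with score
died = rng.binomial(1, p_death)                # simulated 6-month mortality

# For a single predictor, the AUC of the fitted probabilities equals the AUC
# of the raw score, since logistic regression applies a monotone transformation.
model = LogisticRegression().fit(score.reshape(-1, 1), died)
pred = model.predict_proba(score.reshape(-1, 1))[:, 1]
print(f"C statistic: {roc_auc_score(died, pred):.2f}")  # 0.5 = chance, 0.8 = strong
```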
The providers (8 MDs and 2 NPs) who participated in the study were 50% female, 60% Caucasian, 30% Asian, and 10% Black, and had a mean age of 54 (range 36–73). They had an average of 17 years of practice (range 2 to 43 years) and had been trained in a wide variety of locations. They each cared for an average of 34 patients (range 6–55). Providers’ views on mortality prognostic tools Providers were only aware of two tools to predict mortality in dialysis patients. Eighty percent of the providers had heard of Cohen’s mortality prognostic tool, especially regarding the surprise question, and 10% had heard of the Charlson Comorbidity Index. None of the nephrologists used these tools in their current practice. Representative quotes of providers’ views on mortality prognostic tools can be seen in Table . The main barrier identified to use was provider concern that the tool was not applicable or accurate in their specific patients. Most providers also noted that the disease course itself is unpredictable. Time constraints and the addition of more “work” were barriers identified by all the providers. Lack of knowledge of the tools and the data behind them was also acknowledged by 6 of the 10 providers. All the providers identified clinical experience as their main source of prognostication. The providers identified a few advantages to using mortality prognostic tools. They noted that some patients are number-oriented, and being able to provide that information may help those patients in decision making. Providers also noted that these prognostic tools, if predicting a poor prognosis, would be a reminder to have a goals of care conversation and make providers more likely to encourage supportive care over dialysis.
The majority of the providers reported they were open to the idea of using these tools in their prognostication if further evidence of their validity and education about their use were provided. Providers noted that if a tool was shown to have strong discrimination and predicted a high mortality, it would change how they discuss management options with the patient, i.e., make them more likely to encourage supportive care over dialysis. At the same time, though, providers expressed that mortality was not the key factor on which to prognosticate and that patients will make decisions based on a variety of quality of life measures. Validation of mortality prognostic tools The overall 6-month mortality in Vermont’s prevalent dialysis population was 14%. Couchoud had the best discrimination of 6-month mortality in Vermont’s dialysis patients, with a C statistic of 0.77 compared to 0.68 for both Cohen and Charlson (Fig. ). Prognostication accuracy of providers The accuracy of each provider’s response to the surprise question varied across providers. The average accuracy of providers was 68% (range 47–91%). Post intervention interview Five physicians and two nurse practitioners participated in the follow-up interviews as of March 2023. Of the remaining three physicians, one no longer worked at the study site, one was on maternity leave, and one did not respond to emails to arrange a second interview. The providers overall thought the tools performed about “as well as expected.” “There were no surprises.” “I think they’re about what you would expect because I’ll never be that excited about predictive tools.” This perception did not vary between clinicians whose answers to the surprise question were more or less accurate. Providers speculated that the Couchoud model performed the best because of its inclusion of patient mobility and tied that in with frailty as a risk factor for not doing well on dialysis. “I do think that mobility is a major factor for a lot of patients, so I do think that it was a good idea for the Couchoud model to include that.” “I like these, these factors in this tool, with the albumin-nutrition and the frailty, because I know those are independent predictors of those not doing well on dialysis.” “It actually makes me think more of mobility as an important index of patient wellness.” Though some acknowledged that it may be because the original Couchoud cohort had the largest study population. Still, while most providers endorsed a “role” for using risk assessment tools, none of the providers routinely used the tools or had plans to implement them into their practice. Providers again voiced concern that although the tools are good for populations, and even for their specific population, they were not accurate for any one specific patient. “I think the tools are reasonably good at predicting what will happen in the population, not particularly for what will happen in an individual. So obviously that makes the utility of that somewhat questionable when you’re dealing with the individual rather than planning for the population.” “I mean they’re nice for studies if you are trying to look at large populations or and you have to have a particular reason for wanting to understand that particular prediction. But for individuals they’re never terribly good so I’m not totally surprised.” Providers still identified clinical experience and gestalt as their main determinants of prognostication.
This was true for all providers, regardless of their own accuracy with answering the surprise question. A few providers noted that after seeing this data, they might try to incorporate some of the individual risk factors from the tools into their clinical assessment. “I would place it in my ‘subjective-ometer’ when I’m thinking about these things with the patient.”
In this study, Couchoud’s tool had the highest discrimination for six-month mortality, with a C statistic of 0.765, which is comparable to the highest C statistic found in the 2019 meta-analysis of 32 indices to predict mortality in incident dialysis patients (C statistic 0.74). There, the overall C statistic was 0.71 for any prediction length for mortality, with high heterogeneity; the subgroup analysis of models predicting six-month mortality had C statistics ranging from 0.540 to 0.896. It is worth noting that the meta-analysis was in incident patients rather than the prevalent population in our study. The current study showed that Couchoud’s tool had strong discrimination for six-month mortality and should have assuaged providers’ concerns about external validation, allowing other barriers to be identified in the follow-up interviews. Our study confirmed findings by Forzley et al. that nephrology providers do not use prognostic tools to provide prognostic information, preferring clinical gestalt. In addition, this study demonstrated that provider preference did not change even after validation of the tools in their patients or observing that their clinical gestalt had slightly lower accuracy than that of the prognostic tools. Therefore, creating more accurate prognostic tools or making them easier to implement may not increase providers’ use. Provider perspectives suggest a disconnect in patient-physician communication around prognosis, as providers report they are comfortable using gestalt to prognosticate, but other studies show patients aren’t receiving the prognostic information they desire. This suggests that helping providers to refine the accuracy of their clinical gestalt and convey it more effectively to patients may be of higher utility in improving prognostic communication. One such example is a recent pilot study that found training nephrologists to use best case/worst case communication improved SDM about dialysis and may increase access to palliative care. Even as far back as 2016, Couchoud et al. called for other prognostic markers. Interestingly, providers in our study self-identified that Couchoud’s tool may have been the most accurate due to the mobility factor and that they would like to use that factor to refine their prognostication. As more evidence mounts that dialysis does not confer morbidity or mortality benefits for all patients with kidney failure, future studies are needed to help bridge this prognostication gap. This study adds to the growing identification of systems issues preventing optimal advance care planning. We describe that although providers felt comfortable with these conversations, they also reported a clear inability to embrace principles of SDM, as they felt that the final decision always rests with the patients (Table ). SDM is key in these situations and is recommended in our clinical practice guidelines, but providers receive little training for SDM in fellowship or practice. It is possible that providers feel ill-equipped or overwhelmed in these patient-focused conversations, as kidney disease care, especially dialysis, is inherently “disease oriented”. Studies are beginning to look towards leveraging all members of the interdisciplinary dialysis team to promote advance care planning in patients on dialysis. Systems-based approaches are needed to facilitate learning and skill building to create individualized care plans with patients and their families living with kidney failure.
This study, as the first to evaluate Nephrology providers’ perceptions of and barriers to use of mortality prognostic tools, had several strengths. Foremost was this study’s use of mixed methods and a brief intervention. Externally validating the tools addressed a major concern that the providers identified in the first interview and allowed the subsequent interviews to capture other unresolved barriers. Furthermore, giving the providers the results of their patients’ 6-month mortality next to their predictions and the predictions from the tools (Supplement 3) yielded more grounded and real-world discussion of their perceptions. Performing the second interviews allowed for analysis of any dynamic perceptions and verified previous themes, which is often not done in qualitative studies. Lastly, the choice of prognostic tools covering different aspects of prognostication allowed the interviews to capture provider perspectives on which parts of prognostication are highest yield. There were limitations to this study. First, the study had a small sample size of both providers and patients from one state, and not all providers were available for the second interview. Therefore, the interviewed providers’ responses may not be generalizable. However, the providers do have a wide variety of training backgrounds, employment histories, and practice lengths. Second, social desirability bias may have been at play, as the interviews were not blinded and the primary author conducting the interviews was a resident interested in Nephrology at their institution. Third, the interviews were semi-structured, which gave the opportunity for more in-depth conversation but may have introduced interviewer bias with leading questions, wording bias, or confirmation bias. Lastly, the study was conducted during the COVID-19 pandemic, which could have contributed to the observed mortality, but Vermont had < 15 COVID-19 related deaths during our study period, and only 2% of COVID-19 cases at that time were reported to have “chronic kidney disease”, making it unlikely that this skewed our results. In conclusion, several well-validated prognostic tools are available for predicting mortality in dialysis patients, but nephrology providers do not use them in routine practice due to concerns about their applicability in their patients. Addressing the barriers of external validity and lack of knowledge of the tools did not change the nephrology providers’ use of or attitude towards the tools. Implementation research is needed to help providers share prognosis and enhance shared decision making surrounding dialysis. Supplementary Material 1.
A comparison of clinical paediatric guidelines for hypotension with population-based lower centiles: a systematic review | f692a764-2f27-4562-b7ef-a8abdd02f166 | 6882047 | Pediatrics[mh] | Vital signs are important in the recognition of acutely ill children. One parameter associated with serious illness is hypotension . Because normal blood pressure values vary with age, accurate age-related reference values are needed to correctly identify hypotension in children and guide interventions. Blood pressure can be measured by invasive, oscillometric and auscultatory methods. In addition, various outcome measures for blood pressure exist such as mean arterial pressure, and diastolic and systolic blood pressure. Paediatric guidelines propose different definitions of hypotension and in general use cut-off values of systolic blood pressure . Although not based on evidence, several guidelines use the fifth percentile of systolic blood pressure in healthy children as cut-off for hypotension . Moreover, it is unclear how well these guidelines discriminate between normal and low blood pressure. To date, no study has summarized the available evidence on reference values of low systolic blood pressure in children. This study aims to identify population-based reference values for non-invasive low blood pressure in healthy children and to compare these with cut-offs for hypotension defined by existing paediatric guidelines.
Search strategy and selection of population-based studies We systematically searched MEDLINE, EMBASE and other databases (1950 to 14 February 2019) to identify primary studies that defined lower centiles for non-invasive systolic blood pressure measurement in healthy children (Additional file : detailed search strategy). Studies were included if they were published in English, recorded blood pressure, and defined age-related centiles for systolic blood pressure (first to fifth centile) in a minimum of 100 children aged < 18 years. Studies were excluded if populations involved children with underlying diseases, or if they reported on premature neonates or on measurements during anaesthesia, exercise or orthostasis. We excluded populations from low- and middle-income countries, since factors influencing blood pressure levels, such as body composition and nutrition, are different compared to high-income countries. We excluded abstracts, reviews and commentaries, and studies reporting on lower centiles solely derived from mathematical analysis. One researcher (NH) conducted the first selection, and two researchers (NH, JZ) independently conducted the second and third selections. Disagreements were discussed and resolved by consensus or discussed with a third researcher (HM) for a majority decision. Data extraction and analysis For the selected studies, data were extracted by one researcher (NH) and included country, population, setting, sample size, age range, blood pressure measurement method and age-specific centiles (P1–P5). We included the centiles for non-overweight children and for the median height if blood pressure centile values were reported for different height categories. The age-specific fifth centiles were summarized using weighted medians and interquartile ranges for age categories which involved three or more studies. If sample sizes were only given for age ranges > 1 year, we estimated the sample size per age group by dividing the total sample size by the number of years. Quality assessment No specific tool exists for quality assessment of observational studies. The Quality Assessment of Diagnostic Accuracy Studies-2 checklist was the most appropriate to use for these observational studies. This checklist covers risk of bias and applicability judgements in four domains: patient selection, index test, reference standard, and flow and timing. For each question, studies were classified as high, low or unclear. Disagreements were resolved by consensus. Cut-off values for hypotension from clinical guidelines We selected a sample of clinical cut-offs for hypotension by consulting experts, well-known textbooks, and resuscitation, emergency care and sepsis guidelines. Clinical cut-offs included recommended target values for hypotension defined by systolic blood pressure. For each clinical cut-off, we determined the presence of a literature reference and whether this reference agreed with the cut-off values. To compare clinical cut-offs with the population-based centiles identified in the literature, we plotted the age-specific fifth centile values in a step chart, separately for boys and girls. Data analyses were performed in SPSS version 25.0 and R version 3.4.
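For illustration, the pooling step described above — summarizing the studies' age-specific fifth centiles with sample-size-weighted medians and interquartile ranges — might look as follows. The analyses in the study were done in SPSS and R; this Python sketch and its input numbers are illustrative assumptions, not the extracted data.

```python
# Minimal sketch of a sample-size-weighted median/IQR for one age group.
# The fifth-centile values and sample sizes below are hypothetical.
import numpy as np

def weighted_quantile(values, weights, q):
    """Interpolated quantile of `values`, with each value weighted by `weights`."""
    values, weights = np.asarray(values, float), np.asarray(weights, float)
    order = np.argsort(values)
    v, w = values[order], weights[order]
    cum = np.cumsum(w) - 0.5 * w               # cumulative weight at each midpoint
    return float(np.interp(q * w.sum(), cum, v))

p5_sbp = [86, 88, 90, 91]               # hypothetical 5th-centile SBP values (mmHg)
n_children = [11940, 5362, 3000, 1760]  # hypothetical per-study sample sizes

med = weighted_quantile(p5_sbp, n_children, 0.50)
q1 = weighted_quantile(p5_sbp, n_children, 0.25)
q3 = weighted_quantile(p5_sbp, n_children, 0.75)
print(f"weighted median: {med:.1f} mmHg (IQR {q1:.1f}-{q3:.1f})")
```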
Population-based studies Our systematic search identified 7625 studies. After the study selection process, we included 14 studies in the final selection that defined lower centiles for non-invasive systolic blood pressure measurement in healthy children (Fig. ). The median sample size was 5362 (IQR 1760–11,940). Seven of the 14 studies used an automatic oscillometric device for blood pressure measurement. Two studies included children aged < 1 year (Table ). Studies included populations from Europe (n = 8), North America (n = 3), Australia (n = 2) and Asia (n = 1). Four studies excluded overweight patients. For development of the centiles, 11 studies used the average of multiple blood pressure measurements and 3 studies used only the first measurement. Blood pressure centiles were stratified by gender (n = 12), height (n = 4), ethnicity (n = 1) and overweight vs non-overweight (n = 2). Studies most frequently reported the fifth centile (n = 13); the third centile (n = 2) and first centile (n = 3) were also reported separately. One study only reported the first and third centiles. The fifth centiles of the population-based studies showed variation ranging across the age groups from 7 to 17 mmHg for boys (Fig. ) and 7 to 22 mmHg for girls (Additional file ). Median values and interquartile ranges of the lower fifth centiles are provided in Additional files and . Quality of the population studies was generally good. No concerns regarding applicability were found in 12 of the 14 studies. Six studies had high risk of bias in the patient flow and timing domain, due to poor reporting of how missing data were handled (Table , Fig. ). Cut-off values for hypotension from clinical guidelines We identified 13 clinical cut-offs for hypotension, of which 8 referred to a literature reference (Additional file ). Five cut-offs provided an accurate literature reference, of which four out of five referred to the fifth centile of healthy children. In two textbooks, the values of the literature reference did not agree with the provided cut-offs. One literature reference could not be obtained. Age-specific cut-off values for hypotension showed large differences, ranging from 15 to 30 mmHg (Fig. , Additional file ). Comparison of population-based studies with cut-off values for hypotension from clinical guidelines The clinical hypotension cut-offs showed poor to moderate agreement with the lower centiles derived from population-based studies (Fig. ). The frequently used hypotension cut-off from Advanced Paediatric Life Support (APLS) showed moderate agreement for children < 12 years, but was above the highest fifth centile values for children > 12 years. The cut-off from Paediatric Advanced Life Support (PALS) agreed well for children < 12 years but was below the fifth centile values for children > 12 years. The cut-off of Parshuram’s early warning score (PEWS) agreed well for children > 12 years. Three other cut-offs were mostly below the fifth centiles (Goldstein, primary paediatric care and Paediatric Risk of Mortality III (PRISM III)), and one cut-off had higher values (Nelson).
This systematic review demonstrates large variation among commonly used paediatric reference values for systolic hypotension. In general, the clinical guidelines are not based on available evidence and showed variable agreement with existing population-based blood pressure centiles. The reviewed literature addressing population-based centiles showed limited studies in children < 1 year of age. Reference ranges of blood pressure are influenced by multiple factors such as age, gender, height, ethnicity and method of measurement. In the literature, low centiles for blood pressure are often presented for different ages and, in some cases, for height. To facilitate interpretation, guidelines provide simplified cut-off values for hypotension for various age groups. For early recognition of acutely ill children, these simplified reference values are essential for clinicians. The evidence for clinically used cut-offs for hypotension is mostly unclear, as only five clinical cut-offs for hypotension reported accurate literature references. Our systematic search shows the availability of population-based centiles that could provide evidence for lower reference values of blood pressure. Although not evidence-based, we propose that clinical cut-offs for hypotension should not exceed the fifth centile. Clinical cut-offs that are generally below the fifth centile may possibly be too low, whilst clinical cut-offs that are generally above the fifth centile may be too high. These high clinical cut-offs may classify too many patients incorrectly as hypotensive, since by definition 5% of healthy children will fall below this centile. In children < 12 years, the values of PALS have good agreement with the low centiles, but for children aged > 12 years, the PALS could possibly be too low. Our results are in line with a previous study that compared three clinical cut-offs with the fifth centile, based on a mathematical analysis of a large sample of healthy children. They reported that the fifth centile for systolic blood pressure was generally below three clinical cut-offs for hypotension. Sarganas et al. found that low centiles from a German and US population were higher than the PALS definition in children > 13 years. In contrast to these previous studies, our study conducted an exhaustive systematic search for population-based centiles in all ages and compared them with a large sample of cut-offs for hypotension that are widely used in clinical practice. Our study identified only two studies that provided blood pressure centiles in children < 1 year, including one study in newborns and one at the age of 6 months. Therefore, more studies providing reference values of blood pressure in children < 1 year are required. Reference values based on healthy children may not be accurate for acutely ill children, as pain and distress could increase blood pressure values. In addition, cuff size, movement of limbs, crying and uncooperativeness influence the measured values. In the interpretation of the measured values, these factors should be accounted for. There is no consensus on which definition of hypotension should be used for the assessment of acutely ill children. Hypotension defined by APLS, PALS and PEWS showed an association with serious illness, adjusted for tachycardia. These definitions, however, lacked sensitivity for serious illness.
In our systematic review, the PALS cut-off showed the best agreement with the values based on healthy children, with an average difference of 4 mmHg from the weighted median of the population-based fifth centiles. In addition, current guidelines do not agree on treatment targets for blood pressure after identification of hypotension in critically ill children. The goal of blood pressure treatment targets is to maintain adequate tissue perfusion. The guideline of the International Liaison Committee on Resuscitation recommends targeting systolic blood pressure values higher than the fifth percentile for children who are post-cardiac arrest, whilst the APLS and the Surviving Sepsis Campaign advise maintaining normal blood pressure for age without defining specific measures. The American College of Critical Care Medicine recommends using the 50th centile of the mean arterial pressure (MAP) and using perfusion pressure (MAP minus central venous pressure) to guide treatment. Some evidence is available suggesting higher MAP levels are needed to improve outcome in traumatic brain injury and central nervous system infections in children. Trials in adult critically ill patients with septic shock showed that targeting higher mean arterial pressure levels of 75–85 mmHg did not influence mortality or other adverse events. Future trials will need to evaluate different blood pressure measures and targets in acutely ill children and relate those to interventions and relevant clinical outcomes. Our review focused on systolic blood pressure and did not include mean arterial blood pressure or diastolic blood pressure. Although the mean arterial pressure is often used in critical care, we focused on systolic hypotension for general illness, since, in general, clinical guidelines only report hypotension definitions for systolic blood pressure. Strengths and limitations Major strengths of this study are the use of an extensive search strategy, the overview of low reference values of blood pressure in healthy children covering all ages, and the comparison with a diverse sample of clinical cut-offs for hypotension that are widely used in practice. Although we used a sensitive search strategy in multiple databases, it is possible we have not included all available data. Since we focused on lower age-related centiles, we excluded studies that reported blood pressure centiles solely for height or body mass index. This study has some limitations. First, the selected sample of clinical definitions was not exhaustive, and various blood pressure cut-offs in early warning scores and mortality scores were not included. We selected Parshuram’s early warning score and the PRISM III mortality score as these have been validated and are commonly used in practice. We acknowledge that these cut-offs are part of a score containing other clinical markers. In addition, the PRISM III score has been developed specifically for predicting mortality in critically ill children. Second, blood pressure is determined by height, and we only included blood pressure values for the median height value. However, height is usually not available in the assessment of acutely ill children, and none of the clinical guidelines accounted for height. Third, we focused on non-invasive measurement methods, including oscillometric and auscultatory measurements. Oscillometrically measured values could differ from auscultatory measurements.
As different devices were used in the studies and their validity in the assessment of low blood pressure is unknown, we combined centiles for oscillometric and auscultatory measurements. Fourth, since non-invasive blood pressure measurements could overestimate hypotension when compared to invasive arterial measurement, generalization of our study to invasive measurements should be undertaken with caution.
Large variation exists among paediatric cut-offs for hypotension. In general, these clinical definitions are not evidence-based and have variable agreement with existing population-based blood pressure lower centiles. For children < 12 years, the PALS definition agreed well. For children > 12 years, the PEWS agreed well but the PALS cut-off possibly underestimates and the APLS overestimates hypotension. Future studies should focus on developing reference values for hypotension for acutely ill children.
Additional file 1. Systematic search strategy.
Additional file 2. Clinical definitions for hypotension and range of 5th centile of systolic blood pressure for girls according to age.
Additional file 3. 5th centile of systolic blood pressure and median (IQR) for boys.
Additional file 4. 5th centile of systolic blood pressure and median (IQR) for girls.
Additional file 5. Clinical cut-offs for hypotension.
PEDIATRICIANS AFTER RESIDENCY: A SURVEY OF PERSONAL/PROFESSIONAL DATA AND ISSUES
General and pediatric residency programs have been modified worldwide with new
curricula, including a modern approach to teaching, patient care with ethical and humanistic attitudes, and research training. Increasing technological innovation with new medical procedures, radiology and laboratory exams, and translational medicine with drug development and specific therapeutic targets for each disease are a reality in daily pediatric practice. After the conclusion of pediatric residency, pediatricians and pediatric specialists routinely deal with different patient care profiles, including life-threatening and chronic diseases, patient suffering, and death. In addition, severely compromised patients, harassment, discrimination, low income, and disruption in family life may induce high levels of physical/psychological stress and/or professional issues in pediatricians and may reduce their overall satisfaction, particularly in very early career physicians. To the best of our knowledge, simultaneous analysis of personal, professional,
medical, and scientific educational characteristics and issues reported by pediatricians has not been carried out in Latin America. Therefore, this study aimed to assess pediatricians' reports regarding demographic
data, location of clinical practice, pediatric specialties, overall satisfaction
rates with residency/clinical practice, income, medical and scientific education,
patient care profiles, laboratory exams/medication use and main physical,
psychiatric/psychological and professional issues related to clinical practice. In
addition, personal, professional, and educational characteristics and issues were compared according to years of pediatric clinical practice.
A cross-sectional study involving 614 pediatricians was carried out based on an
online survey in terms of personal, professional, medical, and scientific
educational characteristics and issues. All physicians had successfully concluded
the General Pediatric Residency Program and/or Pediatric Specialties Residency
Program in a Pediatric Department in Brazil, which are teaching-oriented training
programs. The General Pediatric Residency Program includes primary and secondary
care activities during the first year of training, focusing on care and attention to
healthy children and adolescents. In the second and third years, there is a
predominance of secondary and tertiary care activities at outpatient and inpatients
clinics, including multiple pediatric subspecialties. The survey was carried out using REDCap tool, which is a secure web application for
building, managing, and accessing electronic questionnaires and databases. This
survey was sent to all of the pediatricians between November 2018 and January 2019.
They had concluded their General Pediatric Residency Program or Pediatric
Specialties Residency Program between 2006 and 2018. At least 15 emails were sent to improve the survey response rate. The Ethics Committee of our university hospital approved this study (CAAE: 93564518.5.0000.0068), and informed consent was obtained from all participants. Anonymous self-report questionnaire invitations were distributed by email. The online
survey comprised 21 questions focused on physician-reported personal, educational,
and professional issues. These questions were multiple-choice, dichotomous (yes and
no), or horizontal visual analog scale questions, recalling events during their practice. The estimated time for completion was nearly 15 minutes. The online survey included 21 items related to the following issues: Demographic data of pediatricians (current age, gender, marital status,
number of children, city, state, country and years of pediatric clinical
practice after the conclusion of general pediatric residency). Location of pediatricians practice in the last year (public service,
private service, pediatric primary care, pediatric ward, pediatric
emergency room, pediatric intensive care, neonate care, pediatrician
private practice, public/private university professor, pharmaceutical
industry, medical procedure, radiology service, laboratory tests,
non-governmental organization and administrative service). Pediatric specialties after General Pediatric Residency Program
(Adolescent Medicine, Cardiology, Developmental and Behavioral,
Emergency, Endocrinology, Gastroenterology, Genetics, Hematology,
Pediatric Intensive Care, Immunology and Allergy, Infectious Disease,
Neonatology, Nephrology and Renal Transplantation, Neurology, Nutrology,
Oncology, Palliative and Pain Care, Pneumology, Rheumatology and no
specialty). Overall satisfaction rate during the General Pediatric Residency Program at the Pediatric Department of Universidade de São Paulo, measured on a horizontal visual analog scale (0 = no satisfaction at all and 10 = excellent overall satisfaction). Overall satisfaction rate with pediatric clinical practice in the last year, measured on a horizontal visual analog scale (0 = no satisfaction at all and 10 = excellent overall satisfaction). Workload in hours/week in the last year (≤20 hours, 20-40 hours, 40-60
hours and >60 hours). Number of pediatric patients/week in the last year (≤50 patients, 50-100
patients, 100-200 patients, and >200 patients). Health care insurance availability for pediatric patients in the last
year (public health insurance, private insurance and/or other). Pediatrician income in the last year (<15 minimum wages/month and ≥15
minimum wages/month). Work exclusively as a pediatrician in the last year (yes/no). Scientific initiation fellowship during medical school (yes/no). Number of meeting/congress/course participation in the last year. Published papers in medical literature (yes/no). Enrolled in Master’s and/or PhD (doctor of philosophy) program
(yes/no). Main patient care profiles in the last year (emergency room, ward,
pediatric intensive care, primary care, newborn, growth and development
monitoring, chronic disease management, psychiatric/psychological
management, contraception and gynecological counseling, licit and
illicit drug use, sexually transmitted infections and pregnancy
counseling, and physical/psychological violence assessment). Use of laboratory tests during clinical practice in the last year
(clinical laboratory, radiography, electrocardiogram,
electroencephalogram, endoscopic examinations, echocardiogram,
ultrasonography, computed tomography, magnetic resonance imaging, bone
densitometry, eye examination, biopsy, whole genomic sequencing, and
audiometry). Medication use during clinical practice in the last year (antibiotic,
non-hormonal anti-inflammatory, oral or intravenous glucocorticoid,
inhaled glucocorticoid, vasoactive drug, surfactant, antidepressant,
anticonvulsant, painkiller, antihistamine,
immunosuppressive/chemotherapy, immunobiological, contraceptive,
vaccines, and homeopathy). Contraception prescription for adolescent patients in the last year
(condom, oral contraceptive, depot medroxyprogesterone acetate, implant
progesterone contraceptive, emergency contraception, intrauterine device
and/or none of them). General supportive care used during clinical practice in the last year
(electronic medical chart, day hospital, anthropometric data evaluation,
health-related quality of life questionnaires, pain assessment
evaluation, dental care, physiotherapy, dietary orientation, speech and
language, psychological support, rehabilitation care therapy, sex
education, physical activity orientation, patients/family education
about the illness and treatment, multidisciplinary team approach, phone
calls to improve adherence and schedule appointments, participation in
clinical trials, adverse events monitoring, sunscreen protection,
palliative care support and transition-focused program to adult
care). Vaccination card evaluation in all appointments in the last year
(yes/no). Main physical, psychiatric, psychological, and professional issues
related to clinical practice in the last year (long working hours, poor
social life, physical inactivity, overall decrease of health-related
quality of life, anxiety, depression, burnout syndrome, disruption in
family life, harassment, obsessive-compulsive symptoms, low income,
legal issues, workplace violence, stress induced by boss/professor and
stress induced by health insurance). Pediatricians were further subdivided into two groups according to the median years
of pediatric clinical practice: group 1 (≤5 years) and group 2 (>5 years). The Statistical Package for the Social Sciences version 13.0 was
used. The results for the continuous variables were presented by median
(minimum-maximum values) or mean±standard deviation (SD), and compared by
Mann-Whitney test and Student’s t-test, respectively. The results for categorical
variables were presented as frequency (percentage) and compared by Fisher’s exact
test or Pearson chi-square test, as appropriate. Pearson or Spearman rank
correlation coefficients were used for correlations between overall satisfaction
with pediatric residency/pediatrician clinical practice and current age and years of
medical practice after residency conclusion. P values less than 0.05 were considered
significant.
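As an illustration of this analysis plan (not the authors' actual code), the sketch below shows how the group comparisons and correlations described above could be run in Python with SciPy. The data are randomly generated placeholders sized like the two study groups, and the 2x2 table is approximated from the workload frequencies reported in the Results.

```python
# Illustrative sketch of the analysis plan above using SciPy; the data are
# randomly generated placeholders, not the actual survey dataset.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Continuous outcome (e.g., satisfaction score 0-10) compared with Mann-Whitney
group1 = rng.integers(4, 11, size=172)  # <=5 years of practice
group2 = rng.integers(4, 11, size=159)  # >5 years of practice
u_stat, p_mw = stats.mannwhitneyu(group1, group2, alternative="two-sided")

# Categorical outcome (e.g., workload >60 h, yes/no) compared with Fisher's exact
table = [[53, 119],   # group 1: yes, no (~31%)
         [30, 129]]   # group 2: yes, no (~19%)
odds_ratio, p_fisher = stats.fisher_exact(table)

# Spearman correlation between years of practice and satisfaction
years = rng.integers(1, 14, size=331)
satisfaction = rng.integers(1, 11, size=331)
rho, p_spearman = stats.spearmanr(years, satisfaction)

print(f"Mann-Whitney p={p_mw:.3f}; Fisher p={p_fisher:.3f}; "
      f"Spearman rho={rho:.2f} (p={p_spearman:.3f})")  # significant if p<0.05
```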
The response rate for this web-based questionnaire was 377/614 (61%) pediatricians.
Incomplete data were observed in 44/614 (7%) respondents and only 2/614 (0.3%)
declined to participate in the present study. Therefore, 331/614 (54%) respondents
completed the questionnaire and were evaluated. Demographic data, the location of practice and the type of pediatric specialties reported by pediatricians after residency conclusion are summarized in the corresponding table. Most pediatricians were female (82%), with a median current age of 33 years (27-40) and a median of 5 years of pediatric clinical practice after the residency program (1-13). The main locations of practice reported by pediatricians in
the last year were public (79%) and private services (81%), whereas medical
procedure, administrative service, public university professor, radiology service,
laboratory, pharmaceutical industry, and non-governmental organization were
infrequently described (<4%). The three most frequent pediatric specialties
reported by respondents were Neonatology (25%), Pediatric Intensive Care (14%) and
Pediatric Cardiology (7%) . Most pediatricians (86%) followed-up patients aged between 28 days to 10 years. High
workload (>60 hours/week) occurred in one-quarter of pediatricians, and 15%
examined more than 100 pediatric patients/week. Almost 50% of pediatricians earned
≥15 minimum wages/month. At least one meeting/congress/course participation was
reported in almost 90% of physicians, and approximately 30% reported published
research papers in medical literature. Scientific initiation fellowship during
medical school was described by 64% of respondents, and 16% reported enrollment in a Master's and/or PhD program. The main patient care profiles described by pediatricians were emergency room,
pediatric intensive care, ward, and growth and development monitoring (>40%). The
most important physical, psychiatric/psychological and professional issues (>50%)
were long working hours, poor social life, physical inactivity and disruption in
family life. Respondents were further divided into two groups: group 1 [≤5 years of clinical practice after conclusion of general pediatric residency, n=172/302 (67%)] and group 2 [>5 years, n=159/312 (56%)]. The median current age [30 (27‒37) vs. 35 (31‒40) years; p<0.001] and median number of children [0 (0‒3) vs. 1 (0‒4); p<0.001] were significantly lower in group 1 compared to group 2. The median overall satisfaction with pediatric residency [8
(0‒10) vs. 9 (4‒10); p=0.002] was significantly higher in group 2, whereas the median overall satisfaction with pediatric clinical practice was similar [8 (1‒10) vs. 8 (4‒10); p=0.845]. The frequencies of general pediatricians (19 vs. 3%; p<0.001), workload >60
hours (31 vs. 19%; p=0.011), work on pediatric ward (37 vs. 20%; p=0.001), and
pediatric intensive care (58 vs. 27%; p<0.001) were significantly higher in group
1 compared to group 2, whereas being currently married/partnered (52 vs. 80%;
p<0.001) and work in private practice (44 vs. 57%; p=0.021) were significantly lower in group 1. Regarding the main issues related to clinical practice in the
last year, long working hours (73 vs. 53%; p<0.001), poor social life (75 vs.
62%; p=0.018) and harassment (23 vs. 4%; p=0.003) were significantly higher in the
former group. Regarding Spearman's rank correlations, current age and years of pediatric clinical
practice after completing residency were not correlated with overall satisfaction
with pediatric residency and overall satisfaction with pediatric clinical practice
(p>0.05). Further analyses were also carried out comparing two groups according to the duration
of pediatric residency program: A (3 years of pediatric residency program, n=82) and
B (2 years of pediatric residency program, n=249). The frequencies of infirmary (45
vs. 24%; p<0.001) and emergency room (77 vs. 32%; p<0.001) were significantly
higher in group A than in B. The median of overall satisfaction with pediatric
residency [8 (0-10) vs. 8 (4-10); p=0.015] and the frequency of pediatricians with
≥15 minimum wages/month salary (13 vs. 59%; p<0.001) were significantly higher in
group B than A, as well as pediatric specialty (68 vs. 96%; p<0.001).
This was the first web-based survey to carry out a simultaneous analysis of work-life
balance and scientific education of pediatricians after the conclusion of residency
in a Latin American residency program. Pediatricians were young, reported low income
and high workload, particularly in emergency room and pediatric ward. The overall
satisfaction with pediatric residency was good, however, reduced in very early
career pediatricians (≤5 years). The advantage of the present study was the moderate response rate (54%) of the online
self-reported survey without financial incentive. This response rate observed herein
contrasted with a low response in previous studies with neonatologists, members of
The American Academy of Pediatrics (15%), with undergraduates in neurology residency (23%) and with medical students, residents and early career physicians listed in
Physician Masterfile (35%). The confidentiality of the survey was relevant since there was no disclosure
of their identity. Another strength of this study was the use of a self-report, standardized questionnaire, including measuring instruments to assess overall satisfaction. Pediatricians in the present study were young, married, and had a low number of
children. A predominance of female sex was also observed, similar to other
studies. Indeed, this finding is related to the increase of women in medical schools and residency programs around the world. Nearly 80% of pediatricians worked in both public and private services, known as dual
practice. This is a particularity in Brazil, where more than half of medical
professionals are engaged in dual practice. Our residents were exposed to all pediatric specialties during the residency training
program. However, the most frequent pediatric specialties chosen by our
pediatricians were Neonatology, Pediatric Intensive Care and Pediatric Cardiology,
as also previously reported. In fact, these specialties are hospital and procedure-based and may have
more job opportunities and improved income. Of note, the medical profile described in our institution may not represent the
Brazilian pediatrician demographic data. In fact, data from the state of Sao Paulo
in 2018 showed that the mean age of pediatricians was 47.4±11.6 years, and the most
frequent pediatric specialties were Immunology and Allergy, Oncology and
Endocrinology. Regarding continuing medical educational, the majority of pediatricians attends
medical conferences regularly. This result was important to physicians due to great
advances in scientific knowledge in clinical practice in newborn, children and
adolescent patients, particularly in those with complex chronic diseases. , , , , One-quarter of pediatricians reported publishing research papers in medical
literature. There has been an increase in this practice in our Medical School and Pediatric Department in recent years, reinforcing the relevance of the additional gains of writing a manuscript, such as professional education, financial and social benefits, and intellectual pleasure. This is also relevant for those residents who intend to engage in academic medicine. Importantly, the main locations of pediatric practice described by very early career pediatricians were the emergency room and pediatric ward, requiring long working hours
with a poor social life and a sedentary lifestyle. Caring for acutely ill, severely compromised patients with life-threatening and chronic conditions may induce issues and contribute
to the decrease in overall satisfaction with pediatric residency reported by our
pediatricians with less than 5 years of clinical practice. In addition, these are issues in medical training programs around the world. The
transition from residency training to clinical practice may be challenging for very
early career pediatricians, causing work-related stress, fatigue, discomfort and
emotional exhaustion, such as burnout, anxiety, depression and susceptibility to
harassment. Our pediatricians with >5 years of clinical practice became more specialized,
worked at private practices and reported high income. Indeed, self-employed
pediatricians may choose their workload based on preferences and financial
interests. The differences in the locations of pediatric practice among the pediatricians
showed that most physicians started their early career in the emergency rooms and
wards and, later on, moved into pediatric specialties, particularly working in
private practice. This reinforces the need for the residency programs to cover all
areas of practice in pediatrician training. Of note, the overall satisfaction with clinical practice was high and similar in very
early (≤5 years) and early career pediatricians (>5 years). Our study suggests
that pediatricians are pleased with their profession, probably due to the close relationship with patients and families, and the holistic awareness of mental,
physical and emotional health. However, general pediatrics is non-procedure based,
contrasting with early-career cardiologists training in Japan, where satisfaction
was associated with invasive procedures, such as coronary angiography and
percutaneous coronary interventions. In 2014, a new 3-year pediatric residency program was instituted in our Pediatric
Department, aimed at training pediatricians in the 21 st century. The
topics of Adolescent Medicine, Developmental and Behavioral Care, mental health and
pediatric chronic disease care were expanded. Approximately one-third of our
3-year pediatric residents did not study pediatric specialties after the general
residency program. Future multicenter studies with a large population will be
necessary to clarify this issue. The limitations of this study included the possibility of memory bias, since
respondents were asked to answer questions, preferably using a 1-year recall period.
In addition, the results of this survey must not be generalized beyond the
pediatrician population in our country. Other limitations were the cross-sectional
study design, and the fact that the survey did not include instruments to evaluate
psychological/psychiatric issues. Therefore, longitudinal and qualitative studies
are needed to clarify the trajectory of pediatricians and gain more insight into
work-life balance and future goals. In conclusion, very early career pediatricians (≤5 years) reported higher workload,
lower income, and work-related issues compared with later career pediatricians (>5
years), but over time they moved on to private practices and specialized care,
consequently improving earnings and living conditions. The overall satisfaction with
pediatric residency was good, but lower among very early career pediatricians,
possibly due to not being able to encompass all the knowledge they had acquired in
their medical residency.
Clean your own house first: integrating sustainability into microbiology labs
Energy efficiency
A critical step towards sustainability is reducing energy consumption. Microbiology labs, with their array of equipment such as fridges, freezers, biosafety cabinets, autoclaves, incubators, thermocyclers, and centrifuges, are typically energy intensive. Simple measures like turning off equipment when not in use, defrosting freezers, avoiding blocking air vents in biosafety cabinets, using LED lighting, and maintaining equipment for optimal performance can make a considerable difference. Notably, the National Renewable Energy Laboratory, the Lawrence Berkeley National Laboratory, and the International Institute for Sustainable Laboratories have produced a report on assessing baseline energy performance for laboratories (Mathew et al.). They also presented the Environmental Performance Criteria (EPC), a point-based rating system designed to score overall environmental performance for laboratories (Mathew et al.). Updated versions of both the Laboratory Benchmarking (I2SL) and EPC (I2SL) tools are available online and are free to use. These institutes have further developed guidelines and a Smart Labs Toolkit to help increase environmental sustainability in laboratories (I2SL), currently used by numerous academic, public, and industrial laboratories. Additionally, agreements with manufacturers and suppliers could facilitate upgrading to energy-efficient models, significantly cutting down on power usage. Furthermore, questioning our long-standing practices could also lead to energy savings. For example, a discussion on Twitter raised the question of whether setting a freezer at –80°C is necessary. Could increasing the temperature to –70°C also suffice? Leak and collaborators explored this question, finding that operating an energy-efficient freezer at –70°C, if necessary with empty boxes to buffer against temperature changes, can reduce energy use by about 36% compared to running the same model at –80°C (Leak et al.). Further, the Freezer Challenge organisation has made available scientific evidence that –70°C is a safe storage temperature for various sample types (Freezer Challenge), including fungal isolates (Espinel-Ingroff et al.) and proteins (Beekhof et al.). Indeed, together with the University of Colorado Boulder, the Freezer Challenge has created a database where life scientists across the world share the types of samples stored at –70°C, their applications, and the safe storage tests conducted. These initiatives are open invitations for more microbiology labs to participate. Granted, social media might not seem like the ideal environment for academic discussion, but it can provide a space for open conversations about long-standing beliefs, leading to useful knowledge. Economic benefits further support sustainability practices. This has been demonstrated by Harvard University's Green Labs Program, which, by encouraging practices such as shutting fume hoods when not in use, has saved approximately $250,000 USD annually in energy costs, reducing the carbon footprint and improving laboratory safety and efficiency by maintaining optimal airflow (Quentin).
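To illustrate the scale of such savings, the back-of-the-envelope sketch below converts the reported 36% reduction into annual energy and cost figures; the baseline daily consumption and electricity price are hypothetical assumptions, not measured values.

```python
# Back-of-the-envelope sketch of the ULT freezer saving cited above.
# The baseline daily consumption and electricity price are assumptions.
baseline_kwh_per_day = 20.0   # assumed draw of a ULT freezer at -80 C
saving_fraction = 0.36        # ~36% reduction at -70 C (Leak et al.)
price_per_kwh = 0.25          # assumed electricity price per kWh

kwh_saved_per_year = baseline_kwh_per_day * saving_fraction * 365
cost_saved_per_year = kwh_saved_per_year * price_per_kwh

print(f"~{kwh_saved_per_year:,.0f} kWh saved per freezer per year")
print(f"~{cost_saved_per_year:,.0f} (currency units) saved per freezer per year")
```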
Water conservation
Water is a critical resource in microbiology labs, used for everything from washing glassware to preparing solutions and culture media. In settings where autoclaves and dishwashers are the major water-consuming pieces of equipment, running these only when completely full and efficiently loaded would greatly improve the efficient use of this resource. Coordinating with other laboratories could help maximise the use of each cycle. Equally, assessing the type of water necessary for each application can reduce water consumption. For instance, considering that producing 1 L of deionised water requires 3 L of tap water (Leak et al.) should prompt the evaluation of whether a given activity needs deionised water. Other advisable changes include using foot pedals in lab sinks to facilitate turning water on and off and recirculating water systems for cooling laboratory equipment (Royal Society of Chemistry). The University of Bristol has implemented various water-saving measures in its laboratories, including those dedicated to medical microbiology and infectious disease. For example, its Green Labs scheme focuses on optimising water usage among other sustainability goals. Measures include installing low-flow aerators on lab sinks, maintaining reverse-osmosis systems efficiently, and actively identifying and fixing leaks. This proactive maintenance has prevented water loss and potential damage to lab equipment and facilities, ensuring smoother laboratory operations by reducing equipment downtime, ultimately allowing the University to allocate more resources to research activities (The University of Bristol).
Chemical use and disposal
Chemicals are indispensable in microbiology labs, to the extent that we can all think of containers with reagents from years past, often with the same chemical being ordered over and over again without any given jar actually being empty. Thus, by carefully managing inventory and avoiding overordering, labs can prevent waste from expired or unused chemicals. Also, the choice and disposal of chemicals impact sustainability (Royal Society of Chemistry). Eliminating our most hazardous reagents or exchanging them for less hazardous or non-hazardous compounds (EFLM), and centralising their use and disposal at the department or institutional level, can help to improve research reproducibility and sustainability practices (Meyn et al.) while minimising the type and amount of chemical waste produced by individual laboratories.
Waste management
Microbiology laboratories generate a variety of wastes, with single-use plastics likely constituting the largest proportion. Although plastics used to manufacture laboratory consumables are technically recyclable (e.g. polypropylene), the presence of biological contaminants, the diversity of plastic types, and the mixed nature of waste make recycling a complex and often unfeasible option. Furthermore, almost all single-use plastic waste in microbiology labs is routinely collected in biohazard containers or bags for disposal and incinerated (Tan et al.). However, a significant amount of the plastic waste discarded by research laboratories is not biohazardous. Indeed, a report analysing the recycling potential of plastic waste produced by clinical facilities in the United States found that plastic waste can be significantly reduced by appropriate segregation of waste (Lee et al.). In fact, some alternative strategies are already being implemented.
For instance, the University of York's waste audit and development of a plastics decontamination station (Kuntin) led to correct waste segregation, allowing the institution to partner with a laboratory supplier to recycle centrifuge tubes, pipette tips, and other plasticware that had not held biohazardous materials (Sterilab Services). Furthermore, some manufacturers are developing biodegradable plastics specifically for laboratory use. These plastics, under the right conditions, can break down more quickly and with fewer environmental repercussions. Equally, some manufacturers offer take-back programmes for used plastic items, ensuring proper recycling or disposal. Microbiology laboratories should segregate plastic waste and collaborate with companies that can handle lab-specific recycling challenges, such as pipette tip boxes. Which other products could follow a similar scheme? Wherever possible, laboratories could shift from single-use to reusable items. Glassware can often replace plastic containers and be sterilised for repeated use. Metal instruments can replace some plastic tools and last much longer. This is not new to microbiology laboratories, where single-use plastics have often replaced more sustainable options, for example, glass Petri dishes, metal inoculating rods, and glass L-shaped spreaders. We can also develop protocols where microcentrifuge tubes are changed and disposed of only after a few centrifugation cycles (e.g. DNA extraction); would it not be reasonable to reuse autoclaved tubes or to use biodegradable plastics? Indeed, protocols for reusing plastic consumables in routine microbiology assays already exist. For example, Soltani and colleagues (Soltani et al.) showed that reusing pipette tips and tubes after chemical washing with sodium hypochlorite had no impact on the quality and purity of PCR amplicons. Similarly, reusing pipette tips washed with sodium hypochlorite in automated clinical microbiology protocols for SARS-CoV-2 RT-qPCR increased efficiency, mitigated consumable shortages, and reduced costs and plastic waste (Berger et al.). It is also important to consider that around the world these practices are conducted on a daily basis, perhaps more due to economic constraints than environmental reasons; nonetheless, they are feasible. Would this be a practice that laboratories in more economically developed countries would consider adopting? While the transition away from single-use plastics is not without challenges, particularly regarding sterility and contamination, the environmental benefits are significant. By adopting alternative materials, recycling programmes, and a culture of sustainability, microbiology laboratories can reduce the ecological footprint of scientific research. For example, Alves and colleagues (Alves et al.) at the University of Edinburgh assessed and addressed the plastic use and waste of their multidisciplinary microbiology, molecular biology, and immunology laboratory facilities. Their work led to an important reduction in single-use plastic waste and in biohazard waste needing autoclaving and/or incineration, thus achieving cost savings for the research institute. This case study could be used as a benchmark for other groups.
Mindful automation
With the advancement of automation, microbiology labs in universities, industries, and the public sector are increasing their energy consumption, especially with equipment recommended to stay "on" regardless of usage.
Equally, automated systems often produce more plastic waste due to high-throughput processes and proprietary consumables. For example, a liquid handler from one company might not operate unless specific pipette tips are used. There are microbiology laboratories where liquid handlers from multiple brands require different pipette tips, increasing plastic waste and packaging. Just as for most manual pipettes, where pipette tips can be bought from either the pipette manufacturer or a generic one, automated liquid handlers offer the opportunity for a conversation across manufacturers to produce leap-frog products that can be used across technologies, or for a consumables manufacturer to come up with one-size-fits-all alternatives. It is worth noting that such a change would probably require a push by the consumer; it is therefore crucial that microbiologists, molecular biologists, and scientists in general request and are part of those conversations. Achieving changes in energy management and single-use consumables would boost the benefits that automated systems already provide, for instance, the reduction of the time and effort required to conduct molecular laboratory assays and the accuracy of reagent usage that often minimises the generation of chemical waste.
Procurement
By purchasing supplies in bulk, labs can reduce the amount of packaging waste. Choosing suppliers and products that prioritise sustainability is essential. This means selecting products with minimal packaging, opting for goods made from recycled materials, selecting products that offer recycling schemes, and partnering with companies that have a clear commitment to environmental responsibility.
Investment in research
Supporting research into new materials and methods that reduce reliance on single-use plastics, fossil fuels, and high-energy operations can contribute to long-term solutions. Grants and funding opportunities specifically targeted at sustainability in the lab can incentivise innovation in this area, for instance: testing the effect of relatively higher temperatures on freezers for long-term sample storage; assessing how many times, if any, pipette tips and microcentrifuge tubes could be re-used for a given application; the use of biodegradable consumables for growing, storing, and maintaining live cultures; and the development of biodegradable plastics for lab applications.
Education and awareness
Perhaps the most crucial factor in driving sustainability is fostering a culture of environmental awareness within the lab. Regular training and discussions about sustainable practices encourage lab personnel to be mindful of their environmental impact. By understanding the importance of each action, from recycling a plastic tube to properly shutting down equipment, lab staff can collectively contribute to more sustainable operations.
Institutional support
The discussion above highlights the benefits of collaboration not only within a laboratory group, but also with a larger spectrum of potential partners. Clearly, universities and other institutions hosting microbiology laboratories would be the first avenue for larger improvements. Nowadays, universities are ranked on their sustainability credentials. For example, the Impact Rankings from Times Higher Education evaluate universities' performance against the UN Sustainable Development Goals. Similarly, the QS Sustainability University Rankings measure the ability of around 1,400 universities to tackle global environmental and social challenges. Furthermore, nation-specific tables, such as the People and Planet University League in the United Kingdom, provide a country-specific ranking of higher education institutions' commitment to environmental, social, and governance sustainability. It is therefore reasonable to expect, and actively ask for, institutional support to achieve sustainable laboratory operations. Such support should include conducting energy and water assessments at laboratory and building scales while making the necessary adjustments, evaluating appropriate insulation and ventilation installations, and guiding and training researchers on their sustainability journeys. Institutions can also assist with, if not drive, more environmentally sustainable procurement by ordering in bulk and by compiling and selecting suppliers with sustainability commitments. Moreover, the creation of Shared Research Resources (SRR) at the institutional level, where instruments, reagents, and technical expertise are shared, has been demonstrated to be an invaluable tool for ensuring quality, reproducible science while reducing the environmental impacts of biomolecular research (Meyn et al.). Microbiologists can then argue that their research institutions can achieve higher-quality research outputs, economic savings, and reputational gains by supporting sustainable laboratory practices.
Microbiology societies
Another important partnership can be formed between microbiologists and learned societies. Many microbiology societies already implement sustainability practices, such as in the organisation of events. For instance, at the latest FEMS 2023 Symposium, most of the catering was vegetarian and single-use plastic cutlery was avoided. Instead, ceramic cups were available for every coffee break, and a stainless-steel bottle was provided to all attendees so that they could refill as needed from the multiple water supplies available. Another exceptional example is the Microbial Ecology and Evolution Hubs, which in January 2024, by facilitating hybrid participation, allowed in-person attendance for local researchers while supporting full virtual participation across the Europe and North America Hubs. Hybrid conferences where travel can be flexible not only cut carbon emissions (Achten et al.) but are also more inclusive of audiences from less economically developed countries. However, microbiology societies could improve their sustainable event credentials by asking for sustainability statements in the grants they provide for this matter, and by sponsoring events and/or knowledge exchange activities specifically targeting sustainable labs. Another key activity that microbiology societies could facilitate, owing to their extensive networks, is the organisation of round tables where microbiologists, institutes, industries (e.g.
manufacturers, suppliers), and even policy makers could discuss alternatives and solutions to the environmental issues caused by research laboratories. Also, society journals could promote scientific research in sustainability, such as this FEMS special issue on "Microbiology for a Sustainable Future", and could accept case reports on sustainable laboratory best practices. Another example in this regard is Applied Microbiology International's new bespoke journal Sustainable Microbiology, which requires that any manuscript submission state how that piece of research addresses the UN Sustainable Development Goals. Would a similar requirement, stating the sustainability measures and considerations taken in conducting any and every piece of research, make sense? This would at least promote a thought process by authors and readers about the environmental impact of a given investigation.
Driven by a similar motivation to that of the present work, a group of early career researchers (ECRs) worked with the publisher eLife Sciences Publications to launch the #LabWasteDay campaign on Twitter in 2019 (Howes), highlighting the single-use plastic used by scientists globally. Plastic waste and other realities of the environmental impact of life-sciences research, including microbiology, have been recounted in numerous perspectives, case reports, and letters to the editor, mostly written by ECRs. In these reports, ECRs describe the tension between the perceived requirements of sterility and sustainability. Some of these researchers have tried to influence change in their laboratories, with more or less success. The success stories of Alves et al. at the University of Edinburgh in cutting plastic waste, as well as that of David Kuntin at the University of York (Kuntin) in creating a plastics decontamination station, were driven by ECRs. However, the reality of fixed-term contracts and the pressure to generate quality results worthy of publication in a short period of time could hinder the initial enthusiasm of ECRs to abide by sustainability rules. This highlights how crucial the role of senior microbiologists is in successfully implementing these changes in their laboratories. Principal investigators, laboratory managers, and team leaders are the points of reference in every laboratory. Just as they establish the research culture and how a laboratory operates, senior scientists, along with permanent technical staff, should set the standards for sustainability practices in laboratory procedures. It is their prerogative, and we might argue their responsibility, to foster a culture open to innovation and committed to conducting reproducible, quality science while procuring sustainable research practices. Furthermore, permanent staff should ensure the longevity of environmentally friendly initiatives, perhaps spearheaded by ECRs, beyond fixed-term projects. It is therefore paramount to bring faculty, ECRs, technical staff, and students together to develop, engage with, and support such changes. Having an open mindset towards new practices that do not compromise research quality and that reduce the impact of research on the environment should no longer be a choice but a mandate. In a world moved mostly by economic considerations, it is important to remember that sustainable practices not only reduce environmental footprints but also often lead to cost savings and efficiency improvements. Microbiologists, particularly environmental microbiologists, should lead by example in promoting sustainability in laboratory environments. Let's clean our own house first.
|
Modeling and scientific analysis of pediatric medication evaluation based on MDM-DEA-Malmquist model: construction of health management in pediatrics in developing countries | b80db399-d68d-437f-bf15-5dda9eb3785c | 11806530 | Pediatrics[mh] | For a long time, developing countries have faced resource constraints and development difficulties. Their economic, technological, and living standards are significantly lower than those of developed countries. Yet developing countries are home to some 6.39 billion people, and their land area and population account for more than 70% of the world's totals; they have become the major bases of global agricultural, industrial, and economic activity . Nearly 80% of the roughly 78 million newborns worldwide each year are born in developing countries . Against the backdrop of gradually negative birth rates in developed countries, developing countries hold enormous population potential and supply the world with substantial labor resources. Unfortunately, they struggle to convert this population into practical value and even face a heavy demographic burden. Developed countries have drawn away important population resources, creating a siphoning effect that further compresses the room developing countries have to build; developing countries have largely been relegated to supplying labor and are gradually locked into low-end economic activities . Resources for child care in developing countries are especially vulnerable because of infrastructural, economic, human, social, and cultural weaknesses, and their pediatric systems and pediatric healthcare activities are fragile. Take China as an example: although China approaches the level of developed countries in many areas and represents the upper tier of developing countries, its pediatric system still needs improvement. As of 2022, China had about 230 million children aged 0-14, accounting for 18% of the national population . Yet only 118,000 professionals are qualified as pediatric physicians — just 0.53 pediatric professionals per 1,000 children , or roughly 1,800 children relying on the care of a single pediatric professional. Notably, this figure covers only basic pediatric personnel ; the number of pediatric professionals with senior titles or high levels of expertise is far smaller. In addition, pediatric medical resources in China are mismatched and unevenly distributed: 43.6% of pediatric outpatient services and 53.5% of pediatric emergency services are provided by general hospitals, while specialized pediatric hospitals account for only 0.5% of local pediatric medical services, and in rural areas this figure is close to zero. China thus still lacks specialized, well-targeted pediatric health care. Practical, standardized, and effective medication within the pediatric healthcare system is the key lever for ensuring the efficiency of pediatric care and promoting the healthy growth of minors. The Declaration of Alma-Ata states that the principles of self-participation and state-provided medical care are central to promoting a standardized basic pediatric health care system, and that the accessibility of public health and essential healthcare services sets the ceiling for children's growth .
The public is to be guaranteed access to high-quality, efficient, and affordable medical services through the establishment of a composite health-insurance payment mechanism and a multi-level medical security system based on clinical pathways, developing in synergy with a high-quality, efficient healthcare system; reforms and policy tilts around children's medication have therefore become the focus of the next phase of pediatric construction . At the practical level, China's pediatric medicine also faces a tight situation. Compared with developed countries, the medical foundation of developing countries is weaker: medical professionals are in extreme shortage, and R&D laboratories, R&D systems, and experimental programs for innovative medicines are all significantly underdeveloped . Developing countries therefore rely on imported or generic pediatric drugs. Constrained by domestic economic conditions, even locally produced children's drugs fall short of those in developed countries in quality, efficacy, and price. Data show that the number of children's medical consultations in China grows by 5 million annually. In 2021, the pediatric drug market reached 107.9 billion yuan, with a CAGR of 9.9%. On incremental data: as of May 2022, about 18,400 pharmaceutical products were approved in China, but 95% were non-pediatric; only 930 were pediatric drugs. On stock data: 90% of pediatric drugs have only one product specification and rely heavily on imports or a single manufacturer; only 6% of products have more than two specifications, and only 3% have more than three. In addition, China's pediatric drugs are dominated by granules, tablets, and oral solutions, which in 2021 accounted for 32%, 25%, and 21%, respectively , a combined share of 78%; dispersible preparations, pills, and capsules are relatively scarce, together accounting for less than 10%. By indication, China's pediatric drugs are dominated by general respiratory, anti-infective, and digestive-system drugs, with 2021 shares of 38%, 23%, and 17%, respectively, and a severe shortage of the remaining specialty or targeted drugs. At a deeper level, pediatric medication also has endogenous problems. Children's medicines often taste bad, so adherence is poor. Practices such as "splitting pills to dose, dosing by guesswork" and "discretionary pediatric dose reduction" have long persisted, and such drug misuse has seriously harmed children's health. At the same time, China's pediatric drug development started late and its R&D technology lags; the numbers of approved pediatric products and active ingredients are small, with approved pediatric products accounting for less than 2% of the total — an enormous gap relative to China's 250 million children. At a high level, China's pediatric medication exhibits both a structural conflict of insufficient supply against demand and a spillover problem of low-end drug reuse, as well as both the hard problem of inadequate doctor-patient communication mechanisms and the soft problem of over-controlling parents with limited caregiving literacy.
There is both a lack of scientific, standardized medication strategies and a muddled medication orientation that overemphasizes inputs and outcomes. Developing countries account for most of the world's population and constitute a major force in population turnover; they have the largest child population in the world . Yet resources for pediatric specialists and pediatric medicines are severely short. This not only restricts the construction of developing countries themselves but also limits the quality of global development . Overall, scientific management of pediatric medication has become an urgent issue for developing countries. According to statistics, approximately 30,000 children in China suffer hearing loss and around 7,000 children die each year owing to improper medication. The incidence of adverse drug reactions in children in China is 12.5%, about twice the adult rate, while the rate in newborns is roughly four times that of adults; the harm caused by irrational and incorrect drug use in children is even more serious . Among children seeking medical treatment for poisoning, drug poisoning accounted for 53% in 2012 and rose to 73% in 2014. By age, children aged 1 to 4 account for the largest share of drug poisoning among children aged 0 to 14. Problems with pediatric medication arise at multiple stages: before, during, and after administration. In the early stage, there remain shortages of suitable varieties and inappropriate dosage forms and specifications for children , while insufficient hospital infrastructure and difficulty in assessing a child's condition weaken medication effectiveness. In the middle and late stages, 84.9% of children face medication safety hazards, most parents lack awareness of safe medication, and irrational or outright incorrect medication is frequent. Meanwhile, owing to inadequate community management, personalized dispensing services for children's medication are lacking , as is an individualized precision-medication system. Against this background, a scientific pediatric medication evaluation system is needed to help pharmaceutical managers deliver better pediatric healthcare. For countries such as China, lean management of pediatric medication under limited resources can significantly improve medication efficiency, prevent more children from suffering resource loss and medication injuries, and help hospital managers shorten diagnosis time and improve work efficiency . This provides the research background for this study: breaking through resource constraints to construct a lean evaluation system for pediatric medication. The immediate problem, therefore, is how to realize precise pediatric medication in developing countries under limited resources and realistic conditions . Given multiple human, social, and economic constraints, how can developing countries achieve efficient pediatric medication use despite insufficient pharmaceutical resources? This paper aims to construct a pediatric medication assessment framework suited to developing countries, helping doctors and parents better understand pediatric medication and select specific medications in a targeted manner .
Because of the significant variability among developing countries, this paper mainly uses empirical data from China. However, the pediatric assessment framework provided here is built on the Multiple-Domain Matrix (MDM) and is highly adaptable: other scholars can refine the assessment system and derive ideas for pediatric healthcare in their own countries based on local conditions. Building on the MDM assessment results, this paper further constructs an input-output matching model for pediatric medication, using MDM matrix thinking and the assessment outcomes to provide theoretical guidance and practical feedback on the input-output system. The paper then applies the DEA-Malmquist model to report the efficiency of pediatric medication in each Chinese province, thereby validating the MDM assessment logic and deriving scientific strategies for pediatric medication use. Another significant contribution is that, in view of pressing real-world problems, we do not call for the rapid establishment of pharmaceutical R&D and management systems in developing countries (a long transition that requires sustained effort by successive governments across the economic, social, educational, and innovation fields). Instead, we aim to sift the limited factor environment for assessment ideas that can quickly and substantively help children access better welfare. Pediatric medication has long been a focus of pediatric theory and practice. Using CiteSpace, Li reviewed the literature on pediatric medication in China and elsewhere and found that Chinese pediatrics has continued to emphasize rational and safe medication use, while the foreign literature emphasizes medication diagnosis, safety assessment, and innovative research and development . The focus of such work is to identify areas of action in pediatric medication use and to summarize the key motivations that influence it; these motivations clearly vary between countries. In developing countries, most studies survey over-the-counter medication use in hospital outpatient clinics, highlighting an outcome-oriented dimension of pediatric development . On the one hand, these countries must screen for cost-effective drugs with limited drug resources; on the other, lacking drug-safety and evaluation trials, they can only invest more effort at the clinical level . In developed countries, most research targets pediatric drug development: encouraging R&D on child-specific dosage forms and specifications and improving the standardization of children's clinical drug use. At the current stage, then, this motivation manifests in developing countries as multidimensional safety assessment of pediatric medications, whereas developed countries focus more on the frontier and innovation of pediatric medication — developing, testing, or applying pediatric drugs of maximum efficacy within the widest boundaries of safety .
In addition, constrained by economic conditions, developing countries also tend to pursue short-term time savings and low prices in pediatric medication. Analysis of big data on pediatric outpatient prescriptions in China shows that the rate of irrational pediatric medication use reaches 15.02%, dominated by the misuse of hormones and antibiotics. Given national conditions and the limited reach of medical education, parents in developing countries tend to be less well informed, which pushes doctors toward rapid medication: to provide quick relief to the child and soothe parental anxiety, doctors often prescribe heavily . Moreover, to maintain outpatient and emergency-room turnover, especially during larger waves of influenza and pediatric patients, more children can be cared for with more generalized, abbreviated medication regimens. Antibiotics and hormones, which generalize well, have therefore become the mainstream means of administration. Among mainstream pediatric drug resources, most hormones and antibiotics are relatively inexpensive, meeting most families' demand for cheap drugs, and their quick-relief effect is broadly recognized by the market. In developed countries, by contrast, high levels of household drug knowledge and scientific literacy make family pediatric medication rigorous and standardized , and the vast resources of the healthcare system support one-on-one personalized pediatric treatment plans, avoiding irrational or redundant pediatric medication. Other scholars emphasize compounding and flexibility: pediatric medicine should consider child patients in a forward-looking manner. Children live in a complex external environment and their immune systems are not fully developed, so cross- or multiple infections are likely and disease progression is hard to track. Pediatric medication therefore needs to be weighed from multiple angles to avoid potential drug conflicts and prevent future flare-ups. According to the National Adverse Drug Reaction Monitoring Annual Report (2022), 7.8% of all adverse drug reaction reports in China concern children aged 14 and younger; 86% of drug-poisoning incidents in children occur during home medication use; and nearly 85% of families carry risks to children's medication safety. For children's proprietary Chinese medicines (pCms), the situation of few varieties, few dosage forms, few specifications, and few labels needs to be reversed. Children's pCms feature smaller dosages, relatively acceptable taste, and portability, yet their varieties are few: only 12.6% of children's pCms are included in the National Essential Drugs Catalog (2018 edition). Relative to market demand for children's medicines, supply varieties remain insufficient. Relevant data show that children's morbidity has hovered around 19.4% in recent years, yet only 3.2% of drugs approved for marketing in China are specialized for children, and only 12.4% can be used by adults and children alike. For cerebral, renal, dermatologic, and toxicological diseases, child-specific drugs are even scarcer.
China has established a fairly comprehensive medical insurance system for urban and rural residents, centered on the scientific construction of a pediatric medication system and on protecting families' health and property interests. The number of insured children is rising steadily, reaching 256 million in 2023, which has played a fundamental role in safeguarding children's health rights. At the same time, problems persist: enrolling newborns involves many procedures and lengthy processes; a small number of parents remain unaware of the need to enroll; and some cities have not yet fully lifted household-registration restrictions on the enrollment of resident children. Although the government has invested more resources in building the pediatric medication system, efficiency remains low, and medication quality and outcomes are unsatisfactory . It is therefore particularly important to design a scientific structure for pediatric medication evaluation and to measure and adjust the effectiveness of pediatric medication in China on that basis. Taken as a whole, time, price, efficacy, flexibility, and safety are only the main focal points of pediatric drug use in developing countries; in reality, differences in actors, environments, and subjective and objective thinking have shaped many of the guiding factors and key motivations. An objective overview of the root causes of pediatric medication behavior is needed, based on the assessment process and attentive to integrating core operations with the needs of children and families. Previous pediatric medication management, moreover, has lacked an application-friendly, process-oriented, cause-and-effect approach ; a detailed analysis of the current state is needed before assessment to reveal the complex process of pediatric medication administration. Accordingly, this paper proposes a pediatric medication diagnostic assessment framework with developing-country characteristics, aligned with the regional conditions above, to capture the logic by which hospitals, doctors, parents, and children administer and receive medication under resource constraints. This is essentially a model of pediatric medication use built on lean management principles . A well-conceived pediatric medication process model centered on optimizing resources and streamlining organization can improve the efficiency of pediatric medication administration in developing countries and ease the contradictions in pediatric development. We draw on lean management principles to integrate the customer's desire (pediatric medication requirements in developing countries) with the process as a whole (pediatric medication-oriented cause and effect) and to determine the priority strength of key factors , yielding an effect-oriented classification of the root causes of pediatric medication. Here, MDM is a research method with good adaptability: because pediatric medication involves many kinds of indicators, such as processes and factors, MDM enables the joint judgment of multi-domain matrices and thus the final assessment results.
Based on the assessment results, a scientific efficacy-analysis model is needed to examine and verify the validity of the assessment and to analyze pediatric medication deeply and quantitatively with practice data. DEA has long been an effective method for measuring effectiveness. Through its multi-input, multi-output design, DEA circumvents axiomatic assumptions in assessment studies: it floats a piecewise-linear surface onto the top of the data via linear programming. In other words, whereas statistical regression estimates the parameters of a hypothesized function through a single optimization over all decision-making units (DMUs), DEA solves a separate optimization (a linear program) for each DMU and makes no a priori assumption about the underlying functional form. Traditional DEA is dominated by radial characterization, i.e., it assumes inputs and outputs vary in the same proportion — the very assumption that removes the need for prior information about the underlying function. In complex governance activities such as pediatric medication, however, the relationship between inputs and outputs is clearly ambiguous. Traditional DEA efficiency is characterized by radial efficiency scores, leaving possible non-zero input (or output) slacks unaccounted for; a radial Malmquist productivity index built only on radial DEA efficiency, ignoring non-zero input slacks in an input-oriented index (or non-zero output slacks in an output-oriented one), clearly cannot fully characterize productivity change . The non-radial Malmquist index, by contrast, effectively analyzes efficacy under complex scaling relationships. Given the complexity of the MDM assessment results and the unstructured nature of the real transformation-assessment system, the non-radial DEA-Malmquist method resolves this problem and enables accurate assessment of pediatric medication efficacy.

Research methods

This paper constructs and uses a joint MDM and DEA-Malmquist model to analyze the current status of pediatric medication in China and the directions for its scientific improvement. The operational logic is shown in Fig. . As shown in Fig. , a multi-level, multi-cycle evaluation index system for pediatric medication is formed from the pediatric medication literature and practice, combined with expert interviews . Based on small-sample measurement and analysis, the MDM matrix framework is refined and then computed in earnest. After the multidimensional ranking of medication factors under each goal orientation is clarified, the factor indicators are mapped into an input-output system and fed into the DEA-Malmquist model, measured against panel data from each province. The DEA-Malmquist results are used to verify whether the best performers in pediatric efficacy evaluation are consistent with the factor ranking of the MDM matrix framework , and to extend the assessment results into practical strategies for pediatric medication.

MDM Matrix Method

This study constructed a framework for assessing pediatric medication use in developing countries, drawing on the lean management model and MDM matrix principles . The structure of the framework is shown in Table .
In this framework, PD (Pediatric Direction) refers to the outcome factors, i.e., the outcome variables, surrounding pediatric medication. MF (Main Factor) refers to the primary factors surrounding pediatric medicines — time, price, effectiveness, flexibility, and safety, as analyzed above. TF (Time Factor) refers to the temporal factors surrounding pediatric medication, comprising three phases: before, during, and after the drug is administered . AF (Actual Factor) refers to the actual factors surrounding pediatric medicines; a multilevel causal structure within the AFs is likewise mapped and divided across the time phases (TFs). M_{AF*PD} characterizes the feedback of each actual factor on the final pediatric medication outcome and is presented as a ranked list of factor shares, exhibiting the detailed importance of each factor for the reference of physicians and families. M_{MF*PD}, M_{TF*MF}, M_{AF*TF}, and M_{AF*AF} are the scores of the factors at each level, obtained from expert research; M_{TF*MF} is computed on a seven-point Likert scale to differentiate the importance of factors within each category . Based on the MDM matrix framework, this paper calculates the M_{AF*PD}, M_{AF*MF}, M*_{AF*TF}, and M*_{AF*AF} results as follows:

$$M^{*}_{AF*AF} = M^{1}_{AF*AF} + M^{2}_{AF*AF} + \cdots + M^{n}_{AF*AF}$$

$$M^{*}_{AF*TF} = M_{AF*TF} + M^{*}_{AF*AF} \cdot M_{AF*TF}$$

$$M_{AF*MF} = M^{*}_{AF*TF} \cdot M_{TF*MF}$$

$$M_{AF*PD} = M_{AF*MF} \cdot M_{MF*PD}$$
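To make the propagation above concrete, the following is a minimal numpy sketch of the matrix chain — an illustrative reading of the formulas, not the authors' code. The dimensions (17 second-level AFs, 3 TFs, 5 MFs) follow the framework described in the data-acquisition section, the MF weights follow the survey results reported later, and all other scores are randomly generated placeholders:

```python
import numpy as np

# Hypothetical sizes taken from the framework: 17 second-level actual
# factors (AF), 3 time factors (TF), 5 main factors (MF), one PD.
n_af, n_tf, n_mf = 17, 3, 5

rng = np.random.default_rng(0)
M_af_af = (rng.random((n_af, n_af)) < 0.1).astype(float)  # placeholder AF-to-AF influence
M_af_tf = rng.random((n_af, n_tf))                        # placeholder AF-to-TF scores
M_tf_mf = rng.random((n_tf, n_mf))                        # placeholder Likert-based TF-to-MF scores
M_mf_pd = np.array([[0.15, 0.10, 0.30, 0.05, 0.40]]).T    # MF weights from the survey below

# Transitive closure of AF-to-AF influence: sum of matrix powers 1..n,
# accumulating indirect influence chains before they are propagated.
M_af_af_star = sum(np.linalg.matrix_power(M_af_af, k) for k in range(1, n_af + 1))

# Push direct plus indirect AF influence into the time-factor scores.
M_af_tf_star = M_af_tf + M_af_af_star @ M_af_tf

# Map through time and main factors down to the pediatric direction.
M_af_mf = M_af_tf_star @ M_tf_mf
M_af_pd = M_af_mf @ M_mf_pd

# Normalized shares give the kind of factor ranking reported in the tables.
shares = (M_af_pd / M_af_pd.sum()).ravel()
print(np.argsort(shares)[::-1][:5])  # indices of the five most important AFs
```

The closure term is what lets a factor that only influences other factors (rather than the outcome directly) still earn weight in the final ranking.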
DEA-Malmquist model

DEA (Data Envelopment Analysis) has long been a principal method for objective evaluation; its multi-input, multi-output design avoids axiomatic assumptions in evaluation research. In evaluating pediatric medication, the indicators cover many dimensions before, during, and after medication, and these dimensions should be organized on scientifically logical principles. Combining the MDM framework with the DEA model allows comprehensive indicator coverage and an optimized internal evaluation structure, yielding more scientific results. Within the MDM framework, screening principles designate some variables as inputs to the DEA model, while the state and impact dimensions serve as its outputs; adjusting this match clarifies the true efficacy of pediatric medication . Because non-zero input slacks in an input-oriented index (or non-zero output slacks in an output-oriented one) prevent a full description of changes in pediatric medication, this study extends the radial Malmquist index to a non-radial index in which the input-oriented index leaves no non-zero input slack and the output-oriented index leaves no non-zero output slack. Assume there are $n$ decision-making units (DMUs); in each period $t$, DMU $j$ produces outputs $y_{j}^{t} = (y_{1j}^{t}, \ldots, y_{sj}^{t})$ from inputs $x_{j}^{t} = (x_{1j}^{t}, \ldots, x_{mj}^{t})$. The most basic DEA model is then:

$$\theta_{0}^{t}(x_{0}^{t}, y_{0}^{t}) = \min_{\theta_{0}, \lambda_{j}} \theta_{0} \quad \text{s.t.} \quad \sum_{j=1}^{n} \lambda_{j} x_{j}^{t} \le \theta_{0} x_{0}^{t}, \quad \sum_{j=1}^{n} \lambda_{j} y_{j}^{t} \ge y_{0}^{t}, \quad \lambda_{j} \ge 0, \; j = 1, \ldots, n \tag{1}$$

Model (1) is input-oriented: it measures the proportional (radial) contraction of inputs attainable at the current output level. Replacing $(x_{j}^{t}, y_{j}^{t})$ with $(x_{j}^{t+1}, y_{j}^{t+1})$ yields the technical efficiency of each DMU in period $t+1$. As the data move from period $t$ to period $t+1$, each DMU's technical efficiency changes and the empirical production frontier shifts accordingly. The study therefore adopts the following non-radial DEA model :

$$\tilde{\theta}_{0}^{t}(x_{0}^{t}, y_{0}^{t}) = \frac{1}{\sum_{i=1}^{m} \alpha_{i}} \min_{\theta_{0}^{i}, \lambda_{j}} \sum_{i=1}^{m} \alpha_{i} \theta_{0}^{i} \quad \text{s.t.} \quad \sum_{j=1}^{n} \lambda_{j} x_{ij}^{t} \le \theta_{0}^{i} x_{i0}^{t}, \; i = 1, \ldots, m; \quad \sum_{j=1}^{n} \lambda_{j} y_{rj}^{t} \ge y_{r0}^{t}, \; r = 1, \ldots, s; \quad \theta_{0}^{i} \text{ free}; \quad \lambda_{j} \ge 0, \; j = 1, \ldots, n \tag{2}$$

The weights $\alpha_{i}$, $i = 1, \ldots, m$, reflect each DMU's preference for improving the individual inputs. Model (2) measures the relative efficiency of a DMU in period $t$ under the weights $\alpha_{i}$; if $\alpha_{i} = 0$, the corresponding $\theta_{0}^{i} = 1$ is set. The larger the weight, the higher the priority the DMU gives to reducing its $i$-th input. Model (2) thus determines the optimal empirical production frontier (EPF).
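As an illustration of how model (1) reduces to a linear program, here is a minimal sketch using scipy. It is not the paper's implementation, and the input-output data are random placeholders rather than the provincial panel:

```python
import numpy as np
from scipy.optimize import linprog

def dea_input_efficiency(X, Y, j0):
    """Radial input-oriented efficiency of DMU j0 (model (1) above).

    X: (m, n) inputs, Y: (s, n) outputs, columns are DMUs.
    Returns theta in (0, 1]; theta == 1 means j0 lies on the frontier.
    """
    m, n = X.shape
    s = Y.shape[0]
    # Decision vector z = [theta, lambda_1, ..., lambda_n]; minimise theta.
    c = np.zeros(1 + n)
    c[0] = 1.0
    # Input rows: sum_j lambda_j x_ij - theta * x_i,j0 <= 0
    A_in = np.hstack([-X[:, [j0]], X])
    # Output rows: -sum_j lambda_j y_rj <= -y_r,j0
    A_out = np.hstack([np.zeros((s, 1)), -Y])
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([np.zeros(m), -Y[:, j0]]),
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.x[0]

# Toy data: 3 inputs, 2 outputs, 5 DMUs ("provinces"), placeholder values.
rng = np.random.default_rng(1)
X = rng.uniform(1, 10, (3, 5))
Y = rng.uniform(1, 10, (2, 5))
print([round(dea_input_efficiency(X, Y, j), 3) for j in range(5)])
```

Note that the function is called once per DMU — in line with the point above that DEA runs a separate optimization for each unit rather than a single fit over all of them.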
Corresponding to the characterization of the Malmquist productivity index in radial DEA, the relative efficiency of the DMU against the period-$t+1$ frontier is obtained from:

$$\tilde{\theta}_{0}^{t+1}(x_{0}^{t}, y_{0}^{t}) = \frac{1}{\sum_{i=1}^{m} \alpha_{i}} \min_{\theta_{0}^{i}, \lambda_{j}} \sum_{i=1}^{m} \alpha_{i} \theta_{0}^{i} \quad \text{s.t.} \quad \sum_{j=1}^{n} \lambda_{j} x_{ij}^{t+1} \le \theta_{0}^{i} x_{i0}^{t}, \; i = 1, \ldots, m; \quad \sum_{j=1}^{n} \lambda_{j} y_{rj}^{t+1} \ge y_{r0}^{t}, \; r = 1, \ldots, s; \quad \theta_{0}^{i} \text{ free}; \quad \lambda_{j} \ge 0, \; j = 1, \ldots, n \tag{3}$$

Furthermore, swapping the period-$t$ and period-$t+1$ roles of the input and output variables yields the relative efficiency, and the EPF, of the period-$t+1$ observation against the period-$t$ frontier:

$$\tilde{\theta}_{0}^{t}(x_{0}^{t+1}, y_{0}^{t+1}) = \frac{1}{\sum_{i=1}^{m} \alpha_{i}} \min_{\theta_{0}^{i}, \lambda_{j}} \sum_{i=1}^{m} \alpha_{i} \theta_{0}^{i} \quad \text{s.t.} \quad \sum_{j=1}^{n} \lambda_{j} x_{ij}^{t} \le \theta_{0}^{i} x_{i0}^{t+1}, \; i = 1, \ldots, m; \quad \sum_{j=1}^{n} \lambda_{j} y_{rj}^{t} \ge y_{r0}^{t+1}, \; r = 1, \ldots, s; \quad \theta_{0}^{i} \text{ free}; \quad \lambda_{j} \ge 0, \; j = 1, \ldots, n \tag{4}$$
Finally, the four non-radial efficiency scores $\tilde{\theta}_{0}^{t}(x_{0}^{t}, y_{0}^{t})$, $\tilde{\theta}_{0}^{t+1}(x_{0}^{t+1}, y_{0}^{t+1})$, $\tilde{\theta}_{0}^{t+1}(x_{0}^{t}, y_{0}^{t})$, and $\tilde{\theta}_{0}^{t}(x_{0}^{t+1}, y_{0}^{t+1})$ determine the input-oriented non-radial Malmquist productivity index:

$$\widetilde{PI}_{0} = \frac{\tilde{\theta}_{0}^{t}(x_{0}^{t}, y_{0}^{t})}{\tilde{\theta}_{0}^{t+1}(x_{0}^{t+1}, y_{0}^{t+1})} \left[ \frac{\tilde{\theta}_{0}^{t+1}(x_{0}^{t+1}, y_{0}^{t+1}) \, \tilde{\theta}_{0}^{t+1}(x_{0}^{t}, y_{0}^{t})}{\tilde{\theta}_{0}^{t}(x_{0}^{t+1}, y_{0}^{t+1}) \, \tilde{\theta}_{0}^{t}(x_{0}^{t}, y_{0}^{t})} \right]^{1/2}$$

Data acquisition

Drawing on pediatric general hospitals and healthcare disciplines, research with health-system experts, and a literature review, this paper formed a framework of 5 MFs, 3 TFs, 7 first-level AFs, and 17 second-level AFs. From surveys of 20 pediatric experts and 30 family groups, the strong and weak relationships were obtained after mean transformation and ordinal comparison, as shown in Fig. . The arrows indicate the transmission of causality, and the numbers on the arrows show the relative strength of each causal link; e.g., "7" denotes stronger causality than "5", meaning pediatric medication should pay more attention to that factor . The symbols in Fig. are as follows : A-1.1: common diseases; A-1.2: disease severity; A-2.1: child status; A-2.2: parental cognition; A-3.1: hospital infrastructure; A-3.2: doctor's literacy; A-3.3: doctor preferences. B-1.1: timeliness of medication; B-1.2: degree of drug control; B-2.1: improvement rate of children; B-2.2: child medication; B-2.3: disease variability; B-2.4: external environment. C-1.1: health records; C-1.2: medication tracking; C-2.1: health habits; C-2.2: medical network. Furthermore, based on the analytical logic and core variables in Fig. , combined with causal relationships and an input-output perspective, the data are extracted and summarized into the input-output framework of the DEA model. The variables are shown in Table ; more details appear in Appendix A.
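Before turning to the results, the following is a minimal sketch of how models (2)-(4) and the productivity index above fit together computationally. It is an illustrative reading of the formulas, not the authors' implementation; scipy's linprog stands in for whatever solver was actually used, and the weights and data are placeholders:

```python
import numpy as np
from scipy.optimize import linprog

def nonradial_efficiency(X_ref, Y_ref, x0, y0, alpha):
    """Non-radial efficiency (model (2) form): frontier built from the
    reference period (X_ref, Y_ref), evaluated point (x0, y0)."""
    m, n = X_ref.shape
    s = Y_ref.shape[0]
    # Decision vector z = [theta_1..theta_m, lambda_1..lambda_n];
    # objective already carries the 1/sum(alpha) normalisation.
    c = np.concatenate([np.asarray(alpha, float) / np.sum(alpha), np.zeros(n)])
    # Input rows: sum_j lambda_j x_ij - theta_i * x_i0 <= 0, one per input i.
    A_in = np.hstack([-np.diag(x0), X_ref])
    # Output rows: -sum_j lambda_j y_rj <= -y_r0.
    A_out = np.hstack([np.zeros((s, m)), -Y_ref])
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([np.zeros(m), -y0]),
                  bounds=[(None, None)] * m + [(0, None)] * n)  # theta_i free
    return res.fun

def malmquist(Xt, Yt, Xt1, Yt1, j0, alpha):
    """Input-oriented non-radial Malmquist index for DMU j0 between t and t+1;
    the superscript on theta in the text is the frontier period here."""
    e_tt   = nonradial_efficiency(Xt,  Yt,  Xt[:, j0],  Yt[:, j0],  alpha)
    e_t1t1 = nonradial_efficiency(Xt1, Yt1, Xt1[:, j0], Yt1[:, j0], alpha)
    e_t1t  = nonradial_efficiency(Xt1, Yt1, Xt[:, j0],  Yt[:, j0],  alpha)
    e_tt1  = nonradial_efficiency(Xt,  Yt,  Xt1[:, j0], Yt1[:, j0], alpha)
    return (e_tt / e_t1t1) * np.sqrt((e_t1t1 * e_t1t) / (e_tt1 * e_tt))

# Toy panel: 3 inputs, 2 outputs, 5 DMUs; period t+1 uses slightly
# fewer inputs for slightly more output (placeholder values only).
rng = np.random.default_rng(2)
Xt,  Yt  = rng.uniform(1, 10, (3, 5)), rng.uniform(1, 10, (2, 5))
Xt1, Yt1 = Xt * rng.uniform(0.9, 1.0, (3, 5)), Yt * rng.uniform(1.0, 1.1, (2, 5))
print(round(malmquist(Xt, Yt, Xt1, Yt1, j0=0, alpha=[1, 1, 1]), 3))
```

The four efficiency calls correspond one-to-one to the four terms in the index; the results section below reads values above 1 as improvement.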
MDM matrix analysis results

Key capture of overall medication factors

Based on the results of the data survey (Fig. ), pediatricians and parents of affected children in developing countries generally focus on drug price, efficacy, and safety in the pre-medication period . Owing to differences in symptom characteristics and limitations in medical conditions, pediatric medication in developing countries generally does not overemphasize timing and compatibility issues before use. Once the medication phase begins, families of affected children start to attend to medication timing, efficacy, flexibility, and safety. Regarding duration, families are generally concerned that a prolonged medication cycle may harm the child's body or cause adverse effects.
Regarding medication flexibility, children's conditions change frequently — fluctuating fevers or common diarrhea, for instance, often worsen or become mixed with the original illness. Parents cannot visit the hospital continually to confirm medication status and have the prescription adjusted, so they often add medication on their own. This mixing of multiple drugs highlights the flexibility problem in children's medication: parents must attend to drug conflicts and compatibility when administering it. In the post-medication stage, medical institutions and families generally continue to attend to the effectiveness and flexibility of medication . On the one hand, effectiveness reflects the integrity of the whole course of medication: families are generally concerned that the illness may recur and the medication fail, further damaging the child's health. On the other hand, for some children with chronic diseases, the flexibility remaining after a course of medication is also a concern . Based on the survey and questionnaire evaluation results, the importance weights of time, price, effectiveness, flexibility, and safety are 0.15, 0.1, 0.3, 0.05, and 0.4, respectively. Across the phases, safety assessment before medication is the most important, followed by safety during the medication phase; the efficacy of pediatric medication matters slightly less during the medication phase, followed by flexibility during medication, efficacy analysis before medication, and flexibility after medication .

Capture the key factors of actual medication use

In the actual-factor representation, the pre-medication phase is divided into three first-level sub-factor constructs: disease control, parental attitudes, and physician norms; the mid-medication phase comprises two: parental behavior and child feedback; and the post-medication cycle is divided into two: medical-social cooperation and parental feedback . In the pre-medication cycle, disease control mainly provides the normative code for medication use in children: the commonness of the disease and the severity of the child's illness affect how well standard protocols apply and ultimately determine how normative pre-medication disease control can be. Parental attitude is also a critical factor in pre-medication assessment, consisting of two sub-dimensions: child status and parental cognition. The child's actual state before the visit profoundly shapes the final medication pattern: if the child's state is poor — depressed mood, loss of appetite — the doctor's prescription will be affected even when the condition is not severe or complex . Parental cognition is likewise essential: low parental literacy or poor medication habits may trigger doctor-patient conflict, and misperceptions may force physicians to forgo fast, efficient pediatric medications. Physician norms are the third key consideration in the pre-medication cycle: hospital infrastructure, physician literacy, and physician preference ultimately shape how those norms are expressed.
In China, high-level institutions such as tertiary hospitals are better positioned to cultivate physicians' prescribing norms. In the medication-in-use phase, parental behavior and child feedback determine the state of the medication assessment. Given the particular nature of pediatric administration, parental behavior — medication punctuality and dosage control — is critical to guiding administration and realizing efficacy. Pediatric medication assessment should therefore account for parental habits and busyness, adjusting protocols to avoid regimens with many doses, many drugs, or complicated administration. Child feedback is the other important category: the child's improvement, the child's response to the medication, and changes in the disease jointly determine the feedback outcome and whether parents or physicians need to adjust the medication. The external environment can also force quicker rethinking of pediatric medication; under COVID-19, for example, children's conditions could worsen and shift, requiring multidimensional reconsideration of treatment. In the post-medication cycle, developing countries have yet to build solid medical community systems, so emerging health records and the tracking of population medication habits determine the quality of medical-community cooperation and ultimately affect the pediatric medication mechanism. In recent years, Chinese health administrations have invested heavily in tracking and managing common pediatric medications to curb the abuse of inefficient or inappropriate drugs; as medical-community building advances, pediatric medication assessment has become more standardized and proactive, and the quality of medication management has improved dramatically. Parental feedback is likewise a vital assessment variable: a family's long-formed healthcare habits and medical contacts profoundly determine its pediatric medication outcomes . In addition, some second-level actual factors influence one another and even feed back on, or indirectly influence, other first-level actual factors. The specific assessment results are shown in Fig. . Based on the MDM structure and the data analysis, this paper obtains the core factor results of the pediatric medication evaluation, as shown in Table . The following can be read from Table . The share of factor assessment before medication is the highest, at 53.5%; pediatric evaluation in the mid-medication period accounts for about 32.7%; and the post-medication cycle accounts for about 13.7%. Notably, the post-medication stage has the fewest evaluation factors — only 4 — so its average per-factor share is 3.4%; the average per-factor share is 5.5% during medication and about 7.6% before medication. This indicates that evaluation in the pre-medication stage is the most important and demands the most comprehensive consideration, its factors carrying the highest shares of the overall evaluation.
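As a minimal sketch of how such shares could be applied in practice — e.g., a hospital scoring its own medication process against the framework — the snippet below weights hypothetical 0-1 factor ratings by the second-level factor shares discussed in the surrounding paragraphs. The ratings and the normalization choice are illustrative assumptions, not part of the paper:

```python
# Hypothetical audit: weight 0-1 factor ratings by the MDM factor shares.
# Shares follow the second-level AF results reported below; ratings are
# placeholders a hospital would replace with its own audit data.
weights = {
    "physician competence": 0.126, "child status": 0.108,
    "parental cognition": 0.094, "hospital infrastructure": 0.093,
    "child drug resistance": 0.080, "child improvement": 0.076,
    "health habits": 0.070, "medication punctuality": 0.053,
}
ratings = {k: 0.7 for k in weights}  # e.g., 0.7 = "adequate" on every factor
# Normalize by the listed weights, since minor factors are omitted here.
score = sum(weights[k] * ratings[k] for k in weights) / sum(weights.values())
print(f"composite medication score: {score:.3f}")  # 0.700 for uniform ratings
```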
From the perspective of first-level actual factors, physician norms carry the highest evaluation share, 25.8%, with an average sub-factor share of 8.6%, indicating that physician standardization is the most critical dimension in pediatric medication evaluation: the doctor's norms and the quality of medication determine its final effectiveness. Child feedback ranks second, at 23.5%, with an average sub-dimension share of 5.9%; actual performance during pediatric medication is thus also a key element of the evaluation system, though its sub-items matter less than those of the physician-norms dimension. The key to improving medication quality lies in the matching and effective interaction between doctors and patients. Parental attitudes account for 20.17% of the overall evaluation, but their sub-item average reaches 10.1%, indicating that child status and parental cognition are important evaluation sub-variables. Disease control, parental behavior, medical-social cooperation, and parental feedback each account for less than 10% — 7.6%, 9.3%, 6.4%, and 7.3%, respectively — so these AFs serve as reference variables that correct and maintain the evaluation. Among second-level AFs, physician competence is the most important factor, at 12.6% of the total, followed by child status at 10.8%. Parental cognition, hospital infrastructure, medication punctuality, child improvement, child drug resistance, and health habits all exceed 5%, at 9.4%, 9.3%, 5.3%, 7.6%, 8.0%, and 7.0%, respectively. Hospitals and health-management departments can feed actual data into this indicator system and combine the share of each factor to obtain a final pediatric medication score, so as to optimize prescribing quality and medication habits.

Capturing medication factors under different guiding objectives

In practice, as constraints on medication multiply, hospitals and doctors can optimize and adjust according to a family's specific claims — for example, for families that prioritize medication duration, optimization can follow the AF-MF matrix. Specifically, when medication duration is the goal, parental cognition is the most critical, primary reference variable, which accords with reality: a lack of parental understanding can create conflict between doctor and family, and excessive friction or medication violations (parents not following the prescription in practice) can cause the medication assessment to fail. The child's receptivity to drugs is the second most crucial factor, at about 97.6% of the importance of the reference variable (parental cognition): good acceptance in children shortens the course and avoids prolonged procrastination or dependence.
The remaining essential factors were physician literacy (83.7%), health records (70.1%), child status (69.7%), medication punctuality (59.8%), and disease variability (55.8%), with less than 50% of the remaining factors. This is highly consistent with the reality that more competent doctors tend to cure the disease and shorten the medication cycle; the establishment of a health record helps to share some of the data of the child, which can lead to a suitable prescription for future medication; the status of the child is an important starting point for medication, and milder illnesses contribute to a quicker recovery; and punctuality of medication and changes in disease are also closely related to the length of drugs. Child status is the primary reference variable when pursuing the price of medication. The child's status determines whether or not low-cost medicines can be administered. If the status of the child is too poor, it isn't very sensible to pursue the price of medication. On the other hand, physician literacy is the most crucial variable, with an importance share relative to the reference variable of 1.4. The remaining essential factors are physician preference (93.5%), parental perceptions (71.4%), and disease severity (51.4%). Doctors' medication habits determine the choice for drug prices, in addition to parental perceptions and disease severity, which determine the eventual use of cheaper drugs. When the effect of medication is pursued, the child's status remains the primary reference variable. It is clear that the purpose of medication use, in addition to addressing childhood illnesses, is also primarily aimed at alleviating the status of the child. Hospital infrastructure is the most crucial variable, with an importance weight of 1.1 compared to the reference variable, and it is clear that in developing countries, hospital infrastructure determines the final effect of medication. This aligns with reality, so most families visit tertiary or pediatric specialist hospitals. High-quality hospital infrastructure and a high reputation of hospital prestige will attract many patients to visit the hospital. In addition, a higher level of hospital infrastructure determines the top pediatric resources in the developing world, which can generally support most pediatric patients' medication claims. The remaining factors are relatively close in level but slightly less important. The health profile is the primary reference variable when medication flexibility is sought. An active and effective health record is a critical factor in the relationship between contraindications and flexibility of medication use in children. On the one hand, timely recording of children's medication habits, medication characteristics, and common acute and chronic diseases will help doctors avoid adverse compounding problems in subsequent medication administration; at the same time, a solid health record will help doctors quickly perceive children's characteristics, thus strengthening the flexibility of medication administration. Parental perception is the most critical factor, with a weighting of 1.01 compared to the reference variable, which means that parents are still the first person responsible for children's medication. Parents are often familiar with their children's characteristics and common illnesses and are aware of their children's allergies or medications with high side effects. These will help to avoid risks in pediatric medicines. 
Physician literacy is the primary reference variable when medication safety is pursued and the most important one. Doctors possess the most knowledge about pediatric medication and are the most critical factor in achieving pediatric medication safety. Parental cognition was second only to physician literacy, with 80.2% importance. Positive parental cognition helps them to cooperate with the doctor, thus improving medication safety. Better family literacy can circumvent the problems of unauthorized and incorrect use of medication by parents and further enhance medication safety. The percentage of importance of hospital infrastructure (56.6%) and healthcare habits (50.4%) is higher than 50%, and the above factors positively affect medication safety outcomes. Overall, despite the constraints on pediatric drug use and the underdeveloped level of care in developing countries, the goals of effective pediatric drug use can be achieved mainly with the help of the present assessment framework. However, with the help of this assessment framework, the goal of effective pediatric medication use can still be primarily achieved. By identifying multidimensional factor criticality, doctors and parents can quickly identify the key factors, accelerate the turnaround efficiency of pediatric consultation, improve the quality of pediatric medication, and promote the optimal construction of pediatrics in developing countries. Analysis of DEA Malmquist model Furthermore, the evaluation variables will be macroscopically evaluated and combined with DEA model analysis to obtain the Malmquist index of pediatric medication in various provinces of China. As shown in Table . As shown in Table , the average annual Malmquist index in Beijing, Shanxi, Inner Mongolia, Jilin, Jiangsu, Zhejiang, Anhui, Anhui, Anhui, Shandong, Henan, Hunan, Guangdong, Guangxi, Chongqing, Sichuan, Guizhou, and Xinjiang is higher than 1, indicating that the structure and effectiveness of pediatric medication in these regions continue to optimize and are in a state of progress. Among them, the average annual Malmquist index of Beijing, Shanxi, Jiangsu, Zhejiang, Anhui, Shandong, Guangdong, Sichuan, Jilin, and Fujian is higher than 1.05, indicating a significant growth optimization state. This is highly consistent with the reality of China's national conditions. Jiangsu, Zhejiang, Anhui, and Shandong are located in the economically developed East China region, which has a high level of infrastructure and excellent medical conditions, especially in the top 100 pediatric departments with a market share of over 70%. Beijing and Guangdong are the main economic and political regions in China, and their awareness of children's health care and medication level are also very excellent. Sichuan, Jilin, Fujian, and Shanxi are regional medical centers, with famous hospitals such as West China Hospital and Putian Medical Center located in these areas. Although the reputation of Putian hospitals varies, they provide a large amount of basic medical care and undertake a certain amount of workload in basic outpatient services, popularization of simple pediatric medication, and other aspects. From a yearly perspective, the Malmquist index performance of each region was relatively low from 2015 to 2016, with an average of 0.9843. However, from 2016 to 2019, the Malmquist index maintained stable growth. 
Key capture of overall medication factors

Based on the results of the data survey (Fig. ), in developing countries, pediatricians and parents of affected children generally focus on drug prices, efficacy, and safety during the pre-medication period. Due to differences in symptom characteristics and limitations in medical conditions, pediatric medication in developing countries generally does not overly emphasize timing and compatibility issues before use. As the medication phase begins, families of affected children begin to pay attention to medication timing, efficacy, flexibility, and safety. Regarding medication duration, families are generally concerned that prolonged medication cycles may harm children's bodies or produce adverse effects. Regarding medication flexibility, children's conditions often change, with problems such as fluctuating fever or common diarrhea that worsen or become mixed in with the original condition. Moreover, parents cannot go to the hospital constantly to confirm the medication status and coordinate with doctors to modify the prescription, which leads them to add medications on their own. The mixing of multiple drugs highlights the issue of medication flexibility in children, and parents must pay attention to drug conflicts and compatibility when administering them. In the post-medication stage, medical institutions and families generally continue to pay attention to the effectiveness and flexibility of medication . On the one hand, effectiveness is reflected in the completeness of the medication course; families of sick children are generally concerned that recurrent illness and medication failure may further damage the child's health. On the other hand, some children with chronic diseases must also consider the flexibility of medication remaining after use . Based on the survey and questionnaire evaluation results, the importance ratios of time, price, effectiveness, flexibility, and safety are 0.15, 0.1, 0.3, 0.05, and 0.4, respectively. Across the specific stages, safety assessment before medication is the most important, followed by safety considerations during the medication phase. The importance of medication efficacy during the medication phase is slightly weaker, followed by flexibility during medication, efficacy before medication, and flexibility after medication .
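To make the weighting above concrete, the following is a minimal sketch of how the five survey-derived goal weights could be combined into a composite stage score. Only the weights (0.15, 0.1, 0.3, 0.05, 0.4) come from the text; the per-stage ratings and function name are hypothetical placeholders for illustration.

```python
# Goal weights from the questionnaire results quoted above.
weights = {"time": 0.15, "price": 0.10, "effectiveness": 0.30,
           "flexibility": 0.05, "safety": 0.40}

def composite_score(stage_ratings):
    """Weighted sum of per-goal ratings (each on a 0-1 scale) for one stage."""
    return sum(weights[goal] * rating for goal, rating in stage_ratings.items())

# Hypothetical ratings for the pre-medication stage.
pre_medication = {"time": 0.6, "price": 0.7, "effectiveness": 0.8,
                  "flexibility": 0.5, "safety": 0.9}
print(f"pre-medication composite: {composite_score(pre_medication):.3f}")
```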
Capture the key factors of actual medication use

In the actual factor representation, the pre-medication phase was divided into three primary sub-factor constructs: disease control, parental attitudes, and physician norms; the mid-medication phase included two sub-factor constructs: parental behavior and child feedback; and the post-medication cycle was divided into two sub-factor constructs: medical-social cooperation and parental feedback .

In the pre-medication cycle, disease control primarily provides the normative code for medication use in children. Among other things, the commonness of the disease and the severity of the child's illness affect how well the drug instructions apply and ultimately determine the normative quality of disease control before medication administration. Parental attitudes are also a critical factor in the pre-medication assessment and consist of two sub-dimensions: child status and parental perception. The child's actual state before the medication visit can profoundly affect the final medication pattern: if a child's state is poor, for example showing listlessness and loss of appetite, this will affect the doctor's prescription even when the condition is not severe and the disease is not complex . In addition, parental perception is an essential factor in medication assessment. Low parental literacy or poor medication habits may trigger doctor-patient conflicts, and misperceptions may force physicians to forgo fast and efficient pediatric medications. Physician norms are also a key consideration in the pre-medication cycle; hospital infrastructure, physician literacy, and physician preference ultimately shape their expression. In China, high-level hospitals such as tertiary hospitals are in a better position to support the cultivation of physicians' medication standards.

In the medication-in-use phase, parental behavior and child feedback determine the state of medication assessment. Given the unique nature of pediatric medication administration, parental behavior is critical in guiding administration and realizing efficacy; it includes medication punctuality and dosage control. Therefore, parental habits and availability should be considered in pediatric medication assessment so that protocols can be adjusted to avoid multiple doses, multiple dosages, or overly complex means of administration. Children's feedback is another important category: whether the child improves, how the child tolerates the medication, and changes in the disease all determine the outcome of this feedback and whether parents or physicians need to adjust the pediatric medication. The external environment can also sharpen thinking about pediatric medicines; for example, in the context of the COVID-19 pandemic, children's conditions may change more abruptly, requiring multidimensional consideration of medication.

In the post-medication cycle, developing countries still need to build solid medical community systems. The emerging health records and the tracking of national medication habits therefore determine the quality of cooperation within the medical community and ultimately affect the pediatric medication mechanism.
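As a compact illustration of the three-phase architecture just described, the sketch below expresses it as a nested mapping (phase to first-level factor to sub-factors). The grouping follows the text, but the exact sub-factor labels are partly inferred and should be read as illustrative, not as the authors' implementation.

```python
# Illustrative encoding of the MDM factor architecture described above.
FACTOR_ARCHITECTURE = {
    "pre-medication": {
        "disease control": ["disease commonness", "disease severity"],
        "parental attitudes": ["child status", "parental perception"],
        "physician norms": ["hospital infrastructure", "physician literacy",
                            "physician preference"],
    },
    "mid-medication": {
        "parental behavior": ["medication punctuality", "dosage control"],
        "child feedback": ["child improvement", "child drug receptivity",
                           "disease variability"],
    },
    "post-medication": {
        "medical-social cooperation": ["health records", "medication tracking"],
        "parental feedback": ["health care habits", "medical contacts"],
    },
}

for phase, factors in FACTOR_ARCHITECTURE.items():
    n_sub = sum(len(subs) for subs in factors.values())
    print(f"{phase}: {len(factors)} first-level factors, {n_sub} sub-factors")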
In China, in recent years, health administrations have invested much effort in tracking and managing common pediatric medications to curb the abuse of inefficient or inappropriate drugs. With the continued promotion of medical community building, pediatric medication assessment has become more standardized and proactive, and the quality of medication management has improved dramatically. In addition, parental feedback is a vital assessment variable: the health care habits and medical contacts that families develop over time can profoundly determine the outcome of a family's pediatric medication use . Some second-level practical factors also influence each other and even feed back on, or indirectly influence, other first-level practical factors. The specific assessment results are shown in Fig. .

Based on the MDM structure and the data analysis, this paper derived the core factor results of pediatric medication evaluation, shown in Table . Several results can be read from Table . The pre-medication stage accounts for the largest share of factor assessment, reaching 53.5%; the mid-term of medication accounts for approximately 32.7%; and the post-medication cycle accounts for about 13.7%. Notably, the post-medication stage contains the fewest evaluation factors (only 4), so its average per-factor proportion is 3.4%. By comparison, each factor during medication averages 5.5%, and each factor before medication averages approximately 7.6%. This indicates that evaluation in the pre-medication stage is the most important and requires the most comprehensive consideration, with each of its factors carrying the highest share of the overall evaluation.

From the perspective of first-level practical factors, physician standardization has the highest evaluation proportion, reaching 25.8%, with its secondary factors averaging 8.6%, indicating that physician standardization is the most critical dimension in pediatric medication evaluation: the standardization of doctors and the quality of their prescribing determine the final effectiveness of medication. The evaluation proportion of children's feedback is second only to physician standardization, at 23.5%, with its sub-dimensions averaging 5.9%. The actual performance of pediatric medication is therefore also a key factor in the evaluation system, although its sub-items are less influential than those of the physician-standardization dimension; the key to improving medication quality lies in the matching and effective interaction between doctors and patients. The evaluation of parental attitudes accounts for 20.17% of the overall consideration, but its sub-items average 10.1%, indicating that child status and parental cognition constitute important evaluation sub-variables. The proportions of disease control, parental behavior, medical-social cooperation, and parental feedback are all below 10%, at 7.6%, 9.3%, 6.4%, and 7.3%, respectively, indicating that these AFs serve as reference variables that play a corrective and maintaining role in pediatric medication evaluation.

From the perspective of secondary AFs, physician competence is the most important factor, accounting for 12.6% of the total evaluation, followed by child status at 10.8%. Parental cognition, hospital infrastructure, medication punctuality, child improvement, child drug resistance, and health habits all exceed 5%, at 9.4%, 9.3%, 5.3%, 7.6%, 8.0%, and 7.0%, respectively. For hospitals and health management departments, this indicator system can be fed with actual data to evaluate effectiveness; combining the observed scores with each factor's weight yields a final pediatric medication score that can be used to optimize prescribing quality and medication habits.
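The final-score calculation just described is a weighted sum of factor-level performance. A minimal sketch follows, using the first-level proportions quoted above (which sum to roughly 100%); the performance scores and function name are hypothetical placeholders.

```python
# First-level factor weights as quoted from the Table above.
FACTOR_WEIGHTS = {
    "physician standardization": 0.258,
    "children's feedback": 0.235,
    "parental attitudes": 0.2017,
    "parental behavior": 0.093,
    "disease control": 0.076,
    "parental feedback": 0.073,
    "medical-social cooperation": 0.064,
}

def final_score(performance):
    """Weighted sum over first-level factors; performance values on 0-100."""
    return sum(FACTOR_WEIGHTS[f] * performance[f] for f in FACTOR_WEIGHTS)

# Hypothetical uniform performance scores for illustration.
example = {f: 80.0 for f in FACTOR_WEIGHTS}
print(f"final pediatric medication score: {final_score(example):.1f}")
```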
Capturing medication factors under different guiding objectives

In practice, because medication administration faces growing constraints, hospitals and doctors can optimize and adjust according to the specific claims of families. For example, if a family prioritizes the duration of medication, the regimen can be optimized according to the AF-MF matrix. Specifically, parental perception is the most critical factor, and the primary reference variable, when medication duration is pursued. This is consistent with reality: a lack of parental perception can lead to conflicts between the doctor and the family, and excessive strife and medication violations (parents not administering drugs according to the prescription) can cause the medication assessment to fail. Child receptivity to drugs is the second most crucial factor, at about 97.6% of the importance of the reference variable (parental perception); good medication acceptance in children shortens the medication period and avoids prolonged procrastination or dependence. The remaining essential factors are physician literacy (83.7%), health records (70.1%), child status (69.7%), medication punctuality (59.8%), and disease variability (55.8%), with the other factors below 50%. This is highly consistent with reality: more competent doctors tend to cure the disease and shorten the medication cycle; a health record helps share the child's data, supporting suitable prescriptions in future care; the child's status is an important starting point for medication, and milder illness contributes to quicker recovery; and punctuality of medication and changes in disease are also closely related to the length of medication.

Child status is the primary reference variable when the price of medication is pursued, since the child's status determines whether low-cost medicines can be administered; if the child's status is too poor, pursuing medication price is not sensible. Physician literacy, however, is the most crucial variable, with an importance of 1.4 relative to the reference variable. The remaining essential factors are physician preference (93.5%), parental perception (71.4%), and disease severity (51.4%): doctors' medication habits largely determine drug-price choices, while parental perception and disease severity determine whether cheaper drugs are eventually used.

When medication effect is pursued, the child's status remains the primary reference variable. Clearly, the purpose of medication use, beyond addressing the childhood illness itself, is primarily to relieve the child's condition.
Hospital infrastructure is the most crucial variable here, with an importance weight of 1.1 relative to the reference variable; in developing countries, hospital infrastructure evidently shapes the final effect of medication. This aligns with reality: most families visit tertiary or pediatric specialist hospitals, since high-quality infrastructure and a strong reputation attract many patients. Moreover, a higher level of hospital infrastructure concentrates the top pediatric resources in the developing world, which can generally support most pediatric patients' medication claims. The remaining factors are relatively close in level but slightly less important.

The health record is the primary reference variable when medication flexibility is sought. An active and effective health record is critical to balancing contraindications against flexibility of medication use in children: timely recording of children's medication habits, medication characteristics, and common acute and chronic diseases helps doctors avoid adverse drug combinations in subsequent administration, and a solid health record helps doctors quickly grasp a child's characteristics, strengthening the flexibility of administration. Parental perception is the most critical factor, with a weighting of 1.01 relative to the reference variable, which means that parents remain the persons primarily responsible for children's medication. Parents are often familiar with their children's characteristics and common illnesses and aware of their allergies or of medications with marked side effects, all of which helps avoid risks in pediatric medicines.

Physician literacy is the primary reference variable, and also the most important factor, when medication safety is pursued: doctors possess the most knowledge about pediatric medication and are the most critical element in achieving safety. Parental cognition is second only to physician literacy, at 80.2% importance; positive parental cognition helps families cooperate with the doctor and thereby improves safety, and better family literacy prevents unauthorized or incorrect use of medication by parents. The importance of hospital infrastructure (56.6%) and health care habits (50.4%) also exceeds 50%, and both factors positively affect medication safety outcomes.

Overall, despite the constraints on pediatric drug use and the underdeveloped level of care in developing countries, the goal of effective pediatric medication use can still largely be achieved with the help of the present assessment framework. By identifying multidimensional factor criticality, doctors and parents can quickly pinpoint the key factors, accelerate the turnaround of pediatric consultation, improve the quality of pediatric medication, and promote the sound construction of pediatrics in developing countries.
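The goal-specific readings above all express factor importance relative to a designated reference variable (ratio 1.0). A minimal sketch of that normalization follows, populated with the ratios reported for the medication-duration example; the raw values and function name are hypothetical, since only the ratios appear in the text.

```python
def relative_importance(raw, reference):
    """Normalize raw importance values so the reference variable equals 1.0."""
    base = raw[reference]
    return {factor: value / base for factor, value in raw.items()}

# Ratios from the medication-duration example above; "parental perception"
# is the reference variable.
raw_duration = {
    "parental perception": 1.000,
    "child drug receptivity": 0.976,
    "physician literacy": 0.837,
    "health records": 0.701,
    "child status": 0.697,
    "medication punctuality": 0.598,
    "disease variability": 0.558,
}
ratios = relative_importance(raw_duration, "parental perception")
for factor, r in sorted(ratios.items(), key=lambda kv: -kv[1]):
    print(f"{factor}: {r:.3f}")
```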
Analysis of DEA Malmquist model

Furthermore, the evaluation variables are assessed macroscopically and combined with DEA model analysis to obtain the Malmquist index of pediatric medication in each Chinese province. As shown in Table , the average annual Malmquist index in Beijing, Shanxi, Inner Mongolia, Jilin, Jiangsu, Zhejiang, Anhui, Shandong, Henan, Hunan, Guangdong, Guangxi, Chongqing, Sichuan, Guizhou, and Xinjiang is higher than 1, indicating that the structure and effectiveness of pediatric medication in these regions continue to optimize and remain in a state of progress. Among them, the average annual Malmquist index of Beijing, Shanxi, Jiangsu, Zhejiang, Anhui, Shandong, Guangdong, Sichuan, Jilin, and Fujian exceeds 1.05, indicating significant growth and optimization. This is highly consistent with China's national conditions. Jiangsu, Zhejiang, Anhui, and Shandong lie in the economically developed East China region, which has a high level of infrastructure and excellent medical conditions, holding over 70% of the market share among the top 100 pediatric departments. Beijing and Guangdong are China's main economic and political regions, and their awareness of children's health care and their medication standards are likewise excellent. Sichuan, Jilin, Fujian, and Shanxi are regional medical centers, home to well-known institutions such as West China Hospital and Putian Medical Center. Although the reputation of Putian hospitals varies, they provide a large amount of basic medical care and carry a substantial workload in basic outpatient services and in popularizing simple pediatric medication.
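For reference, the productivity index reported here is conventionally computed as the geometric mean of two period-specific distance-function ratios. The standard output-oriented form is stated below as background; the paper's exact DEA specification is not shown, so this should be read as the textbook definition rather than the authors' formula.

```latex
\[
M\left(x^{t+1}, y^{t+1}, x^{t}, y^{t}\right)
  = \left[
      \frac{D^{t}\left(x^{t+1}, y^{t+1}\right)}{D^{t}\left(x^{t}, y^{t}\right)}
      \times
      \frac{D^{t+1}\left(x^{t+1}, y^{t+1}\right)}{D^{t+1}\left(x^{t}, y^{t}\right)}
    \right]^{1/2}
\]
```

Here D^t denotes the distance function measured against the period-t frontier, and M > 1 indicates productivity progress between periods t and t+1, consistent with the interpretation of the provincial indices above.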
From a yearly perspective, the Malmquist index of each region was relatively low from 2015 to 2016, with an average of 0.9843, but from 2016 to 2019 it maintained stable growth. Due to the impact of the epidemic, there was a certain decline in 2019-2020, but in the epidemic and post-epidemic era the Malmquist index for pediatric medication quickly recovered. Every crisis is also a challenge and an adjustment: major health events reshape the logic of pediatric medication, steering refined, scientific, and standardized medication methods toward pediatric patients, which led to the resurgence of the Malmquist index between 2020 and 2021. Looking further at each province, regions with an excellent Malmquist index, such as Beijing, Jiangsu, and Sichuan, have input-output structures that are basically consistent with the MDM matrix evaluation. Following the logic of each dominant factor and the overall goal, if the primary variable is not redundant as an input factor and expands significantly as an output result, the region's Malmquist index performs better, that is, the efficacy of pediatric medication is more outstanding. This further validates the effectiveness of the MDM evaluation for pediatric medication and confirms the direction of standardized strategy development.

This paper constructs a pediatric medication evaluation system based on the national conditions of developing countries, using the MDM and lean management concepts. Through expert research and questionnaire collection, factors of various levels and types were formed and their preliminary strong and weak relationships sorted out; applying the MDM calculation principle, the importance sequence of the factors for pediatric medication evaluation was finally obtained, showing each factor's contribution to the quality of pediatric medication. These results can inform how hospitals, doctors, and parents perceive pediatric medication and capture the medication mechanisms and strategies under different goal orientations, greatly alleviating the medication conflicts caused by resource limitations in developing countries.

Considering the limitations of the study, this paper hopes for further discussion in follow-up research. First, as to the object, this paper selects China, a typical representative of developing countries; given the differences among developing countries, follow-up research can build evaluation systems based on the actual situations of other countries and integrate their real data to obtain more practical results. Second, in terms of data, this paper selects panel data from 2015 to 2021. Although this window covers recent pediatric construction in China, follow-up studies can further update the data and observe the performance of pediatric drug use in China after the epidemic. Moreover, the index collection and the establishment of the MDM matrix framework were carried out as a cross-sectional study, with questionnaire data focusing on regional feedback at a single point in time; follow-up studies can adopt multi-cycle, multi-sampling designs to further refine the MDM weights for pediatric drugs. Finally, because pediatric cognition differs within China, there may be potential medication bias; follow-up research can enrich this issue with additional samples and case studies, that is, by discussing the development of pediatric drug use in China from the perspective of cognition and prejudice.
Overall, the pre-medication cycle is the critical period for pediatric medication evaluation, and multiple factors need to be coordinated during it. In the AF architecture, physician standardization carries the highest importance, contributing about one quarter of the evaluation weight; among the AF sub-factors, physician competence is the most important, contributing approximately 12.6%. National medical departments can further refine this evaluation model by incorporating factor variables with local characteristics into the system, so as to compute the factors most meaningful to their own settings.

Furthermore, when both parties (doctors and families) pursue medication time, parental cognition is the most important variable and also the reference variable. When emphasizing medication price, child status is the reference variable, while physician quality is the most important indicator. When medication efficacy is the guiding factor, child status serves as the reference variable, but hospital infrastructure is the most important variable. When emphasizing drug flexibility and addressing medication compatibility, health records serve as the reference variable, but parental cognition is the most important indicator. When pursuing medication safety, physician competence serves as the reference variable and is also the most important feedback indicator. Medical institutions, personnel, and parents at all levels should adjust the weights of these reference indicators in a timely manner according to their own goals, in order to optimize the effectiveness of pediatric medication.

Based on the MDM evaluation framework and real data from Chinese provinces, the MDM matrix is mapped into an input-output system, and this paper analyzes the real efficacy of pediatric medication in each province from 2015 to 2021. The results show that the standardization of pediatric medication in China is strengthening markedly and that its efficiency is trending upward. Pediatric medication efficacy is relatively superior in medically or economically advanced regions, while it is relatively redundant, with weaker progress, in economically disadvantaged or resource-poor regions. In terms of input-output structure, the leading regions' ratios are consistent with the MDM matrix evaluation trend: the scale of important output factors is relatively large, and important input factors are kept relatively small. By simplifying and optimizing this structural ratio, pediatric medication stays near a good efficiency frontier, and overall medication quality improves greatly. One contribution and goal of this paper is thus to help developing countries with limited medical conditions establish a pediatric medication evaluation system suited to their national conditions.
Although developing countries lack a complete market and medication system for pediatric drugs, lack effective innovation systems for them, and even face shortages of some specific and specialized drugs, this framework can help users state their medication goals clearly and establish a pathway of key considerations toward those goals. It can also guide the parties involved in medication to quickly establish a thinking paradigm, helping them capture the core variables in a complex outpatient process and avoid unnecessary diagnostic and medication errors. In future research, more indicator factors can be incorporated and a multi-level feedback framework established to form an even more practical medication evaluation system.

In addition, the factors point to multiple strategies for improving pediatric medication. In the post-medication period, China is vigorously promoting the construction of medical cooperatives, reflected in improvements to the medical system. Medical institutions can strengthen guidance and education for parents on common issues during and after medication, and establish a comprehensive medication follow-up system, especially to track and guide the medication use of discharged children with chronic diseases. They should also make comprehensive use of channels such as community hospitals and medical publicity to promote rational clinical drug use for children, guide parents toward scientific concepts of drug use, and improve safety awareness and children's medication compliance. Other developing countries adopting this evaluation framework can likewise propose improvement plans targeting their own weak factors, strengthen the scientific management of pediatric medication at the grassroots level, and ultimately enhance the quality of pediatric healthcare nationwide.

Supplementary Material 1.
Overexpression of Dehydrogenase/Reductase 9 Predicts Poor Response to Concurrent Chemoradiotherapy and Poor Prognosis in Rectal Cancer Patients

Stemming from the large intestine (colon) or rectum, colorectal cancer (CRC) ranks third in men and second in women in terms of global cancer incidence . Notably, rectal cancer incidence rates in Eastern Asia rank among the highest . Rectal cancer patients without distant metastasis are usually managed by a standardized surgical technique termed total mesorectal excision . To reduce the risk of locoregional recurrence, a multimodal approach such as neoadjuvant concurrent chemoradiotherapy (CCRT) is recommended before surgical management for rectal cancer patients with stage T3/T4 or node-positive (N1/N2) disease . However, despite identical tumor histology, individual responses to neoadjuvant CCRT range from complete remission to disease progression . Predictive factors such as tumor regression grade have been introduced to support decision-making based on preoperative staging information, but a comprehensive molecular characterization for predicting individual therapy response remains lacking.

The intestinal epithelium is continuously regenerated (every 3–5 days) by a population of stem cells that differentiate into multiple epithelial cell subsets, which operate in concert to maintain the intestinal barrier and provide host defense. Among these subsets, goblet cells are well known for secreting the glycosylated mucin 2 (MUC2) protein, which creates the protective mucus barrier covering the epithelium . Muc2 -deficient mice have been shown to develop spontaneous colitis, an inflammatory bowel disease, caused by direct contact between the bacteria and the colonic epithelium . Low MUC2 expression has also been correlated with poor survival in CRC patients who did not receive radiation or chemotherapy . However, our previous study suggested that MUC2 overexpression is an unfavorable predictive and prognostic factor for rectal adenocarcinoma patients receiving neoadjuvant CCRT . These observations imply that the roles of mucin in tumor progression and in CCRT efficacy may differ. In addition, poorly differentiated CRC with high proliferative and metastatic capacity is correlated with worse patient survival . Interestingly, it has been reported that, in radio-resistant rectal cancer cells, most of the differentially expressed genes were associated with cell-cell adhesion and the regulation of epithelial cell differentiation . Whether epithelial cell differentiation can function as a barrier against radiation penetration remains an open question. Accordingly, the role of epithelial cell differentiation in the efficacy of CCRT in rectal cancer patients, and the underlying molecular mechanisms, deserve further investigation.

Using a transcriptome dataset, we focused on differentially expressed genes related to epithelial cell differentiation and identified the dehydrogenase/reductase 9 ( DHRS9 ) gene as the most markedly upregulated among CCRT nonresponders in rectal cancer. The human DHRS9 gene, which maps to chromosome 2q31.1, encodes an enzyme of the short-chain dehydrogenase/reductase (SDR) family.
DHRS9 , also known as retinol dehydrogenase 15 (RDH15), catalyzes the oxidation of retinol to retinaldehyde and is expressed specifically in the intestine, chiefly in mucin-producing cells, according to single cell type expression cluster analysis ( https://www.proteinatlas.org/ENSG00000073737-DHRS9 ). DHRS9 is also implicated in the biosynthesis of all-trans-retinoic acid (atRA), the most active retinoid metabolite, which contributes to the suppression of CRC progression through growth inhibition and the induction of cell differentiation . Moreover, low DHRS9 expression has been correlated with poor prognosis in CRC patients, although patients who received any anti-cancer therapy were excluded from that analysis . Consequently, the current study aimed to connect DHRS9 expression to CCRT efficacy and patient survival and to illuminate the role of DHRS9 in rectal cancer patients undergoing neoadjuvant CCRT.
Transcriptome Analysis of Rectal Cancer Biopsies

A public transcriptomic dataset (GSE35452) of tissue blocks from forty-six rectal adenocarcinoma patients receiving neoadjuvant CCRT was analyzed to identify promising genes related to CCRT efficacy. In this dataset, rectal cancer biopsies were collected during colonoscopic examination before CCRT. Expression profiles were determined on the Affymetrix Human Genome U133 Plus 2.0 Array platform, and all probe sets were analyzed without filtering. The tumor specimens were split into "nonresponders" and "responders" according to the response to CCRT, and a supervised comparative analysis was performed. We focused on differentially expressed genes related to epithelial cell differentiation (GO: 0030855) and selected those with log2 ratio > 0.2 and p < 0.005 for further assessment.

Patient Eligibility and Enrollment

With the approval of the Ethics Committee and Institutional Review Board (IRB) of Chi Mei Medical Center (10302014), formalin-fixed paraffin-embedded (FFPE) tissue blocks of 172 consecutive (1998 to 2004) rectal cancer patients were obtained from our biobank. We retrospectively reviewed the medical records of these patients and recorded their clinical and pathological characteristics and clinical outcomes. All patients were primarily diagnosed with rectal cancer by colonoscopy, with no distant metastasis on abdominopelvic computed tomography (CT) and/or chest X-ray radiography. Patients were followed up regularly from diagnosis until their last appointment or death. Before surgery, all patients were treated with CCRT, comprising 24-h continuous infusion of 5-fluorouracil (5-FU)-based chemotherapy and radiotherapy with a total dose of 45–50 Gy in twenty-five fractions over 5 weeks. Adjuvant chemotherapy was given before or after CCRT to patients with nodal status of at least N1 or tumoral status of at least T3. Almost all patients developed events within 60 months. For the non-eventful patients, the mean follow-up duration was 66.4 months (median 59.2, range 10.3–131.3).

Histopathological Assessment and Immunohistochemical Scoring

Blinded to the clinical information of the patients, two expert pathologists (Wan-Shan Li and Tzu-Ju Chen) reviewed all tumor specimens to ensure a more objective assessment. Tumor and node stages were determined according to the eighth edition of the American Joint Committee on Cancer (AJCC) tumor-node-metastasis (TNM) staging system. The Dworak tumor regression grade system was used to assess the tumor response to CCRT and is defined as follows: 0–1, no or little response; 2–3, modest response; 4, complete response . Immunohistochemical (IHC) staining was conducted following standard protocols . The slides were placed in an oven at 65°C to melt the paraffin, deparaffinized in xylene, and rehydrated. Heat-induced antigen retrieval was performed in 10 mM sodium citrate buffer (pH 6) in a microwave for 20 min. Subsequently, the slides were incubated with a DHRS9 primary antibody (H00010170-M05, 1:100) (Novus Biologicals, Littleton, CO, United States) for 1 h at room temperature and then with a secondary antibody (Dako REAL™ EnVision™ Detection System, Peroxidase/DAB, Rabbit/Mouse) for 30 min at room temperature.
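As an illustration of the transcriptome screening criteria described above (GO:0030855 membership, log2 ratio > 0.2 for nonresponders versus responders, p < 0.005), the following is a minimal Python sketch. The column names, file names, and function name are hypothetical; the thresholds come from the text.

```python
import pandas as pd

def screen_candidates(df: pd.DataFrame, go_genes: set) -> pd.DataFrame:
    """Filter a differential-expression table to the study's stated criteria."""
    mask = (
        df["gene_symbol"].isin(go_genes)          # epithelial differentiation (GO:0030855)
        & (df["log2_ratio"] > 0.2)                # nonresponders vs. responders
        & (df["p_value"] < 0.005)
    )
    return df.loc[mask].sort_values("log2_ratio", ascending=False)

# Hypothetical usage:
# de_table = pd.read_csv("gse35452_nonresponder_vs_responder.csv")
# go_0030855 = {line.strip() for line in open("go_0030855_genes.txt")}
# print(screen_candidates(de_table, go_0030855).head())
```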
Gene Function Prediction Using the Cancer Genome Atlas Data

To predict as-yet-unidentified functions of DHRS9 in rectal cancer, the associations between DHRS9 mRNA levels and its coexpressed genes in the colorectal adenocarcinoma dataset (n = 594, PanCancer Atlas, TCGA) were examined with the cBioPortal online platform ( http://cbioportal.org ). The top two hundred transcripts showing either a positive or a negative association with DHRS9 were then annotated with the Gene Ontology (GO) classification system ( http://geneontology.org/ ) using the Protein Annotation Through Evolutionary Relationship (PANTHER) overrepresentation test and ranked by fold enrichment. An R script with the ggplot2 package was used to visualize the representative GO terms.

Statistical Analysis

The Statistical Product and Service Solutions (SPSS) software, version 22.0, was employed for all statistical analyses. Pearson's chi-squared test was used to correlate DHRS9 expression with clinicopathological features. Three endpoints, each measured from the date of operation to the date the event developed, were analyzed: metastasis-free survival (MeFS), local recurrence-free survival (LRFS), and disease-specific survival (DSS). Survival curves were generated with the Kaplan-Meier method, and the log-rank test was used to compare prognostic utility between the high and low DHRS9 expression groups. Multivariate Cox proportional hazards regression was applied to identify independent prognostic biomarkers among variables with prognostic utility at the univariate level. A two-tailed p < 0.05 was regarded as statistically significant.
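Although the analyses were run in SPSS, they map directly onto the R survival package; the sketch below shows the corresponding calls. The data frame and its column names (df, time, event, dhrs9_high, ypt_adv) are hypothetical placeholders with toy values, not the study's actual script or data.

```r
library(survival)

# Toy data frame, one row per patient, for illustration only:
#   time       - months from operation to event or last follow-up
#   event      - 1 if the endpoint (e.g., disease-specific death) occurred, 0 if censored
#   dhrs9_high - TRUE if the H-score is at or above the median (high DHRS9)
#   ypt_adv    - TRUE if advanced post-CCRT tumor status (illustrative covariate)
df <- data.frame(
  time       = c(12, 30, 55, 8, 60, 24, 48, 36),
  event      = c(1, 0, 0, 1, 0, 1, 0, 1),
  dhrs9_high = c(TRUE, FALSE, FALSE, TRUE, FALSE, TRUE, FALSE, TRUE),
  ypt_adv    = c(TRUE, FALSE, FALSE, TRUE, FALSE, FALSE, TRUE, TRUE)
)

# Kaplan-Meier curves and log-rank test between DHRS9 expression groups
km <- survfit(Surv(time, event) ~ dhrs9_high, data = df)
lr <- survdiff(Surv(time, event) ~ dhrs9_high, data = df)

# Multivariate Cox proportional hazards model over variables that were
# prognostic at the univariate level
cox <- coxph(Surv(time, event) ~ dhrs9_high + ypt_adv, data = df)
summary(cox)  # hazard ratios, 95% CIs, and two-tailed p-values
```

The same analysis would be repeated once per endpoint (DSS, LRFS, and MeFS), redefining the time and event columns for each.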
DHRS9 Upregulation is Linked to Concurrent Chemoradiotherapy Resistance in Rectal Adenocarcinoma Patients

To identify promising genes related to CCRT efficacy, a published transcriptomic dataset (GSE35452) of forty-six rectal adenocarcinoma patients receiving neoadjuvant CCRT was analyzed. Twenty-two patients (47.8%) were classified as nonresponders and 24 (52.2%) as responders, and a comparative analysis was carried out to identify predictive genetic biomarkers. To investigate the role and underlying molecular mechanisms of epithelial cell differentiation in the efficacy of CCRT in rectal cancer, we focused on epithelial cell differentiation (GO: 0030855) and identified 4 probes covering 2 transcripts associated with CCRT resistance: DHRS9 and neurogenin 3 (NEUROG3) . The DHRS9 gene, which is expressed specifically in the intestine and was significantly upregulated among CCRT-resistant rectal cancer patients (log2 ratio = 1.3317, p = 0.0001), was selected for further analysis. We therefore set out to evaluate the utility of DHRS9 expression status for CCRT efficacy, clinicopathological features, and patient prognosis in our rectal cancer cohort.

Clinicopathological Characteristics of a Cohort of Rectal Cancer Patients

Tissue specimens from 172 rectal cancer patients treated with neoadjuvant CCRT were obtained from our biobank, and their clinical and pathological features are shown in . At primary clinical diagnosis, the tumor status of 81 patients (47.1%) was early stage (cT1–T2), and the lymph node status of 125 patients (72.7%) was negative (cN0). After CCRT, 123 patients (71.5%) had no lymph node involvement (ypN0), and 86 patients (50%) had an invasion depth limited to the muscularis propria (ypT1–T2). In addition, vascular invasion and perineural invasion were absent in 157 patients (91.3%) and 167 patients (97.1%), respectively. The tumor regression grade was used to assess the therapeutic response after CCRT, and 37 patients (21.5%) had little or no response (grade 0–1).

Immunoexpression of DHRS9 and its Associations With Clinicopathological Parameters

IHC staining was conducted to assess the relationship of DHRS9 expression status with CCRT efficacy and clinicopathological parameters. As displayed in , high DHRS9 immunoexpression was significantly associated with advanced pre-CCRT and post-CCRT tumor status (p = 0.032 and p < 0.001), post-CCRT lymph node involvement (p = 0.042), and vascular invasion (p = 0.005). Notably, tumors with high DHRS9 immunoexpression (H-scores at or above the median of all scored cases) had a considerably lower tumor regression grade (p < 0.001); among patients with high DHRS9 immunoexpression, 27 (31.4%) had little or no response to CCRT (grade 0–1). Representative IHC images likewise showed markedly higher DHRS9 immunoexpression in rectal cancer patients with CCRT resistance .

Prognostic Influence of DHRS9 Immunoexpression on Rectal Cancer Patients

Thirty-one patients (18%) died of rectal adenocarcinoma, and local recurrence and distant metastasis were first detected in 27 patients (15.7%) and 31 patients (18%), respectively. Univariate and multivariate analyses were then carried out to assess the influence of clinicopathological features and DHRS9 immunoexpression on patient survival.
Univariate analysis revealed that advanced post-CCRT tumor status, a low tumor regression grade, and high DHRS9 immunoexpression were significantly unfavorable prognosticators for all three endpoints (all p ≤ 0.009): metastasis-free survival (MeFS), local recurrence-free survival (LRFS), and disease-specific survival (DSS) . Furthermore, in the multivariate analysis, only high DHRS9 immunoexpression remained independently and unfavorably prognostic for all three endpoints (all p ≤ 0.048) .

Function Prediction of DHRS9 via Bioinformatic Analysis

Since DHRS9 has been considered a moonlighting protein, we carried out a gene coexpression analysis to predict unidentified functions of DHRS9 in rectal cancer. The top two hundred differentially expressed genes showing positive or negative correlations with DHRS9 were obtained from the colorectal adenocarcinoma dataset (n = 594, PanCancer Atlas, TCGA). The Gene Ontology (GO) classification system was then employed for functional annotation on the basis of the Protein Annotation Through Evolutionary Relationship (PANTHER) overrepresentation test, which compares a test gene list with a reference gene list and determines whether a particular group of genes (e.g., those annotated to a biological process, molecular function, or cellular component) is overrepresented. The most prominent terms positively correlated with DHRS9 were keratan sulfate biosynthetic process (GO: 0018146, fold enrichment: 25.24) among biological processes and UDP-galactose:beta-N-acetylglucosamine beta-1,3-galactosyltransferase activity (GO: 0008499, fold enrichment: 28.84) among molecular functions . Notably, both the UDP-GlcNAc:betaGal beta-1,3-N-acetylglucosaminyltransferase 6 (B3GNT6) gene (Spearman's correlation: 0.497) and the B3GNT7 gene (Spearman's correlation: 0.463) were implicated in both of these functions . The carbohydrate sulfotransferase 5 (CHST5) gene (Spearman's correlation: 0.373) was involved only in the keratan sulfate biosynthetic process . As for cellular components , the most prominent term positively correlated with DHRS9 was brush border membrane (GO: 0031526, fold enrichment: 9.01). Interestingly, MUC2 (Spearman's correlation: 0.462) was also among the significant genes positively correlated with DHRS9 .
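As an illustration of the ggplot2 visualization mentioned in the Methods, the sketch below plots the representative GO terms reported in this section against their fold enrichments. The data frame is assembled by hand from the values quoted above; it is illustrative only and is not the authors' actual script.

```r
library(ggplot2)

# Representative GO terms and fold enrichments reported above,
# entered manually for illustration.
go_terms <- data.frame(
  term = c("Keratan sulfate biosynthetic process (GO:0018146)",
           "Beta-1,3-galactosyltransferase activity (GO:0008499)",
           "Brush border membrane (GO:0031526)"),
  category = c("Biological process", "Molecular function", "Cellular component"),
  fold_enrichment = c(25.24, 28.84, 9.01)
)

ggplot(go_terms, aes(x = fold_enrichment,
                     y = reorder(term, fold_enrichment),
                     fill = category)) +
  geom_col() +
  labs(x = "Fold enrichment", y = NULL,
       title = "Representative GO terms positively correlated with DHRS9") +
  theme_minimal()
```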
All-trans-retinoic acid (atRA), an active metabolite of vitamin A (retinol), regulates gene expression by binding to nuclear receptors and is used as a classical differentiation therapy for acute promyelocytic leukemia (APL) . Since atRA has been reported to exert tumor-suppressive functions in various cancer types , DHRS9, which is implicated in the biosynthesis of atRA, might be expected to have antitumor activity. Indeed, DHRS9 downregulation has been correlated with poor survival in oral squamous cell carcinoma and colorectal cancer patients, although both studies excluded patients who had received any anti-cancer therapy. Moreover, in pancreatic cancer, DHRS9 overexpression has been correlated with poor prognosis , implying that DHRS9 may play an oncogenic role in certain contexts. Interestingly, in the present study, we identified DHRS9 as the most significantly upregulated gene related to epithelial cell differentiation among CCRT-resistant rectal cancer patients. In our rectal cancer cohort, we also demonstrated that high DHRS9 immunoexpression is strongly linked to a poor therapeutic response to CCRT and inferior patient survival, suggesting that epithelial cell differentiation may reduce CCRT efficacy in patients with rectal cancer.

To identify unrevealed functions of DHRS9 in rectal cancer, we carried out a gene coexpression analysis and found that the most significant GO terms positively correlated with DHRS9 were keratan sulfate biosynthetic process and UDP-galactose:beta-N-acetylglucosamine beta-1,3-galactosyltransferase activity . Glycosaminoglycans (GAGs) are linear, highly negatively charged polysaccharides whose functions depend on their molecular weight and degree of sulfation. Despite their complicated structure, the backbone of these glycans is generally composed of repeating disaccharide units of alternating hexosamines (glucosamine or galactosamine) and uronic acids (glucuronic or iduronic acid) . Based on the combination of amino sugars and uronic acids, GAGs are categorized into four primary groups, including keratan sulfate (KS), which alternates between N-acetylglucosamine (GlcNAc) and galactose (Gal) and does not contain uronic acids . As major macromolecules of the extracellular matrix (ECM), GAGs may act as a protective barrier that impedes drug delivery to tumor cells . GAGs also play a key role in cell signaling and modulate numerous biological functions . Recent evidence suggests that GAGs may be involved in atRA-induced neural differentiation , as well as in cancer stem cell formation and therapeutic resistance . Keratan sulfate is also regarded as a glycan marker of stem cells and has been suggested to confer drug resistance in CRC patients . Nevertheless, the correlations among DHRS9 expression, GAGs (especially keratan sulfate), and CCRT resistance in rectal cancer patients need further investigation.

Proteoglycans (PGs) consist of a core protein and at least one covalently attached GAG side chain. From an evolutionary perspective, keratan sulfate is the most recently evolved GAG, yet it remains the least understood. Based on the linkage structure used to attach to PG core proteins, internal structural organization, and tissue distribution, keratan sulfate is classified into three types: KS type I (corneal KS, N-linked), II (skeletal KS, O-linked), and III (brain KS, O-linked) .
Notably, KS type II attaches to a threonine or serine (Thr/Ser) residue on the core protein via an O-linked mucin-type structure (core 2, GalNAc-Thr/Ser). During KS biosynthesis, several glycosyltransferases add Gal or GlcNAc to an acceptor residue for chain elongation, while sulfotransferases sulfate carbon position 6 (C6) of the Gal and/or GlcNAc residues, individually or together . The chain length and degree of sulfation (charge heterogeneity) of KS increase with the age of the connective tissues in which it is enriched and with their pathological status, including tumor development. Intriguingly, we observed that the B3GNT6 and B3GNT7 genes, which encode N-acetylglucosaminyltransferase enzymes, and the carbohydrate sulfotransferase 5 (CHST5) gene were positively correlated with DHRS9 . B3GNT6 supports the formation of the O-linked mucin-type core 3 structure , and B3GNT7 is responsible for elongation of KS chains . In addition, CHST5 is a sulfotransferase that transfers sulfate to O-linked glycans of mucin-type glycoproteins . Indeed, the GalNAc residue of mucin-type glycoproteins can also serve as an acceptor for the addition of Gal or GlcNAc, which can likewise be sulfated, contributing to minimally sulfated KS type II-like mucins .

As a key component of the ECM, mucins create a gel-like epithelial barrier thought to protect the gut lumen from external stress and microbial infection. However, aberrant mucin synthesis may also act as a barrier to drug penetration and cytotoxic T-cell infiltration . MUC2, the major intestinal mucin, has been suggested to carry immunoregulatory signals that favor tumor growth in the large intestine , and our previous study demonstrated that MUC2 overexpression is linked to CCRT resistance and poor prognosis in patients with rectal adenocarcinoma . Notably, the MUC2 gene was also positively correlated with DHRS9 in our gene coexpression analysis . In addition, brush border membrane was the most prominent cellular-component term positively correlated with DHRS9 . The villus epithelium is composed primarily of absorptive enterocytes and mucin-secreting goblet cells. Since the brush border is formed by microvilli on the apical surface of enterocytes, whether DHRS9 can promote mucin synthesis through enterocyte-goblet cell interaction to defend against CCRT penetration deserves further analysis.

To predict the tumor response to preoperative CCRT, the tumor regression grade system is commonly used, based on pathological features observed in surgical specimens. Before surgery, however, there is currently no precise tool to predict CCRT effectiveness. Thanks to advances in sequencing technologies, only a small piece of tissue is required to obtain genetic information. To translate our findings into clinical practice, rectal cancer biopsies could be collected during colonoscopic examination before neoadjuvant CCRT; these biopsies could then undergo RNA sequencing or array-based profiling, and biomarkers such as a high DHRS9 level could guide treatment more accurately.
The intestinal epithelium is composed of multiple well-differentiated epithelial cell subsets that work in concert to maintain the intestinal barrier and provide host defense. However, aberrant epithelial cell differentiation may be associated with therapy resistance. In this study, we identified DHRS9 as the most significantly upregulated gene related to epithelial cell differentiation among CCRT-resistant rectal cancer patients. We also demonstrated that high DHRS9 immunoexpression is significantly associated with an advanced disease stage, CCRT resistance, and inferior prognosis in patients with rectal adenocarcinoma. Additionally, we linked DHRS9 to previously unrecognized functions, such as keratan sulfate and mucin synthesis, which may be implicated in CCRT resistance. Collectively, DHRS9 expression may assist decision-making for rectal cancer patients undergoing neoadjuvant CCRT.
Opportunistic infections in pediatrics: when to suspect and how to approach

Opportunistic infections are those caused by pathogens (bacteria, viruses, fungi, or protozoa) that take advantage of a host with a weakened immune system, an altered microbiota, or a breach of skin barriers. The balanced state among the many species that comprise the microbiota is called eubiosis. Any disturbance of eubiosis, grouped under the broad name of dysbiosis, can trigger infectious and noninfectious diseases. Opportunistic infections occur in situations of dysbiosis, which predispose the individual to exogenous and endogenous infections. They arise in the context of autoimmunity or of immune responses of abnormal intensity, whether increased (in allergic reactions and chronic inflammation) or decreased (in immunodeficiency and cancer). The presentation of the infection varies according to the patient's comorbidity, which in turn is associated with the aspects of the immune system that are not fully functional. Comorbidities or situations that predispose to opportunistic infections are increasingly present in pediatric practice. Human immunodeficiency virus (HIV) infection, inborn errors of immunity (formerly called primary immunodeficiencies), neoplasms, autoimmune conditions, and the use of chemotherapy, radiotherapy, or immunomodulatory drugs are some examples of these comorbidities. This review describes opportunistic infections grouped according to the different classes of pathogens. Diagnostic and therapeutic aspects of mycobacterial, fungal, and herpesvirus infections, and of infections affecting individuals using immunobiological agents, will be discussed.
Non-tuberculous mycobacteria are widely distributed in the environment and can cause diseases known as mycobacterial infections. Most disseminated infections are associated with impaired cellular immunity, as in patients with inborn errors of immunity affecting the interferon-gamma (IFN)/interleukin (IL)-12/IL-23 axis (IL-12 deficiency, IFN-gamma deficiency, NF-kappa-B essential modulator [NEMO] mutations, and IFN-gamma and IL-12 receptor defects), hematopoietic stem cell transplant recipients, or individuals with advanced HIV infection. Non-tuberculous mycobacteria cause respiratory infections, usually in patients with previous pulmonary pathologies (cystic fibrosis, chronic obstructive pulmonary disease, bronchiectasis), and skin and soft tissue infections, including lymphadenitis and surgical wound infections, whether or not associated with device implantation. They may also manifest as disseminated infections in immunocompromised patients. They are classified into two groups according to their phenotypic characteristics: slow-growing and fast-growing mycobacteria .

Among the slow-growing mycobacteria, those of the Mycobacterium avium complex are the most frequent cause of pulmonary infection and also the main cause of lymphadenitis in children under 5 years. They affect patients living with HIV who have cluster of differentiation 4 (CD4) T-lymphocyte counts below 50/mm3 or those with inborn errors of immunity, causing extrapulmonary and disseminated disease. M. kansasii, another non-tuberculous mycobacterium, causes pulmonary infection with a tuberculosis-like fibrocavitary pattern and, less frequently, focal or disseminated infections in patients with HIV or other immunosuppressive conditions. Fast-growing mycobacteria also cause chronic respiratory infections in people with pre-existing pulmonary lesions, skin and soft tissue infections (many associated with aesthetic procedures), and infections associated with catheters and prostheses. They can form biofilms, which complicates treatment, and removal of these devices is necessary to cure the patient. Within this group, M. abscessus is particularly relevant: it causes respiratory infections and poses great therapeutic difficulty.

The definitive diagnosis of mycobacterial infections requires identification of the agent. If a non-tuberculous mycobacterial infection is clinically suspected, the laboratory should be contacted to ensure that adequate specimen handling and cultivation conditions are used to allow pathogen isolation. Generally, in adults, isolation of non-tuberculous mycobacteria from two or more sputum specimens or from one bronchoalveolar lavage specimen is required for diagnosis. In children, these criteria have yet to be established. The isolation of non-tuberculous mycobacteria from sterile sites, however, is evidence of infection. It is worth recalling that the tuberculin skin test is usually positive in mycobacterial infection, because several antigens are common to M. tuberculosis and other mycobacteria. With interferon-gamma release assays, cross-reaction may occur in M. kansasii, M. marinum, and M. szulgai infections. The diagnosis and treatment of mycobacterial infections are described in .
Opportunistic mycoses are fungal infections of low pathogenicity that specifically affect immunocompromised hosts. They are caused by fungi that are ubiquitous in the environment, such as filamentous fungi (Aspergillus spp., Fusarium spp., Mucorales, etc.), or by yeasts that are part of the endogenous or exogenous fungal microbiota, such as Candida spp.

Infections by Candida spp. present as bloodstream, urinary tract, bone, skin, or surgical site infections, myocarditis, meningitis, and abscesses, the latter being associated with catheter insertion. The most common clinical picture is the onset of fever unresponsive to antibiotics in at-risk patients. Most cases of candidemia are believed to be acquired endogenously by pathogen translocation through the gastrointestinal tract, where up to 70% of the immunocompetent healthy population is colonized by Candida spp. Factors that increase intestinal colonization by Candida (use of antibiotics, corticosteroids, paralytic ileus, intestinal occlusion) or that cause atrophy or injury of the intestinal mucosa (prolonged fasting, total parenteral nutrition, hypotension, surgical procedures, mucositis secondary to chemotherapy or radiotherapy) can potentiate translocation from the gastrointestinal tract. Less frequent are exogenous infections due to medical procedures, contamination of solutions or prostheses, or colonization of central venous catheters.

Among Candida species, C. albicans is the most often found in clinical practice. However, several non-albicans species, such as C. tropicalis, C. parapsilosis, C. glabrata, and C. krusei, are involved in the increased incidence of invasive infections, with high rates of treatment failure related to resistance to azoles and echinocandins. C. glabrata ranks second in deep soft tissue infections in the United States and Europe, with a frequency of multidrug resistance above 10%. Its incidence has also increased in Brazilian hospitals. In a retrospective study of pediatric cancer patients with candidemia, patients from whom Candida tropicalis was isolated had more skin lesions than those with candidemia caused by other species.

Until a few years ago, there were no reports of multiresistant Candida, but the current scenario includes invasive infections by multiresistant non-albicans Candida, most of them by C. glabrata and C. auris. C. auris is an emerging species. Discovered in 2009, it has been described in more than 30 countries on six continents. It has a high rate of antifungal resistance, with an estimated mortality of 30%–72%. Its isolation is difficult when the usual biochemical methods are employed, and it may be mistaken for C. haemulonii. Matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS) allows the differentiation of C. auris from other species. As with other Candida species, the presence of C. auris at non-sterile sites may represent only colonization, but its detection is very important: besides being difficult to eradicate, the colonization carries a risk of horizontal transmission. Early recognition of sporadic cases, identification of reservoirs, and reporting are important measures for outbreak prevention.

The definitive diagnosis of invasive candidiasis requires isolation of the agent from a normally sterile site or demonstration of the microorganism in a tissue sample.
However, negative results do not exclude the diagnosis of invasive infection in the immunocompromised host: the sensitivity of blood culture can be lower than 50%. The diagnosis and treatment of systemic candidiasis are described in .

Infections by Aspergillus spp. include respiratory diseases due to hypersensitivity (allergic sinusitis and allergic bronchopulmonary aspergillosis), skin and epithelial infections, intracavitary colonization (pulmonary fungal ball), and invasive forms (invasive and chronic necrotizing pulmonary aspergillosis, sinusitis, and disseminated forms with nervous system invasion and brain abscess formation). Invasive aspergillosis occurs in immunocompromised patients with severe and persistent neutropenia due to corticosteroid treatment or chemotherapy, stem cell transplantation, or solid organ transplantation, especially of the lungs. It has a high mortality rate, with Aspergillus fumigatus, A. flavus, A. niger, A. terreus, and A. versicolor the most frequently involved species. A. fumigatus is the main agent of invasive pulmonary aspergillosis, followed by A. flavus and A. terreus. It was recently demonstrated that A. fumigatus can produce aerosols and has the potential to be transmitted to others.

In invasive aspergillosis, the earliest manifestation is fever in a patient with prolonged neutropenia, with respiratory symptoms such as cough and dyspnea and scant findings on lung auscultation. Patients with profound immunosuppression may progress to disseminated forms with central nervous system involvement, leading to brain abscess and, rarely, meningitis. Diagnosis rests on clinical suspicion, imaging, antigen screening (galactomannan and beta-D-glucan), and isolation of the fungus by microscopy and culture. Chest tomography is more sensitive than chest radiography, especially at disease onset, and may show two characteristic radiological signs: the halo sign and the air crescent sign; the latter is rare in neutropenic individuals.

Galactomannan is a cell wall polysaccharide released by the fungus during its growth in tissues and is detectable in serum and other fluids. It may be present five to eight days before clinical manifestation and should be ordered as a screening test for patients with prolonged neutropenia and for recipients of allogeneic hematopoietic stem cell transplantation who are not on antifungal prophylaxis. The measurement is made by an immunoenzymatic method and has a sensitivity and specificity of 90% in neutropenic patients. Accuracy is highest when two consecutive samples have a value ≥ 0.5, and performance is best when the test is run two to three times a week to monitor at-risk patients, correlated with imaging and the clinical picture (a simple encoding of this serum cutoff rule is sketched at the end of this section). Screening for galactomannan in bronchoalveolar lavage is also a good test for invasive aspergillosis; the optimal cutoff ranges from 0.5 to 1.0. False-positive results may occur in patients receiving piperacillin/tazobactam or blood product transfusions, and in other fungal infections, such as histoplasmosis, fusariosis, and talaromycosis.

(1 → 3)-beta-D-glucan is also a polysaccharide component of the fungal cell wall and can be released in several fungal infections: Aspergillus, Candida, Fusarium, Trichosporon, Saccharomyces, Acremonium, and Pneumocystis jiroveci. It is therefore not specific for Aspergillus. Bronchoalveolar lavage and/or lung biopsy are the methods of choice for microscopy and culture. Specific fungal stainings should be employed in the histological analysis.
The presence of dichotomously branching, septate hyaline hyphae in sterile materials constitutes evidence of Aspergillus infection even without isolation in culture. While its presence in the respiratory tract of immunocompetent individuals may represent only colonization, it may indicate invasive disease in immunosuppressed individuals. Culture has low sensitivity, with positivity of 63% in the bronchoalveolar lavage of infected patients. When the lavage cannot be collected, culture can be attempted on three sputum samples. The diagnosis and treatment of invasive aspergillosis are described in .

Infections by Fusarium spp., another opportunistic fungus, may manifest clinically as persistent fever unresponsive to broad-spectrum antibiotic therapy in neutropenic patients, patients with T-cell immunodeficiency, or patients with acute leukemia. Immunosuppressed individuals may have skin lesions characterized by painful erythematous macules or papules that develop into necrotic ulcers, known as ecthyma gangrenosum, which are more common on the extremities and disseminate rapidly. The main portal of entry for Fusarium spp. is the respiratory tract, followed by damaged or burned skin. Catheter infection can lead to fungemia, and catheter removal, together with antifungal therapy, is crucial for treatment. The fungus can be isolated from skin biopsy or blood culture. Mortality in the disseminated forms is high but seems to be lower when voriconazole and liposomal amphotericin are used rather than amphotericin deoxycholate. The diagnosis and treatment of disseminated fusariosis are described in .

Pneumocystis jiroveci is known as an agent of opportunistic pneumonia in individuals infected with HIV. The incidence has decreased in this group owing to combined antiretroviral therapy and the use of prophylaxis for pneumocystosis. On the other hand, the incidence is currently increasing among those receiving immunosuppressants for oncological or autoimmune disease and among recipients of hematopoietic stem cell or solid organ transplants. There are few data on P. jiroveci infection in children. A recent study evaluating adults and children showed that, in the latter, oncohematologic diseases, the post-transplantation period, and inborn errors of immunity were the most common comorbidities. The main clinical manifestation was pneumonia with fever, cough, dyspnea, and oxygen desaturation, developing into respiratory failure in adults and children. Radiological examination showed bilateral consolidation or bilateral interstitial infiltrate. Mortality was 25%.

The diagnosis of P. jiroveci infection is based on visualization of the agent with specific staining of lung tissue or respiratory specimens, such as bronchoalveolar lavage, induced sputum, or endotracheal aspirate in intubated patients. The treatment of choice is sulfamethoxazole-trimethoprim (SMZ-TMP) administered intravenously for 21 days. Intravenous pentamidine is an alternative for those who cannot tolerate SMZ-TMP or who have not responded after four to eight days of SMZ-TMP therapy. In patients with an arterial partial pressure of O2 below 70 mmHg in room air, oral prednisone for 21 days is recommended .

Cryptococcus, an encapsulated yeast, causes cryptococcosis and is found predominantly in patients with hematological malignancies, those using high-dose corticosteroids, solid-organ transplant recipients, and HIV-infected patients with cellular immunosuppression.
In these children, hematogenous dissemination may occur to the central nervous system, bones, and skin. The most common form is cryptococcal meningitis, which has an indolent course, with fever, headache, and behavioral changes. Intracranial hypertension and immune reconstitution inflammatory syndrome are common complications. The diagnosis may be reached by CSF or serum antigen screening, but the definitive diagnosis depends on isolation of the agent in culture of body fluids or biopsy material. Differentiation between the two species, C. neoformans and C. gattii, depends on the use of selective culture media. The diagnosis and treatment of cryptococcosis are described in .

Another deep mycosis, mucormycosis, should be a differential diagnosis of invasive aspergillosis. The two share similarities in the affected patients (oncologic and, in the case of mucormycosis, diabetic patients), in risk factors (prolonged neutropenia), and in clinical and radiological signs. A recent study comparing invasive aspergillosis with mucormycosis in oncologic patients showed that mucormycosis was more frequent in children and adolescents than in adults and in patients with acute leukemia and graft-versus-host disease, whereas aspergillosis was more frequent in patients with lymphoma. In mucormycosis, pulmonary involvement was less frequent, whereas involvement of the paranasal sinuses, the central nervous system, and more than two sites was more frequent. The diagnosis and treatment of mucormycosis are described in .
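As anticipated in the aspergillosis discussion above, the serum galactomannan monitoring heuristic (two consecutive samples with an optical-density index ≥ 0.5) can be written down as a small decision rule. The R sketch below is purely illustrative: the function and argument names are invented, and results must always be read together with imaging and the clinical picture, never in isolation.

```r
# Illustrative encoding of the serum galactomannan screening heuristic
# described above: the screen is considered positive when two consecutive
# samples reach an optical-density index >= 0.5. Not clinical guidance.
galactomannan_screen_positive <- function(odi_values, cutoff = 0.5) {
  if (length(odi_values) < 2) return(FALSE)
  above <- odi_values >= cutoff
  any(above[-length(above)] & above[-1])  # two consecutive values at/above cutoff
}

# Example: serial monitoring in a hypothetical at-risk patient
galactomannan_screen_positive(c(0.2, 0.3, 0.6, 0.7))  # TRUE  (0.6 then 0.7)
galactomannan_screen_positive(c(0.2, 0.6, 0.3, 0.7))  # FALSE (no consecutive pair)
```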
Herpes simplex 1 (HHV-1) and 2 (HHV-2) are among the nine herpesviruses that infect humans. Outside the neonatal period, HHV-1 and HHV-2 infections are usually localized. In immunocompromised patients, severe localized lesions and, less commonly, disseminated HHV-1 and HHV-2 infection may occur, with generalized vesicles on the skin and/or visceral involvement. Reactivation of HHV-1 and HHV-2 after primary infection occurs more often in immunosuppressed individuals and is more prolonged. Reactivations may be preceded by a burning or itching sensation at the site of recurrence, which may help in the early initiation of antiviral therapy.

HSV (herpes simplex virus) encephalitis occurs after primary or recurrent infection. It is characterized by fever, altered state of consciousness, personality change, seizures, and focal neurological signs. It usually has an acute onset with fulminant evolution, progressing to coma and death if therapy is not instituted early. Magnetic resonance imaging is the most sensitive imaging exam and typically shows temporal lobe involvement. Different mutations have been associated with a predisposition to the development of encephalitis. Meningitis may also occur.

The diagnosis of HHV-1 and HHV-2 can be made by inoculating vesicle secretion or cerebrospinal fluid into cell cultures; the cytopathic effect is observed one to three days after inoculation and can be confirmed by direct immunofluorescence of the culture material. Polymerase chain reaction (PCR) can be performed on a cerebrospinal fluid (CSF) sample. If both tests (culture and PCR) are repeatedly negative, histopathological analysis and viral culture of brain tissue biopsy are the most reliable tests to confirm the diagnosis of encephalitis caused by HHV-1 and HHV-2. In immunosuppressed patients, mucocutaneous HSV should be treated with intravenous acyclovir. For acyclovir-resistant HSV, parenteral foscarnet is recommended. For encephalitis, parenteral acyclovir is recommended for 21 days.

Infections caused by the varicella-zoster virus (VZV; herpesvirus 3, HHV-3) in immunosuppressed individuals may present with successive crops of lesions, sometimes hemorrhagic in aspect, accompanied by high fever. The infection may visceralize, with encephalitis, hepatitis, and pneumonia. Severe cases of varicella have been observed in children using corticosteroids at immunosuppressive doses, as well as in individuals with inborn errors of immunity with T-cell impairment and in those infected with HIV. Children and adolescents with underlying lung or skin disease are also prone to severe varicella.

VZV infection can also manifest as herpes zoster, which occurs after reactivation of virus latent in the dorsal root ganglia of the spinal cord, cranial nerve ganglia, or enteric autonomic nerves. In immunosuppressed individuals, the lesions tend not to be restricted to one or two dermatomes and may disseminate. Most commonly, zoster presents with skin lesions, but it may also present as isolated aseptic meningitis, encephalitis, infarction, or gastrointestinal tract involvement. In a retrospective study comparing zoster episodes in adolescents with vertically transmitted HIV infection and adolescents with juvenile systemic lupus erythematosus, those infected with HIV were more likely to have recurrent zoster.
The diagnosis of choice for VZV infections is PCR of vesicle or scab material. Direct immunofluorescence and viral culture can also detect the virus, but they are less sensitive than PCR and, unlike PCR, do not differentiate the vaccine strain from the wild-type virus. Serological diagnosis of varicella is not recommended when immunodeficiency is suspected, owing to its low sensitivity; cases of varicella have been reported in immunocompromised individuals despite the presence of varicella antibodies.

Therapy with intravenous acyclovir is recommended for immunocompromised individuals and should be started early, preferably within 24 h of the onset of the exanthem. Some authors recommend valacyclovir for immunocompromised children aged 2–17 years at lower risk of severe varicella, such as HIV-infected patients with higher CD4 T-lymphocyte counts and some children with leukemia, under close medical supervision. In the rare cases of acyclovir-resistant varicella, intravenous foscarnet can be used. Immunocompromised individuals exposed to varicella should receive prophylaxis with intramuscular VZV-specific immunoglobulin or intravenous immunoglobulin as soon as possible after exposure. Chemoprophylaxis with acyclovir or valacyclovir after VZV exposure of immunocompromised individuals has been recommended by some, starting seven to ten days after exposure and lasting seven days, although there is little evidence in the literature demonstrating its benefit.

Epstein-Barr virus (EBV; herpesvirus 4, HHV-4) infections can cause a wide variety of clinical manifestations. These range from asymptomatic infections in infants to infectious mononucleosis, which is more common in schoolchildren and adolescents, and can become severe in the immunosuppressed individual. In such cases, lymphoproliferative disorders are observed, including hemophagocytic syndrome, X-linked lymphoproliferative syndrome, post-transplantation lymphoproliferative disorders, Burkitt's lymphoma, nasopharyngeal carcinoma, and undifferentiated T- and B-cell lymphomas. Special attention should be given to transplant recipients and to individuals infected with HIV.

While the diagnosis of EBV infection in immunocompetent individuals can be made through serology, in immunosuppressed individuals it may require PCR for virus detection in serum, plasma, and tissue, and real-time PCR in lymphoid cells, tissues, and body fluids. The use of corticosteroids (oral prednisone 1 mg/kg/day, maximum 20 mg/day) for seven days may be recommended for EBV infections with tonsillar hypertrophy at risk of airway obstruction, significant splenomegaly, myocarditis, hemolytic anemia, or hemophagocytic syndrome. In hemophagocytic syndrome, other cytotoxic and immunomodulatory agents, such as etoposide or cyclosporine, may be necessary. For EBV patients with post-transplant lymphoproliferative disorders, a reduction of immunosuppressive therapy may be required.

When cytomegalovirus (herpesvirus 5, HHV-5) infections affect immunosuppressed patients, they can trigger pneumonia, colitis, retinitis, meningoencephalitis, and transverse myelitis. A syndrome characterized by fever, thrombocytopenia, leukopenia, and mild hepatitis may also occur.
Cytomegalovirus (CMV) can be acquired by vertical transmission, by person-to-person contact with contaminated secretions (saliva, urine, or genital secretions), via transfusion of blood components, and via solid organ or hematopoietic stem cell transplantation. The virus persists in the body, replicates, and can be transmitted intermittently, particularly in immunosuppressive situations. Individuals especially at risk for these manifestations are those undergoing cancer treatment, those infected with HIV, and those receiving immunosuppressive therapy for hematopoietic stem cell or solid organ transplantation. Patients on biological agents may, more rarely, develop retinitis and hepatitis.

Isolation of CMV from the target organ of infection is the best evidence that the disease is caused by this virus. Traditional cell line culture can take up to 28 days; rapid shell-vial culture with viral detection by immunofluorescence gives results in 24–72 h. CMV antigenemia (pp65 antigen detection) is another way of screening for CMV, but it is technically more demanding than PCR. In addition, several body fluids can be evaluated for the presence of CMV through PCR.

Intravenous ganciclovir is approved for the treatment of CMV retinitis in immunocompromised adults, including those infected with HIV, and for the prophylaxis and treatment of CMV disease in transplant recipients. Valganciclovir is also approved for the prevention of CMV disease in kidney transplant recipients older than 4 months and in pediatric heart transplant recipients older than one month. Secondary prophylaxis should be maintained in HIV-infected children older than 6 years until the CD4 count has remained above 100/mm3 for six consecutive months; children under 6 years of age should maintain CD4 levels above 15% for the same period before the prophylaxis is discontinued (these criteria are encoded in the sketch at the end of this section). In children with inborn errors of immunity who have had retinitis, withdrawal of prophylaxis should be evaluated case by case, with ophthalmologic monitoring at least every three to six months.

Reactivation of herpesvirus 6B (HHV-6B) infection is associated with disease in solid-organ and hematopoietic stem cell recipients, with fever, exanthem, hepatitis, bone marrow suppression, graft rejection, pneumonia, and encephalitis. Disease caused by herpesvirus 6A (HHV-6A) and herpesvirus 7 (HHV-7) is much rarer in immunosuppressed individuals. The diagnosis of herpesvirus 6B infection is difficult and available only in reference laboratories; even with these tests, it can be difficult to differentiate infection from disease. The use of ganciclovir (or valganciclovir) or foscarnet may be beneficial in immunocompromised patients with HHV-6B encephalitis.

Herpesvirus 8 (HHV-8) infections are associated with Kaposi's sarcoma, multicentric Castleman's disease, and Kaposi's sarcoma herpesvirus-associated inflammatory cytokine syndrome. In Brazil, HHV-8 is associated with cases of Kaposi's sarcoma in HIV-infected adults not on antiretroviral medication. Cases are very rare in Brazilian children. There is no antiviral medication approved for the treatment of HHV-8; HHV-8-associated neoplasms are usually treated with radiotherapy and chemotherapy. The diagnosis and treatment of herpesvirus infections are summarized in .
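The age-dependent criteria above for discontinuing CMV secondary prophylaxis in HIV-infected children can be captured as a small decision rule. The R sketch below is an illustrative encoding only; the function and argument names are invented, and it is not a substitute for the guidelines or for clinical judgment.

```r
# Illustrative encoding of the discontinuation criteria described above for
# CMV secondary prophylaxis in HIV-infected children. Names are hypothetical;
# this is a reading aid, not clinical guidance.
can_stop_cmv_prophylaxis <- function(age_years, cd4_count, cd4_percent,
                                     months_sustained) {
  if (months_sustained < 6) return(FALSE)  # criterion must hold for 6 consecutive months
  if (age_years > 6) {
    cd4_count > 100        # older than 6 years: absolute CD4 > 100 cells/mm^3
  } else {
    cd4_percent > 15       # under 6 years: CD4 percentage > 15%
  }
}

can_stop_cmv_prophylaxis(age_years = 9, cd4_count = 250,
                         cd4_percent = NA, months_sustained = 7)  # TRUE
can_stop_cmv_prophylaxis(age_years = 4, cd4_count = NA,
                         cd4_percent = 12, months_sustained = 8)  # FALSE
```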
Biologicals are part of a wide range of products that includes vaccines, blood components, allergens, somatic cells, gene therapies, tissues, and proteins. In recent years, biological agents have been used in pediatrics to treat rheumatologic diseases, inflammatory bowel diseases, and neoplasms . The purpose of these drugs is to interfere with the immune alteration that leads to the clinical disease, reducing or increasing the immune response. However, alterations in immune response pathways create selective deficiencies that potentiate the risk of infection.

Tumor necrosis factor-alpha (TNF-alpha) inhibitors

TNF-alpha inhibitors block the immune response and reduce acute inflammation in patients with rheumatoid arthritis, juvenile idiopathic arthritis, psoriasis, and inflammatory bowel diseases such as Crohn's disease and ulcerative colitis. Infliximab, adalimumab, and etanercept have approved indications in pediatrics. The use of anti-TNF-alpha agents reduces the immune response to mycobacteria and predisposes to tuberculosis reactivation. In addition, severe bacterial infections by Streptococcus pyogenes, Listeria meningitis, systemic Mycobacterium avium complex infections, and endemic fungal infections such as histoplasmosis and blastomycosis have been reported in pediatric patients. Reactivation of VZV and EBV has also been described. What is not yet clear in the pediatric age group is to what extent the use of anti-TNF-alpha agents actually increases the risk of certain opportunistic infections compared with children with autoimmune disorders using other immunosuppressants, such as corticosteroids.

Interleukin inhibitors

Different interleukin (IL) inhibitors have been used to interfere with the inflammatory cascade in patients with autoimmune diseases. This therapy has been associated with the development of infections.

IL-1 inhibitors. IL-1 mediates macrophage and T-cell activation, triggering fever. IL-1 inhibitors have been used in neonatal-onset multisystemic inflammatory disease and cryopyrin-associated periodic syndromes, in addition to off-label use in systemic juvenile idiopathic arthritis. In the pediatric age group, infections reported with the use of anakinra include visceral leishmaniasis, varicella, HHV-1, and upper airway infections. Canakinumab, used to treat juvenile idiopathic arthritis, has shown no increase in the number of infections.

IL-6 inhibitors. IL-6 modulates T- and B-lymphocyte growth and differentiation, stimulating the production of acute-phase proteins. Tocilizumab is an IL-6 blocker used to treat systemic and polyarticular juvenile idiopathic arthritis. There is little experience with this drug in the pediatric age group.

IL-2 inhibitors. IL-2 promotes T-cell proliferation, and IL-2 blockers have been used in solid organ transplantation to stop T-cell proliferation after transplantation. Basiliximab does not seem to be associated with an increased number of infections.

IL-12, IL-17, and IL-23 inhibitors. IL-12 is produced by macrophages and dendritic cells and promotes both NK-cell activation and T-cell differentiation. IL-23 stimulates Th17 cell proliferation, and the IL-17 produced by these cells stimulates the production of proinflammatory cytokines, with effects on the endothelium, epithelium, and fibroblasts. The use of these biologicals is still restricted in pediatrics, and few infections have been reported with their use.
Targets other than interleukins

The use of drugs that block CD28 costimulation, such as abatacept, is not associated with an increased risk of infection. However, anti-CD20 drugs, which act on B cells, lead to hypogammaglobulinemia and are a risk factor for infections against which the defense depends on the humoral immune response. One example of an anti-CD20 drug is rituximab, used in cases of post-transplant lymphoproliferative disease, EBV-related hemophagocytic lymphohistiocytosis, glomerular diseases, inflammatory diseases of the central nervous system, and Burkitt's lymphoma. Infections associated with the use of rituximab include bacterial sepsis, cytomegalovirus infection, varicella, acute pyelonephritis, BK nephropathy, and Salmonella enteritis. The risk of infection ranges from 1% to 10%.
Despite improvements in the diagnosis of opportunistic infections in recent years, they remain a challenge for pediatricians who are unaccustomed to them. Pediatricians must raise the suspicion and start managing the case, but should also turn to specialists experienced in the management of these infections to provide a better outcome for these patients, whose mortality remains high.
The authors declare no conflicts of interest.
|
RNA-Seq of Tumor-Educated Platelets Enables Blood-Based Pan-Cancer, Multiclass, and Molecular Pathway Cancer Diagnostics | 5601822b-686f-4735-91f1-5b5c4a293054 | 4644263 | Pathology[mh] | Blood-based “liquid biopsies” provide a means for minimally invasive molecular diagnostics, overcoming limitations of tissue acquisition. Early detection of cancer, clinical cancer diagnostics, and companion diagnostics are regarded as important applications of liquid biopsies. Here, we report that mRNA profiles of tumor-educated blood platelets (TEPs) enable pan-cancer, multiclass cancer, and companion diagnostics in both localized and metastasized cancer patients. The ability of TEPs to pinpoint the location of the primary tumor advances the use of liquid biopsies for cancer diagnostics. The results of this proof-of-principle study indicate that blood platelets are a potential all-in-one platform for blood-based cancer diagnostics, using the equivalent of one drop of blood.
Cancer is primarily diagnosed by clinical presentation, radiology, biochemical tests, and pathological analysis of tumor tissue, increasingly supported by molecular diagnostic tests. Molecular profiling of tumor tissue samples has emerged as a potential cancer classifying method. In order to overcome limitations of tissue acquisition, the use of blood-based liquid biopsies has been suggested. Several blood-based biosources are currently being evaluated as liquid biopsies, including plasma DNA and circulating tumor cells. So far, implementation of liquid biopsies for early detection of cancer has been hampered by the non-specificity of these biosources in pinpointing the nature of the primary tumor. It has been reported that tumor-educated platelets (TEPs) may enable blood-based cancer diagnostics. Blood platelets—the second most-abundant cell type in peripheral blood—are circulating anucleated cell fragments that originate from megakaryocytes in bone marrow and are traditionally known for their role in hemostasis and the initiation of wound healing. More recently, platelets have emerged as central players in the systemic and local responses to tumor growth. Confrontation of platelets with tumor cells via transfer of tumor-associated biomolecules (“education”) is an emerging concept and results in the sequestration of such biomolecules. Moreover, external stimuli, such as activation of platelet surface receptors and lipopolysaccharide-mediated platelet activation, induce specific splicing of pre-mRNAs in circulating platelets. Platelets may also undergo cue-specific splice events in response to signals released by cancer cells and the tumor microenvironment—such as stromal and immune cells. The combination of specific splice events in response to external signals and the capacity of platelets to directly ingest (spliced) circulating mRNA can provide TEPs with a highly dynamic mRNA repertoire, with potential applicability to cancer diagnostics ( A). In this study, we characterize the platelet mRNA profiles of various cancer patients and healthy donors and investigate their potential for TEP-based pan-cancer, multiclass cancer, and companion diagnostics.
mRNA Profiles of Tumor-Educated Platelets Are Distinct from Platelets of Healthy Individuals

We prospectively collected and isolated blood platelets from healthy donors (n = 55) and both treated and untreated patients with early, localized (n = 39) or advanced, metastatic cancer (n = 189), diagnosed by clinical presentation and pathological analysis of tumor tissue supported by molecular diagnostics tests. The patient cohort included six tumor types, i.e., non-small cell lung carcinoma (NSCLC, n = 60), colorectal cancer (CRC, n = 41), glioblastoma (GBM, n = 39), pancreatic cancer (PAAD, n = 35), hepatobiliary cancer (HBC, n = 14), and breast cancer (BrCa, n = 39) ( B; ; ). The cohort of healthy donors covered a wide range of ages (21–64 years old, ). Platelet purity was confirmed by morphological analysis of randomly selected and freshly isolated platelet samples (contamination was 1 to 5 nucleated cells per 10 million platelets, see ), and platelet RNA was isolated and evaluated for quality and quantity ( A). A total of 100–500 pg of platelet total RNA (the equivalent of purified platelets in less than one drop of blood) was used for SMARTer mRNA amplification and sequencing ( C and A). Platelet RNA sequencing yielded a mean read count of ∼22 million reads per sample. After selection of intron-spanning (spliced) RNA reads and exclusion of genes with low coverage (see ), we detected 5,003 different protein-coding and non-coding RNAs in platelets of healthy donors (n = 55) and localized and metastasized cancer patients (n = 228), which were used for subsequent analyses. The obtained platelet RNA profiles correlated with previously reported mRNA profiles of platelets and megakaryocytes, and not with various non-related blood cell mRNA profiles ( B). Furthermore, DAVID Gene Ontology (GO) analysis revealed that the detected RNAs are strongly enriched for transcripts associated with blood platelets (false discovery rate [FDR] < 10^−126). Among the 5,003 RNAs, we identified known platelet markers, such as B2M, PPBP, TMSB4X, PF4, and several long non-coding RNAs (e.g., MALAT1). A total of 1,453 out of 5,003 mRNAs were increased and 793 out of 5,003 mRNAs were decreased in TEPs as compared to platelet samples of healthy donors (FDR < 0.001), while these platelet mRNA profiles remained strongly correlated overall (r = 0.90, Pearson correlation) ( D). Unsupervised hierarchical clustering based on the differentially detected platelet mRNAs distinguished two sample groups with minor overlap ( E; ). DAVID GO analysis revealed that the increased TEP mRNAs were enriched for biological processes such as vesicle-mediated transport and cytoskeletal protein binding, while decreased mRNAs were strongly involved in RNA processing and splicing. A correlative analysis of gene set enrichment (CAGE) GO methodology, in which 3,875 curated gene sets of the GSEA database were correlated to TEP profiles (see ), demonstrated significant correlation of TEP mRNA profiles with cancer tissue signatures, histone deacetylase regulation, and platelets. The levels of 20 non-protein-coding RNAs were altered in TEPs as compared to platelets from healthy individuals, and these showed a tumor type-associated RNA profile ( C). Next, we determined the diagnostic accuracy of TEP-based pan-cancer classification in the training cohort (n = 175), employing a leave-one-out cross-validation support vector machine algorithm (SVM/LOOCV, see ; D and S1E), previously used to classify primary and metastatic tumor tissues.
Briefly, the SVM algorithm (blindly) classifies each individual sample as cancer or healthy by comparison to all other samples (175 − 1) and was performed 175 times to classify and cross-validate all individual samples (a minimal sketch of this procedure is given below). The algorithms we developed use a limited number of different spliced RNAs for sample classification. To determine the specific input gene lists for the classifying algorithms, we performed ANOVA testing for differences (as implemented in the R-package edgeR), yielding classifier-specific gene lists. For the specific algorithm of the pan-cancer TEP-based classifier test, we selected 1,072 RNAs for the n = 175 training cohort, yielding a sensitivity of 96%, a specificity of 92%, and an accuracy of 95% ( F). Subsequent validation using a separate validation cohort (n = 108), not involved in input gene list selection and training of the algorithm, yielded a sensitivity of 97%, a specificity of 94%, and an accuracy of 96% ( G), with an area under the curve (AUC) of 0.986 to detect cancer ( H) and high predictive strength ( I). In contrast, random classifiers, as determined by multiple rounds of randomly shuffling class labels (permutation) during the SVM training process (see ), had no predictive power (mean overall accuracy: 78%, SD ± 0.3%, p < 0.01), thereby demonstrating, albeit with an unbalanced representation of both groups in the study cohort, the specificity of our procedure. A total of 100 rounds of random class-proportional subsampling of the entire dataset into a training and validation set (ratio 60:40) yielded similar accuracy rates (mean overall accuracy: 96%, SD: ± 2%), confirming reproducible classification accuracy in this dataset. Of note, all 39 patients with localized tumors and 33 of the 39 patients with primary tumors in the CNS were correctly classified as cancer patients ( I). Visualization of 22 genes previously identified at differential RNA levels in platelets of patients with various non-cancerous diseases revealed mixed levels in our TEP dataset ( F), suggesting that the platelet RNA repertoire in patients with non-cancerous disease is distinct from that of patients with cancer.

Tumor-Specific Educational Program of Blood Platelets Allows for Multiclass Cancer Diagnostics

In addition to the pan-cancer diagnosis, the TEP mRNA profiles also distinguished healthy donors and patients with specific types of cancer, as demonstrated by the unsupervised hierarchical clustering of differential platelet mRNA levels of healthy donors and all six individual tumor types, i.e., NSCLC, CRC, GBM, PAAD, BrCa, and HBC ( A, all p < 0.0001, Fisher's exact test, and A; ), and this resulted in tumor-specific gene lists that were used as input for training and validation of the tumor-specific algorithms. For the unsupervised clustering of the all-female group of BrCa patients, male healthy donors were excluded to avoid sample bias due to gender-specific platelet mRNA profiles ( B). SVM-based classification of all individual tumor classes against healthy donors resulted in a clear distinction of both groups in both the training and validation cohort, with high sensitivity and specificity, and 38/39 (97%) cancer patients with localized disease were classified correctly ( B and C). CAGE GO analysis showed that biological processes differed between TEPs of individual tumor types, suggestive of tumor-specific "educational" programs.
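For illustration, below is a minimal sketch of the SVM/LOOCV procedure referenced above, written in R around the e1071 and pROC packages used by the authors. The object names (`expr`, a samples × transcripts matrix of normalized counts restricted to a classifier-specific gene list, and `labels`, a cancer/healthy factor) and the kernel settings are our assumptions for illustration, not the authors' exact configuration.

```r
# Minimal SVM/LOOCV sketch. `expr`: samples x transcripts matrix of normalized
# counts (restricted to the classifier-specific gene list); `labels`: factor
# with levels "cancer" and "healthy". Names and settings are illustrative.
library(e1071)
library(pROC)

n     <- nrow(expr)
pred  <- character(n)
score <- numeric(n)                     # per-sample cancer probability, for ROC

for (i in seq_len(n)) {                 # leave each sample out exactly once
  fit <- svm(x = expr[-i, ], y = labels[-i],
             kernel = "radial", probability = TRUE)
  p <- predict(fit, expr[i, , drop = FALSE], probability = TRUE)
  pred[i]  <- as.character(p)
  score[i] <- attr(p, "probabilities")[, "cancer"]
}

mean(pred == as.character(labels))      # LOOCV accuracy
auc(roc(labels, score))                 # AUC, as computed with pROC

# "Random classifiers": rerun the loop with permuted labels, e.g.
# labels_perm <- sample(labels), to estimate chance-level accuracy.
```

In the study, the SVM parameters gamma and cost were additionally tuned by internal cross-validation (e1071 provides this via tune.svm), which this sketch omits for brevity.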
We did not detect sufficient differences in mRNA levels to discriminate patients with non-metastasized from patients with metastasized tumors, suggesting that the altered platelet profile is predominantly influenced by the molecular tumor type and, to a lesser extent, by tumor progression and metastases. We next determined whether we could discriminate three different types of adenocarcinomas in the gastrointestinal tract, i.e., CRC, PAAD, and HBC, by analysis of the TEP profiles. We developed a CRC/PAAD/HBC algorithm that correctly classified the mixed TEP samples (n = 90) with an overall accuracy of 76% (mean overall accuracy of random classifiers: 42%, SD: ± 5%, p < 0.01, C). In order to determine whether the TEP mRNA profiles allowed for multiclass cancer diagnosis across all tumor types and healthy donors, we extended the SVM/LOOCV classification test using a combination of algorithms that classified each individual sample of the training cohort (n = 175) as healthy donor or one of six tumor types ( D and S2E). The multiclass cancer diagnostics test reached an average accuracy of 71% (mean overall accuracy of random classifiers: 19%, SD: ± 2%, p < 0.01, D), demonstrating significant multiclass cancer discriminative power in the platelet mRNA profiles. The classification capacity of the multiclass SVM-based classifier was confirmed in the validation cohort of 108 samples, with an overall accuracy of 71% ( E). An overall accuracy of 71% might not be sufficient for introduction into cancer diagnostics. However, for the samples initially misclassified by the SVM algorithm's top-ranked call, the second-ranked classification was correct in 60% of the cases, yielding an overall accuracy of 89% when the first- and second-ranked classifications are combined. The low validation score of HBC samples can be attributed to the relatively low number of samples and possibly to the heterogeneous nature of this group of cancers (hepatocellular cancers and cholangiocarcinomas).

Companion Diagnostics Tumor Tissue Biomarkers Are Reflected by Surrogate TEP mRNA Onco-signatures

Blood provides a promising biosource for the detection of companion diagnostics biomarkers for therapy selection. We selected platelet samples of patients with distinct therapy-guiding markers confirmed in matching tumor tissue. Although the platelet mRNA profiles contained undetectable or low levels of these mutant biomarkers, the TEP mRNA profiles did allow us to distinguish patients with KRAS mutant tumors from those with KRAS wild-type tumors in PAAD, CRC, NSCLC, and HBC patients, and EGFR mutant tumors in NSCLC patients, using algorithms specifically trained on biomarker-specific input gene lists (all p < 0.01 versus random classifiers, A–3E; ). Even though the number of samples analyzed is relatively low and the risk of algorithm overfitting needs to be taken into account, the TEP profiles also distinguished patients with HER2-amplified, PIK3CA mutant, or triple-negative BrCa, and NSCLC patients with MET overexpression (all p < 0.01 versus random classifiers, F–3I). We subsequently compared the diagnostic accuracy of the TEP mRNA classification method with a targeted KRAS (exons 12 and 13) and EGFR (exons 20 and 21) amplicon deep sequencing strategy (∼5,000× coverage) on the Illumina MiSeq platform, using prospectively collected blood samples of patients with localized or metastasized cancer.
This method did allow for the detection of individual mutant KRAS and EGFR sequences in both plasma DNA and platelet RNA, indicating sequestration and potential education capacity of mutant, tumor-derived RNA biomarkers in TEPs. Mutant KRAS was detected in 62% and 39%, respectively, of plasma DNA (n = 103, kappa statistic = 0.370, p < 0.05) and platelet RNA (n = 144, kappa statistic = 0.213, p < 0.05) samples of patients with a KRAS mutation in primary tumor tissue. The sensitivity of the plasma DNA tests was relatively poor, as reported by others, which may partly be attributed to the loss of plasma DNA quality due to relatively long blood sample storage (EDTA blood samples were stored up to 48 hr at room temperature before plasma isolation). For discriminating KRAS mutant from wild-type tumors in blood, the TEP mRNA profiles provided superior concordance with the tissue molecular status (kappa statistic = 0.795–0.895, p < 0.05) compared to KRAS amplicon sequencing analysis of both plasma DNA and platelet RNA. Thus, TEP mRNA profiles can harness potential blood-based surrogate onco-signatures for tumor tissue biomarkers that enable cancer patient stratification and therapy selection.

TEP Profiles Provide an All-in-One Biosource for Blood-Based Liquid Biopsies in Patients with Cancer

Unequivocal discrimination of the primary versus metastatic nature of a tumor may be difficult and hamper adequate therapy selection. Since the TEP profiles closely resemble the different tumor types as determined by their organ of origin—regardless of systemic dissemination—this potentially allows for organ-specific cancer diagnostics. Hence, we selected all healthy donors and all patients with primary or metastatic tumor burden in the lung (n = 154), brain (n = 114), or liver (n = 127). We performed "organ exams" and instructed the SVM/LOOCV algorithm to determine, for lung, brain, and liver, the presence or absence of cancer (96%, 91%, and 96% accuracy, respectively), with cancer subclassified as primary or metastatic tumor (84%, 93%, and 90% accuracy, respectively) and, in the case of metastases, to identify the potential organ of origin (64%, 70%, and 64% accuracy, respectively). The platelet mRNA profiles enabled assignment of the cancer to the different organs with high accuracy. In addition, using the same TEP mRNA profiles, we were able to again indicate the biomarker status of the tumor tissues (90%, 82%, and 93% accuracy, respectively).
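As a pointer for readers, the concordance values above are Cohen's kappa statistics. A minimal computation from a 2 × 2 table of tissue status versus blood-based call is sketched below in base R; the counts are made-up placeholders for illustration, not the study data.

```r
# Cohen's kappa from a confusion table (rows: tissue status, columns:
# blood-based call). The counts are illustrative placeholders only.
tab <- matrix(c(40,  5,
                 8, 47),
              nrow = 2, byrow = TRUE,
              dimnames = list(tissue = c("mutant", "wildtype"),
                              blood  = c("mutant", "wildtype")))

po    <- sum(diag(tab)) / sum(tab)                       # observed agreement
pe    <- sum(rowSums(tab) * colSums(tab)) / sum(tab)^2   # chance agreement
kappa <- (po - pe) / (1 - pe)
kappa                                                    # ~0.74 for these counts
```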
The use of blood-based liquid biopsies to detect, diagnose, and monitor cancer may enable earlier diagnosis of cancer, lower costs by tailoring molecular targeted treatments, improve convenience for cancer patients, and ultimately supplement clinical oncological decision-making. Current blood-based biosources under evaluation demonstrate suboptimal sensitivity for cancer diagnostics, in particular in patients with localized disease. So far, none of the current blood-based biosources, including plasma DNA, exosomes, and CTCs, has been employed for multiclass cancer diagnostics, hampering their implementation for early cancer detection. Here, we report that molecular interrogation of blood platelet mRNA can offer valuable diagnostic information for all cancer patients analyzed—spanning six different tumor types. Our results suggest that platelets may be employable as an all-in-one biosource to broadly scan for molecular traces of cancer in general and to provide a strong indication of tumor type and molecular subclass. This includes patients with localized disease, possibly allowing for targeted diagnostic confirmation using routine clinical diagnostics for each particular tumor type. Since the discovery of circulating tumor material in blood of patients with cancer and the recognition of the clinical utility of blood-based liquid biopsies, a wealth of studies has assessed the use of blood for cancer diagnostics, prognostication, and treatment monitoring. Through the development of highly sensitive targeted detection methods, such as targeted deep sequencing, droplet digital PCR, and allele-specific PCR, the utility and applicability of liquid biopsies for clinical implementation have accelerated. These advances previously allowed for a pan-cancer comparison of various biosources and revealed that in >75% of cancers, including advanced-stage pancreas, colorectal, breast, and ovarian cancer, cell-free DNA is detectable, although detection rates are dependent on the grade of the tumor and the depth of analysis. Here, we show that the platelet RNA profiles are affected in nearly all cancer patients, regardless of the type of tumor, although the abundance of tumor-associated RNAs seems variable among cancer patients. In addition, surrogate RNA onco-signatures of tissue biomarkers, also in 88% of localized KRAS mutant cancer patients as measured by the tumor-specific and pan-cancer SVM/LOOCV procedures, are readily available from a minute amount (100–500 pg) of platelet RNA. As whole blood can be stored for up to 48 hr at room temperature prior to isolation of the platelet pellet, while maintaining high-quality RNA and the dominant cancer RNA signatures, TEPs can be more readily implemented in daily clinical laboratory practice and could potentially be shipped prior to further blood sample processing. Blood platelets are widely involved in tumor growth and cancer progression. Platelets sequester solubilized tumor-associated proteins and spliced and unspliced mRNAs, and platelets also directly interact with tumor cells, neutrophils, circulating NK cells, and circulating tumor cells. Interestingly, in vivo experiments have revealed breast cancer-mediated systemic instigation by supplying circulating platelets with pro-inflammatory and pro-angiogenic proteins, supporting outgrowth of dormant metastatic foci. Using a gene ontology methodology, CAGE, we correlated TEP-cancer signatures with publicly available curated datasets.
Indeed, we identified widespread correlations with cancer tissues, hypoxia, platelet signatures, and the cytoskeleton, possibly reflecting the "alert" and pro-tumorigenic state of TEPs. We observed strong negative correlations with RNAs implicated in RNA translation, T cell immunity, and interleukin signaling, implying diminished needs of TEPs for RNAs involved in these biological processes or orchestrated translation of these RNAs to proteins. We observed that the tumor-specific educational programs in TEPs are predominantly influenced by tumor type and, to a lesser extent, by tumor progression and metastases. Although we were not able to measure significant differences between non-metastasized and metastasized tumors, we do not exclude that the use of larger sample sets could allow for the generation of SVM algorithms that do have the power to discriminate between certain stages of cancer, including those with in situ carcinomas and even pre-malignant lesions. In addition, different molecular tumor subtypes (e.g., HER2-amplified versus wild-type BrCa) result in different effects on the platelet profiles, possibly caused by different "educational" stimuli generated by the different molecular tumor subtypes. Altogether, the RNA content of platelets in patients with cancer depends on the transcriptional state of the bone marrow megakaryocyte, complemented by sequestration of spliced RNA, release of RNA, and possibly cue-specific pre-mRNA splicing during platelet circulation. Partial or complete normalization of the platelet profiles following successful treatment of the tumor would enable TEP-based disease recurrence monitoring, requiring the analysis of follow-up platelet samples. Future studies will be required to address the tumor-specific "educated" profiles at both the (small non-coding) RNA and protein levels and to determine the potential of gene ontology-guided, blood-based cancer classification. In conclusion, we provide robust evidence for the clinical relevance of blood platelets for liquid biopsy-based molecular diagnostics in patients with several types of cancer. Further validation is warranted to determine the potential of surrogate TEP profiles for blood-based companion diagnostics, therapy selection, longitudinal monitoring, and disease recurrence monitoring. In addition, we expect the self-learning algorithms to further improve by including significantly more samples. For this approach, isolation of the platelet fraction from whole blood should be performed within 48 hr after blood withdrawal; the platelet fraction can subsequently be frozen for cancer diagnosis. Also, future studies should address causes and anticipated risks of the outlier samples identified in this study, such as healthy donors classified as cancer patients. Systemic factors such as chronic or transient inflammatory diseases, or cardiovascular events and other non-cancerous diseases, may also influence the platelet mRNA profile and require evaluation in follow-up studies, possibly also including individuals predisposed to cancer.
Sample Collection and Study Oversight

Blood was drawn from all patients and healthy donors at the VU University Medical Center, Amsterdam, the Netherlands, or the Massachusetts General Hospital (MGH), Boston, in 6 ml purple-cap BD Vacutainers containing the anti-coagulant EDTA. To minimize the effects of long-term storage of platelets at room temperature and loss of platelet RNA quality and quantity, samples were processed within 48 hr after blood collection. Blood samples of patients were collected pre-operatively (GBM) or during follow-up in the outpatient clinic (CRC, NSCLC, PAAD, BrCa, HBC). Nine of the included cancer patient samples were follow-up samples of the same patients, collected within months of the first blood collection (five in NSCLC, two in PAAD, and one each in BrCa and HBC). Cancer patients with localized disease were defined as patients without known metastasis from the primary tumor to distant organ(s), as noted by the physician or by additional imaging and/or pathological tests. Patients with glioblastoma, a tumor that metastasizes rarely, were regarded as late-stage (high-grade) cancers. Samples for both the training and validation cohort were collected and processed similarly and simultaneously. Tumor tissues of patients were analyzed for the presence of genetic alterations by tissue DNA sequencing, including next-generation sequencing SNaPShot, assessing 39 genes over 152 exons with an average sequencing coverage of >500, including KRAS, EGFR, and PIK3CA. Assessment of MET overexpression in non-small cell lung cancer FFPE slides was performed by immunohistochemistry (anti-Total cMET SP44 rabbit monoclonal antibody (mAb), Ventana, or the A2H2-3 anti-human MET mAb). The estrogen and progesterone receptor status of BrCa tumor tissues and the HER2 amplification status of BrCa tumor tissue were determined using immunohistochemistry and fluorescence in situ hybridization, respectively, and scored according to the routine clinical diagnostics protocol at the MGH, Boston. Healthy donors had not been diagnosed with cancer at the moment of blood collection or previously. This study was conducted in accordance with the principles of the Declaration of Helsinki. Approval was obtained from the institutional review board and the ethics committee at each hospital, and informed consent was obtained from all subjects. Clinical follow-up of healthy donors is not available due to anonymization of these samples according to the ethical rules of the hospitals.

Support Vector Machine Classifier

For binary (pan-cancer) and multiclass sample classification, a support vector machine (SVM) algorithm was used as implemented in the e1071 R-package. In principle, the SVM algorithm determines the location of all samples in a high-dimensional space, in which each axis represents an included transcript and a sample's expression level of that transcript determines its location on the axis. During the training process, the SVM algorithm draws a hyperplane that best separates two classes, based on the distance of the closest sample of each class to the hyperplane, with the two sample classes positioned on either side of the hyperplane. Subsequently, a test sample with masked class identity is positioned in the high-dimensional space and its class is "predicted" from the distance of that sample to the constructed hyperplanes. For the multiclass SVM classification algorithm, a one-versus-one (OVO) approach was used, as sketched below.
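A minimal illustration of the OVO multiclass setup in R follows, under the same assumed inputs as the earlier sketch (a normalized expression matrix and a class factor; all object names are ours). e1071's svm() fits the one-versus-one sub-classifiers automatically when the class factor has more than two levels, and its class-probability output also yields the second-ranked calls used for the follow-up classifications reported in the Results.

```r
# Multiclass OVO sketch. e1071 fits one-versus-one sub-classifiers natively
# when `labels_train` has more than two levels (e.g., six tumor types plus
# healthy). `expr_train`/`expr_test` and the labels are assumed inputs.
library(e1071)

fit <- svm(x = expr_train, y = labels_train,
           kernel = "radial", probability = TRUE)

p     <- predict(fit, expr_test, probability = TRUE)
probs <- attr(p, "probabilities")   # samples x classes probability matrix

# First- and second-ranked class per sample; the second-ranked call served
# as the "follow-up" classification for initially misclassified samples.
ranking <- t(apply(probs, 1,
                   function(x) colnames(probs)[order(x, decreasing = TRUE)]))
first_call  <- ranking[, 1]
second_call <- ranking[, 2]
```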
In this OVO scheme, each class is compared with every other individual class, so the SVM algorithm defines multiple hyperplanes. To cross-validate the algorithm for all samples in the training cohort, the SVM algorithm was trained on all samples in the training cohort minus one, while the remaining sample was used for (blind) classification. This process was repeated until each sample had been predicted once (leave-one-out cross-validation [LOOCV] procedure). The percentage of correct predictions was reported as the classifier's accuracy. To assess the predictive value of the SVM algorithm on an independent dataset, not previously involved in the SVM training process and thus entirely new to the algorithm, the algorithm was trained on the training dataset, all SVM parameters were fixed, and the samples belonging to the validation cohort were predicted. In addition, an iterative (100×) process was performed in which samples of the dataset were randomly subsampled into a training and a validation set (training:validation ratio of 60:40 for all cancer classes and 70:30 for healthy individuals; within each sample class, samples were subsampled in this ratio according to the total size of the individual classes, i.e., class-proportional, stratified subsampling), and the mean accuracy of all individual classifications was reported. Internal performance of the SVM algorithm could be improved by enabling the SVM tuning function, which determines optimal parameters of the SVM algorithm (gamma, cost) by randomly subsampling the dataset used for training ("internal cross-validation"). Prior to construction of the SVM algorithm, transcripts with low expression (<5 reads in all samples) were excluded and read counts were normalized as described in the corresponding section (differential expression of transcripts). For each individual prediction, feature selection (identification of transcripts with notable influence on the predictive performance) was performed by ANOVA testing for differences, yielding classifier-specific input gene lists (a minimal sketch of this filtering step follows below). mRNAs with a LogCPM >3 and a p value corrected for multiple hypothesis testing (FDR) of <0.95 (pan-cancer KRAS), <0.90 (CRC, PAAD, and NSCLC KRAS and HER2-amplified BrCa), <0.80 (PIK3CA BrCa), <0.70 (NSCLC EGFR), <0.50 (triple-negative BrCa), <0.30 (MET-overexpression NSCLC), <0.10 (CRC/PAAD/HBC), <0.0001 (multiclass tumor type and individual tumor class-healthy), and <0.00005 (pan-cancer/healthy-cancer) were included. Internal SVM tuning was enabled to improve predictive performance. All individual tumor class versus healthy donor and molecular pathway SVM algorithms were tuned by a (default) 10-fold internal cross-validation. The pan-cancer/healthy-cancer, multiclass tumor type, and gastrointestinal CRC/PAAD/HBC SVM algorithms were tuned by a 2-fold internal cross-validation. The training cohorts of the pan-cancer and multiclass tumor type tests, the individual tumor class versus healthy donor tests, the gastrointestinal CRC/PAAD/HBC test, and all molecular pathway tests were analyzed using a LOOCV approach. To increase classification specificity in the multiclass tumor type test, additional binary and multiclass classifier algorithms were developed, namely the pan-cancer test ( F and 1G) and the HBC-CRC, HBC-PAAD, BrCa-CRC, BrCa-CRC-NSCLC, and BrCa-HD-GBM-NSCLC tests, evaluated in both the training and validation cohort separately, which were applied sequentially to the multiclass tumor type test.
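The referenced transcript filtering and feature-selection step is sketched minimally below. The thresholds match the pan-cancer settings above; `counts` (transcripts × samples) and `group` are assumed inputs, and the quasi-likelihood calls stand in for edgeR's ANOVA-like testing, whose exact invocation is not given in the text.

```r
# Feature-selection sketch: exclude low-expressed transcripts, TMM-normalize,
# and run an ANOVA-like test across sample groups with edgeR.
library(edgeR)

keep <- rowSums(counts >= 5) > 0   # drop transcripts with <5 reads in all samples
dge  <- DGEList(counts = counts[keep, ], group = group)
dge  <- calcNormFactors(dge)

design <- model.matrix(~ group)
dge    <- estimateDisp(dge, design)
fit    <- glmQLFit(dge, design)
res    <- glmQLFTest(fit, coef = 2:ncol(design))   # joint test over group coefficients

tab <- topTags(res, n = Inf)$table
selected <- rownames(tab)[tab$logCPM > 3 & tab$FDR < 0.00005]  # pan-cancer thresholds
```

The resulting `selected` transcripts would form the classifier-specific input gene list for the corresponding SVM.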
Samples predicted as either condition of a supplemental classifier were re-evaluated using that classifier, and the resulting tumor class was regarded as the follow-up classification. In addition, samples predicted as the all-female breast cancer class but of male origin, as determined by the gender-specific RNAs ( B), and samples predicted as healthy while predicted as cancer in the pan-cancer test, were automatically assigned to the class with the second-highest predictive strength, as supplied by the SVM output. To determine the accuracy rates of the classifiers that can be obtained by chance, class labels of the samples used by the SVM algorithm for training were randomly permuted ("random classifiers"). This process was performed for 100 LOOCV classification procedures. P values were determined by counting the random classifier LOOCV-classification accuracies that yielded similar or higher total accuracy rates compared to the observed total accuracy rate. The predictive strength was also used as input to generate a receiver operating characteristic (ROC) curve, as implemented in the R-package pROC (version 1.7.3). Organ exams were calculated based on the compiled results of the SVM/LOOCV of the training cohort and the subsequent prediction of the validation cohort, spanning in total 283 samples. The pan-cancer binary SVM, the multiclass SVM, and all molecular pathway SVM algorithms were processed individually. Samples included for each organ exam (all healthy donors, all samples with the primary tumor in a particular organ, and all samples with known metastases to the particular organ) were selected. Only samples with correct predictions at a particular level of the organ exam were passed to the next level for evaluation. Counts of correct and false predictions in the "mutational subtypes" stage were determined from all individual molecular pathway SVM algorithms in which the selected samples were included.

Correlative Analysis of Gene Set Enrichment

Correlative Analysis of Gene Set Enrichment (CAGE) was performed in the online platform R2 (R2.amc.nl). To enable analyses of RNA-sequencing read counts in a micro-array-based statistical platform, counts-per-million normalized read counts were voom-transformed, using sequencing batch and sample group as variables, and uploaded into the R2 environment. The mRNAs correlating most strongly (FDR < 0.01) with a tumor type, or with all tumor classes combined (pan-cancer), compared to all other classes were used to generate a class-specific gene signature. These individual signatures were subsequently correlated with 3,875 curated gene sets as provided by the Broad Institute ( http://www.broadinstitute.org/gsea ). The top 25 ranking correlations were manually annotated by two independent researchers (M.G.B. and B.A.W.), and shared annotated terms were reported after agreement of both researchers.
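A minimal sketch of the local preprocessing described above follows, assuming `counts`, a sequencing `batch` factor, and a `group` factor (names are ours). The correlation against curated gene sets itself ran inside the R2 platform, so only the voom step and an illustrative per-transcript class correlation are shown.

```r
# Voom-transform CPM-normalized read counts with sequencing batch and sample
# group as model variables, as done prior to upload into the R2 platform.
library(edgeR)
library(limma)

dge    <- DGEList(counts = counts)
dge    <- calcNormFactors(dge)
design <- model.matrix(~ batch + group)
v      <- voom(dge, design)          # log2-CPM expression values with weights

# Illustrative class-signature step: per-transcript correlation with
# membership of one tumor type (the actual signatures were derived in R2).
is_class  <- as.numeric(group == "NSCLC")
r         <- apply(v$E, 1, cor, y = is_class)
signature <- names(sort(r, decreasing = TRUE))[1:100]   # top correlating transcripts
```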
Blood was drawn from all patients and healthy donors at the VU University Medical Center, Amsterdam, the Netherlands, or the Massachusetts General Hospital (MGH), Boston, in 6 ml purple-cap BD Vacutainers containing the anti-coagulant EDTA. To minimize effects of long-term storage of platelets at room temperature and loss of platelet RNA quality and quantity, samples were processed within 48 hr after blood collection. Blood samples of patients were collected pre-operatively (GBM) or during follow-up in the outpatient clinic (CRC, NSCLC, PAAD, BrCa, HBC). Nine cancer patient samples included were follow-up samples of the same patient collected within months of the first blood collection (five samples in NSCLC, two samples in PAAD, and one sample in BrCa and HBC). Localized disease cancer patients were defined as cancer patients without known metastasis from the primary tumor to distant organ(s), as noticed by the physician or additional imaging and/or pathological tests. Patients with glioblastoma, a tumor that metastasizes rarely, were regarded as late-stage (high-grade) cancers. Samples for both training and validation cohort were collected and processed similarly and simultaneously. Tumor tissues of patients were analyzed for the presence of genetic alterations by tissue DNA sequencing, including next-generation sequencing SNaPShot, assessing 39 genes over 152 exons with an average sequencing coverage of >500, including KRAS , EGFR , and PIK3CA . Assessment of MET overexpression in non-small cell lung cancer FFPE slides was performed by immunohistochemistry (anti-Total cMET SP44 Rabit monoclonal antibody (mAb), Ventana, or the A2H2-3 anti-human MET mAb . The estrogen and progesterone status of BrCa tumor tissues and the HER2 amplification of BrCa tumor tissue were determined using immunohistochemistry and fluorescent in situ hybridization, respectively, and scored according to the routine clinical diagnostics protocol at the MGH, Boston. Healthy donors were at the moment of blood collection, or previously, not diagnosed with cancer. This study was conducted in accordance with the principles of the Declaration of Helsinki. Approval was obtained from the institutional review board and the ethics committee at each hospital, and informed consent was obtained from all subjects. Clinical follow-up of healthy donors is not available due to anonymization of these samples according to the ethical rules of the hospitals.
For binary (pan-cancer) and multiclass sample classification, a support vector machine (SVM) algorithm was used implemented by the e1071 R-package. In principal, the SVM algorithm determines the location of all samples in a high-dimensional space, of which each axis represents a transcript included and the sample expression level of a particular transcript determines the location on the axis. During the training process, the SVM algorithm draws a hyperplane best separating two classes, based on the distance of the closest sample of each class to the hyperplane. The different sample classes have to be positioned at each side of the hyperplane. Following, a test sample with masked class identity is positioned in the high-dimensional space and its class is “predicted” by the distance of the particular sample to the constructed hyperplanes. For the multiclass SVM classification algorithm, a One-Versus-One (OVO) approach was used. Here, each class is compared to all other individual classes and thus the SVM algorithm defines multiple hyperplanes. To cross validate the algorithm for all samples in the training cohort, the SVM algorithm was trained by all samples in the training cohort minus one, while the remaining sample was used for (blind) classification. This process was repeated for all samples until each sample was predicted once (leave-one-out cross-validation [LOOCV] procedure). The percentage of correct predictions was reported as the classifier’s accuracy. To assess the predictive value of the SVM algorithm on an independent dataset, which is not previously involved in the SVM training process and thus entirely new for the algorithm, the algorithm was trained on the training dataset, all SVM parameters were fixed, and the samples belonging to the validation cohort were predicted. In addition, an iterative (100×) process was performed in which samples of the dataset were randomly subsampled in a training and validation set (ratio training:validation = 60:40 (all cancer classes) or 70:30 (healthy individuals), per sample class samples were subsampled in this ratio according the total size of the individual classes (class-proportional, stratified subsampling)) and mean accuracy of all individual classifications was reported. Internal performance of the SVM algorithm could be improved by enabling the SVM tuning function, which implies optimal determination of parameters of the SVM algorithm (gamma, cost) by randomly subsampling the dataset used for training (“internal cross-validation”) of the algorithm. Prior to construction of the SVM algorithm, transcripts with low expression (<5 reads in all samples) were excluded and read counts were normalized as described in the (differential expression of transcripts). For each individual prediction, feature selection (identification of transcripts with notable influence on the predictive performance) was performed by ANOVA testing for differences, yielding classifier-specific input gene lists . mRNAs with a LogCPM >3 and a p value corrected for multiple hypothesis testing (FDR) of <0.95 (pan-cancer KRAS ), <0.90 (CRC, PAAD, and NSCLC KRAS and HER2 -amplified BrCa), <0.80 ( PIK3CA BrCa), <0.70 (NSCLC EGFR ), <0.50 (triple negative-status BrCa), <0.30 (MET-overexpression NSCLC), <0.10 (CRC/PAAD/HBC), <0.0001 (multiclass tumor type and individual tumor class-healthy), and <0.00005 (pan-cancer/healthy-cancer) were included. Internal SVM tuning was enabled to improve predictive performance. 
All individual tumor class versus healthy donors and molecular pathway SVMs algorithms were tuned by a (default) 10-fold internal cross-validation. The pan-cancer/healthy-cancer, multiclass tumor type, and the gastro-intestinal CRC/PAAD/HBC SVM algorithms were tuned by a 2-fold internal cross-validation. The training cohort of the pan-cancer and multiclass tumor type, the individual tumor classes versus healthy donor tests, the gastro-intestinal CRC/PAAD/HBC test, and all molecular pathway tests were analyzed using a LOOCV approach. To increase classification specificity in the multiclass tumor type test, additional binary and multiclass classifiers algorithms were developed, namely the pan-cancer test ( F and 1G), HBC-CRC, HBC-PAAD, BrCa-CRC, BrCa-CRC-NSCLC, and BrCa-HD-GBM-NSCLC tests, evaluated in both the training and validation cohort separately, which were applied sequentially to the multiclass tumor type test. Samples predicted as either condition of the supplemental classifier were all re-evaluated using the filter. The latter tumor class classification was regarded as the follow-up classification. In addition, samples predicted as the all-female breast cancer class, but of male origin as determined by the gender-specific RNAs ( B), and samples predicted as healthy, while in the pan-cancer test predicted as cancer, were automatically assigned to the class with second predictive strength, as supplemented by the SVM output. To determine the accuracy rates of the classifiers that can be obtained by chance, class labels of the samples used by the SVM algorithm for training were randomly permutated (“random classifiers”). This process was performed for 100 LOOCV classification procedures. P values were determined by counting the overall random classifier LOOCV-classification accuracies that yielded similar or higher total accuracy rates compared to the observed total accuracy rate. The predictive strength was also used as input to generate a receiver operating curve (ROC) as implemented in the R-package pROC (version 1.7.3). Organ exams were calculated based on the compiled results of the SVM/LOOCV of the training cohort and subsequent prediction of the validation cohort, spanning in total 283 samples. The pan-cancer binary SVM, the multiclass SVM, and all molecular pathway SVM algorithms were processed individually. Samples included for each organ exam (all healthy donors, all samples with primary tumor in a particular organ, and all samples with known metastases to the particular organ) were selected. Only samples with correct predictions at a particular level of the organ exam were passed to the next level for evaluation. Counts of correct and false predictions in the “mutational subtypes”-stage were determined from all individual molecular pathway SVM algorithms in which the selected samples were included.
Correlative Analyses of Gene Set Enrichment (CAGE) analysis was performed in the online platform R2 (R2.amc.nl). To enable analysis of RNA-sequencing read counts in a micro-array-based statistical platform, counts-per-million normalized read counts were voom-transformed, using sequencing batch and sample group as variables, and uploaded into the R2 environment. Highly correlating mRNAs (FDR < 0.01) of a tumor type, or of all tumor classes combined (pan-cancer), compared with all other classes were used to generate a class-specific gene signature. These individual signatures were subsequently correlated with 3,875 curated gene sets as provided by the Broad Institute ( http://www.broadinstitute.org/gsea ). The top 25 ranking correlations were manually annotated by two independent researchers (M.G.B. and B.A.W.), and shared annotated terms were reported after agreement of both researchers.
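The voom transformation step can be sketched in R with the edgeR and limma packages. The code below assumes a raw count matrix counts (rows = transcripts, columns = samples) and per-sample factors batch and group; the low-expression filter mirrors the exclusion criterion described above, and all object names are illustrative.

    library(edgeR)
    library(limma)

    # keep transcripts with at least 5 reads in at least one sample
    # (i.e., drop transcripts with <5 reads in all samples)
    keep <- rowSums(counts >= 5) > 0
    dge <- DGEList(counts = counts[keep, ])
    dge <- calcNormFactors(dge)  # normalization factors for counts per million

    # model sequencing batch and sample group as variables, as described above
    design <- model.matrix(~ batch + group)

    # voom yields log2 counts-per-million with precision weights; v$E is the
    # expression matrix that would be uploaded into the R2 environment
    v <- voom(dge, design)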
M.G.B., B.A.T., P.W., and T.W. designed the study and wrote the manuscript. E.F.S., D.P.N., H.M.V., J.C.R., and B.A.T. provided clinical samples. M.G.B., N.S., J.T., F.R., P.S., J.D., B.Y., H.V., and E.P. performed sample processing for mRNA-seq. R.J.A.N., P.S., H.V., E.P., and T.W. designed and performed amplicon sequencing assays. M.G.B., N.S., I.K., J.D., B.A.W., J.K., N.A., E.P., and T.W. performed data analyses and interpretation. All authors provided critical comments on the manuscript.
P.S., H.V., E.P., R.J.A.N., and T.W. are employees of thromboDx BV. R.J.A.N. and T.W. are shareholders and founders of thromboDx BV.
Modulating the cholinergic system—Novel targets for deep brain stimulation in Parkinson's disease

INTRODUCTION

Parkinson's disease (PD) is a progressive neurological disease characterized by motor symptoms, primarily resulting from the alpha-synuclein (α-syn) induced loss of dopaminergic neurons in the substantia nigra pars compacta (SNc) and their striatal terminals (Braak et al., ). In 2016, nearly 6.1 million people worldwide were affected by PD, and the global burden of the disease has nearly doubled over recent years (Dorsey et al., ). Virtually all PD patients (98.6%) suffer from at least one non-motor symptom, and approximately 75% of patients who survive more than 10 years will develop dementia (Aarsland & Kurz, ; Barone et al., ). Severe cognitive impairment usually emerges at later disease stages in PD (Aarsland et al., ). In contrast, dementia with Lewy bodies (DLB) should be suspected in patients presenting with cognitive impairment manifesting before or concomitant with motor onset. PD dementia (PDD) and DLB share many clinical, neurochemical, and morphological features. Both are characterized by pathological α-syn aggregates, neuronal loss, and neuritic Lewy body inclusions in midbrain neurons (Bohnen & Albin, ). In clinical practice, it can be challenging to distinguish PDD from DLB. According to the current Movement Disorder Society diagnostic guidelines, the distinction between the two entities is still based solely on the time elapsed between the emergence of cognitive and motor symptoms (i.e., in DLB, cognitive symptoms occur before or within 12 months after the onset of parkinsonism) (McKeith et al., ). Even though distinct alterations of neurochemistry and histopathology have been reported for DLB and PDD (Bohnen & Albin, ; Jellinger, ), there is substantial pathophysiological overlap, and most likely PDD and DLB represent different ends of an α-syn induced neurodegenerative process (Jellinger, ).

Cognitive function, but also gait and posture, rely on the integrity of the cholinergic system. Importantly, there is a close interplay between dopaminergic and cholinergic transmission in the basal ganglia. For example, reduced dopaminergic input from the SNc in PD drives cholinergic hyperactivity in the striatum, which explains why anticholinergic drugs improve motor symptoms, and tremor in particular (Calabresi et al., ; Clarke, ). In addition to this striatal imbalance, α-syn related neurodegeneration of the pedunculopontine nucleus (PPN) and the nucleus basalis of Meynert (NBM) causes a decrease of the cholinergic tone (Bohnen et al., ). Alongside the striatum, the PPN and the NBM are the two main cholinergic resources of this highly complex and largest neurotransmitter system in the central nervous system (Bohnen & Albin, ; He et al., ). In advanced stages of PD, cognitive and gait dysfunction are frequent and often burdensome (Lieberman et al., ; Perez-Lloret et al., ; Zhang et al., ). Gait is particularly impaired during dual-task exercises, suggesting that gait dysfunction has a relevant cognitive component (Bohnen et al., ; O'Shea et al., ; Yogev et al., ). Decreased nigro-striatal dopaminergic integrity and α-syn induced pathology of subcortical and cortical cholinergic resources together are important drivers of the cholinergic dysfunction underlying these often debilitating symptoms in PD (Calabresi et al., ).
Drug and surgical treatments of PD primarily focus on improving motor symptoms (Nemade et al., ). Although dopamine replacement therapy is successful for most motor symptoms (Nemade et al., ), it has limited effects on axial motor symptoms such as postural instability, freezing of gait (FOG), and falls (Smulders et al., ). If axial symptoms are unresponsive to dopaminergic treatment, pharmacological options are often futile. However, there is evidence that administration of acetylcholine esterase inhibitors (AChEIs) may improve falls and gait to some extent (Henderson et al., ). Furthermore, the evidence for non-pharmacological treatments with effects on gait has grown in recent years (Delgado-Alvarado et al., ). FOG can be ameliorated by passive treatment options, such as non-invasive stimulation techniques (i.e., repetitive transcranial magnetic stimulation; Kim et al., ), or by active treatment options like physical training (Canning et al., ) and cognitive programs (Fietzek et al., ). These active and passive treatment options have been shown to induce long-lasting effects on gait.

The use of anticholinergics in PD to ameliorate motor symptoms is often limited by severe cognitive deterioration (Perez-Lloret et al., ). In fact, cognitive impairment in PD can effectively be treated by strengthening the cholinergic system using AChEIs such as rivastigmine (Emre et al., ). In contrast, the application of dopaminergic medication has shown disappointing results in treating cognitive symptoms (Calabresi et al., ; Kulisevsky et al., ). Non-pharmacological treatment options and their effects on cognition in PD patients have been reviewed recently (Pupíková & Rektorová, ). Cognitive training aimed at improving attention and working memory showed the strongest evidence of cognitive benefit (Fellman et al., ; París et al., ; Petrelli et al., ), whereas physical training and non-invasive brain stimulation techniques cannot be recommended based on the current literature (Altmann et al., ; Gobbi et al., ; Hashimoto et al., ; Pupíková & Rektorová, ; Amano & Hass, ; Silveira et al., ).

Beyond these treatment options, deep brain stimulation (DBS) has become an established neurosurgical treatment for PD, leading to sustained improvement of motor function and quality of life (Lozano et al., ). Although the exact mechanisms of DBS remain elusive, several theories have been proposed. Normalizing the firing rate of pathological activity patterns (e.g., beta-band oscillations) and thereby regulating an overactive basal ganglia system is currently regarded as the most promising hypothesis (Agnesi et al., ; Alosaimi et al., ; Ashkan et al., ). However, these theories neglect global stimulation effects in areas remote from the stimulation site. In this context, the concept of a modulatory stimulation effect on non-dopaminergic neurotransmitter systems has gained increasing interest (Alosaimi et al., ; Lozano et al., ). Indeed, DBS targeting the PPN and NBM has emerged as a promising strategy to enhance the cholinergic tone and consequently ameliorate gait disturbances and cognitive deficits in PD, which are otherwise often difficult to treat (Bohnen, Yarnall, et al., ). PPN and NBM DBS have been explored in animal studies, small case series, and a few randomized controlled trials, providing promising results. However, studies directly comparing different stimulation paradigms are scarce, and clear recommendations for clinical application are lacking.
Thus, patient and target selection and the identification of the most efficient stimulation paradigm for an optimal clinical outcome remain challenging. Here, we provide a concise overview of proposed concepts of cholinergic modulation by NBM and PPN DBS in PD. We discuss the utility of targeting the cholinergic system to improve axial and cognitive symptoms and seek to provide guidance for patient selection, surgical approach, and stimulation paradigms.

ANATOMY OF THE CHOLINERGIC SYSTEM AND CHANGES IN PD

The PPN, NBM, and striatum are the key nodes of the cholinergic neurotransmitter system (Bohnen & Albin, ).

2.1 The PPN

The PPN lacks a clear anatomical definition and predominantly consists of cholinergic (25%–30%) and glutamatergic (40%–45%) neurons (Tubert et al., ). Historically, it has been categorized into the caudal pars compacta and the rostral pars dissipata based on its cytoarchitecture and neurochemical markers (Geula et al., ; Lin et al., ; Mesulam et al., ; Pienaar et al., ). Most cholinergic neurons are found in the caudal part of the PPN (Pahapill & Lozano, ). The dorsal part of the PPN initiates movement, while the ventral part coordinates stopping (Lin et al., ; Sherman et al., ). The PPN is crucial for governing gait (Gut & Mena-Segovia, ; Ricciardi et al., ). However, it also belongs to the reticular activating system and is involved in controlling the sleep–wake cycle. For example, PPN dysfunction has been associated with daytime sleepiness and with the pathophysiology of rapid eye movement sleep behavior disorder (Boeve et al., ; Chambers et al., ). Its connections with cortical areas such as the somatosensory and presupplementary motor areas underline its role in cognitive and motivational processes. Degeneration of the PPN leads to impairments in attention and motivation as well as compulsive behaviors (French & Muthusamy, ). It has been suggested that the PPN plays a modulatory role in motor control, influencing cognitive and behavioral functions by shaping reward signaling and adaptive behavior. This modulatory role is crucial for sensorimotor integration, and alterations of this delicate system play an important role in the development of gait impairment in PD (Gut & Mena-Segovia, ). On an electrophysiological level, cholinergic neurons in the PPN are characterized as type II neurons based on their regular-spiking spontaneous activity (3–16 Hz) and a high density of outward-rectifier Kv4 potassium channels (Takakusaki & Kitai, ; Tubert et al., ).

The PPN is integral to a circuit governing movement. Both its afferent and efferent connections are complex (displayed in Figure ) and have been the subject of several reviews (Lin et al., ; Tubert et al., ). The PPN receives substantial input from the basal ganglia, notably gamma-aminobutyric acid (GABA) projections from the substantia nigra pars reticulata (SNr) and the globus pallidus internus (GPi). Additionally, it receives dopaminergic input from the SNc and glutamatergic projections from the subthalamic nucleus (STN). Furthermore, it receives excitatory input from the motor cortex and from deep cerebellar and midbrain nuclei (Lin et al., ). The PPN has glutamatergic and cholinergic efferent connections to various regions, including the cortex, thalamus, basal ganglia, limbic structures, several brainstem nuclei, and the spinal cord (Tubert et al., ).
One of the best-studied efferent connections is the direct excitatory glutamatergic projection of the PPN to dopaminergic neurons in the SNc, which has been characterized in electrophysiological experiments using organotypic rodent brain slices (Futami et al., ) and in other ex vivo studies (Di Loreto et al., ; Scarnati et al., , ; Tubert et al., ).

2.2 The NBM

The NBM is located in the basal forebrain above and parallel to the optic nerve, with its medial border being the lateral ventricle (Liu et al., ). Four cholinergic "cluster" cell groups (Ch1–Ch4) without distinct boundaries have been identified in studies of non-human primates using immunohistochemistry and histochemistry (Mesulam et al., , ). Ch1 (medial septal nucleus) and Ch2 (vertical limb of the diagonal band nucleus) both project to the hippocampal complex (Liu et al., ). Ch3 (horizontal limb of the diagonal band nucleus) projects to the olfactory bulb (Liu et al., ). The largest is the Ch4 subgroup, which exhibits cellular variation and can be subdivided into further subgroups projecting to different cortical regions and the amygdala. Specifically, the anterior Ch4 subregion innervates the limbic region, with the anteromedial portion projecting to the cingulate cortex, while the anterolateral Ch4 subregion projects to the frontoparietal cortex and the amygdala (Figure ). The posterior Ch4 subregion establishes connections with the superior temporal and temporal polar regions (Liu et al., ) (Figure ). Notably, these cortical NBM connections are reciprocal. The NBM lacks anatomically defined borders between its subregions; therefore, in the human brain, the classification of the Ch4 subregion has been simplified to an anterior, an intermediate, and a posterior subregion (Liu et al., ).

The NBM plays a crucial role in memory, arousal, attention, and perception (Goard & Dan, ). There is mounting evidence from both histopathological and imaging studies, in cross-sectional as well as longitudinal designs, linking NBM atrophy to cognitive impairment in PD (Gang et al., ; Palmer et al., ; Schulz et al., ; Whitehouse et al., ). Moreover, recent studies have suggested that structural alterations of the NBM are predictive of cognitive decline in prodromal PD (Zhang et al., ). That said, cholinergic cell death in the NBM is likely not the sole driver of cognitive decline in PD. For example, meta-analyses of imaging data clearly demonstrated widespread cortical atrophy in demented PD patients, whereby the insular cortex and hippocampi emerged as the sites of most severe volume loss (Mihaescu et al., ). It remains unclear if and to what extent cholinergic dysfunction is responsible for these findings. Nevertheless, there is little doubt that NBM dysfunction is closely related to cognitive impairment in PD (Bohnen & Albin, ), although alterations of other neurotransmitters such as serotonin may also contribute to cognitive impairment (Prado et al., ). Over three decades ago, stimulation of the NBM was shown to increase cortical cholinergic release (Kurosawa et al., ). Consequently, there is renewed interest in NBM DBS for addressing cognitive dysfunction in Alzheimer's disease (AD), DLB, and PDD (Bohnen, Yarnall, et al., ; Gratwicke et al., , ; Nazmuddin et al., ).

2.3 The striatum

While only 1%–2% of neurons in the striatum are cholinergic, these cholinergic interneurons (CINs) provide the highest density of cholinergic markers in the brain (Bohnen & Albin, ). CINs are large (20–50 μm) aspiny neurons.
The balance between the dopaminergic and cholinergic transmitter systems is crucial for maintaining the striatal microcircuitry that controls movement and cognition in the physiological state (Ztaou & Amalric, ). CINs receive dopaminergic input from the SNc and the PPN and synapse within the striatum onto GABAergic medium spiny neurons (MSNs), which form the largest neuronal population in the striatum (Calabresi et al., ; Izzo & Bolam, ). Inhibitory and excitatory signaling among CINs and MSNs is mediated by dopaminergic (D1 and D2), muscarinergic, and nicotinergic receptors (Calabresi et al., ; Sealfon & Olanow, ). MSNs in the direct pathway are activated by dopamine via dopaminergic D1 receptors and inhibited by acetylcholine (ACh) via muscarinergic M4 receptors, while indirect-pathway MSNs are inhibited by dopamine via D2 receptors and activated by ACh via muscarinergic M1 receptors. All CINs express D2 receptors, while only a small portion expresses D1 receptors (Gonzales & Smith, ; Lim et al., ). Both receptor types are involved in the functional circuitry of working memory (Castner et al., ; Wang et al., ). Dopamine exerts an inhibitory effect on CINs via D2 receptors, which decreases striatal ACh release (Abercrombie & DeBoer, ; Consolo et al., ; Lehmann et al., ; Pisani et al., ; Stoof et al., ; Yan et al., ), while dopamine increases cholinergic transmitter concentrations via activation of D1 receptors (Abercrombie & DeBoer, ; Acquas & Di Chiara, ; Damsma et al., ; Di Chiara et al., ; Steinberg et al., ). Together, dopaminergic and cholinergic receptors shape synaptic strength and plasticity (Calabresi et al., ). Long-term potentiation and depression are two forms of synaptic plasticity and are regarded as a model for the storage and retrieval of neuronal information (Calabresi et al., ). The complex structural and functional striatal-cortical interaction is crucially involved in executive function, including the planning of movement and goal-directed behavior (Calabresi et al., ).

2.4 α-Syn pathology in PD and the cholinergic system

While pathological α-syn aggregation has been identified as a hallmark of PD pathology, these aggregates are not unique to this disease. Abnormal formation of α-syn fibrils can also be found in atypical parkinsonian syndromes (e.g., multisystem atrophy or pure autonomic failure), which present with distinct clinical phenotypes (Calabresi et al., ). Furthermore, rapid eye movement sleep behavior disorder has been recognized as a prodromal state of alpha-synucleinopathies such as PD and multisystem atrophy. According to the Braak stages of PD, α-syn aggregates form in the glossopharyngeal and vagal nerves and the anterior olfactory bulb (stage 1), proceed into the dorsal raphe nuclei and magnocellular portions of the reticular formation (stage 2), until the pathology reaches the midbrain and forebrain including the pontine tegmentum (stage 3). Finally, α-syn aggregates reach the remaining subcortical areas and the cortex (stages 4–6) (Braak et al., ). This widespread retrograde propagation of α-syn throughout the entire brain implies that neuropathological degeneration in PD extends beyond the dopaminergic nigrostriatal pathway and affects various other neurotransmitter systems. Recent studies suggest that the spread of α-syn does not occur uniformly and that regional differences in α-syn pathology derive from selective neuronal vulnerability to α-syn aggregation.
Along these lines, the extent of dendritic arborization and the electrophysiological properties of a neuron determine its vulnerability to aging, environmental toxins, and gene mutations (Surmeier et al., ). For instance, dopaminergic neurons in the SNc are characterized by long, unmyelinated, and highly branched axons with many transmitter release sites, which correlates with oxidative stress and a higher uptake of α-syn aggregates (Braak et al., ; Pacelli et al., ; Surmeier et al., ). However, striatal CINs also possess long and highly branched axons but do not degenerate to the same extent in the parkinsonian brain (Surmeier et al., ). Therefore, besides morphology, other mechanisms conferring higher vulnerability must exist. Dopaminergic neurons in the SNc are autonomous pacemakers and fire in a slow, tonic pattern, creating high cytosolic concentrations of calcium (Guzman et al., ; Puopolo et al., ; Surmeier et al., ). In particular, the slow calcium oscillations (Puopolo et al., ; Putzier et al., ) and high cytosolic and mitochondrial calcium concentrations (Guzman et al., ; Hayashi et al., ; Sanchez-Padilla et al., ) are considered tipping points between the physiological condition and the beginning of Lewy body disease (Surmeier et al., ). It is unclear whether similar changes are present in cholinergic neurons and thus may contribute to differences in cellular vulnerability. That said, a recently published study exploring α-syn vulnerability of striatal neuronal populations in mice found a comparable magnitude of α-syn aggregation and related cell death in SNc dopaminergic neurons and PPN cholinergic neurons after local α-syn injection into the SNc and PPN, respectively (Geibl et al., ).

2.4.1 Degeneration of the PPN in PD

Lewy body formation in the PPN is accompanied by neurodegeneration. Because of neurodegeneration and excessive descending inhibition from the GPi and SNr, the PPN is less active (Thevathasan & Moro, ). Specifically, neuronal recordings from PPN electrodes have revealed a decrease in α-band activity in the PD brain. α-band activity is positively associated with gait performance and is detectable especially in the caudal part of the PPN. Decreased α-band activity in the PPN has been associated with FOG in PD patients (Thevathasan & Moro, ). In PD, 40%–70% of cholinergic neurons in the lateral PPN are lost to neurodegeneration, leading to deficits in gait and posture (Bohnen et al., , , ; Chambers et al., ; Gai et al., ; Hirsch et al., ; Jellinger, ; Rinne et al., ; Zweig et al., ). Until recently, studies of changes in the cholinergic system in PD were largely based on post-mortem autopsy studies. Today, modern molecular imaging methods, including single photon emission computed tomography, positron emission tomography (PET), and magnetic resonance imaging (MRI), allow for non-invasive correlation of cholinergic degeneration and structural changes with clinical symptoms (Albin et al., ). For example, cholinergic neuronal loss in the PPN is more prominent in PD patients who experience falls than in those who do not (Karachi et al., ; Nardone et al., ). Moreover, the degeneration of cholinergic neurons has been associated with cognitive deficits, particularly attentional and executive dysfunction, which has led to the hypothesis that impaired gait-balance in PD may be predominantly caused by deficits in the attentional cognitive domain (Albin et al., ).
2.4.2 Degeneration of the NBM in PD

In PD, up to 80% of cholinergic cells in the NBM are depleted (Liu et al., ), particularly the large neuronal population within the Ch4 subregion (Hall et al., ). The extent of cholinergic neuron loss in the NBM in PD is comparable to that in AD and is most pronounced in PDD patients (Hall et al., ; Liu et al., ). The cholinergic loss and concomitant atrophy may serve as predictive markers for the severity of cognitive deficits in PD, as demonstrated by significant correlations with cognitive impairment in several studies (Choi et al., ; Gratwicke et al., ; Gratwicke & Foltynie, ; Shimada et al., ; Whitehouse et al., ). Along these lines, NBM atrophy, in particular of the Ch4 subregion, identified in MRI volumetric studies, correlated with lower Montreal Cognitive Assessment (Gill et al., ; Zadikoff et al., ) test results (Gratwicke & Foltynie, ). Of note, the extent of NBM atrophy in PD is comparable to that observed in AD (Candy et al., ). A recent study using [18F]-fluoroethoxybenzovesamicol vesicular ACh transporter PET and post-mortem volumetric MRI found that the ACh concentration in the basal forebrain of 101 non-demented PD patients correlated with NBM volume (Ray et al., ). Interestingly, ACh binding in the basal forebrain differs between cognitively impaired and cognitively intact PD patients. In one study, 57 PD patients were divided into a subgroup with mild cognitive impairment and a subgroup without cognitive dysfunction (Van Der Zee, Kanel, Gerritsen, et al., ). Both groups presented lower cortical ACh binding than the control group (Van Der Zee, Kanel, Gerritsen, et al., ). However, higher-than-normal ACh binding was observed in de novo PD patients in cortical and subcortical subregions (Van Der Zee, Kanel, Gerritsen, et al., ). These findings have led to the hypothesis that an initial up-regulation of ACh in the early stages of the disease may compensate for nigrostriatal dopaminergic degeneration to maintain cognitive function (Bohnen et al., ; Sarter et al., ; Van Der Zee, Kanel, Gerritsen, et al., ; Van Der Zee, Kanel, Müller, et al., ). As cholinergic degeneration progresses, this mechanism ultimately fails, and cognitive decline becomes inevitable (Bohnen, Roytman, et al., ). Neurodegeneration in the NBM is likely a result of α-syn pathology (Del Tredici & Braak, ; Selden, ). NBM dysfunction is believed to begin prior to NBM atrophy, probably because of α-syn-associated inflammatory processes (Rocha et al., ). Inflammation can be imaged by assessing the extracellular free-water fraction on diffusion-weighted MRI (Febo et al., ; Pasternak et al., ). Of note, increases in the free-water fraction correlated with measures of executive dysfunction (an early symptom of cognitive decline in PD), while NBM volume correlated with memory impairment (a late symptom of cognitive decline in PD). These findings suggest a stepwise progression of cholinergic dysfunction and associated cognitive decline in PD that can be tracked with different imaging modalities (Crowley et al., ). Another clinical study using [11C]-methyl-4-piperidinyl propionate PET, which reflects cortical acetylcholinesterase activity, reported reduced posterior basal forebrain volume only in those PD patients who exhibited reduced cortical acetylcholinesterase activity; similarly, patients with mild cognitive impairment showed lower cortical acetylcholinesterase activity (Schumacher et al., ).
These results further support the tight relationship between NBM volume loss, cholinergic deficiency, and cognitive decline in PD.

2.4.3 Degeneration of the striatum in PD

Striatal interneurons appear to be partially spared by neurodegeneration in PD (Calabresi et al., ). This has been attributed to a relatively low vulnerability to α-syn pathology and to increased axonal sprouting of CINs as a result of dopaminergic deafferentation from the SNc, leading to an imbalance between the cholinergic and dopaminergic transmitter systems (Calabresi et al., ; Spehlmann & Stahl, ). Dopaminergic cell loss in PD reduces dopamine levels in the striatum and induces a shift toward the indirect pathway. As a result, CINs become more excitable, release ACh excessively, and induce synaptic reorganization (Ztaou & Amalric, ). The reduction of striatal dopamine partly explains the dysexecutive syndrome typical of PD, caused by an imbalance of mechanisms implicated in synaptic plasticity, including long-term potentiation, depression, and synaptic depotentiation (Calabresi et al., ). Accordingly, PD patients experience difficulties in tasks that require cognitive flexibility, such as adapting to new situations or devising new strategies. Dopamine replacement therapy has been shown to improve some aspects of executive dysfunction in early PD (Cooper et al., ). However, dopaminergic treatment falls short of mitigating cognitive dysfunction on a more global level (Brusa et al., ; Kulisevsky et al., ).
MODULATING THE CHOLINERGIC SYSTEM BY DBS

DBS of the STN and GPi improves PD-related motor symptoms such as rigidity, tremor, and bradykinesia (Bohnen, Yarnall, et al., ; Bove et al., ; Lozano et al., ). However, its effects on postural instability, falls, and FOG are limited (Brozova et al., ; Castrioto, ; Lozano et al., ; St. George et al., ). Several reviews have highlighted the association between increased cholinergic tone and improvements in gait, balance, and falls. For instance, Morris et al. recently reviewed pharmacological, imaging, and electrophysiological data to investigate the role of ACh in axial symptoms; the strongest associations were described for gait speed and falls with increased and decreased cholinergic levels, respectively (Morris et al., ). Both NBM and PPN DBS are generally well tolerated, and few adverse events have been reported. Side effects at both targets include the known voltage-dependent stimulation effects, which can comprise oscillopsia, paresthesia, burning sensations, myoclonus, sleep induction, and incontinence (Hamani et al., ). In most cases, these side effects can be resolved by adjusting the stimulation parameters.
Data on severe complications such as intracranial hemorrhage are insufficient to allow comparisons with traditional DBS targets (Welter et al., ). That said, the majority of available reports do not suggest a high rate of severe adverse events, independent of the target (PPN or NBM) or surgical approach (single vs. multiple targets) (Ferraye et al., ; Gratwicke et al., , ; Maltête et al., ; Mazzone et al., ; Picton et al., ; Stefani et al., ). DBS of the striatum, especially the caudate nucleus, has been controversial, as earlier STN DBS trials attributed poor cognitive outcomes to accidental lead placement through the caudate nucleus (Isler et al., ; Morishita et al., ; Witt et al., , ). However, these negative effects have been refuted by more recent studies (Bot et al., ; Tesio et al., ). Stimulation of the ventral striatum (nucleus accumbens) has shown some merit in a few cases of dystonia and Tourette syndrome (Johnson et al., ) and in diseases caused by disrupted reward circuits, such as obsessive-compulsive disorder (Mantione et al., ), eating disorders (Mantione et al., ), and obesity (Lee et al., ; Mantione et al., ). Only few data, stemming from animal experiments, are available on the effects of striatal DBS. Recently, striatal MSNs expressing D1-type receptors were targeted by cell-specific optogenetic stimulation in dopamine-depleted mice (Kim et al., ). Stimulation influenced brain rhythms, especially in the dorsal striatum and when low-frequency stimulation (LFS; 4 Hz) was applied, leading to significant improvement of head movement in dopamine-depleted mice. The effects of LFS and high-frequency stimulation (HFS) on cortico-striatal synapses in levodopa (LD)-induced dyskinesia were previously investigated in rats. LD-induced dyskinesia is believed to rely on maladaptive plasticity at cortico-striatal synapses. Rats were rendered hemi-parkinsonian and subsequently treated chronically with LD. Whereas some rats showed motor improvement without dyskinesia, others developed debilitating dyskinesia in response to LD treatment. HFS induced long-term potentiation in both groups, while LFS led to depotentiation in the non-dyskinetic group only (Calabresi et al., ; Picconi et al., ). HFS (200 Hz) of the anterior caudate nucleus in rhesus monkeys has been shown to enhance learning, especially the acquisition of specific visuomotor associations (Williams & Eskandar, ), while negative effects on cognition were observed following HFS of the dorsal striatum in rats (Schumacher et al., ). Despite these encouraging reports, the striatum is not an established DBS target. Furthermore, to our knowledge, no trials are available exploring the effect of striatal DBS, including its effect on the cholinergic system, in PD. This may have several reasons. Foremost, the striatum is a large, complex, and functionally diverse structure in which only 1%–2% of neurons are cholinergic. It plays a critical role in multiple neural circuits, including those involved in motor control, reward processing, habit formation, and cognitive flexibility (French & Muthusamy, ). Targeting a specific part of the striatum without affecting neighboring areas or circuits is difficult, and small variations in electrode placement may lead to different outcomes or unintended side effects.

3.1 PPN DBS

The PPN has been proposed as a target to alleviate axial symptoms and has been studied extensively in pre-clinical and clinical studies with promising results (Jenkinson et al., ; Mazzone et al., ; Plaha & Gill, ; Stefani et al., ).
3.1.1 Animal studies

The PPN receives inhibitory GABAergic afferents from the GPi and the substantia nigra pars reticulata (SNr) (Lin et al., ). Thus, manipulating the GABAergic input is considered a strategy to disinhibit the PPN and, consequently, alleviate akinesia. Notably, microinjections of bicuculline (a GABA-A receptor antagonist) in a non-human primate 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine (MPTP) PD model improved motor function to a degree comparable to LD treatment (Nandi et al., ). This pioneering work highlighted the PPN as a potential stimulation target for medication-refractory gait and posture disability in advanced PD (Nandi et al., ). Subsequent experiments employed PPN DBS in macaques and other non-human primates. Akinesia was induced by unilateral PPN stimulation at high frequencies, ranging from 45 to 100 Hz. Facial expression, limb and body motion, and behavior of the monkeys were assessed using video recordings (Nandi et al., ). Similarly, Jenkinson et al. demonstrated that unilateral HFS (100 Hz) reduced motor activity in monkeys, as assessed with a partially blinded motor score after video recording of motor behavior. In contrast, unilateral LFS (5 Hz) had the opposite effect, prompting movement and reversing akinesia (Jenkinson et al., ). In rodents, most PPN connections exist bilaterally but have a dominant side. Pre-clinical findings have suggested that, even though unilateral stimulation has an effect on the contralateral side, bilateral stimulation might be more effective (Hamani et al., ). These studies have furthered the idea of a more effective low-frequency, bilateral stimulation. This concept has subsequently been translated into human studies.

Given the co-existence of glutamatergic, GABAergic, and cholinergic neurons in the PPN, the question arises whether the effects of PPN DBS are indeed mediated by modulation of cholinergic neurotransmission. Along these lines, Wen et al. performed microdialysis experiments to assess cholinergic transmitter levels in brain tissue of a common PD animal model. Injections of 6-hydroxydopamine into the medial forebrain bundle of rats reduced concentrations of ACh in the ventrolateral thalamic nucleus. The observed changes in transmitter levels correlated with a Parkinson phenotype characterized by reduced stride length, reduced maximum area of paw contact on the floor, and reduced base of support (average width between either the front or the hind paws) (Wen et al., ), suggesting a strong association between gait impairment and low cholinergic transmitter levels. Importantly, unilateral LFS (25 Hz) of the PPN increased ACh levels, which was accompanied by an improvement of gait on CatWalk gait analysis (Wen et al., ). We evaluated the impact of chronic STN DBS on the cholinergic system in an MPTP mouse model. In line with the previous study, MPTP reduced choline acetyltransferase-positive neurons in the PPN compared to saline treatment. Mice exhibited a Parkinson phenotype as assessed by walking tests. Gait impairment was reversed by STN DBS. However, STN DBS did not alter the number of activated choline acetyltransferase-expressing neurons in the depleted PPN. This suggests that STN stimulation likely did not improve gait via modulation of the remaining cholinergic PPN neurons (Witzig et al., ).
Along these lines, a structural connectivity study showed that gait improvements in STN DBS-treated PD patients were linked to stimulation of fiber tracts connecting the STN and motor cortex (Gradinaru et al., ). Similarly, optogenetic studies of the STN have attributed the motor benefits of STN stimulation to the modulation of upstream connections between the STN and frontal cortices (Strelow et al., ). Thus, the positive effects on gait in our study were likely due to stimulation of fiber tracts outside the cholinergic system. That said, it is important to point out that STN DBS often fails to alleviate gait dysfunction and can even induce gait impairment in PD (Brozova et al., ; Castrioto, ; Lozano et al., ; St. George et al., ).

While evidence for a direct modulation of cholinergic transmission via PPN DBS is limited, there is strong evidence for a close relationship between PPN cholinergic integrity and gait disturbances in PD. In this vein, a vesicular ACh transporter knock-out mouse model was used to study cholinergic neurotransmission in the midbrain (Janickova et al., ). Modified mice lacked the vesicular ACh transporter in cholinergic neurons of the pedunculopontine and laterodorsal tegmental nuclei and showed impaired motor learning and coordination deficits in a standardized gait balance test (rotarod test), moved more slowly, and took smaller steps on the CatWalk test. These symptoms worsened with aging but reached a ceiling effect, highlighting the dominant role of PPN cholinergic neurons in gait control (Janickova et al., ). Another line of research emphasizing the pivotal role of PPN cholinergic neurons in the mediation of axial symptoms employed designer receptors exclusively activated by designer drugs (DREADDs), that is, artificially engineered protein receptors selectively targeted by certain ligands, to identify specific cell types activated by PPN DBS. Choline acetyltransferase transgenic mice, rendered parkinsonian by intra-nigral, monohemispheric stereotaxic administration of the ubiquitin-proteasomal system inhibitor lactacystin, received DREADDs to transiently activate surviving cholinergic PPN neurons. Behavioral testing of the transgenic mice showed improvements in postural stability, gait, sensorimotor integration, forelimb akinesia, and general motor activity. In addition, electrophysiological recordings of PPN neurons revealed increased spiking after treatment with DREADDs (Pienaar et al., ).

3.1.2 Human studies

While several studies have demonstrated positive effects on gait following PPN DBS, findings are equivocal (Bourilhon et al., ; Dayal et al., ). Potential reasons for the observed discrepancies include the differing time-points of surgery in the course of the disease and the exact location of electrode placement within the PPN. Thus, because of the lack of a clearly defined target within the PPN, results from these studies must be interpreted with caution. Furthermore, there are differences in how and which (axial) motor symptoms have been assessed. Most studies have used the Unified Parkinson's Disease Rating Scale III (UPDRS III) (Movement Disorder Society Task Force on Rating Scales for Parkinson's Disease, ), which is not well suited to measure axial symptoms (Thevathasan et al., ). To address this issue, customized questionnaires have been developed to facilitate the comparison of clinical stimulation effects on axial symptoms across studies (Dayal et al., ; Ferraye et al., ; Thevathasan et al., ).
Nevertheless, questionnaire-based assessments often lack intra- and inter-rater reliability. Thus, quantitative measures have been proposed for gait assessment. To this end, postural sway (deviations in the center of pressure) has been postulated as a suitable outcome measure. In a small study involving 13 PD patients with severe clinical balance impairment, PPN stimulation improved postural sway in both the medication ON and OFF state (Perera et al., ). The long-term efficacy and safety of unilateral PPN stimulation in PD patients with refractory gait and balance difficulties were assessed in a clinical trial at 2 and 4 years post-surgery using the UPDRS part II. At 2 years, patients reported a significant improvement of FOG compared to baseline. At 4 years, there was no significant change in any item of the UPDRS part II. However, patients reported improvements of falls in both the ON and OFF medication state (Mestre et al., ). A recent meta-analysis of 13 clinical trials has reviewed the effect of PPN LFS on motor symptoms, FOG, and falls, evaluated with gait-specific questionnaires (Yu et al., ). The included clinical trials showed substantial heterogeneity with respect to the exact electrode location within the PPN, timespan of clinical follow-up, patient age, disease duration, stimulation frequency, and daily dosage of LD. No improvement of the classical global motor symptoms (rigidity, tremor, and bradykinesia) was observed. However, virtually all trials reported a significant amelioration of gait and other axial symptoms (Yu et al., ). In a double-blind randomized cross-over study, nine PD patients with severe gait disorder were assessed 24 h after PPN surgery using the UPDRS III and behavioral gait assessment. Compared with stimulation frequencies of 60–80 Hz, lower frequencies of 10–25 Hz led to an amelioration of akinesia and a reduction of gait difficulties in 7/9 patients (Nosko et al., ). The better response to LFS may be partially explained by the electrophysiological properties of cholinergic neurons in the caudal PPN. Along these lines, several patch-clamp experiments have revealed a plateau phase of cholinergic neurons in the caudal PPN at frequencies of 40–60 Hz, while cholinergic neurons were deactivated at frequencies above 60 Hz (Garcia-Rill et al., ; Simon et al., ). Because the nuclei are interconnected, unilateral stimulation likely also exerts effects on the contralateral PPN (Hamani et al., ). Unilateral stimulation was also observed to enhance blood flow in the contralateral hemisphere, as evidenced by a PET study involving three patients with advanced PD in the OFF medication state, both at rest and during a lower limb motor task (Ballanger et al., ). Conversely, the paired nature of PPN connections suggests that bilateral stimulation may offer greater efficacy in modulating its function (Hamani et al., ). Only two studies have directly compared the effects of unilateral versus bilateral PPN DBS (Khan et al., ; Thevathasan, Cole, et al., ). In a cohort of five PD patients ON medication, Khan et al. reported an improvement of UPDRS motor scores by 5.7% with unilateral stimulation, while the same motor scores improved by 18.4% with bilateral stimulation (Khan et al., ). A second cohort of 17 PD patients was analyzed in a double-blind study using an objective spatiotemporal gait analysis. DBS-induced gait improvement was twice as large with bilateral as with unilateral stimulation (Thevathasan, Cole, et al., ).
That said, the findings of these studies should be interpreted cautiously, as the assessment of symptom improvement relied solely on UPDRS III scores (Khan et al., ; Thevathasan, Cole, et al., ). Alongside single-target stimulation, PPN DBS has also been combined with other targets, including the STN, GPi, and caudal zona incerta. Combining PPN DBS with other targets has the advantage of treating classical motor symptoms and axial symptoms at the same time. Simultaneous bilateral STN DBS and PPN LFS (20–25 Hz) has demonstrated an improvement of UPDRS III scores, falls, postural instability, and FOG in individual PD cases (Ferraye et al., ; Plaha & Gill, ; Stefani et al., ). Similarly, combining PPN DBS with GPi DBS has shown marked effects on gait initiation and FOG in a 66-year-old PD patient with severe peak-dose dyskinesia, ON freezing, and postural instability (Schrader et al., ). Notably, both GPi and PPN DBS alone reduced FOG, with PPN DBS being slightly more effective. However, the combination of both targets had a significantly larger impact on FOG compared with single-target stimulation (Schrader et al., ). Caudal zona incerta stimulation was addressed in a study of seven PD patients with predominant axial symptoms. Bilateral caudal zona incerta and PPN stimulation in combination were superior to single-target stimulation in improving both a composite axial subscore and UPDRS III motor scores (Khan et al., ). Within the PPN, the caudal part appears to be crucial for the effects of PPN DBS (Thevathasan et al., ; Yu et al., ), based on the topographical distribution of local field potentials (Tattersall et al., ; Thevathasan, Pogosyan, et al., ). Additional regional stimulation effects on gait in PD patients have been observed in the cuneiform nuclei and posterior parts of the PPN (pars dissipata and pars compacta), suggesting a slightly better response when stimulating posterior parts of the PPN (Goetz et al., ). This finding is in line with pre-clinical studies, which likewise have reported the best stimulation effects in the posterior part of the PPN (Garcia-Rill, ; Gut & Winn, ; Reese et al., ). Thus, most centers prefer to target the caudal part of the PPN (Hamani et al., ). However, given the small size and lack of anatomically defined inner boundaries, most trajectories will likely cover both the caudal and the rostral part of the PPN, which enables selective stimulation through the choice of active contacts.

3.1.3 Summary of PPN DBS

Animal studies clearly suggest a pivotal role of PPN cholinergic neurons as important drivers of gait control. Furthermore, declining PPN cholinergic tone in the parkinsonian state is linked to deterioration of locomotion and other axial symptoms. PPN DBS ameliorates axial symptoms and at the same time restores PPN cholinergic function, suggesting that DBS effects are relayed mainly through the cholinergic system. This is supported by the effect of AChEIs, such as rivastigmine (Emre et al., ), donepezil (Aarsland, ), and galantamine (Aarsland et al., ), which reduce the frequency of falls in PD patients (Perez-Lloret et al., ). Clinical data on the effects of PPN DBS in PD patients are still limited. While unilateral stimulation has shown significant improvement of gait, bilateral stimulation appears to be superior, which is perhaps not surprising given the need for bilateral limb activation during locomotion.
Even though reports are equivocal, LFS is likely more effective than HFS, which is in line with data from STN DBS suggesting that lower frequencies are favorable when gait problems are predominant (Conway et al., ). PPN DBS is currently performed in PD patients experiencing early and severe FOG, postural instability, or gait dysfunction, and may also be an option in patients with LD-refractory motor symptoms (Thevathasan et al., ). PPN DBS is therefore considered a rescue strategy after the development of severe FOG despite, or as a consequence of, STN and/or GPi DBS (Schrader et al., ). The rate and nature of PPN DBS-related risks and side effects appear to be similar to those of conventional DBS. There are no head-to-head studies comparing the effects of PPN DBS with STN or GPi stimulation. However, PPN DBS effects on PD cardinal motor symptoms are of a lower magnitude, whereas it is more efficient in the treatment of axial symptoms (Collomb-Clerc & Welter, ). Combining conventional targets such as the GPi or STN with the PPN, in our view, is the most viable option in PD patients eligible for classical DBS who show pronounced and LD-refractory FOG. That said, stimulation of multiple targets often requires parallel HFS and LFS. Even though some of the currently available DBS devices can handle multiple frequencies to some extent, implantation of two pulse generators may be necessary. Because of its small size and lack of defined anatomical subdivisions, targeting specific cell populations within the PPN is challenging, but stimulation of the caudal part, which harbors the majority of cholinergic neurons, seems to provide the best clinical outcome. Indeed, there has been extensive research on surgical protocols to define anatomical landmarks that facilitate the precise localization of the PPN and its subregions (Hamani et al., ). In this context, the intraoperative use of microelectrode recording and MRI-based visualization techniques has become increasingly important in facilitating electrode placement. However, even though different firing rates of neurons have been recorded during PPN surgery (Shimamoto et al., ; Tattersall et al., ; Weinberger et al., ), it is still challenging to allocate them to specific subregions (Hamani et al., ). As the PPN extends along the longitudinal axis of the brainstem, the trajectory will include both the caudal and rostral subregions of the PPN in most cases. That said, intraoperative recordings in combination with test stimulation of different contacts are feasible and may lead to the best outcome. PPN DBS effects appear to be sustained over months to years, but long-term follow-up data are lacking. A summary of relevant PPN DBS studies can be found in Table .

3.2 NBM DBS

The utilization of NBM DBS to increase cortical ACh levels has been explored as a potential therapeutic approach for improving cognitive symptoms in both PDD and DLB (Baldermann et al., ; Gratwicke et al., , ). DBS in PD patients with cognitive impairment is of particular significance, as PD patients with major cognitive deficits usually are not eligible for STN DBS because of the risk of cognitive deterioration (Foltynie & Hariz, ; Hariz et al., ).

3.2.1 Animal studies

Early pre-clinical studies have demonstrated an increase of cortical ACh mediated by both continuous and intermittent stimulation of the NBM (Casamenti et al., ; Kurosawa et al., , ; Rasmusson et al., ).
In rats subjected to low-frequency NBM stimulation (30 Hz), a 40% rise in ACh release within the parietal cortex was observed (Casamenti et al., ). Similarly, continuous LFS (20–50 Hz) in rats led to a twofold increase in cortical ACh (Kurosawa et al., ). Conversely, another study showed a higher release of cortical ACh following HFS (Rasmusson et al., ). In contrast to other studies, however, the latter study used pulsed rather than continuous stimulation and added atropine to enhance the evoked release of ACh (Rasmusson et al., ). A recent meta-analysis covering four animal studies reported NBM LFS (20–50 Hz) to induce the highest elevation of cortical ACh levels. No differences between continuous and intermittent stimulation were observed with regard to the effects on cortical ACh levels (Nazmuddin et al., ). The elevation of cortical ACh levels is believed to mediate a vasodilative effect, thereby increasing cortical blood flow. Along these lines, Biesold et al. have demonstrated that NBM stimulation in anesthetized rats led to an ipsilateral vasodilation of cortical vessels, which could be blocked by nicotinergic and muscarinergic antagonists (Biesold et al., ). Beyond the elevation of cholinergic concentration, NBM DBS likely also induces neuronal plasticity. For example, Kilgard and Merzenich have demonstrated that episodic NBM stimulation, paired with concomitant auditory stimuli in adult rats, resulted in a substantial and progressive reorganization of the primary auditory cortex (Kilgard & Merzenich, ).

The effects of NBM stimulation on different domains of cognition have previously been summarized in a review including 19 pre-clinical trials in rodents and non-human primates (Nazmuddin et al., ). The majority of these studies performed their stimulation experiments in wild-type rats, while two studies used transgenic AD rodent lines. Cognitive function was measured with tasks probing different memory processes, including encoding, consolidation, and retrieval. Effects mediated by NBM stimulation were observed mainly on encoding, immediate retention of memory, and speed of learning, while there was no effect on long-term memory. In general, stimulation-related effects emerged 24 h after training and lasted for up to 2 weeks (Nazmuddin et al., ). The majority of studies (14/19) applied unilateral stimulation (mostly in the right hemisphere), while the remaining studies used bilateral stimulation (Avila & Lin, ; Liu et al., , ; Mayse et al., ). None of these studies directly compared the outcome of unilateral with bilateral stimulation. Two studies compared a continuous with an intermittent stimulation protocol, reporting a clear superiority of intermittent stimulation (Koulousakis et al., ; Liu et al., ). Liu et al. compared intermittent and continuous stimulation in adult monkeys, corroborating this finding. Intermittent stimulation was applied as blocks of 100 stimulation pulses interleaved with pauses equivalent in length to 100 pulses. Continuous stimulation worsened working memory performance, while intermittent stimulation led to an improvement (Liu et al., ). Overall, the greatest improvement of spatial memory performance was observed with intermittent stimulation using biphasic electrical pulses (60/80 Hz) for 20 s interleaved with a 40-s pause (Nazmuddin et al., ). Moreover, two studies in rats and rhesus monkeys, respectively, have revealed positive effects on cognitive performance at frequencies of 60 Hz (Koulousakis et al., ; Liu et al., ).
Conversely, most pre-clinical studies have used higher frequencies of up to 120 Hz to stimulate the NBM and observed stronger cognitive benefits at higher frequencies. Some studies even reported worsening of cognitive performance after reducing the stimulation frequency (Huang et al., ; Liu et al., ).

3.2.2 Human studies

The first reported case of NBM DBS in PD was a 71-year-old PD patient with slowly progressive PDD receiving combined high-frequency STN stimulation and low-frequency NBM DBS. High-frequency STN DBS was administered for 3 months before low-frequency NBM stimulation was turned on. Isolated STN DBS improved motor symptoms but had no effect on cognitive function. Notably, cognitive performance, including attention, concentration, and drive, distinctly improved upon activating NBM stimulation and worsened after stimulation was turned off (Freund et al., ). In another report, a 68-year-old PD patient diagnosed with mild cognitive impairment underwent DBS surgery targeting the GPi and the NBM using a single electrode per hemisphere. The patient showed an improvement of UPDRS III scores by 61% 2 months after initiation of GPi stimulation. No further motor improvement was observed after NBM stimulation was added. However, combining NBM and GPi stimulation improved performance across various neuropsychological tests. These effects remained stable over 1 year, and no side effects were observed (Nombela et al., ). Two randomized cross-over clinical trials on NBM DBS in PDD and DLB were reported in 2020 (Gratwicke et al., ) and 2021 (Maltête et al., ), respectively. Surgery and stimulation were well tolerated. The cognitive assessments did not reveal significant stimulation-induced improvements, and one study even reported worsening (Maltête et al., ). That said, PET and functional MRI provided evidence for a modulation of regions and networks associated with cognitive function (Gratwicke et al., ; Maltête et al., ). In a randomized double-blind clinical trial, six patients with PDD and motor fluctuations received either bilateral low-frequency (20 Hz) NBM stimulation or sham stimulation for 6 weeks before crossing over to the other condition (Gratwicke et al., ). The intervention was well tolerated, with no serious adverse events reported. Although no improvements were observed in primary cognitive outcomes, NBM DBS ameliorated scores on the Neuropsychiatric Inventory compared to sham stimulation. Two patients experienced a significant reduction of visual hallucinations, and three patients reported an improvement of health-related quality of life (Gratwicke et al., ). In a phase-II double-blind cross-over pilot trial involving six participants diagnosed with advanced PD and cognitive impairment, Sasikumar et al. investigated the effects of single-trajectory DBS of the GPi and NBM on motor symptoms, cognitive performance, and biomarkers (Sasikumar et al., ). As expected, GPi DBS resulted in improvements of dyskinesia and motor fluctuations. NBM DBS in addition to GPi DBS led to reduced metabolism in right frontal and parietal cortical regions and enhanced functional connectivity in volume-of-tissue-activated analyses, as assessed by ¹⁸F-fluorodeoxyglucose PET and magnetoencephalography. However, these findings were not accompanied by concomitant cognitive improvement of the PD patients after 1 year (Sasikumar et al., ).
Another double-blind cross-over study including six patients with PDD undergoing NBM LFS (20 Hz) did not report cognitive improvement, but two patients exhibited a stimulation-induced slowing of cognitive decline (Cappon et al., ). Most human studies of NBM DBS in PDD and DLB used bilateral continuous low-frequency stimulation (20 Hz) (Nazmuddin et al., ). Although LFS seems to result in a more favorable outcome (Nazmuddin et al., ), the proposed negative effects of HFS were challenged by a recent study in 33 PD patients (Bogdan et al., ). Patients received GPi stimulation for motor fluctuations and dyskinesia. Because of the anatomical proximity of the GPi to the NBM, the most distal contact was located within the NBM in a subset of patients. Analysis of these patients with an active distal contact and assumed NBM high-frequency co-stimulation at 130–185 Hz showed no signs of cognitive decline after 12 months (Bogdan et al., ). Given these equivocal findings, the stimulation paradigm (i.e., intermittent vs. continuous) rather than the frequency applied may be crucial for achieving an optimal stimulation effect. To this end, Sasikumar et al. compared intermittent stimulation consisting of a pulse train (3 mA, 60 μs at 60 Hz), cycling between 20 s ON and 40 s OFF stimulation for 1 h daily, with continuous stimulation using the same parameters (a worked estimate of the stimulation load delivered under this paradigm is given at the end of this section). Intermittent stimulation significantly improved sustained attention, whereas continuous stimulation did not affect cognitive scores (Sasikumar et al., ). Most pre-clinical studies have used unilateral NBM stimulation to achieve cognitive improvement in rodents and non-human primates. Subsequent clinical trials have either been performed with bilateral stimulation, or NBM DBS was combined with other targets. No studies are available that directly compare the outcome of unilateral versus bilateral NBM DBS.

3.2.3 Summary of NBM DBS

The role of the NBM in cognition has been recognized for decades. The first attempt to modulate cognitive function by electric stimulation was performed by Turnbull et al. in 1984 (Turnbull et al., ). A 74-year-old patient with AD received unilateral NBM stimulation. After 9 months, ipsilateral cortical glucose levels increased, but cognitive function remained unchanged. Subsequent studies in both DLB and PDD have reported limited improvement of cognitive function and slowing of cognitive decline after NBM DBS, but also worsening of cognition in some cases. NBM DBS in DLB has been shown to modulate brain networks associated with cognition, but accompanying improvement of clinical parameters was mostly lacking. So far, there is no sufficient evidence for a significant and sustained cognitive improvement by NBM DBS in PD. This is particularly true for the available randomized controlled trials. Even though the NBM is a key factor in cognitive function, the complex and widespread brain changes associated with altered cognition in PD (Mihaescu et al., ) may explain the somewhat equivocal findings obtained from NBM DBS studies. Regarding complication rates and side effects of NBM DBS, the limited data available do not suggest an increased risk compared with conventional DBS. Moreover, NBM DBS is rarely performed in a single-target approach. Thus, it is difficult to provide an estimate of the target-related risk. Patients included in the available trials generally showed mild-to-moderate cognitive impairment. It is unclear whether NBM DBS is also effective in more severely cognitively impaired PD patients.
Clinical trials in AD patients suggest that NBM DBS may be efficacious in patients with severe cognitive impairment (Picton et al., ), but long-term data are scarce and inconclusive, and studies in severely affected PDD/DLB individuals are lacking. In light of the limited evidence of NBM DBS efficacy in PD and the largely lacking amelioration of PD motor symptoms, single-target NBM DBS, in our view, is not recommended; if applied, it should be combined with traditional targets, preferably in the framework of clinical studies. Currently, a randomized, sham-controlled trial is investigating combined STN and NBM DBS for PDD (DEMPARK DBS), with safety as the primary outcome and effects on cognition, daily functioning, motor skills, mood, caregiver burden, and economic aspects as secondary outcomes (Daniels et al., ). It is unclear whether NBM DBS should preferably be used in combination with either GPi or STN DBS. Both GPi and STN DBS have been shown to ameliorate motor symptoms, with STN DBS being superior in reducing dopaminergic medication after surgery (Bronstein et al., ; Follett et al., ). However, STN DBS led to a faster decline of Mattis Dementia Rating Scale scores (Weaver et al., ). Conversely, two other trials did not observe a different cognitive outcome between STN and GPi DBS after 12 months and 3 years, respectively (Boel et al., ; Odekerken et al., ). With regard to the cognitive outcome and other non-motor symptoms such as depression, combining NBM DBS with the GPi is preferred based on expert consensus and the largest randomized study (Bronstein et al., ; Follett et al., ). With respect to the identification of optimal stimulation parameters, there are conflicting results from animal and human studies. Whereas most rodent and non-human primate studies suggested greater efficacy with HFS, the first studies in human PD suggested that stimulation at 20 Hz may be beneficial. However, this has been challenged recently, and no clear recommendation can be made at this point. DBS efficacy appears to be better if intermittent rather than continuous stimulation paradigms are applied. That said, even though closed-loop stimulation paradigms are on the rise for conventional DBS (Stanslaski et al., ), adaptive stimulation of the NBM is not feasible at this point. While most studies applied unilateral stimulation, bilateral stimulation has been favored in more recent study designs. Lastly, the majority of pre-clinical NBM DBS studies were performed in either wild-type animals or AD models. Thus, the transferability to PD/DLB remains unclear. We have summarized the findings of the major NBM DBS studies in Table .
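To put the intermittent paradigm reported by Sasikumar et al. into perspective, the stimulation load it delivers can be estimated from the published parameters (3 mA, 60 μs, 60 Hz, 20 s ON/40 s OFF, 1 h daily). The following back-of-the-envelope calculation is illustrative only; it assumes charge-balanced biphasic pulses and takes the 60 μs value as the duration of a single pulse phase, neither of which is specified in the summary above.

$$
\begin{aligned}
Q_{\mathrm{pulse}} &= I \cdot t_{\mathrm{phase}} = 3\,\mathrm{mA} \times 60\,\mu\mathrm{s} = 0.18\,\mu\mathrm{C},\\
\text{duty cycle} &= \frac{20\,\mathrm{s}}{20\,\mathrm{s} + 40\,\mathrm{s}} = \frac{1}{3},\\
N_{\mathrm{pulses}} &= \underbrace{(60\,\mathrm{Hz} \times 20\,\mathrm{s})}_{1200\ \text{per ON epoch}} \times \underbrace{\frac{3600\,\mathrm{s}}{60\,\mathrm{s}}}_{60\ \text{cycles/h}} = 72{,}000\ \text{per session},\\
Q_{\mathrm{session}} &= N_{\mathrm{pulses}} \cdot Q_{\mathrm{pulse}} \approx 13\,\mathrm{mC}.
\end{aligned}
$$

Continuous stimulation with the same parameters would deliver 216,000 pulses per hour (≈39 mC), that is, three times the charge. Under these assumptions, the superior effect of intermittent stimulation on sustained attention cannot be explained by a higher stimulation dose, which supports the view that the temporal pattern itself is the relevant variable.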
Subsequent experiments employed PPN DBS in macaques and other non‐human primates. Akinesia was induced by unilateral PPN stimulation at high frequencies, ranging from 45 to 100 Hz. Facial expression, limb and body motion, and behavior of monkeys were assessed using video recordings (Nandi et al., ). Similarly, Jenkinson et al. demonstrated that unilateral HFS (100 Hz) reduced motor activity in monkeys, which was assessed with a partially blinded motor score after video recording of motor behavior. In contrast, unilateral LFS (5 Hz) had adverse effects, prompting movement and reversing akinesia (Jenkinson et al., ). In rodents, most PPN connections exist bilaterally, but have a dominant side. Pre‐clinical findings have suggested that, even though unilateral stimulation has an effect on the contralateral side, bilateral stimulation might be more effective (Hamani et al., ). These studies have furthered the idea of a more effective low‐frequency, bilateral stimulation. This concept has subsequently been translated into human studies. Given the co‐existence of glutamatergic, GABAergic, and cholinergic neurons in the PPN, the question arises whether the effects of PPN DBS are indeed mediated by modulation of cholinergic neurotransmission. Along these lines, Wen et al. performed microdialysis experiments to assess cholinergic transmitter levels in brain tissue of a common PD animal model. Injections of 6‐hydroxydopamine into the medial‐forebrain bundle of rats reduced concentrations of ACh in the ventrolateral thalamic nucleus. The observed changes of transmitter levels correlated with a Parkinson phenotype characterized by reduced stride length, reduced maximum area of paw contact on the floor, and reduced base of support (average width between either the front or the hint paw) (Wen et al., ), suggesting a strong association between gait impairment and low cholinergic transmitter levels. Importantly, unilateral LFS (25 Hz) of the PPN increased ACh levels, which was accompanied by an improvement of gait observed on CatWalk gait analysis (Wen et al., ). We evaluated the impact of chronic STN DBS on the cholinergic system in a 1‐methyl‐4‐phenyl‐1,2,3,6‐tetrahydropyridine mouse model. In line with the previous study, 1‐methyl‐4‐phenyl‐1,2,3,6‐tetrahydropyridine reduced choline acetyltransferase‐positive neurons in the PPN compared to saline treatment. Mice exhibited a Parkinson phenotype as assessed by walking tests. Gait impairment was reversed by STN DBS. However, STN DBS did not alter the number of activated choline acetyltransferase expressing neurons in the depleted PPN. This suggests that STN‐stimulation likely did not improve gait via the modulation of the remaining cholinergic PPN neurons (Witzig et al., ). Along these lines, a structural connectivity study showed that gait improvements in STN DBS treated PD patients were linked to stimulation of fiber tracts connecting the STN and motor cortex (Gradinaru et al., ). Similarly, optogenetic stimulation of the STN has attributed the motor benefits of STN stimulation to the modulation of upstream connections between the STN and frontal cortices (Strelow et al., ). Thus, the positive effects on gait in our study likely were because of stimulation of fiber tracts outside the cholinergic system. That said, it is important to point out that STN DBS often fails to alleviate gait dysfunction and even can induce gait impairment in PD (Brozova et al., ; Castrioto, ; Lozano et al., ; St. George et al., ). 
While evidence for a direct modulation of cholinergic transmission via PPN DBS is limited, there is strong evidence for a close relationship of PPN cholinergic integrity and gait disturbances in PD. In this vein, a vesicular ACh transporter knock‐out mouse model was used to study cholinergic neurotransmission in the midbrain (Janickova et al., ). Modified mice lacked the vesicular ACh transporter in the pedunculopontine and laterodorsal tegmental nuclei cholinergic neurons and showed impaired motor learning and coordination deficits in a standardized gait balance test (rotarod test), moved slower, and presented smaller steps on the catwalk test. These symptoms worsened with aging but reached a ceiling effect, highlighting the dominant role of PPN cholinergic neurons in gait control (Janickova et al., ). Another line of research, emphasizing the pivotal role of PPN cholinergic neurons in the mediation of axial symptoms employed artificially engineered protein receptors (designer receptors) selectively targeted by certain ligands (so‐called designer drugs (DREADDs)) to detect specific cell‐types, which are activated by PPN DBS. Choline acetyltransferase transgenic mice rendered parkinsonian by intra‐nigral, monohemispheric stereotaxic administration of the ubiquitin‐proteasomal system inhibitor, lactacystin, received DREADDs to transiently activate surviving cholinergic PPN neurons. Behavioral testing of transgenic mice showed improvements in postural stability, gait, sensorimotor integration, forelimb akinesia, and general motor activity. In addition, electrophysiological recordings of PPN neurons revealed increased spiking after treatment with DREADDs (Pienaar et al., ). 3.1.2 Human studies While several studies have demonstrated a positive outcome on gait following PPN DBS, findings are equivocal (Bourilhon et al., ; Dayal et al., ). Potential reasons for the discrepancies observed may derive from the differing time‐point of surgery in the course of the disease and the exact location of electrode placement within the PPN. Thus, because of the lack of a clearly defined target within the PPN, results from these studies must be interpreted with caution. Furthermore, there are differences in how and which (axial) motor symptoms have been assessed. Most studies have used the Unified Parkinson's Disease Rating Scale III (UPDRS III) (Movement Disorder Society Task Force on Rating Scales for Parkinson's Disease, ), which is not well suited to measure axial symptoms (Thevathasan et al., ). To address this issue, customized questionnaires have been developed to facilitate the comparison of clinical stimulation effects on axial symptoms across studies (Dayal et al., ; Ferraye et al., ; Thevathasan et al., ). Nevertheless, questionnaire‐based assessments often lack intra‐ and inter‐rater reliability. Thus, quantitative measures have been proposed for gait assessment. To this end, postural sway (deviations in center of pressure) has been postulated as a suitable outcome measure. In a small study, involving 13 PD patients with severe clinical balance impairment, PPN stimulation showed an improvement of postural sway in both the medication ON and OFF state (Perera et al., ). The long‐term efficacy and safety of unilateral PPN stimulation in PD patients with refractory gait and balance difficulties was assessed in a clinical trial at 2and 4 years post‐surgery using the UPDRS part II. At 2 years, patients reported a significant improvement of FOG compared to baseline. 
In 4 years, there was no significant change of any item of the UPDRS part II. However, patients reported improvements of falls in both the ON and OFF medication state (Mestre et al., ). A recent meta‐analysis of 13 clinical trials has reviewed the effect of PPN LFS on motor symptoms, FOG, and falls, evaluated with gait specific questionnaires (Yu et al., ). The clinical trials included showed substantial heterogeneity with respect to the exact electrode location within the PPN, timespan of clinical follow‐up, patient age, disease duration, stimulation frequency, and daily dosage of LD. No improvement of the classical global motor symptoms (rigidity, tremor, and bradykinesia) was observed. However, virtually all trials reported a significant amelioration of gait and other axial symptoms (Yu et al., ). In a double‐blind randomized cross‐over study, nine PD patients with severe gait disorder were assessed 24 h after PPN‐surgery using the UPDRS III, and behavioral gait assessment. Compared with stimulation frequencies of 60–80 Hz, lower frequencies of 10–25 Hz led to an amelioration of akinesia and a reduction of gait difficulties in 7/9 patients (Nosko et al., ). The better response to LFS may be partially explained by the electrophysiological properties of cholinergic neurons in the caudal PPN. Along these lines, several patch‐clamp experiments have revealed a plateau phase of cholinergic neurons in the caudal PPN at frequencies of 40–60 Hz, while cholinergic neurons were deactivated at frequencies above 60 Hz (Garcia‐Rill et al., ; Simon et al., ). Because of interconnected nuclei, unilateral stimulation likely also exerts effects on the contralateral PPN (Hamani et al., ). Unilateral stimulation was also observed to enhance blood flow in the contralateral hemisphere, as evidenced by a PET study involving three patients with advanced PD in the OFF medication state, both at rest and during a lower limb motor task (Ballanger et al., ). Conversely, the paired nature of PPN connections suggests that bilateral stimulation may offer greater efficacy in modulating its function (Hamani et al., ). Only two studies have directly compared the effects of unilateral versus bilateral PPN DBS (Khan et al., ; Thevathasan, Cole, et al., ). In a cohort of five PD patients ON medication, Khan et al. reported an improvement of UPDRS motor scores by 5.7% with unilateral stimulation, while the same motor scores were ameliorated by 18.4% with bilateral stimulation (Khan et al., ). A second cohort of 17 PD patients was analyzed in a double‐blind study using an objective spatiotemporal gait analysis. DBS induced improvement of gait was twice as good with bilateral compared with unilateral stimulation (Thevathasan, Cole, et al., ). That said, findings of these studies should be interpreted cautiously, as the assessment of symptom improvement relied solely on UPDRS III scores (Khan et al., ; Thevathasan, Cole, et al., ). Alongside single target stimulation, PPN DBS has also been combined with other targets, including the STN, GPi, and caudal zona inserta. Combining PPN DBS with other targets has the advantage of treating classical motor symptoms and axial symptoms at the same time. Simultaneous bilateral STN DBS and PPN LFS (20–25 Hz) has demonstrated an improvement of UPDRS III scores, falls, postural instability, and FOG in individual PD cases (Ferraye et al., ; Plaha & Gill, ; Stefani et al., ). 
Similarly, combining PPN DBS with GPi DBS has shown marked effects on gait initiation and FOG in a 66‐year‐old PD patient with severe peak‐dose dyskinesia, ON freezing, and postural instability (Schrader et al., ). Notably, both GPi and PPN DBS alone reduced FOG, with PPN DBS being slightly more effective. However, the combination of both targets had a significantly larger impact on FOG compared with single target stimulation (Schrader et al., ). Caudal zona inserta stimulation was addressed in a study of seven PD patients with predominant axial symptoms. Bilateral caudal zona inserta and PPN stimulation in combination were superior to single target stimulation in improving both a composite axial subscore and UPDRS III motor scores (Khan et al., ). Within the PPN, the caudal part appears to be crucial for the effects of PPN DBS (Thevathasan et al., ; Yu et al., ), based on the topographical distribution of local field potentials (Tattersall et al., ; Thevathasan, Pogosyan, et al., ). Additional regional stimulation effects on gait in PD patients have been observed in the cuneiform nuclei and posterior parts of the PPN (pars dissipata and pars compacta), suggesting a slightly better response when stimulating posterior parts of the PPN (Goetz et al., ). This finding is in line with pre‐clinical studies, which likewise have reported the best stimulation effects in the posterior part of the PPN (Garciarill, ; Gut & Winn, ; Reese et al., ). Thus, most centers prefer to target the caudal part of the PPN (Hamani et al., ). However, given the small size and lack of anatomically defined inner boundaries, most trajectories likely will cover both the caudal and the rostral part of the PPN, enabling selective stimulation. 3.1.3 Summary of PPN DBS Animal studies clearly suggest a pivotal role of the PPN cholinergic neurons as important drivers of gait control. Furthermore, declining PPN cholinergic tone in the parkinsonian state is linked to deterioration of locomotion and other axial symptoms. PPN DBS ameliorates axial symptoms and at the same time restores PPN cholinergic function, suggesting that DBS effects are relayed mainly through the cholinergic system. This is supported by the effect of AChEIs, such as rivastigmine (Emre et al., ), donepezil (Aarsland, ), and galantamine (Aarsland et al., ), which reduce the frequency of falls in PD patients (Perez‐Lloret et al., ). Clinical data on the effects of PPN DBS in PD patients are still limited. While unilateral stimulation has shown significant improvement of gait, bilateral stimulation appears to be superior, maybe not surprising given the need for bilateral limb activation during locomotion. Even though reports are equivocal, LFS likely is more efficient than HFS, which is in line with data from STN DBS suggesting that lower frequencies are favorable when gait problems are predominant (Conway et al., ). PPN DBS is currently performed in PD patients experiencing early and severe FOG, postural instability, gait dysfunction, and may also be an option in patients with LD refractory motor symptoms (Thevathasan et al., ). PPN DBS is therefore considered as a rescue‐strategy after the development of severe FOG despite or as a consequence of STN and/or GPi DBS (Schrader et al., ). The rate and nature of PPN DBS related risks and side effects appear to be similar to that of conventional DBS. There are no head‐to‐head studies comparing the effects of PPN DBS with STN or GPi stimulation. 
However, PPN DBS effects on PD cardinal motor symptoms are of a lower magnitude, whereas it is more efficient in the treatment of axial symptoms (Collomb‐Clerc & Welter, ). Combining conventional targets such as the GPi or STN with PPN, in our view, is the most viable option in PD patients eligible for classical DBS who show pronounced and LD refractory FOG. That said, stimulation of multiple targets often requires parallel HFS and LFS. Even though some of the currently available DBS devices can handle multiple frequencies to some extent, implantation of two pulse generators may be necessary. Because of its small size and lack of defined anatomical subdivisions, targeting specific cell populations within the PPN is challenging, but stimulation of the caudal portion which inhabits the major portion of cholinergic neurons seems to provide the best clinical outcome. Indeed, there has been extensive research on surgical protocols to define anatomical landmarks to facilitate the precise localization of the PPN and its subregions (Hamani et al., ). In this context, the intraoperative use of microelectrode recording, and MRI based visualization techniques have become increasingly important in facilitating electrode placement. However, even though different firing rates of neurons have been recorded during PPN surgery (Shimamoto et al., ; Tattersall et al., ; Weinberger et al., ), it is still challenging to allocate them to specific subregions (Hamani et al., ). As the PPN extends along the longitudinal axis of the brainstem, the trajectory will include both the caudal and rostral subregions of the PPN in most cases. That said, intraoperative recordings in combination with test‐stimulation of different contacts are feasible and may lead to the best outcome. PPN DBS effects appear to be sustained over months to years, but data on longtime follow‐up are lacking. A summary of relevant PPN DBS studies can be found in Table . Animal studies The PPN receives inhibitory GABAergic afferences from the GPi and the substantia SNr (Lin et al., ). Thus, manipulating the GABAergic input is considered a strategy to disinhibit the PPN and, consequently, alleviate akinesia. Notably, microinjections of bicuculline (a GABA receptor‐A antagonistic substance) in a non‐human primate 1‐methyl‐4‐phenyl‐1,2,3,6‐tetrahydropyridine PD model, improved motor function comparable to LD treatment (Nandi et al., ). This pioneer work highlighted the PPN as a potential stimulation target for medication refractory gait and posture disability in advanced PD (Nandi et al., ). Subsequent experiments employed PPN DBS in macaques and other non‐human primates. Akinesia was induced by unilateral PPN stimulation at high frequencies, ranging from 45 to 100 Hz. Facial expression, limb and body motion, and behavior of monkeys were assessed using video recordings (Nandi et al., ). Similarly, Jenkinson et al. demonstrated that unilateral HFS (100 Hz) reduced motor activity in monkeys, which was assessed with a partially blinded motor score after video recording of motor behavior. In contrast, unilateral LFS (5 Hz) had adverse effects, prompting movement and reversing akinesia (Jenkinson et al., ). In rodents, most PPN connections exist bilaterally, but have a dominant side. Pre‐clinical findings have suggested that, even though unilateral stimulation has an effect on the contralateral side, bilateral stimulation might be more effective (Hamani et al., ). 
These studies have furthered the idea of a more effective low‐frequency, bilateral stimulation. This concept has subsequently been translated into human studies. Given the co‐existence of glutamatergic, GABAergic, and cholinergic neurons in the PPN, the question arises whether the effects of PPN DBS are indeed mediated by modulation of cholinergic neurotransmission. Along these lines, Wen et al. performed microdialysis experiments to assess cholinergic transmitter levels in brain tissue of a common PD animal model. Injections of 6‐hydroxydopamine into the medial‐forebrain bundle of rats reduced concentrations of ACh in the ventrolateral thalamic nucleus. The observed changes of transmitter levels correlated with a Parkinson phenotype characterized by reduced stride length, reduced maximum area of paw contact on the floor, and reduced base of support (average width between either the front or the hint paw) (Wen et al., ), suggesting a strong association between gait impairment and low cholinergic transmitter levels. Importantly, unilateral LFS (25 Hz) of the PPN increased ACh levels, which was accompanied by an improvement of gait observed on CatWalk gait analysis (Wen et al., ). We evaluated the impact of chronic STN DBS on the cholinergic system in a 1‐methyl‐4‐phenyl‐1,2,3,6‐tetrahydropyridine mouse model. In line with the previous study, 1‐methyl‐4‐phenyl‐1,2,3,6‐tetrahydropyridine reduced choline acetyltransferase‐positive neurons in the PPN compared to saline treatment. Mice exhibited a Parkinson phenotype as assessed by walking tests. Gait impairment was reversed by STN DBS. However, STN DBS did not alter the number of activated choline acetyltransferase expressing neurons in the depleted PPN. This suggests that STN‐stimulation likely did not improve gait via the modulation of the remaining cholinergic PPN neurons (Witzig et al., ). Along these lines, a structural connectivity study showed that gait improvements in STN DBS treated PD patients were linked to stimulation of fiber tracts connecting the STN and motor cortex (Gradinaru et al., ). Similarly, optogenetic stimulation of the STN has attributed the motor benefits of STN stimulation to the modulation of upstream connections between the STN and frontal cortices (Strelow et al., ). Thus, the positive effects on gait in our study likely were because of stimulation of fiber tracts outside the cholinergic system. That said, it is important to point out that STN DBS often fails to alleviate gait dysfunction and even can induce gait impairment in PD (Brozova et al., ; Castrioto, ; Lozano et al., ; St. George et al., ). While evidence for a direct modulation of cholinergic transmission via PPN DBS is limited, there is strong evidence for a close relationship of PPN cholinergic integrity and gait disturbances in PD. In this vein, a vesicular ACh transporter knock‐out mouse model was used to study cholinergic neurotransmission in the midbrain (Janickova et al., ). Modified mice lacked the vesicular ACh transporter in the pedunculopontine and laterodorsal tegmental nuclei cholinergic neurons and showed impaired motor learning and coordination deficits in a standardized gait balance test (rotarod test), moved slower, and presented smaller steps on the catwalk test. These symptoms worsened with aging but reached a ceiling effect, highlighting the dominant role of PPN cholinergic neurons in gait control (Janickova et al., ). 
Another line of research, emphasizing the pivotal role of PPN cholinergic neurons in the mediation of axial symptoms employed artificially engineered protein receptors (designer receptors) selectively targeted by certain ligands (so‐called designer drugs (DREADDs)) to detect specific cell‐types, which are activated by PPN DBS. Choline acetyltransferase transgenic mice rendered parkinsonian by intra‐nigral, monohemispheric stereotaxic administration of the ubiquitin‐proteasomal system inhibitor, lactacystin, received DREADDs to transiently activate surviving cholinergic PPN neurons. Behavioral testing of transgenic mice showed improvements in postural stability, gait, sensorimotor integration, forelimb akinesia, and general motor activity. In addition, electrophysiological recordings of PPN neurons revealed increased spiking after treatment with DREADDs (Pienaar et al., ). Human studies While several studies have demonstrated a positive outcome on gait following PPN DBS, findings are equivocal (Bourilhon et al., ; Dayal et al., ). Potential reasons for the discrepancies observed may derive from the differing time‐point of surgery in the course of the disease and the exact location of electrode placement within the PPN. Thus, because of the lack of a clearly defined target within the PPN, results from these studies must be interpreted with caution. Furthermore, there are differences in how and which (axial) motor symptoms have been assessed. Most studies have used the Unified Parkinson's Disease Rating Scale III (UPDRS III) (Movement Disorder Society Task Force on Rating Scales for Parkinson's Disease, ), which is not well suited to measure axial symptoms (Thevathasan et al., ). To address this issue, customized questionnaires have been developed to facilitate the comparison of clinical stimulation effects on axial symptoms across studies (Dayal et al., ; Ferraye et al., ; Thevathasan et al., ). Nevertheless, questionnaire‐based assessments often lack intra‐ and inter‐rater reliability. Thus, quantitative measures have been proposed for gait assessment. To this end, postural sway (deviations in center of pressure) has been postulated as a suitable outcome measure. In a small study, involving 13 PD patients with severe clinical balance impairment, PPN stimulation showed an improvement of postural sway in both the medication ON and OFF state (Perera et al., ). The long‐term efficacy and safety of unilateral PPN stimulation in PD patients with refractory gait and balance difficulties was assessed in a clinical trial at 2and 4 years post‐surgery using the UPDRS part II. At 2 years, patients reported a significant improvement of FOG compared to baseline. In 4 years, there was no significant change of any item of the UPDRS part II. However, patients reported improvements of falls in both the ON and OFF medication state (Mestre et al., ). A recent meta‐analysis of 13 clinical trials has reviewed the effect of PPN LFS on motor symptoms, FOG, and falls, evaluated with gait specific questionnaires (Yu et al., ). The clinical trials included showed substantial heterogeneity with respect to the exact electrode location within the PPN, timespan of clinical follow‐up, patient age, disease duration, stimulation frequency, and daily dosage of LD. No improvement of the classical global motor symptoms (rigidity, tremor, and bradykinesia) was observed. However, virtually all trials reported a significant amelioration of gait and other axial symptoms (Yu et al., ). 
In a double‐blind randomized cross‐over study, nine PD patients with severe gait disorder were assessed 24 h after PPN‐surgery using the UPDRS III, and behavioral gait assessment. Compared with stimulation frequencies of 60–80 Hz, lower frequencies of 10–25 Hz led to an amelioration of akinesia and a reduction of gait difficulties in 7/9 patients (Nosko et al., ). The better response to LFS may be partially explained by the electrophysiological properties of cholinergic neurons in the caudal PPN. Along these lines, several patch‐clamp experiments have revealed a plateau phase of cholinergic neurons in the caudal PPN at frequencies of 40–60 Hz, while cholinergic neurons were deactivated at frequencies above 60 Hz (Garcia‐Rill et al., ; Simon et al., ). Because of interconnected nuclei, unilateral stimulation likely also exerts effects on the contralateral PPN (Hamani et al., ). Unilateral stimulation was also observed to enhance blood flow in the contralateral hemisphere, as evidenced by a PET study involving three patients with advanced PD in the OFF medication state, both at rest and during a lower limb motor task (Ballanger et al., ). Conversely, the paired nature of PPN connections suggests that bilateral stimulation may offer greater efficacy in modulating its function (Hamani et al., ). Only two studies have directly compared the effects of unilateral versus bilateral PPN DBS (Khan et al., ; Thevathasan, Cole, et al., ). In a cohort of five PD patients ON medication, Khan et al. reported an improvement of UPDRS motor scores by 5.7% with unilateral stimulation, while the same motor scores were ameliorated by 18.4% with bilateral stimulation (Khan et al., ). A second cohort of 17 PD patients was analyzed in a double‐blind study using an objective spatiotemporal gait analysis. DBS induced improvement of gait was twice as good with bilateral compared with unilateral stimulation (Thevathasan, Cole, et al., ). That said, findings of these studies should be interpreted cautiously, as the assessment of symptom improvement relied solely on UPDRS III scores (Khan et al., ; Thevathasan, Cole, et al., ). Alongside single target stimulation, PPN DBS has also been combined with other targets, including the STN, GPi, and caudal zona inserta. Combining PPN DBS with other targets has the advantage of treating classical motor symptoms and axial symptoms at the same time. Simultaneous bilateral STN DBS and PPN LFS (20–25 Hz) has demonstrated an improvement of UPDRS III scores, falls, postural instability, and FOG in individual PD cases (Ferraye et al., ; Plaha & Gill, ; Stefani et al., ). Similarly, combining PPN DBS with GPi DBS has shown marked effects on gait initiation and FOG in a 66‐year‐old PD patient with severe peak‐dose dyskinesia, ON freezing, and postural instability (Schrader et al., ). Notably, both GPi and PPN DBS alone reduced FOG, with PPN DBS being slightly more effective. However, the combination of both targets had a significantly larger impact on FOG compared with single target stimulation (Schrader et al., ). Caudal zona inserta stimulation was addressed in a study of seven PD patients with predominant axial symptoms. Bilateral caudal zona inserta and PPN stimulation in combination were superior to single target stimulation in improving both a composite axial subscore and UPDRS III motor scores (Khan et al., ). 
Within the PPN, the caudal part appears to be crucial for the effects of PPN DBS (Thevathasan et al., ; Yu et al., ), based on the topographical distribution of local field potentials (Tattersall et al., ; Thevathasan, Pogosyan, et al., ). Additional regional stimulation effects on gait in PD patients have been observed in the cuneiform nuclei and posterior parts of the PPN (pars dissipata and pars compacta), suggesting a slightly better response when stimulating posterior parts of the PPN (Goetz et al., ). This finding is in line with pre-clinical studies, which likewise have reported the best stimulation effects in the posterior part of the PPN (Garcia-Rill, ; Gut & Winn, ; Reese et al., ). Thus, most centers prefer to target the caudal part of the PPN (Hamani et al., ). However, given the small size and lack of anatomically defined inner boundaries, most trajectories will likely cover both the caudal and the rostral parts of the PPN, which in turn enables selective stimulation of either subregion through contact selection. Summary of PPN DBS Animal studies clearly suggest a pivotal role of PPN cholinergic neurons as important drivers of gait control. Furthermore, declining PPN cholinergic tone in the parkinsonian state is linked to deterioration of locomotion and other axial symptoms. PPN DBS ameliorates axial symptoms and at the same time restores PPN cholinergic function, suggesting that DBS effects are relayed mainly through the cholinergic system. This is supported by the effect of AChEIs, such as rivastigmine (Emre et al., ), donepezil (Aarsland, ), and galantamine (Aarsland et al., ), which reduce the frequency of falls in PD patients (Perez-Lloret et al., ). Clinical data on the effects of PPN DBS in PD patients are still limited. While unilateral stimulation has shown significant improvement of gait, bilateral stimulation appears to be superior, perhaps unsurprisingly given the need for bilateral limb activation during locomotion. Even though reports are equivocal, LFS is likely more effective than HFS, which is in line with data from STN DBS suggesting that lower frequencies are favorable when gait problems are predominant (Conway et al., ). PPN DBS is currently performed in PD patients experiencing early and severe FOG, postural instability, and gait dysfunction, and it may also be an option in patients with LD-refractory motor symptoms (Thevathasan et al., ). PPN DBS is therefore considered a rescue strategy after the development of severe FOG despite, or as a consequence of, STN and/or GPi DBS (Schrader et al., ). The rate and nature of PPN DBS-related risks and side effects appear to be similar to those of conventional DBS. There are no head-to-head studies comparing the effects of PPN DBS with STN or GPi stimulation. However, PPN DBS effects on PD cardinal motor symptoms are of lower magnitude, whereas PPN DBS is more effective in the treatment of axial symptoms (Collomb-Clerc & Welter, ). Combining conventional targets such as the GPi or STN with the PPN, in our view, is the most viable option in PD patients eligible for classical DBS who show pronounced and LD-refractory FOG. That said, stimulation of multiple targets often requires parallel HFS and LFS. Even though some of the currently available DBS devices can handle multiple frequencies to some extent, implantation of two pulse generators may be necessary.
Because of its small size and lack of defined anatomical subdivisions, targeting specific cell populations within the PPN is challenging, but stimulation of the caudal portion, which harbors the majority of cholinergic neurons, seems to provide the best clinical outcome. Indeed, there has been extensive research on surgical protocols to define anatomical landmarks to facilitate the precise localization of the PPN and its subregions (Hamani et al., ). In this context, the intraoperative use of microelectrode recording and MRI-based visualization techniques has become increasingly important in facilitating electrode placement. However, even though different firing rates of neurons have been recorded during PPN surgery (Shimamoto et al., ; Tattersall et al., ; Weinberger et al., ), it is still challenging to allocate them to specific subregions (Hamani et al., ). As the PPN extends along the longitudinal axis of the brainstem, the trajectory will include both the caudal and rostral subregions of the PPN in most cases. That said, intraoperative recordings in combination with test stimulation of different contacts are feasible and may lead to the best outcome. PPN DBS effects appear to be sustained over months to years, but data on long-term follow-up are lacking. A summary of relevant PPN DBS studies can be found in Table . NBM DBS The utilization of NBM DBS to increase ACh levels in the cortex has been explored as a potential therapeutic approach for improving cognitive symptoms in both PDD and DLB (Baldermann et al., ; Gratwicke et al., , ). DBS in PD patients with cognitive impairment is of particular significance, as PD patients with major cognitive deficits usually are not eligible for STN DBS because of the risk of cognitive deterioration (Foltynie & Hariz, ; Hariz et al., ). 3.2.1 Animal studies Early pre-clinical studies have demonstrated an increase of cortical ACh mediated by both continuous and intermittent stimulation of the NBM (Casamenti et al., ; Kurosawa et al., , ; Rasmusson et al., ). In rats subjected to low-frequency NBM stimulation (30 Hz), a 40% rise in ACh release within the parietal cortex was observed (Casamenti et al., ). Similarly, continuous LFS (20-50 Hz) in rats led to a twofold increase in cortical ACh (Kurosawa et al., ). Conversely, another study showed a higher release of cortical ACh following HFS (Rasmusson et al., ). In contrast to other studies, however, the latter study used pulsed rather than continuous stimulation and added atropine to enhance the evoked release of ACh (Rasmusson et al., ). A recent meta-analysis covering four animal studies reported NBM LFS (20-50 Hz) to induce the highest elevation of cortical ACh levels. No differences between continuous and intermittent stimulation were observed with regard to the effects on cortical ACh levels (Nazmuddin et al., ). The elevation of cortical ACh levels is believed to be accompanied by a vasodilative effect that increases cortical blood flow. Along these lines, Biesold et al. have demonstrated that NBM stimulation in anesthetized rats led to an ipsilateral vasodilation of cortical vessels, which could be blocked by nicotinergic and muscarinergic antagonists (Biesold et al., ). Beyond the elevation of cholinergic concentration, NBM DBS likely also induces neuronal plasticity.
For example, Kilgard and Merzenich have demonstrated that episodic NBM stimulation, paired with concomitant auditory stimuli in adult rats, resulted in a substantial and progressive reorganization of the primary auditory cortex (Kilgard & Merzenich, ). The effects of NBM stimulation on different traits of cognition have previously been summarized in a review including 19 pre-clinical trials in rodents and non-human primates (Nazmuddin et al., ). The majority of these studies performed their stimulation experiments in wild-type rats, while two studies used a transgenic AD rodent line. Cognitive function was measured with different cognitive tasks, including encoding, consolidation, and retrieval. Effects mediated by NBM stimulation were observed mainly on encoding, immediate retention of memory, and speed of learning, while there was no effect on long-term memory. In general, stimulation-related effects emerged 24 h after training and lasted for up to 2 weeks (Nazmuddin et al., ). The majority of studies (14/19) applied unilateral stimulation (mostly in the right hemisphere), while the remaining studies used bilateral stimulation (Avila & Lin, ; Liu et al., , ; Mayse et al., ). None of these studies directly compared the outcome of unilateral with bilateral stimulation. Two studies compared a continuous with an intermittent stimulation protocol, reporting a clear superiority of intermittent stimulation (Koulousakis et al., ; Liu et al., ). Liu et al., who compared intermittent and continuous stimulation in adult monkeys, corroborated this finding. Continuous stimulation was applied as a block of 100 stimulation pulses interleaved with 100 pulses without stimulation. Continuous stimulation worsened working memory performance, while intermittent stimulation led to an improvement (Liu et al., ). Overall, the best cognitive improvement of spatial memory performance was observed with intermittent stimulation using biphasic electrical pulses (60/80 Hz) for 20 s interleaved with a 40 s pause (Nazmuddin et al., ). Moreover, two studies in rats and rhesus monkeys, respectively, have revealed positive effects on cognitive performance at frequencies of 60 Hz (Koulousakis et al., ; Liu et al., ). Conversely, most pre-clinical studies have used higher frequencies of up to 120 Hz to stimulate the NBM and observed stronger cognitive benefits at higher frequencies. Some studies even reported worsening of cognitive performance after reducing the stimulation frequency (Huang et al., ; Liu et al., ). 3.2.2 Human studies The first reported case of NBM DBS in PD involved a 71-year-old patient with slowly progressive PDD who received combined high-frequency STN stimulation and low-frequency NBM DBS. High-frequency STN DBS was administered for 3 months before low-frequency NBM stimulation was turned on. Isolated STN DBS improved motor symptoms but had no effect on cognitive function. Notably, cognitive performance, including attention, concentration, and drive, distinctly improved upon activating NBM stimulation and worsened after stimulation was turned off (Freund et al., ). In another report, a 68-year-old PD patient diagnosed with mild cognitive impairment underwent DBS surgery targeting the GPi and the NBM using a single electrode per hemisphere. The patient showed an improvement of UPDRS III scores by 61% 2 months after initiation of GPi stimulation. No further motor improvement was observed after NBM stimulation was added.
However, combining NBM and GPi stimulation improved performance across various neuropsychological tests. These effects remained stable over 1 year, and no side effects were observed (Nombela et al., ). Two randomized cross-over clinical trials on NBM DBS in PDD and DLB were reported in 2020 (Gratwicke et al., ) and 2021 (Maltête et al., ), respectively. Surgery and stimulation were well tolerated. The cognitive assessments did not reveal significant stimulation-induced improvements and even showed worsening in one study (Maltête et al., ). That said, PET and functional MRI provided evidence for a modulation of regions and networks associated with cognitive function (Gratwicke et al., ; Maltête et al., ). In a randomized double-blind clinical trial, six patients with PDD and motor fluctuations received either bilateral low-frequency (20 Hz) NBM stimulation or sham stimulation for 6 weeks, subsequently crossing over to the respective other condition (Gratwicke et al., ). The intervention was well tolerated, with no serious adverse events reported. Although no improvements were observed in primary cognitive outcomes, NBM DBS showed an amelioration of neuropsychiatric inventory scores compared with sham stimulation. Two patients experienced a significant reduction of visual hallucinations, and three patients reported an improvement of health-related quality of life (Gratwicke et al., ). In a phase-II double-blind crossover pilot trial involving six participants diagnosed with advanced PD and cognitive impairment, Sasikumar et al. investigated the effects of single-trajectory DBS of the GPi and NBM on motor symptoms, cognitive performance, and biomarkers (Sasikumar et al., ). As expected, GPi DBS resulted in improvements of dyskinesia and motor fluctuations. NBM DBS in addition to GPi DBS led to reduced metabolism in right frontal and parietal cortical regions and enhanced functional connectivity in volume-of-tissue-activated analyses, as assessed by 18F-fluorodeoxyglucose PET and magnetoencephalography. However, these findings were not accompanied by concomitant cognitive improvement of PD patients after 1 year (Sasikumar et al., ). Another double-blind cross-over study including six patients with PDD undergoing NBM LFS (20 Hz) did not report cognitive improvement, but two patients exhibited a stimulation-induced slowing of cognitive decline (Cappon et al., ). Most animal and human studies of NBM DBS in PDD and DLB used bilateral continuous low-frequency stimulation (20 Hz) (Nazmuddin et al., ). Although LFS seems to result in a more favorable outcome (Nazmuddin et al., ), the proposed negative effects of HFS were challenged by a recent study in 33 PD patients (Bogdan et al., ). Patients received GPi stimulation for motor fluctuations and dyskinesia. Because of the anatomical proximity of the GPi to the NBM, the most distal contact was located within the NBM in a subset of patients. Analysis of these patients with an active distal contact and assumed NBM high-frequency co-stimulation at 130-185 Hz showed no signs of cognitive decline after 12 months (Bogdan et al., ). These equivocal findings raise the suspicion that the stimulation paradigm (i.e., intermittent vs. continuous) rather than the applied frequency may be crucial for achieving an optimal stimulation effect. To this end, Sasikumar et al.
compared intermittent stimulation, consisting of a pulse train (3 mA, 60 μs at 60 Hz) cycling between 20 s ON and 40 s OFF stimulation for 1 h daily, with continuous stimulation using the same parameters. Intermittent stimulation significantly improved sustained attention, whereas continuous stimulation did not affect cognitive scores (Sasikumar et al., ). Most pre-clinical studies have used unilateral NBM stimulation to achieve a cognitive improvement in rodents and non-human primates. Subsequent clinical trials have either been performed with bilateral stimulation, or NBM DBS was combined with other targets. No studies are available that directly compare the outcome of unilateral versus bilateral NBM DBS. 3.2.3 Summary of NBM DBS The role of the NBM in cognition has been recognized for decades. The first attempt to modulate cognitive function by electric stimulation was performed by Turnbull et al. in 1984 (Turnbull et al., ). A 74-year-old patient with AD received unilateral NBM stimulation. After 9 months, ipsilateral cortical glucose levels increased, but cognitive function remained unchanged. Subsequent studies in both DLB and PDD have reported limited improvement of cognitive function and slowing of cognitive decline after NBM DBS, but also worsening of cognition in some cases. NBM DBS in DLB has been shown to modulate brain networks associated with cognition, but accompanying improvement of clinical parameters was mostly lacking. So far, there is insufficient evidence for a significant and sustained cognitive improvement by NBM DBS in PD. This is particularly true for the available randomized controlled trials. Even though the NBM is a key factor in cognitive function, the complex and widespread brain changes associated with altered cognition in PD (Mihaescu et al., ) may explain the somewhat equivocal findings obtained from NBM DBS studies. Regarding complication rates and side effects of NBM DBS, the limited data available do not suggest an increased risk compared with conventional DBS. Moreover, NBM DBS is rarely performed in a single-target approach. Thus, it is difficult to provide an estimate of the target-related risk. Patients included in the available trials generally showed mild-to-moderate cognitive impairment. It is unclear if NBM DBS is also effective in more severely cognitively impaired PD patients. Clinical trials in AD patients suggest that NBM DBS may be efficacious in patients with severe cognitive impairment (Picton et al., ), but long-term data are scarce and inconclusive, and studies in severely affected PDD/DLB individuals are lacking. In light of the limited evidence of NBM DBS efficacy in PD and the largely lacking amelioration of PD motor symptoms, single-target NBM DBS, in our view, is not recommended; NBM DBS should instead be applied in combination with traditional targets, preferably in the framework of clinical studies. Currently, a randomized, sham-controlled trial is investigating combined STN and NBM DBS for PDD (DEMPARK DBS), with safety as the primary outcome and effects on cognition, daily functioning, motor skills, mood, caregiver burden, and economic aspects as secondary outcomes (Daniels et al., ). It is unclear whether NBM DBS should preferably be used in combination with either GPi or STN DBS. Both GPi and STN DBS have been shown to ameliorate motor symptoms, with STN DBS being superior in reducing dopaminergic medication after surgery (Bronstein et al., ; Follett et al., ).
STN DBS led to a faster decline of Mattis Dementia Rating Scale scores (Weaver et al., ). Conversely, two other trials did not observe a different cognitive outcome between STN and GPi DBS after 12 months and 3 years, respectively (Boel et al., ; Odekerken et al., ). With regard to the cognitive outcome and other non-motor symptoms such as depression, combining NBM DBS with the GPi is preferred based on experts' consensus and the largest randomized study (Bronstein et al., ; Follett et al., ). With respect to the identification of optimal stimulation parameters, there are conflicting results from animal and human studies. Whereas most rodent and non-human primate studies suggested greater efficacy with HFS, the first studies in human PD suggested that stimulation at 20 Hz may be beneficial. However, this has been challenged recently, and no clear recommendation can be made at this point. DBS efficacy appears to be better if intermittent rather than continuous stimulation paradigms are applied. That said, even though closed-loop stimulation paradigms are on the rise for conventional DBS (Stanslaski et al., ), adaptive stimulation is not feasible at this point. While most studies applied unilateral stimulation, bilateral stimulation has been favored by more recent study designs. Lastly, the majority of pre-clinical NBM DBS studies were performed in either wild-type animals or AD models. Thus, the transferability to PD/DLB remains unclear. We have summarized the findings of the major NBM DBS studies in Table .
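As a back-of-the-envelope illustration of how much less stimulation the intermittent paradigm actually delivers, the snippet below applies plain arithmetic to the settings reported by Sasikumar et al. (a 60 Hz pulse train cycling 20 s ON and 40 s OFF for 1 h daily) and compares them with continuous 60 Hz stimulation over the same hour; it is a worked example only, not device firmware.

```python
FREQ_HZ = 60          # pulse frequency of the reported pulse train
ON_S, OFF_S = 20, 40  # intermittent duty cycle: 20 s ON, 40 s OFF
SESSION_S = 3600      # 1 h of daily stimulation

cycles = SESSION_S // (ON_S + OFF_S)           # 60 ON/OFF cycles per hour
intermittent_pulses = cycles * ON_S * FREQ_HZ  # 60 * 20 * 60 = 72,000
continuous_pulses = SESSION_S * FREQ_HZ        # 3600 * 60 = 216,000

print(f"intermittent: {intermittent_pulses:,} pulses "
      f"({intermittent_pulses / continuous_pulses:.0%} of continuous)")
print(f"continuous:   {continuous_pulses:,} pulses")
```

Despite delivering only a third of the pulses, the intermittent paradigm was the one associated with improved sustained attention, underlining that a higher stimulation dose is not necessarily better.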
CONCLUSION A-syn-mediated neurodegeneration of the PPN and NBM is key in the pathogenesis of cholinergic deficiency in PD. Pre-clinical and clinical studies support a strong association of cholinergic degeneration with both gait impairment and cognitive deficits. Notably, the foundation of gait impairment appears to lie in the deterioration of the attentional cognitive domain rather than motor system deficiency. Furthermore, there are versatile interactions between the dopaminergic and cholinergic systems, which should be accounted for when aiming to target specific motor and non-motor symptoms in PD. PPN DBS has been proven to modulate the cholinergic tone and is associated with improvement of axial symptoms, most importantly FOG unresponsive to dopaminergic treatment. Conversely, NBM DBS has shown its merit in partially improving cognition in cognitively impaired PD patients, albeit evidence from human studies is still limited and equivocal. Given the circumscribed effects of both PPN and NBM DBS, in our view, both targets should be combined with either STN or GPi DBS. PPN and NBM DBS demand specific stimulation paradigms, whereby data on the recommended settings to achieve an optimal clinical outcome are still conflicting in many respects. Neither PPN nor NBM DBS can be regarded as standard clinical care at this point, and both should thus be performed by experienced centers, preferably in the context of clinical trials, to enable recommendations for a potentially broader clinical application. Ideally, future studies should directly compare unilateral versus bilateral stimulation and pay particular attention to stimulation frequencies, for example, in cross-over trials. Lastly, including recent advances in stimulation techniques such as mono-segmental stimulation (“steering”) and in vivo assessment of local field potentials (“sensing”) in clinical trials may shed further light on the optimal stimulation site and improve clinical outcome. V. Witzig: Conceptualization; writing – original draft; funding acquisition; project administration; visualization; writing – review and editing; investigation; methodology; validation; data curation. R. Pjontek: Writing – review and editing; writing – original draft; visualization; investigation; methodology. S. K. H. Tan: Conceptualization; writing – review and editing. J. B. Schulz: Writing – review and editing; supervision; resources. F. Holtbernd: Conceptualization; supervision; writing – review and editing; project administration; validation. JBS serves on the advisory boards of Forward Pharma, MSD, Lundbeck, Biogen, Eisai, Novo Nordisk, Roche, Reata, and Lilly.
JBS has received reimbursement for lectures from Merz, Teva, Bayer, UCB, Lilly, Boehringer, GSK, Bial, Novartis, Biogen, and Eisai. JBS has received grants for research projects from Biogen, Eisai, and Lilly. FH received travel and conference fees from Bial, Desitin, Abbott, Zambon, and Abbvie. VW received travel and conference fees from Medtronic. The other authors have no conflicts of interest to declare. The peer review history for this article is available at https://www.webofscience.com/api/gateway/wos/peer-review/10.1111/jnc.16264 .
Using a Mobile Application for Health Communication to Facilitate a Sense of Coherence: Experiences of Older Persons with Cognitive Impairment | 15166c29-4e54-430c-ae10-bf1716788789 | 8583217 | Health Communication[mh] | As the use of technology can facilitate an independent life in old age, users of technology need to possess adequate knowledge and skills (i.e., technological literacy (TL)) , encompassing the ability to interact with and understand technical products . Older persons are increasingly using mobile technologies as communication tools, such as technology-based health communication (HC), which has been shown to assist in finding information . Research on mobile assistive technologies used by older persons with cognitive impairment (CI) has found that tablets and smartphones may benefit such users but should be adapted to their needs . In this study, the term “older person” is defined as 55 years old and above following the inclusion criteria of the Support Monitoring and Reminder Technology for Mild Dementia (SMART4MD) project, from which the participants in the current study were recruited. Mobile applications may assist and aid older persons who have functional and CI, including difficulties with attention, working memory, and the ability to learn new information . Therefore, older persons’ use and perceptions of technologies are influenced by personal, physical, and social factors . Due to aging, physical and CI can make the use of technologies difficult , highlighting the importance of individual preferences and needs in development of new technologies . Research on older persons and HC has shown that older persons prefer health information to be credible and easily understood. Therefore, to develop usable technology and account for possible CI, the design of technology-based HC should be grounded in user experiences . The sense of coherence (SOC) model helps understand life satisfaction in old age . The SOC, developed by Antonovsky , builds on three central components: comprehensibility, manageability, and meaningfulness. The model reflects how a person perceives the world and uses their own and external resources to manage different challenges (or stressors), which either hinder or contribute to a sense of coherence and health. Comprehensibility reflects how comprehensible challenges or stressors are seen. Manageability reflects how manageable these challenges are considered. Meaningfulness indicates how meaningful these challenges are viewed. SOC has previously been applied as a framework for analysis to understand older persons’ use of mobile applications from a quantitative approach . However, qualitative research that targets the use of mobile technologies by older persons with CI and that employs SOC as a theoretical model remains scarce . Additional qualitative studies using the SOC model as an analytical framework are needed . SOC was, therefore, chosen in this study as a suitable model to enhance the understanding of older persons’ technology use to improve their health. Research has shown that while older persons increasingly use and have access to technology, more suitable, user-friendly solutions are necessary to address the needs of individuals with CI . As the shift toward home-based health care is supported by technological solutions and HC, designs based on older persons’ actual needs and experiences are required . 
Hence, the aim of this study was to explain how older persons with CI experienced technology-based HC through the use of a mobile application to facilitate a SOC.
2.1. Design This qualitative study adopted a deductive approach to explain how older individuals with CI experienced technology-based HC using a mobile application to facilitate a SOC. Data were collected using semi-structured interviews and were analyzed and sorted into themes using a matrix. The themes corresponded to the descriptions of each central component in the theoretical SOC model: comprehensibility, manageability, and meaningfulness. A deductive approach was chosen because the study aimed to better understand older persons' experiences using the theoretical model. 2.2. Study Context The EU project SMART4MD is a multicenter project aimed at investigating the effects of an intervention consisting of a mobile application designed for and used by older persons with CI, described further in Anderberg et al. In this study, the term “mobile application” refers to the software and hardware specifically used and developed for the SMART4MD project. The software application was installed on a provided tablet with a seven-inch screen and had different functionalities, such as the possibility of setting medication reminders, obtaining information about dementia, reading web-based newspapers, and playing games (see ). Participants in the SMART4MD project scored 20–28 on the Mini-Mental State Examination (MMSE), were 55 years or older, had experienced memory problems in the six months prior to the project, did not receive any formal care, had no functional disability affecting the use of the mobile application, administered their own medication, were not depressed, and had an informal caregiver who participated in the project. 2.3. Participants In the current study, participants from the Swedish site of the SMART4MD project were recruited purposefully based on the following inclusion criteria: an MMSE score of 20–26, an indication of CI, and access to the mobile application for more than one year prior to the study. The participants had also acknowledged problems with memory recall. The participants were contacted by phone by the first author (EP), informed verbally about the project, and given a verbal description of the aim of the interviews. After the call, information letters were sent out in advance either by regular mail or by e-mail, according to the participant's preference. When an individual agreed to participate, a time for the interview was scheduled. In total, 16 participants were recruited, comprising four females and 12 males aged 71–100 (see for participant characteristics). 2.4. Data Collection Individual, semi-structured interviews were conducted in June–August 2019. Each interview lasted between 17 and 55 min (mean = 34 min). An interview guide was devised to ensure that the same topics were discussed with all participants and that there would be opportunities to probe specific subjects further. The interview guide was partially based on Robinson et al. The interviews commenced with questions regarding the participants' previous knowledge of technology use. These were followed by questions about their mobile application use, with follow-up prompts such as “Could you give me an example?” or “Can you elaborate further?” The first interview was conducted as a pilot interview to ensure the clarity, adequacy, and appropriateness of the interview questions. The pilot interview was included in the data, with no changes to the interview guide.
All the interviews took place in the participants' homes and were conducted, recorded, and transcribed verbatim by the first author (EP), a PhD student with previous experience of conducting interviews at the bachelor's and master's degree levels. 2.5. Data Analysis The transcribed interviews were read to gain a general sense of the whole, and units of analysis corresponding to the aim of the study were marked. The analysis was based on the SOC to understand the participants' use of mobile technologies. To analyze the experiences, the units of analysis were grouped using a matrix as described by Elo and Kyngäs. The matrix was employed to sort the data corresponding to each central component in the theoretical SOC model: comprehensibility, manageability, and meaningfulness. These three areas were subsequently analyzed thematically, following the six phases described by Braun and Clarke. The six phases were as follows: (i) familiarization with the transcribed interviews; (ii) generation of initial codes from the units of analysis; (iii) creation of subthemes from the codes and the relationships among them; (iv) review of the themes corresponding to the descriptions of the three components of the SOC theoretical model; (v) defining and naming the themes; and finally (vi) production of the report. Phases one and two were conducted in parallel with the deductive coding. The analysis was conducted by moving back and forth between the phases as needed. All the authors took part in the analysis to reach consensus on sorting the data into related themes according to the descriptions of the SOC model components. The analysis ended with the creation of a thematic map explaining the interrelations between the overall theme and the different themes corresponding to the SOC model. 2.6. Ethical Considerations This study was conducted in accordance with the Declaration of Helsinki. Each regional ethical review board granted ethical approval for the SMART4MD project at each participating site to comply with research regulations in the respective countries. Approval for the Swedish site was granted by the Regional Ethical Review Board, Lund (code BTH:SMART4MD dnr 2016/470), and included approval for subsequent studies within the project. The written consent of the participants in the SMART4MD project was collected during their enrollment. As part of the informed consent process for the project, written information in the form of a participant information sheet was sent out at least 24 h prior to a screening meeting. Verbal and written consent were obtained during this screening meeting, with the research team and the participant's informal caregiver present. The research team that assessed the participants' capacity to consent to the SMART4MD project at the Swedish site comprised personnel with education in caring science, research nurses, and PhD students. The consent process was renewed during the project in follow-up visits every six months. More detailed information on the screening and consent process in the SMART4MD project is available in Anderberg et al. In the present study, the participants' capacity to consent was determined through repeated requests for and confirmations of consent, in addition to the consent already given for the SMART4MD project. The participants were provided with information about the purpose of the interviews, both verbally and in writing, by mail or e-mail. Before the interviews commenced, this information was repeated if necessary.
The participants were also asked if they had any further questions and were informed of their right to withdraw from the interview at any time. The participants also gave verbal consent and permission to record the interviews before the interviews started. To provide the best possible conditions, and considering the participants' CI, care was taken to ensure that the participants understood the provided information before giving their consent. The participants' informal caregivers were present in the home to offer assistance if needed, but they were not in the interview room. The first author, EP, who assessed the capacity to consent, has previous experience with the consent process gained within the SMART4MD project, through thesis work at the bachelor's and master's degree levels, and from a course in good clinical practice. The interview settings were intended to ensure a familiar and secure environment for the participants, and the interviews were conducted in their homes. The recorded interviews and details about the participants were accessed only by the first author (EP) and were kept in a secure location to prevent unauthorized access, in accordance with the General Data Protection Regulation (GDPR) and the Personal Data Act.
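Purely for illustration, the deductive sorting step described in Section 2.5 can be pictured as assigning each coded unit of analysis to one of the three SOC components before the thematic phases proceed. The sketch below uses invented example codes and component descriptions paraphrased from the SOC model; it is not the actual coding scheme or software used in the study.

```python
# Categorization matrix: each SOC component with a short description,
# serving as the frame into which coded units of analysis are sorted.
SOC_MATRIX = {
    "comprehensibility": "how comprehensible challenges are perceived",
    "manageability": "how manageable challenges are considered",
    "meaningfulness": "how meaningful challenges are viewed",
}

# Hypothetical coded units of analysis: (code, assigned SOC component).
coded_units = [
    ("difficulty understanding features", "comprehensibility"),
    ("needs help from spouse", "manageability"),
    ("reminders fulfill a need", "meaningfulness"),
    ("prefers pen and paper", "meaningfulness"),
]

# Sort the units into the matrix, mirroring the deductive step of the analysis.
sorted_units = {component: [] for component in SOC_MATRIX}
for code, component in coded_units:
    sorted_units[component].append(code)

for component, codes in sorted_units.items():
    print(f"{component}: {codes}")
```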
The analysis based on the components of the theoretical SOC model resulted in the following three themes: making sense of mobile technologies, mastering mobile technologies, and the potential added value to use mobile technologies. In addition, the analysis gave rise to the following overall theme: a challenging technology that can provide support, which corresponds to ambiguity in the SOC model. The thematic map (see ) depicts the overall theme and the interrelations between the different themes and the overall theme, illustrated by arrows. The themes are described in the sections below using quotations from the interviews. Each quotation has been anonymized to maintain confidentiality. 3.1. Themes 3.1.1. A Challenging Technology That Can Provide Support (Sense of Coherence) The overall theme explains the participants' views of the mobile application as being both challenging and potentially supportive, depending on their needs and interests. The challenges caused hesitation and doubt, reflecting the participants' frustration when the application did not function as expected. The overall theme encompasses the three themes, highlighting that HC delivered through mobile applications needs to be understandable, user-friendly, and coherent. 3.1.2. Making Sense of Mobile Technologies (Comprehensibility) This theme describes the participants' views of the mobile application as understandable and comprehensible as well as challenging. The theme contains the following two subthemes: understanding usage and the importance of language proficiency. Understanding Usage Participants expressed that it could be difficult to understand the mobile application, use its different functionalities, and access its information. The participants were open to other technologies, such as smartphones, which they frequently used. The text in the application, such as the information about dementia, was regarded as clear and informative. Sometimes, however, the participants had difficulties accessing the content. “I may not have found all the features. It's very, very likely”. (Participant A) The participants also noted the evolution of mobile technologies, and the need to relearn these technologies made them feel left out. Although the participants had previous technical knowledge and work experience with computers, they felt that they could not keep up with software and features that changed over time. “So, I can't access and do as I did before”. (Participant F) The participants also feared that a lack of understanding or incorrect use of the mobile application could lead to consequences; incorrect use could result in errors and, in turn, a lack of trust in technology. “Uncertainty makes me afraid to make mistakes, which will have consequences”. (Participant N) Importance of Language Proficiency Language, especially English, was considered important when using the mobile application. The application itself was in Swedish. However, information not in the participants' native language, such as notifications in English in the tablet's operating system, could lead to uncertainty. When the participants did not understand the English vocabulary used in the operating system, they needed to ask for help from, for example, a spouse, next of kin, or close friend who could translate for them. “I am not 100 in English, but then I have help from upstairs. She knows that if there are certain words or a sentence, then I can ask her”.
(Participant G) Another example of the uncertainty and challenges surrounding language was when Swedish was not the participant's native language. Even though the participant could read Swedish, they experienced challenges using the correct spelling and avoiding being misunderstood. “I must write here that I have heart pain for example … how can I do that … for example, I have symptom heart failure … how can I do that … enter here … I cannot … must write heart … I can spell so wrong … I am not Swedish”. (Participant M)

3.1.3. Mastering Mobile Technologies (Manageability)

This theme describes the participants' views on having the resources available to utilize mobile technologies. The theme contained the following three subthemes: ability to use the interface, expectations about functionalities, and need for support.

Ability to Use the Interface

Reading the display of the mobile application and using the touchscreen to touch and scroll were considered difficult. The participants also found the application complicated to learn, adding that worsening memory problems could make the use of technology even more challenging. “Partly I have some difficulties seeing too. That’s not so good for me … And so, it’s this with the touch buttons and this”. (Participant P) However, the participants also noted that even if they made mistakes when using the application, they could do it repeatedly because learning how to use technology requires patience. “Sometime you do … I am making mistakes, so something else comes up but… then you have to go back and do it over … again because … everything is not perfect at once”. (Participant E)

Expectations about Functionalities

The participants expressed facing difficulties and frustration when using the application due to software restrictions and slow devices. When the tablet was slow or did not work as anticipated, other kinds of technologies, such as mobile phones or computers, were preferred. “It’s the one I used and the reason I did not continue … I think was that I thought this [the mobile application] was quite slow”. (Participant A)

Need for Support

The participants shared the view that they preferred to have supervision from someone, such as a family member or a close friend, who understood how to use the mobile application. Such help could be in the form of guidance while using the application or on which button to press. If the participants had trouble finding someone to explain or help, they tended to view the application as challenging to operate. They also highlighted the comfort of having someone who could provide support for the application and other technologies. According to the participants, it was easier to ask for help from, for example, a spouse or other family member and resolve the problem faster than to try to decode the application by themselves. “If something shows up then well … then I have to contact [my spouse] about which button to use”. (Participant E)

3.1.4. The Potential Added Value to Use Mobile Technologies (Meaningfulness)

This theme describes the participants' views on the ability of the mobile application to satisfy their needs. The theme contained the following three subthemes: the importance of fulfilling needs, choice of information sources, and importance of personal interaction.

The Importance of Fulfilling Needs

The participants observed that the different functionalities of the application could be useful.
For example, according to the participants, the use of other technologies could be an option for accessing health care, but they currently used their telephone, which led to waiting in telephone queues. The application was further seen as a helpful way to access, share, and receive information and set reminders. “It’s very good that you can get … notifications when you need them”. (Participant L) However, if the participants did not perceive that the application fulfilled a need or considered it unbeneficial, their interest in using the application decreased further. One participant expressed disinterest in the reminder feature due to the lack of a sense of need. “Yes, I have accessed and viewed most of it, but then I haven’t … I haven’t been that interested”. (Participant P) The lack of interest in using technology was also ascribed to participants viewing themselves as old or old-fashioned. The participants stated that there are probably other older persons interested in using technology, just not themselves. “I am probably a bit old-fashioned to use that. Difficult to learn this with the touch function”. (Participant I)

Choice of Information Sources

The participants preferred other ways of finding information and setting reminders, such as methods for documenting or searching for information that were considered more convenient. Taking notes with a pen and paper and using notebooks and physical calendars were preferred to the mobile application. Participants also used paper to list their daily tasks and check off items that had been accomplished during the day. Other familiar technologies, such as other tablet models or desktop computers, were considered easier to use and hence more useful than the application. Participants mentioned preferring familiar technologies, such as other kinds of devices or software, for which they did not have to relearn how to perform similar tasks. “No, I’m using the computer … because it’s easier than the tablet … it’s easier, quicker, not the same system”. (Participant B) However, the mobile application was considered suitable for different tasks, such as medication reminders. The bigger screen on the tablet was also seen as positive. Yet, the participants also had smartphones with them with the functionalities they needed, and these smartphones were considered more convenient due to their smaller size and ability to fit in a pocket. Moreover, a smartphone can be used to make phone calls and is more portable than a tablet, which was considered more suitable for use within the home. The participants further mentioned that they did not use the reminders in the application when they were not at home because they did not take the tablet out with them. “No, I think it is too big to have in the bag … then you would need a bigger bag … and I do not have my medications with me”. (Participant L) These limitations caused uncertainty regarding the application's purpose, especially as smartphones were considered more practical despite their smaller display size. Moreover, owning different mobile technologies that served similar purposes resulted in having too many devices to carry around. “It has worked well, but it’s just that you think you have it twice there, so it is a little easier to use”. (Participant O)

Importance of Personal Interaction

The participants emphasized the importance of meeting a health care professional in person rather than using technology.
The participants felt safer and felt that they were taken more seriously when sharing their concerns with a professional in person. They also had a greater understanding of the information given to them by health care professionals in person rather than during video calls. “To speak with a doctor and look them in the eyes instead of speaking on the phone … I think it’s important to meet them”. (Participant E)
This study aimed to explain how older persons with CI experienced technology-based HC through the use of a mobile application to facilitate a SOC. The findings show that the use of the mobile application created an ambiguity, as it was both challenging and potentially beneficial. These findings are summarized in the overall theme of “a challenging technology that can provide support”. This finding aligns with Pirhonen et al., who observed that older persons considered technology to have both advantages and drawbacks. Furthermore, Hedman et al. found that technology is viewed as complex and multifaceted and greatly impacts the daily activities of older persons with CI. In the study, participants expressed that for the application to be valuable, it had to both be user-friendly and fulfill a need. When participants considered that learning to use the application was not worth the effort, they showed low interest in further engagement. This can be interpreted as meaning that the mobile application did not offer the desired or expected benefits. Previous research has uncovered barriers that older persons face regarding technology use, such as high cost, lack of interest, lack of guidance, and device complexity. In the SOC model, all three central components should be viewed positively. The three themes of the study's findings are discussed in relation to the SOC model components: comprehensibility, manageability, and meaningfulness, as illustrated in the thematic map. The interaction between the themes and the overall theme in the findings corresponds to the relationship between the SOC model and its central components.

The theme “making sense of mobile technologies” reflected the comprehensibility of the mobile application. The first subtheme, “understanding usage”, focused on the participants' views of the application as difficult to use to find information and their concerns about the evolution of technology and fear of its consequences. Hedman et al. support these findings, having observed that updating software can be perceived as difficult and impossible to avoid. Furthermore, older persons' own beliefs about being incapable of learning contributed to their fear of using technology and making mistakes. However, older persons could be confident in using technology in some situations, implying that the difficulties in technology use are related to impairments and not age itself. The second subtheme, “importance of language proficiency”, focused on the language when using the mobile application, as the operating system was in English. Language proficiency is an aspect of functional literacy, which comprises reading and writing. Functional literacy serves as the basis for other forms of literacy, such as TL. In the findings, the use of the application posed a dual complexity related to language and CI. Because language, as a cultural difference, is related to health technologies, Matthew-Maich et al. emphasized considering cultural differences and values when developing technological solutions. However, older persons with cognitive and visual impairments are less likely to use technology. A sense of comprehensibility is achieved in the SOC model when both inner and outer stimuli are considered coherent, clear, and structured. Thus, comprehensibility in the SOC model is the second most crucial component after meaningfulness.
This relation is also evident in the findings, in which the comprehensibility of the mobile application impacted the participants' interest or disinterest in further use.

The theme “mastering mobile technologies” reflected the ease of use of the mobile application and its functionalities. This was influenced by the participants' previous knowledge and personal skills in using technology. The first subtheme, “ability to use the interface”, focused on the difficulties encountered while using the touchscreen, the patience required by new technologies, and memory loss. These findings are consistent with those of Bogza et al., who observed that difficulties encountered while navigating technologies resulted in frustration in decision making. Hence, the information presented to persons with CI has to be meaningful, concise, and easy to remember. In the second subtheme, “expectations about functionalities”, the slowness of the application also contributed to frustration when the application's functionality did not deliver an expected benefit. Even with a simplified, easy-to-use interface, the application's responsiveness affected perceptions, and the device's slowness was found to be frustrating. Previous research supports the finding that low engagement with mobile technology was due to the poor responsiveness of touch-screen devices. Regarding the third subtheme, “need for support”, personal support was seen as valuable when using the application. This finding is also confirmed in previous research, including research on informal caregivers' support. Pirhonen et al. argued that the availability of resources, rather than the abilities of older persons, can explain access to and use of digital technologies. Blok et al. explained that a low interest in the use of technology is related to difficulties in asking for help. The participants' attitudes toward technology depended on their quality of life experience and sociodemographic variables, as presented in another study. Socioeconomic factors play an important role regarding the “digital divide”, as the use of smartphones and access to other technologies are necessary for digital inclusion. The findings highlight the challenges imposed by technology as a means of HC. The fact that smartphones are not available to everyone has to be considered when implementing mobile applications and digital channels for older persons. A recent study in which tablets were used for the cognitive training of older persons with CI showed promising results for HC mediated by mobile technology on cognitive ability but not on depression and daily activities. Other studies, including technology-based cognitive training, have pointed to a positive attitude in older persons regarding technology and suggested adaptations such as a simplified user interface and instructions and the inclusion of reminders. Education may promote technology use among persons with a lower level of education, which otherwise correlates with a lower use of health technologies. Older persons prefer health information that is credible. Therefore, it must be acknowledged that the participants in this study had prior experience using different kinds of technology, such as computers, smartphones, and tablets.

The theme “the potential added value to use mobile technologies” reflected participants' views of the mobile application as meaningful, together with their different preferences for information sources.
Regarding the first subtheme, “the importance of fulfilling needs”, the application had to meet a need to be considered interesting to use, which is consistent with previous findings in which technology was observed to contribute to self-management. Control, interactivity, and perceived usefulness have been identified as influences on older persons' use of technology. Additional factors include security, independence, safety, and the ability to socialize and receive support in daily activities apart from health management. Mercer et al. further pointed out the significance of understanding what motivates technology adoption. A sense of meaningfulness is significant for the motivation of older persons and is central in the SOC model. The second subtheme, “choice of information sources”, focused on participants' use of different information sources and means of communication involving both mobile technologies and paper-based information. Using traditional media, such as notes and magazines, has previously been confirmed as beneficial in complementing web-based information. In the findings, smartphones were preferred to tablets due to their smaller size, which made them more portable, and their perceived additional functionalities. However, previous research has indicated that persons with CI who use mobile technologies found big screens more suitable, especially with the possibility to enlarge the graphics. Enhanced portability may nonetheless result in a preference for smaller mobile devices. The third subtheme, “importance of personal interaction”, highlighted how social interaction was valued, which is consistent with previous research on technology use by older persons. These findings are also supported by Borg et al., who emphasized the need for social interaction with family and friends and supporting skills to ensure digital inclusion. Using technology contributes to social contact and improved involvement in the self-care of older persons. Obtaining support from formal and informal caregivers can improve the utilization of HC in a home environment. Ethical considerations when implementing technology for older persons also have to be highlighted, mainly due to the vulnerability related to CI. The present study has emphasized considerations in developing a communication path that contributes to independence in home-based care. The findings advance knowledge about how older persons with CI use HC to increase their independence regarding home-based care. The relevance of meaningful technologies to independence is considered necessary in future home care. Bol et al. indicated room for improvement regarding tailored technology-based HC. Social support and collaborative design (co-design) have been identified as strategies to improve digital inclusion and reduce barriers to technology use, such as attitude, digital ability, and access. Furthermore, both the design and functionalities of technologies should fit older persons' cognitive and physical profiles, acknowledging their variety of needs and requirements. As CI impacts individuals' ability to seek information, technology for these users needs to include specific functionalities, such as avoiding scrolling and using horizontal browsing. Research has underscored specific usability aspects related to the use of mobile technology, such as limitations related to size and design.
However, to avoid challenges related to the use of technology-based HC, it is imperative to consider individual skills, preferences, and characteristics, as well as the context in which the applications are used, when developing mobile applications. Previous research supports this, emphasizing that social support benefits HC activities such as information seeking. Thus, the presence of social support and the level of CI affect the use of technology-based HC. When applying the SOC model to the use of technology by older persons, the meaning of TL can be interpreted as similar to comprehensibility and manageability in the SOC model (i.e., the ability to use and understand technology). When utilizing HC, the concept of health literacy (HL), which signifies the understanding, use, and perception of health-related information, also impacts the interpretation of the communicated content. In the findings, both TL and HL were important skills when using mobile technologies. Technology-based HC can contribute to older persons' satisfaction of emotional and social needs. Because technology can improve older persons' self-care and access to HC, research has confirmed the importance of TL and personal support in using technologies. HL is further necessary to protect against the risks of misinformation when using digital channels for HC. The findings in this study suggest that the SOC model may contribute to a deeper understanding of technology-based HC among older persons with CI.

Methodological Considerations

The findings of this study offer important information regarding the experiences of older persons with CI. Regarding trustworthiness, the authors continuously discussed how to enhance the credibility (and confirmability) of the analysis. All authors were involved throughout the analytical process, and the themes were discussed until a consensus was obtained. To improve credibility and transferability, the analysis process has been described in detail and illustrated with quotes. In addition, an interview guide was used to increase dependability, which is preferable in semi-structured interviews. However, this study also has limitations. Despite continuous probing and descriptive questions during the interviews, highly detailed responses were scarce. For example, the participants expressed views about the mobile application and tablet without giving further explanations. The study's deductive approach may also have produced a less detailed description of the data and affected credibility. Due to CI, the participants may have had difficulties expressing their views on using the mobile application and their perceptions of HC, which may affect dependability. Furthermore, other age-related challenges may have contributed to the participants' experiences of using the mobile application, although functional impairment was an exclusion criterion in the SMART4MD project. A feasibility study was conducted to improve the application in the SMART4MD project, and the final version was used in this study. Lastly, because the participants' CI in the SMART4MD project was based on MMSE scores, the scores might not have been an accurate indication of the participants' cognitive ability at the time of the interviews for the present study.
This study indicated that using a mobile application for technology-based HC created an ambiguity: the application was both challenging and potentially beneficial. This was challenging for older persons with CI and affected their engagement. Mobile technology was related to perceptions of being helpful, easy to use, and need-fulfilling. The participants' differences in abilities affected their preferences, the perceived relevance, and their choice of HC sources, whether mediated by technology or not. The participants' skills and expectations contributed to the perceived benefits. Having support with the application contributed to feelings of meaningfulness and interest, and thus to motivation. Personal support also improved the usefulness of technology-based HC. At the same time, personal interaction with formal caregivers was also considered positive. The use of the SOC model contributed to a deeper understanding of technology use in relation to the model's central components: comprehensibility, manageability, and meaningfulness. This helps to explain the use of technology-based HC among older persons with CI. Therefore, in the development of mobile health technologies, it is imperative to take account of the preferences of older persons with CI and preferably include them as co-designers to improve the health technologies to be used in future home care.
Preparation and evaluation of | de9bc271-2097-415d-b243-a0adb2d4cffd | 11779724 | Biochemistry[mh] | Brucellosis, a severe zoonotic infectious disease caused by bacteria of the genus Brucella, poses a significant global public health threat due to its widespread prevalence. This disease not only hampers the development of the livestock industry but also spreads to humans through direct contact with infected animals, ingestion of inadequately processed dairy products, or inhalation of aerosols containing the bacteria. Clinical symptoms in humans include fever, fatigue, and joint pain, and in severe cases, the disease can lead to complications such as endocarditis, arthritis, and even death. Therefore, the development of efficient and accurate diagnostic methods for brucellosis is crucial for its prevention, control, and treatment.

Currently, the diagnosis of brucellosis mainly relies on various methods, including pathogen detection, serological testing, and molecular biology techniques. Among these, serological testing is widely used due to its simplicity, low cost, and ability to reflect the immune status of the patient to some extent. However, most existing serological diagnostic antigens are based on the cell wall components or outer membrane proteins of Brucella. Although these antigens possess certain immunogenicity, their diagnostic sensitivity and specificity are often compromised in practical applications due to factors such as the patient's immune status, the stage of infection, and cross-reactivity.

VirB proteins, as key components of the Type IV secretion system (T4SS) in Brucella, play an important role in the interaction between the bacteria and host cells. The VirB system is not only involved in the survival and replication of Brucella within host cells but is also closely related to its pathogenic mechanisms. Despite the crucial role of VirB proteins in the biological functions of Brucella, there has been little research systematically analyzing their value in the serological diagnosis of human brucellosis. This gap in research limits our understanding of the pathogenic mechanisms of brucellosis and hinders the development and application of novel diagnostic antigens.

In this study, we aimed to utilize Tandem Mass Tag (TMT) proteomics technology to prepare and evaluate the value of Brucella VirB proteins in the serological diagnosis of human brucellosis. TMT technology, known for its high sensitivity and high throughput in protein quantification, allows for precise identification and quantification of proteins in complex biological samples, providing strong technical support for the screening and validation of novel diagnostic antigens. Through TMT proteomics, we identified highly expressed VirB proteins in wild-type Brucella strains, prepared recombinant Type IV secretion system proteins, and established a serological detection method based on these proteins. The goal was to enhance the sensitivity and specificity of brucellosis diagnosis, thereby offering new antigen choices for the early diagnosis and prevention of the disease.

Serum samples and bacterial strains

In this study, a total of 100 positive and 96 negative serum samples were obtained from the Xuzhou Center for Disease Control and Prevention, all confirmed as positive or negative through tube agglutination tests.
Additionally, serum from 40 febrile patients infected with other pathogens (stored in the laboratory, with detailed information available in the supplementary material: Cross-Reactivity Assessment) was used to evaluate the cross-reactivity of the developed method. To identify highly expressed proteins in the wild-type strain of Brucella abortus, as well as to discover antigenic proteins that can be utilized in the diagnosis of human brucellosis, the vaccine strain Brucella abortus A19 and the wild-type Brucella abortus DT21, both isolated and preserved by the China Animal Health and Epidemiology Center, were also utilized in this study.

Proteomics analysis

Bacterial culture

The preserved bacterial strain was inoculated into 500 mL of Tryptic Soy Broth (TSB, STBMTSB12, Millipore, USA) medium and incubated at 37°C with shaking for 24-48 hours. After incubation, 5 mL of 1% formaldehyde was added to inactivate the bacteria, which were then stored at 4°C for later use.

Proteomics analysis

Proteomics analysis was performed according to standard protocols referenced from the literature, including steps such as protein extraction and quantification, protein digestion and TMT labeling, LC-MS/MS analyses, and qualitative and quantitative analysis of proteins. The peak intensities of TMT reporter ions were quantitatively compared across samples. The statistical analysis of the differential proteins identified was conducted using analysis of variance (ANOVA), with a significance threshold set at p < 0.05. Proteins exhibiting a fold change greater than 1.2 (ratio ≥ 1.2) or less than 0.83 (ratio ≤ 0.83) were classified as highly expressed proteins.
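The screening rule just described (ANOVA p < 0.05 combined with a fold change of at least 1.2 or at most 0.83 between strains) is straightforward to reproduce from a TMT reporter-intensity matrix. The following Python sketch is illustrative only: the input file, column names, and three-channels-per-strain layout are assumptions for illustration, not the authors' actual pipeline.

```python
import pandas as pd
from scipy import stats

# Hypothetical TMT reporter-intensity matrix: one row per protein,
# three replicate channels per strain (file and column names are assumed).
df = pd.read_csv("tmt_intensities.csv", index_col="protein")
wt_cols = ["DT21_1", "DT21_2", "DT21_3"]   # wild-type strain channels
vac_cols = ["A19_1", "A19_2", "A19_3"]     # vaccine strain channels

# Fold change of wild-type over vaccine strain (mean reporter intensity).
ratio = df[wt_cols].mean(axis=1) / df[vac_cols].mean(axis=1)

# One-way ANOVA across the two groups for each protein.
pvals = df.apply(lambda r: stats.f_oneway(r[wt_cols], r[vac_cols]).pvalue, axis=1)

# Apply the thresholds stated in the text: p < 0.05 and ratio >= 1.2 or <= 0.83.
hits = df[(pvals < 0.05) & ((ratio >= 1.2) | (ratio <= 0.83))]
print(hits.index.tolist())
```

In practice, a multiple-testing correction (e.g., Benjamini-Hochberg) would usually be applied on top of the raw ANOVA p-values, although the text does not state whether one was used here.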
The following day, the cultured bacterial solution was transferred into 250 mL of LB liquid medium supplemented with the corresponding antibiotic and incubated at 37°C with shaking at 200 rpm in a DHZ-DA large-capacity full-temperature oscillator (Changzhou Guoyu Instrument Manufacturing Co., China) until the optical density at 600 nm (OD600) reached 0.6-0.8. Induction of protein expression was achieved by adding 0.5 mM IPTG (16758, Sigma, Germany) and continuing the incubation at 37°C for 4 hours. The culture was then centrifuged at 8228×g for 6 minutes, the supernatant was discarded, and the cell pellet was collected. The pellet was resuspended in 20-30 mL of 10 mM Tris-HCl (pH 8.0) solution and subjected to ultrasonic disruption (500 W, 180 cycles, 5 seconds per cycle with 5-second intervals). A 100 μL aliquot of the disrupted bacterial suspension was centrifuged at 18514×g for 10 minutes. Of the resulting supernatant, 50 μL was transferred to a separate Eppendorf tube, while the pellet was resuspended in 50 μL of 10 mM Tris-HCl (pH 8.0) solution. To ascertain whether the target protein was present in the supernatant or the pellet, 12% SDS-PAGE (P0012AC, Beyotime Biotechnology, Shanghai, China) electrophoresis was performed before subsequent purification. The nickel column (Ni Sepharose 6 Fast Flow, GE Healthcare) was washed with deionized water until the pH reached 7.0, then equilibrated with approximately 100 mL of 10 mM Tris-HCl (pH 8.0, T3253, Sigma, Germany) solution. The column was further equilibrated with approximately 50 mL of 10 mM Tris-HCl (pH 8.0) solution containing 0.5 M NaCl (A501218-0001, Sangon Biotech, Shanghai, China). The sample containing the target protein was diluted and loaded onto the column. After loading, the column was washed with 10 mM Tris-HCl (pH 8.0) solution containing 0.5 M NaCl. The protein was eluted using 10 mM Tris-HCl (pH 8.0) solutions containing 15 mM, 60 mM, and 300 mM imidazole (with 0.5 M NaCl). The protein peaks were collected, and purification efficiency was analyzed by 12% SDS-PAGE electrophoresis. The protein was quantified using the BCA Protein Quantification Kit (P0010, Beyotime).

Establishment of indirect ELISA method and serum detection

The indirect enzyme-linked immunosorbent assay (iELISA) method was established as follows: The purified protein was first diluted in carbonate buffer solution (CBS, pH 9.6) to a concentration of 10 µg/mL, and 100 µL per well was added to a 96-well microplate (Corning, USA). The plate was incubated overnight at 4°C. After washing three times with PBST, 300 µL of blocking solution (5% skim milk in PBS) was added to each well and incubated at 37°C for 2 hours. The plate was washed again with PBST, then human serum diluted in PBS (1:200) was added and incubated at 37°C for 1 hour. After three more washes with PBST, 100 µL of HRP-conjugated rabbit anti-human IgG (diluted 1:10,000, A18903, Thermo Fisher, USA) was added to each well and incubated at 37°C for 1 hour. The plate was washed three times with PBST, tetramethylbenzidine (TMB, T2573, TCI, Japan) substrate solution was added, and the plate was incubated in the dark for 10 minutes for color development. The reaction was stopped with 2 M H2SO4, and the OD450 was measured using a microplate reader (Versa Max microplate reader, MD, USA).
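To make the read-out step concrete, the sketch below shows how triplicate OD450 readings per serum could be averaged and compared against a cut-off. This is a minimal illustration in Python, not the authors' actual analysis pipeline; the readings and the placeholder cut-off are invented, and the real cut-off is derived from the ROC analysis described below.

```python
import numpy as np

# Triplicate OD450 readings per serum sample (values are placeholders).
triplicates = np.array([
    [1.52, 1.48, 1.55],  # serum 1
    [0.31, 0.29, 0.33],  # serum 2
    [0.62, 0.58, 0.60],  # serum 3
])
cutoff = 0.45  # hypothetical; the real value comes from the ROC analysis

mean_od = triplicates.mean(axis=1)  # average the triplicates per serum
calls = np.where(mean_od >= cutoff, "positive", "negative")
for od, call in zip(mean_od, calls):
    print(f"mean OD450 = {od:.2f} -> {call}")
```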
Laboratory-stored lipopolysaccharide (LPS, provided by the China Animal Health and Epidemiology Center, 3 mg/mL) and Rose Bengal Ag (diluted 1:400, IDEXX Pourquier, Montpellier, France) were used as control antigens, and serum samples were tested in triplicate using the same procedure. Sensitivity, specificity, area under the curve (AUC), and the cut-off value were determined by receiver operating characteristic curve (ROC) analysis.

Evaluation of cross-reactivity in indirect ELISA method

Following the procedure described above, sera from febrile patients without brucellosis were tested using the constructed Brucella T4SS recombinant proteins to evaluate the cross-reactivity by comparison with LPS and Rose Bengal Ag. Cross-reactivity was assessed based on the cut-off value determined by the ROC curve.

Statistical methods

Dot plot and ROC curve analyses were conducted using GraphPad Prism version 6.05. Statistical analyses were performed using unpaired Student's t-test and ANOVA, with a significance level set at P < 0.05.
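As an illustration of the proteomics screening criterion described in the Methods (a significant ANOVA p-value combined with a ratio ≥ 1.2 or ≤ 0.83), a minimal pandas sketch is given below. The column names and example values are hypothetical; the actual TMT pipeline output will differ in format.

```python
import pandas as pd

# Toy TMT ratio table; column names and values are hypothetical.
df = pd.DataFrame({
    "protein": ["VirB3", "VirB4", "VirB9", "GroEL"],
    "ratio_wt_vs_vaccine": [1.45, 1.31, 1.27, 1.02],
    "p_value": [0.004, 0.011, 0.021, 0.480],
})

significant = df["p_value"] < 0.05
changed = (df["ratio_wt_vs_vaccine"] >= 1.2) | (df["ratio_wt_vs_vaccine"] <= 0.83)
differential = df[significant & changed]  # proteins passing both criteria
print(differential)
```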
Selection of recombinant type IV secretion system proteins

Through TMT quantitative analysis, a total of 152 highly expressed proteins were identified in the wild-type strain, and 102 highly expressed proteins were identified in the vaccine strain. Among the highly expressed proteins of the wild-type strain, we identified seven T4SS proteins, including six VirB proteins (VirB3, VirB4, VirB8, VirB9, VirB10, VirB11) and one T4SS putative outer membrane lipoprotein (BMEII0036). After predicting and removing transmembrane regions, signal peptides, and hydrophobic regions through the UniProt website ( https://www.uniprot.org/uniprotkb ), we constructed recombinant sequences of the seven proteins for prokaryotic expression.
Preparation of recombinant T4SS proteins

All seven recombinant proteins were successfully expressed and purified through prokaryotic expression, as shown in and . After quantification by BCA assay, the concentration was adjusted to 0.5 mg/mL in PBS and stored at -20°C for future use.

Results of iELISA

According to the ROC curve analysis, the diagnostic accuracy of the recombinant proteins, ranked from highest to lowest, is as follows: rVirB3, rVirB4, rVirB9, rBMEII0036, rVirB8, rVirB11, and rVirB10. The area under the ROC curve (AUC) for each protein is 0.9979, 0.9914, 0.9825, 0.9817, 0.9782, 0.9764, and 0.9476, respectively, which is slightly lower compared to LPS and Rose Bengal Ag. According to the Youden index calculation, the sensitivity of these proteins is all above 0.9100, and the specificity is all above 0.9167. The highest sensitivity is 0.9900 (95% CI, 0.9455-0.9997) for rVirB4 and rVirB9, and the highest specificity is 0.9896 (95% CI, 0.9433-0.9997) for rVirB3. The sensitivity and specificity are slightly lower than those of LPS and Rose Bengal Ag. The results are presented in , , and .

Cross-reactivity assessment

Using iELISA and based on the determined cut-off values, cross-reactivity was observed in 2, 5, 8, 2, 1, 5, and 0 out of 40 serum samples from clinical febrile patients without brucellosis when tested with rVirB3, rVirB4, rVirB9, rBMEII0036, rVirB8, rVirB11, and rVirB10, respectively. In contrast, cross-reactivity with LPS and Rose Bengal Ag was observed in 16 and 18 cases, respectively. The cross-reactive pathogens for both LPS and Rose Bengal Ag were primarily concentrated in Escherichia coli. Specifically, cross-reactivity with LPS included 8 cases of Escherichia coli infection, 3 cases of Staphylococcus aureus, and 1 case each of Enterococcus faecium, Klebsiella pneumoniae, Moraxella osloensis, Pseudomonas putida, and Streptococcus dysgalactiae. Cross-reactivity with Rose Bengal Ag was observed in 18 cases, including 7 cases of Escherichia coli infection, 2 cases each of Enterococcus faecium, Klebsiella pneumoniae, and Staphylococcus aureus, and 1 case each of Aeromonas sobria, Moraxella osloensis, Pseudomonas aeruginosa, Pseudomonas putida, and Rothia mucilaginosa. The results are shown in .
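The cut-off determination behind these figures can be reproduced in principle with a standard ROC analysis; the sketch below uses scikit-learn and the Youden index (sensitivity + specificity - 1), with synthetic OD450 values standing in for the real measurements.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Synthetic example: 1 = brucellosis-positive serum, 0 = negative serum.
labels = np.array([1, 1, 1, 1, 0, 0, 0, 0])
od450 = np.array([1.9, 1.6, 1.4, 0.9, 0.6, 0.4, 0.3, 0.2])

fpr, tpr, thresholds = roc_curve(labels, od450)
auc = roc_auc_score(labels, od450)

youden = tpr - fpr                      # sensitivity + specificity - 1
cutoff = thresholds[np.argmax(youden)]  # OD450 cut-off maximising the Youden index
print(f"AUC = {auc:.3f}, cut-off OD450 = {cutoff:.2f}")
```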
The T4SS is a crucial virulence factor of Brucella, composed of 12 protein complexes named VirB1 to VirB12, encoded by the virB region. Numerous studies have explored the potential of VirB proteins for vaccine development and serological diagnosis. For instance, one study evaluated the use of VirB8 against pathogenic Brucella species through composite reverse vaccinology, finding that VirB8 could induce specific humoral and cellular immune responses, reduce the bacterial load of B. abortus S19 in mice, and provide varying degrees of protection. Another study used immunoinformatics to identify antigenic epitopes of VirB8 and VirB10 from the Brucella T4SS, screening two cytotoxic T lymphocyte epitopes, nine helper T lymphocyte epitopes, six linear B cell epitopes, and six conformational B cell epitopes for constructing a multi-epitope vaccine. Several studies have also confirmed that combining VirB10 with other proteins to create recombinant vaccines can successfully induce immune responses. Research has shown that VirB7 and VirB9 can induce Th1 responses in mice and dogs. Additionally, there is evidence supporting the potential value of VirB5, VirB10, and VirB12 for serological diagnosis of brucellosis. However, existing studies have mainly focused on individual VirB proteins, and a systematic analysis of the use of VirB proteins for serological diagnosis of brucellosis is lacking. In this study, we used TMT proteomics technology to identify highly expressed VirB proteins from the wild-type Brucella strain and successfully prepared various recombinant VirB proteins for serological diagnosis. The results demonstrated that several VirB proteins (e.g., rVirB3, rVirB4, rVirB9) exhibited high sensitivity and specificity in diagnosing brucellosis.
Although their performance was slightly lower than that of the traditional LPS and Rose Bengal antigens, their potential as novel diagnostic antigens cannot be overlooked, as these proteins still performed well in the diagnosis of human brucellosis. Besides VirB proteins, we also identified a T4SS-related protein through proteomics, namely the T4SS putative outer membrane lipoprotein BMEII0036, which also showed high sensitivity and specificity when used in brucellosis diagnosis. As key components of the Brucella T4SS, VirB proteins play a role not only in the pathogen's virulence mechanisms but also in its interaction with host cells, making them valuable diagnostic antigens that can more directly reflect the infection status of Brucella, with significant clinical application potential.

In this study, we observed some differences in diagnostic performance among the various VirB proteins. For example, rVirB3 showed the best specificity, while rVirB4 and rVirB9 had the highest sensitivity. These differences might be attributed to the specific roles and expression levels of different VirB proteins during the Brucella lifecycle. Additionally, the antigenicity of these proteins could be influenced by factors such as amino acid sequence, spatial conformation, and glycosylation modifications. Therefore, future research should further explore the antigenic epitopes of these proteins and how their structures can be optimized to enhance diagnostic performance.

Cross-reactivity is an important factor in assessing the specificity of diagnostic antigens. This study found that although all VirB proteins exhibited some degree of cross-reactivity, the frequency and intensity of cross-reactivity were lower than those of LPS and Rose Bengal Ag. Notably, rVirB10 showed no cross-reactivity in 40 serum samples from febrile patients without brucellosis, indicating very high specificity. This suggests that VirB proteins may have an advantage in reducing cross-reactivity when used as diagnostic antigens. However, it is important to note that cross-reactivity should be assessed with a broader and more diverse sample set to comprehensively evaluate their specificity in practical applications. Although the TMT proteomics results indicate that the other VirB proteins, including VirB1, VirB2, and VirB5-VirB7, did not exhibit significant differences between the vaccine strain Brucella abortus A19 and the wild-type Brucella abortus DT21, the potential diagnostic value of these proteins for human brucellosis merits investigation in future studies.

This study provides compelling evidence that T4SS proteins play a crucial role in human brucellosis infection and offers important experimental evidence and a theoretical foundation for the development of new diagnostic antigens for brucellosis. However, further research and exploration are necessary to achieve widespread clinical application of VirB proteins. In summary, VirB proteins, as key components of the Brucella T4SS, show great potential in the serological diagnosis of brucellosis. Future research should continue to explore their antigenicity and diagnostic performance to develop more efficient and accurate diagnostic methods for brucellosis.
Uncovering students' misconceptions by assessment of their written questions

Pre-existing knowledge can positively influence how new concepts in science are learned. However, if new concepts conflict with pre-existing ideas, students may distort or ignore new information. Several terms are used in the literature to describe incorrect pre-existing ideas, including alternative conceptions, alternative frameworks, and naïve beliefs. We use the term misconceptions throughout this article to describe students' ideas that (1) are inconsistent with current scientific views, and (2) result in a misunderstanding or misinterpretation of new information. Recognition of misconceptions is a highly challenging and difficult task for teachers, as they tend to either over- or underestimate students' prior knowledge. Misconceptions are resistant to change and can negatively influence students' learning performance, which stresses the importance of identifying student misconceptions in order to achieve effective learning and teaching.

Misconceptions cannot be repaired unless they are recognized, and current teaching methods are not always effective in targeting and remediating them; several studies have demonstrated misconceptions prevailing throughout courses. Current methods to test conceptual understanding and uncover misconceptions include: multiple choice questions (MCQs) with or without written explanations; MCQs including a confidence test; open questions; generation of MCQs by the student; drawing or selecting drawings; individual interviews; laboratory instructions with or without (verbal) predictions of the outcome of the experiment; online self-directed e-learning modules; or in-depth interviews with teachers to explore their perceptions of students' misconceptions. MCQs are an efficient way to test large cohorts. However, a multiple choice questionnaire carries the disadvantage that students do not phrase or verbalize the misconceptions themselves, and, unfortunately, MCQs can inadvertently introduce new misconceptions. This occurs when students come to believe an incorrect alternative is correct; it is called a negative testing effect and is aggravated when more false statements are included in a test. Drawings provide a rich source of information about student thinking, but not all topics are suited to being expressed in drawings. Interviews are very successful in identifying misconceptions, but require substantial training of the interviewer and are less efficient in large cohorts.

Each year, a large cohort of medical science and biomedical students enters our curriculum. Therefore, we intended to explore an approach that is more efficient than interviews but avoids the risk of a negative testing effect from students adopting false answers, as with a multiple-choice questionnaire. In a previous study we investigated whether asking students to formulate written questions during small-group work (SGW) sessions could enhance study performance. During subsequent evaluation of the questions, we were struck by illogical and/or unclear elements in the formulations that reminded us of misconceptions. Therefore, we wondered whether students' written questions could be used to uncover misconceptions. Formulating questions could be educationally relevant for several reasons.
Asking questions: (1) stimulates critical thinking; (2) stimulates students to focus on the issues to be studied; (3) forces them to reflect on their learning; (4) provides information on the progress of the learner; and (5) enhances the dialogue among students. Writing down questions forces students to focus and to formulate in a clear and concise way. The current explorative follow-up study was conducted to explore the following approach: challenging students to formulate written open questions, which were subsequently evaluated by experienced tutors in order to uncover misconceptions. Based on our experiences in a previous study, the current study was designed in the context of a small-group work session, as this was considered a highly suitable environment in which to challenge individual students to formulate written questions, given its safe learning environment and its small-scale setting for dialogue. In this small-scale setting, students constantly test their mental models through interactions with one another and with the tutor. The students are actively engaged in the learning process, which enhances their conceptual understanding, in line with the constructivist theory of learning. To the best of our knowledge, challenging students to formulate written questions during SGW has not yet been used to detect their misconceptions. Therefore, the aim of this study was i) to determine whether misconceptions can be uncovered in students' written questions, and if so, ii) to measure the frequency of misconceptions that can be detected in this particular setting. In addition, iii) the difference in the number of misconceptions according to gender and discipline of the students was assessed. Finally, iv) it was determined whether the presence of such misconceptions is negatively associated with the students' course examination results.
Participants and setting

The study was conducted during a second-year bachelor course on General Pathology at the Radboud University Nijmegen Medical Centre, the Netherlands, taken by 397 students from the medical and biomedical science disciplines. A learner outcome-oriented curriculum consisting of consecutive courses was provided, in which each course lasted 4 weeks. The successive topics of the course on General Pathology were: (1) Principles of Diagnosis and Cellular Damage; (2) Inflammation and Repair; (3) Circulatory Disorders; and (4) Tumour Pathology (pathogenesis and progression). Each topic had a consistent sequence of educational activities: lecture (voluntary); task-driven self-study in preparation for the subsequent SGW; SGW (voluntary); practical course (obligatory); interactive lecture (voluntary); and non-directed self-study. The study was executed during the voluntary SGW session on the topic of Tumour Pathology (2 h) during the 4th week. These sessions involved groups of 12–15 students. On the final day of the course, students were subjected to a formal examination on all four topics.

Procedure

At the start of the SGW on Tumour Pathology, the tutor invited the students to think about an extra question related to the topic. This aimed at a question on disease mechanisms (conceptual understanding) and not mere factual knowledge. Tutors used a guided instruction to invite the students. Students were told that they were provided questions in their manual to guide the discussion, but that they were challenged to come up with one additional open question themselves to stimulate the discussion even further. They were told it could be a question that represented a difficult issue for the student, or an issue that they would like to discuss further, e.g. during the subsequent interactive lecture. Students did not have to provide answers. At the end of the SGW, students wrote down their individual question about the topic. Questions were assessed by two independent expert pathologists (DJR, RdW) who were blinded to the students' gender and discipline. The operational definition used to recognize a misconception was: an illogical or unclear presupposition incongruent with the current state of scientific knowledge/professional standard. Knowledge gaps were not classified as misconceptions, but were considered a result of insufficient preparation for the SGW session. If the expert pathologists did not initially agree on whether or not a question contained a misconception, a third expert pathologist (ES) discussed the question with the other two experts. Consensus was reached in all cases. Questions containing grammatical errors that made them impossible to interpret, and questions that were not original (e.g. copied from the students' course manual), were excluded. Questions from students who did not attend the formal examination were also excluded.

Study outcomes

The primary study outcome was to determine whether misconceptions can be uncovered in students' written questions. Subsequent outcome measures were: the percentage of questions containing a misconception; the observed agreement among independent raters; the difference in the number of misconceptions among male/female students and medical/biomedical students; and the formal examination score on Tumour Pathology and the remaining topics of the course: Principles of Diagnosis and Cellular Damage; Inflammation and Repair; and Circulatory Disorders.
The formal examination score on the studied topic, Tumour Pathology, was compared to the scores on the other three topics. In this way it was explored whether students holding misconceptions generally performed lower across all course examination topics, or whether there was a topic-specific underperformance.

Statistical analysis

Linear mixed models with an SGW-group-dependent random intercept were used in order to account for the dependence caused by clustering of the students into SGW groups. After the primary analysis, subgroup analyses were performed according to gender and discipline. Cohen's kappa was used to determine inter-rater agreement.
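A minimal sketch of this analysis in Python (statsmodels) is shown below: examination score as the outcome, misconception status as a fixed effect, and a random intercept per SGW group. The data frame is synthetic and its column names are assumptions; the original analysis may have used different software.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic data: "score" = examination score, "misconception" = 1 if the
# student's question contained one, "group" = SGW group identifier.
df = pd.DataFrame({
    "score":         [5.1, 4.8, 6.9, 7.2, 6.5, 5.0, 7.0, 6.8, 6.4, 5.2, 7.1, 6.6],
    "misconception": [1,   1,   0,   0,   0,   1,   0,   0,   0,   1,   0,   0],
    "group":         ["A", "A", "A", "B", "B", "B", "C", "C", "C", "D", "D", "D"],
})

# Fixed effect for misconception status, random intercept per SGW group.
model = smf.mixedlm("score ~ misconception", data=df, groups=df["group"])
result = model.fit()
print(result.summary())
```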
Participation

A total of 242 students attended the voluntary SGW sessions. In all, 221 students in the SGW groups agreed to formulate a written question, a participation rate of 91%. A total of 36 students were excluded because their questions were copied from the course manual (n = 30) or were not interpretable (n = 3), or because the student did not attend the formal examination (n = 3) (Fig. ). A total of 185 students were included in the study: 132 female and 53 male students, 160 medical and 25 biomedical students.

Misconceptions

Of the 185 questions rated, 11% (n = 20) were classified as containing a misconception. The observed agreement among independent raters was 0.91 (95% confidence interval [CI] 0.86–0.95), with a Cohen's kappa of 0.51 (95% CI 0.30–0.72); inter-rater agreement was considered moderate. Examples of written questions containing a misconception are shown in Table . There was no difference in the prevalence of questions containing misconceptions between male and female students. All questions containing misconceptions were derived from medical students; questions written by biomedical science students did not reveal misconceptions.

Formal examination scores

The formal examination score on Tumour Pathology amounted to 5.0 (SD 2.0) in the group with misconceptions and 6.7 (SD 2.4) in the group without misconceptions (p = 0.003). The average formal examination score on the other topics of the course, including (1) Principles of Diagnosis and Cellular Damage, (2) Inflammation and Repair, and (3) Circulatory Disorders, did not differ significantly: 6.9 (SD 0.95) in the group with misconceptions versus 6.9 (SD 1.1) in the group without misconceptions (Table ).
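For the agreement figures above, observed agreement is simply the proportion of questions on which both raters concur, while Cohen's kappa corrects for chance agreement. The sketch below shows the computation with scikit-learn; the rater labels are synthetic, not the study data.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

# Synthetic binary ratings: 1 = question judged to contain a misconception.
rater1 = np.array([0, 0, 1, 0, 1, 0, 0, 1, 0, 0])
rater2 = np.array([0, 0, 1, 0, 0, 0, 0, 1, 0, 0])

observed = np.mean(rater1 == rater2)       # raw proportion of agreement
kappa = cohen_kappa_score(rater1, rater2)  # chance-corrected agreement
print(f"Observed agreement = {observed:.2f}, Cohen's kappa = {kappa:.2f}")
```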
Summary of the main findings

Students' written questions can be used to uncover their misconceptions: 11% of the questions evaluated contained one. The presence of such misconceptions was negatively associated with the students' course examination score. Students holding misconceptions in Tumour Pathology performed lower only on that topic, and not on the other course examination topics, which implies a topic-specific underperformance. There was no association between the number of misconceptions and gender. Surprisingly, all misconceptions were identified in questions posed by students from the medical discipline; biomedical science students posed no misconceptions. The possible reason for this is discussed later.

Strengths of the present study

To the best of our knowledge, this is the first prospective cohort study to assess students' written open questions as an approach to identify misconceptions. The study was executed in a large cohort of students, which can be considered a strength, as it can be difficult to identify misconceptions among individual students in large cohorts. Expert pathologists, all experienced tutors, independently executed a careful evaluation of open questions in order to uncover misconceptions. Timely detection and correction of misconceptions is essential in learning environments based on the constructivist theory of learning, in which students construct knowledge by appreciating new concepts in the context of their prior knowledge. Construction and reconstruction of mental models is considered a central element of active, student-centered learning. As Dennick stated, the constructivist theory implies that activation of prior knowledge may reveal incorrect conceptual understanding. Challenging students during SGW to formulate a written question, as demonstrated in this study, seems a promising approach to expose students' conceptual misunderstanding. In addition, writing questions forces students to focus on uncertainties and to formulate concisely. This may stimulate deep learning, as students are applying their mental models using the new information that has recently been taught and discussed during the SGW.

Limitations of the present study

An accurate interpretation of written questions is not an easy task, as reflected by the moderate Cohen's kappa. Judgement could be enhanced by asking students to provide answers to their questions, which could give more information on students' understanding. The current study primarily focused on identification of misconceptions as the first step in a series of activities to identify and remediate misconceptions. The most effective way of remediation, followed by assessment of persisting misconceptions in the long term, remains to be investigated. The current outcome measures do not allow assessment of the resistance of the misconceptions, as a specific follow-up survey was not part of the current study. Selection bias may have occurred, as participation in the SGW session was not mandatory. This could possibly have resulted in selection of the more motivated students. High-achieving students with a higher degree of intrinsic motivation might pose fewer questions containing a misconception. The difference in misconceptions between medical and biomedical science students could reflect the extended background in science methodology of biomedical science students. During their training, more emphasis is given to scientific questioning in comparison with medical training.
However, the difference could also be explained by selection bias, which could be assessed by replicating the study during an obligatory SGW session.

Comparison to the literature

There is an extensive body of research available on misconceptions, especially in the field of physiology. Sircar and Tandon conducted an observational study using written questions by students to induce in-depth learning and identify misconceptions. In contrast to our study, Sircar and Tandon used MCQs instead of open questions and provided a more competitive environment. They observed that posing questions led to lively discussions among students in tutorial classes, and that the written questions revealed misconceptions, although the prevalence was not reported. Curtis et al. investigated misconceptions among dental students and found the group of students with the lowest test scores to be similar, although not completely identical, to the group of students with the most misconceptions. Furthermore, that study was congruent with ours in that no difference was reported between male and female students with respect to the percentage of misconceptions. Badenhorst et al. conducted a qualitative study among teachers using in-depth interviews to explore their perceptions of students' misconceptions. Several misconceptions were reported, including those related to learning styles, as passive learners just absorb information without seeking coherence. This stresses the importance of testing students' conceptual understanding, because students seem to understand less than they appear to know. Students can give the right answers to MCQ tests based on correctly memorized facts without having developed a conceptual understanding of the disease mechanisms, making them unable to construct the right answer from their mental model. This poses a threat to meaningful learning, because the half-life of newly acquired knowledge is short if students do not understand why their answers are correct.

Implementation in practice: (1) misconceptions inventory

Evaluation of open questions by three expert pathologists is time-consuming. Therefore, possible implementation in practice requires careful consideration in terms of the intended purpose. We see two different purposes for the approach demonstrated in this study. The first is to create an inventory of the existing misconceptions within the theme. A scrutinized assessment of the questions by expert pathologists is needed to serve this purpose. The list of misconceptions can be clustered in a 'misconceptions inventory'. Such an inventory can be disseminated among tutors, so that they can challenge students to elaborate on these difficult topics to improve teaching and learning during subsequent courses. Less experienced tutors in particular could benefit from using a misconceptions inventory created from other tutors' experiences to prepare their teaching activities.

Implementation in practice: (2) using students' written questions to feed the dialogue

The second purpose of our approach is to encourage dialogue among students. To serve this purpose, students' written questions could be rotated among their peers in the small working group. Students could be asked to assess their peers' written questions, search for misconceptions, and discuss these in small groups, in order to feed their dialogue and have students elaborate on their thinking.
This approach is not time-consuming for tutors and is suitable for application in large cohorts.

Once misconceptions are uncovered: implications for future studies

It is obvious that identifying misconceptions alone is not enough to resolve them. Identification should be followed by remediation. Merely telling students that their conceptual understanding is incorrect is unlikely to have an effect. Students are to be challenged to test their mental models and to experience that applying their incorrect beliefs results in incorrect answers. Repair of misconceptions during an ongoing course could be carried out during interactive sessions such as small-group sessions and interactive lectures. During such an interactive session, students can be engaged in a lively, structured dialogue with their peers and with the tutor, whereby their faulty mental models can be reconstructed. The misconceptions can be used as input for the dialogue and evoke in-depth discussion among students. Future research could be directed at finding the most effective way to accomplish successful repair. As misconceptions can be resistant to change, these follow-up studies should preferably include repeated measurement of misconceptions over the long term to assess the effectiveness of remediation.
This study demonstrates that misconceptions can be uncovered by analyzing students’ written questions. The occurrence of these misconceptions is negatively associated with the formal examination score, which supports the idea that misconceptions interfere with effective student learning. This approach can be useful in confronting students with their misconceptions and provides an opportunity to discuss and correct them during subsequent interactive sessions of the course in advance of the formal examination.
|
A Micro‐CT Based Cadaveric Study Investigating Bone Density Changes During Hip Arthroplasty Surgery | e8764482-b2cd-4688-8e88-73922796b517 | 11898157 | Surgical Procedures, Operative[mh] | Introduction Total Hip Arthroplasty (THA) is an effective surgery for relieving pain and restoring mobility in patients with hip osteoarthritis. Hip implants are categorised into cemented and uncemented, based on their bonding mechanisms. Uncemented implants rely on mechanical press fitting for primary stability and osseointegration for secondary stability . Although modern cemented implants perform well, limitations like poor tensile strength and the risk of osteolysis have led to increased use of uncemented implants . In the UK, the use of cemented hips nearly halved from 2006 to 2021, while uncemented implants increased by almost 2.5 times during the same period . By 2030, the number of THA in young adults is expected to increase fivefold , with over 80% of these patients receiving uncemented implants . Therefore, it is important to evaluate uncemented prostheses to minimise the risk of surgical complications. To ensure proper press‐fit and reduce the risk of periprosthetic fractures with uncemented implants, bone density is crucial . Preparation of the cavity before implantation involves broaching, resulting in osseodensification by breaking and compacting trabeculae within the bone tissue. Different types of broaches, such as compaction broaches, blunt extraction broaches, and sharp extraction broaches, are used in this process. Despite variations in broach design, all contribute to osseodensification . This process enhances primary implant stability by reducing micromotion and improving fixation strength before osseointegration . However, bone densification around implants and the effect of surgical intervention (broaching and implantation) on bone density have not been thoroughly evaluated. Some studies, using mechanical setups to mimic broach/implant surfaces, found that different surface finishes and higher initial bone density increased bone densification . However, these studies did not replicate the actual bone‐implant interface. Furthermore, these studies were performed using cadaveric femur samples and medical‐CT scans or bovine bone samples using μCT scans . This limits their applicability due to the resolution limitations of medical‐CT, which cannot differentiate trabecular bone structure, making it difficult to quantify the breaking and compaction of trabecular bone. Furthermore, the use of μCT scans in bovine bone samples reduces their relevance to human hip applications. The evaluation of changes in bone density due to surgical intervention using μCT scans of cadaveric human samples to obtain more in‐depth information is lacking in the literature. Current literature typically employs commercial density calibration phantoms (DCPs) in medical‐CT scans to correlate CT scan intensity with density values. These commercial DCPs contain hydroxyapatite in their inserts to mimic bone material, making them costly. Additionally, these phantoms are designed for specific use in either medical‐CT or μCT scanners. To the best of the authors' knowledge, there is no existing development or validation of an in‐house DCP that can be used cost‐effectively for both medical‐CT and μCT scans. Furthermore, the comparison of density predictions between the two CT scan modalities, is absent in the literature. 
Bone density estimation is crucial for developing personalised finite element analysis (FEA) of biomechanical systems, rather than relying on a generalised FEA that uses population‐averaged bone material properties. This is because material constants correlate with bone density through empirical relationships , and most of these correlations follow a power law. Bone density estimation is particularly important when analysing changes due to broaching and implantation steps, as these affect bone integrity and fracture risk. Therefore, the aim of the study was to investigate the change in bone density resulting from the broaching operation and uncemented implantation through a μCT‐based cadaveric study. To address the study's aim, the following objectives were established: (a) To develop an in‐house DCP for mapping CT scan intensities to bone density, validate the results, and investigate its prediction sensitivity. (b) To compare density predictions between medical‐CT and μCT scans using the developed mapping procedure. (c) To examine changes in bone density due to the broaching operation and uncemented implantation by performing THA on three cadaveric femurs and conducting μCT scans at various surgical stages. It is worth mentioning that the density measured in this study refers to the real density of bone. The manuscript is organised as follows: the development of the in‐house DCP is explained, followed by the validation of density predictions using the in‐house DCP compared to a commercial DCP with a lamb bone. The sensitivity of the density prediction to the µCT scan parameters is evaluated using the lamb bone. Additionally, the difference in density of the cadaveric femur between the two CT scan modalities, medical‐CT and µCT, is assessed. The experimental study on performing uncemented THA is described, along with the evaluation of changes in bone density due to uncemented THA at intermediate surgical steps using µCT.
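To make the role of such power-law relationships concrete, the sketch below maps a voxel-wise density field to Young's modulus for FEA material assignment. It is a minimal illustration only: the default coefficients (E = 6850·ρ^1.49 MPa, one widely cited empirical form for human femoral bone) are an assumption for demonstration, not constants prescribed by this study.

```python
import numpy as np

def density_to_modulus(rho, a=6850.0, b=1.49):
    """Map bone density (g/cc) to Young's modulus (MPa) via E = a * rho**b.

    The default coefficients follow one widely cited empirical power law for
    human femoral bone and are assumed here purely for illustration; in
    practice they must match the anatomical site and the density definition
    (apparent vs. real) used in the calibration.
    """
    return a * np.power(np.clip(rho, 0.0, None), b)

# Example: element-wise modulus assignment from a voxel density map
rho_voxels = np.array([0.25, 0.80, 1.20, 1.85])  # g/cc, trabecular to cortical
E_voxels = density_to_modulus(rho_voxels)        # MPa, one value per voxel
```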
Methods 2.1 Development and Validation of In‐House Density Calibration Phantom (DCP) An in‐house DCP was developed using five polymer inserts (nylon, PEEK, Acetal, PPS, PTFE) with densities of 1.14 g/cc, 1.32 g/cc, 1.42 g/cc, 1.64 g/cc, and 2.2 g/cc, respectively, according to the manufacturer's specifications. These materials were selected to closely match the bone density reported in previous studies , covering both trabecular and cortical bone, while minimising CT artifacts. The densities of the inserts were validated by determining their volumes through two methods: μCT scanning and laser scanning. The μCT scanning was performed using a Zeiss Metrotom 1500 at 60 kV, 650 μA, with a 1000 ms integration time and an isotropic voxel size of 48.25 µm. In addition, laser scanning was conducted with a Nikon ModelMaker H120, mounted on a Nikon MCAx S portable CMM arm, achieving a minimum resolution of 35 µm and a combined accuracy of 32 µm (2σ) with the scanning arm. The masses were measured with a Mettler Toledo analytical balance (readability: 0.0001 g), and subsequently the densities were calculated to verify the manufacturer‐specified values. 2.1.1 Density Mapping Procedure The segmentation of the DCP inserts from each CT scan was performed by establishing a threshold for each insert in Avizo 3D 2021 (Thermo Fisher Scientific, Germany). To determine the mean intensity of each DCP insert, an intensity histogram was generated, and the mean intensity was computed. To establish the relationship between CT intensity and density, a linear regression line was determined between the mean intensities and densities of the DCP inserts using Matlab 2022b (The MathWorks Inc., Natick, MA). The density was finally assigned based on the intensity values at each voxel, utilising the density calibration line determined for a specific scan. 2.1.2 Validation of the Density Prediction The accuracy of the density prediction using the in‐house DCP was validated by μCT scanning of a lamb bone in conjunction with the in‐house DCP and a commercially available DCP (QRM‐50124, QRM GmbH, Moehrendorf, Germany). The scan parameters were as follows: Tescan Unitom XL 170 kV, 70 W, 250 ms integration time, 100 µm isotropic voxel size, 0.25 mm Cu filter. The accuracy of the density prediction using the in‐house DCP was evaluated by comparing the lamb bone density predictions from the in‐house DCP and the commercial DCP, finding a correlation between the two. Furthermore, the precision of the density prediction using the in‐house DCP was evaluated by calculating the standard deviation of the difference in the density measurements of the lamb bone from two subsequent μCT scans performed with the same scan parameters without any interventions. 2.1.3 Sensitivity of the Density Prediction The sensitivity of the density prediction using the in‐house DCP with different µCT scan parameters was evaluated. A lamb lower limb, stripped of soft tissues and stored at −20°C, was thawed to room temperature for experimentation. A full factorial design (DOE) was employed to investigate CT parameter effects on predicted density, with voltage and exposure tested at two levels each and filtration at three levels. The lamb femur, in conjunction with the in‐house DCP, underwent μCT scanning on a Tescan Unitom XL with a fixed 70 μm isotropic voxel size, while systematically varying voltage, exposure, and filtration settings as detailed in Table . The lamb bone was segmented from each µCT scan using watershed segmentation. 
The bone density was assigned based on the intensity values at each voxel, utilising the density calibration line calculated for each µCT scan, as described in Section . The mean and variance of the lamb femur's density distribution were computed from the density distribution histogram. A one‐way analysis of variance (ANOVA) was conducted using Minitab Statistical Software 21 (Minitab LLC. 2021, Minitab) to assess potential statistical significance among the mean densities of each CT scan, with a significance level set at 5%. 2.2 Cadaver Experiment Procedure: THA Surgical Process Three healthy cadaveric femur samples, without any signs of arthritis or other pathological conditions, were obtained: a 70‐year‐old male (left), a 76‐year‐old female (right), and a 78‐year‐old male (right). The samples were thawed from −20°C to room temperature 24 h before the study. Approval was obtained from the Biomedical & Scientific Research Ethics Committee (BSREC) at the University of Warwick (Ref: BSREC 66/22‐23) and Research and Development, University Hospitals Coventry and Warwickshire (UHCW) NHS Trust (Ref: GF0503). Initially, medical‐CT scans were conducted at University Hospitals Coventry and Warwickshire (UHCW) NHS Trust using a GE Medical Systems Revolution CT scanner (120 kV) with a voxel size of 0.5 × 0.5 × 0.625 mm. After bisecting each femur with the distal part removed, μCT scans were performed at the CiMAT μCT scanning centre on a Tescan Unitom XL scanner. Following the initial medical‐CT scans and μCT scans, THA was performed on the three femur samples by an experienced orthopaedic surgeon. The CT images were used to determine the appropriate size of the broach and implant needed for each femur sample. The femur samples were secured to a height‐adjustable surgical table using a bone clamp during the surgery. The orientation of the femurs was adjusted to replicate the actual surgical positioning. A neck osteotomy was performed, and the entry point into the femoral cavity was established at the piriformis muscle insertion, followed by insertion of a smooth intramedullary rod according to the surgical technique manual provided by the implant manufacturer (Corin, Cirencester, Gloucestershire, UK). The femoral cavity was prepared by compacting the trabecular bone with the Metafix compaction broach (Corin, Cirencester, Gloucestershire, UK). The size of the broach was incrementally increased until the necessary longitudinal and rotational stability was achieved, as determined by the surgeon. Subsequently, the final broach was removed, and μCT scans were conducted using the Tescan Unitom XL to capture the bone geometry post‐compaction broaching. Figure illustrates the experimental setup used for performing the THA of each femur sample after the broaching operation, together with the in‐house DCP. After the second set of μCT scans, appropriately sized uncemented Metafix implants (Corin, Cirencester, Gloucestershire, UK) were implanted into each femur sample, matching the final broach size used. Finally, a set of μCT scans was performed on the Tescan Unitom XL to capture the bone geometry with the inserted uncemented implant. All the CT scans were performed in conjunction with the in‐house DCP, and a calibrated dimensional phantom was scanned after each μCT scan to calibrate the dimensional measurements . The μCT scan parameters used for the three sets of μCT scans are listed in Table .
Different scan parameters were utilised in the μCT scans of the bone samples to achieve optimal image quality by minimising noise and maximising contrast. For the post‐implantation scans, a higher voltage setting was necessary to ensure adequate X‐ray penetration through both the implant and the surrounding bone. 2.3 Determination of Change in Bone Density Due to THA The cadaveric femurs were segmented from the CT scan data using watershed‐based segmentation in Avizo 3D 2021, and the bone density was assigned, as discussed in Section . After segmenting the femurs, the femur coordinate system (FCS) was defined according to the ISB recommendations . The four CT scans of each femur (pre‐surgery medical‐CT, pre‐surgery μCT, post‐broaching μCT, and post‐implantation μCT) were aligned using best‐fit registration to evaluate the density change in a specific region and transfer the region of interest (ROI) between the scans. For a quantitative comparison of bone density predicted from the different CT scans, the Gruen zones were defined using the post‐implantation μCT scans and subsequently transferred to the other CT scans (pre‐surgery medical‐CT, pre‐surgery µCT, and post‐broaching µCT) after alignment. An additional ROI was defined in the μCT scans as a 1 mm thick region around the bone‐implant interface. This thickness was selected based on visual inspection to ensure it captured all broken trabecular bone debris. The 1 mm depth also aligns with literature, where densification around the interface has been observed up to this depth . Subsequently, this ROI was transferred to the other two μCT scans (pre‐surgery µCT and post‐broaching µCT). The intention of this step was to compare the alteration in bone volume fraction (BV/TV) resulting from the surgical intervention, specifically the ratio of bone volume (BV) to the total volume (TV) in the ROI around the bone‐implant interface.
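A minimal computational sketch of the two quantitative steps described above — voxel-wise density mapping from the DCP calibration line and the BV/TV calculation within an ROI — is given below. The arrays, intensity values, and bone threshold are hypothetical stand-ins for the segmented µCT data.

```python
import numpy as np

def calibration_line(mean_intensities, insert_densities):
    """Fit the linear intensity-to-density calibration from the DCP inserts."""
    slope, intercept = np.polyfit(mean_intensities, insert_densities, deg=1)
    return slope, intercept

def map_density(ct_volume, slope, intercept):
    """Assign a density (g/cc) to every voxel from its CT intensity."""
    return slope * ct_volume.astype(np.float64) + intercept

def bone_volume_fraction(bone_mask, roi_mask):
    """BV/TV: fraction of the ROI voxels classified as bone."""
    return np.count_nonzero(bone_mask & roi_mask) / np.count_nonzero(roi_mask)

# Illustrative use with synthetic data in place of real scans
insert_intensity = np.array([1200.0, 1400.0, 1500.0, 1750.0, 2300.0])  # mean insert intensities
insert_density = np.array([1.14, 1.32, 1.42, 1.64, 2.20])              # validated insert densities (g/cc)
m, c = calibration_line(insert_intensity, insert_density)

rng = np.random.default_rng(0)
ct = rng.uniform(800.0, 2500.0, size=(50, 50, 50))   # stand-in for a segmented µCT volume
rho = map_density(ct, m, c)                          # voxel-wise density field
bone = rho > 1.0                                     # illustrative bone threshold
roi = np.zeros(ct.shape, dtype=bool)
roi[20:30, 20:30, 20:30] = True                      # stand-in for the 1 mm interface region
print(f"BV/TV in ROI: {bone_volume_fraction(bone, roi):.2%}")
```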
Results 3.1 In‐House DCP Development and Validation The measured densities of the DCP inserts are shown in Table , alongside the manufacturer‐specified density and the densities obtained through mass and volume measurements using the laser scan and the µCT scan. The average of the measured values was used as the density of each insert for mapping the intensities. The measured densities of the inserts differed slightly from the manufacturer‐specified values. Figure shows the comparison between the density predictions using the in‐house DCP (measurement 1) and the commercial DCP QRM‐50124 (measurement 2), demonstrating strong agreement between the two. The accuracy of the density prediction using the in‐house DCP was determined to be ±0.097 g/cc (Figure ), taking the commercial DCP QRM‐50124 as the reference. A strong linear correlation (R = 1) was observed between the two sets of density measurements, demonstrating that the in‐house DCP can reliably predict bone density (Figure ). Additionally, the slope of 1.006 indicates that the density values measured by the in‐house DCP were nearly identical to those measured by the commercial DCP (Figure ). The precision of the density prediction using the in‐house DCP was found to be ±0.052 g/cc, based on the analysis of 70 million data points from two consecutive µCT scans conducted under the same conditions without any intervention, as shown in Figure . 3.2 Density Prediction Sensitivity The sensitivity study on density prediction due to changes in µCT scan parameters revealed minimal variance in the mean density of the lamb femur, calculated at ±0.022 g/cc (σ), as observed in Table . Of the density variation attributable to these parameter alterations, 99% fell within the ±0.129 g/cc (6σ) range (Table ). The density mapping process appears to be largely unaffected by changes in the scan parameters, indicating that the method is robust with respect to variations in the µCT scan parameters. As shown in the factorial plot in Figure , the change in mean density remained within 6σ, further supporting this robustness. 3.3 Medical‐CT and μCT Density Prediction Comparison The comparison of density predictions between μCT and medical‐CT scans indicates that the bone density measured by the medical‐CT scan was consistently lower than that obtained from the μCT scans, as illustrated in Figure . This difference is particularly evident in Figure , where the trabecular bone region in the femoral head and the femoral cavity show distinct colours. In the μCT scan, the bone appears shaded in greenish tones, while in the medical‐CT scan, it is shaded in bluish tones. This colour difference reflects the lower density observed in the medical‐CT scan, as indicated by the colour map legend (Figure ). The difference in bone density between the two CT modalities was 0.196 ± 0.077 g/cc, as measured across the three femur samples (Figure ). This difference was particularly notable in the trabecular bone region, where the average density difference was nearly three times higher than in the cortical bone region (Figure ); in cortical bone, the density values from both CT modalities were very similar. Figure presents a comparison of femur densities across the entire bone constituents, including both trabecular and cortical bone, in the various Gruen zones.
In areas with less trabecular bone, such as in Case 2 within Gruen zone 5, the density differences between the two scanning methods were minimal (Figure ). It should be noted that the ‘cases’ refer to different femur specimens. 3.4 Density Change Due to Surgical Intervention The change in bone density across the intermediate surgical stages—pre‐surgery, post‐broaching, and post‐implantation—is depicted in Figure . In Case 1, the outer surface of the cortical bone appeared denser before surgery compared to post‐broaching and post‐implantation, as observed from the colourmap in Figure , where more regions are shaded in red. This observation is supported by a slight reduction in bone density, quantitatively shown across different Gruen zones in Figure . However, this pattern was not consistent across the other two cases. In most Gruen zones, there was a slight increase in bone density after broaching and implantation compared to the pre‐surgery μCT scans (Figure ). Additionally, there was an increase in bone fraction (BV/TV) around the bone‐implant interface, ranging from 3.31% to 20.69%. This increase can be attributed to the accumulation of trabecular bone debris caused by the broaching process and uncemented implantation.
Discussion In this study, the change in bone density due to the broaching operation and implantation during uncemented THA was investigated through a density mapping procedure using µCT. First, a robust method was established for the development of the in‐house DCP and the corresponding density mapping procedure through detailed validation and sensitivity studies. The validation study of the bone density predictions using the in‐house DCP, in comparison to a commercial DCP (QRM‐50124), showed a density prediction accuracy of ±0.097 g/cc and a precision of ±0.052 g/cc. Furthermore, the sensitivity of the density prediction to the µCT scan parameters was ±0.022 g/cc. Second, the density predictions using the density mapping procedure from the two CT scan modalities, namely μCT and medical‐CT, were investigated to assess the potential usefulness of the DCP in a clinical setting. Density comparisons between medical‐CT and μCT scans showed excellent agreement, especially in cortical bone. Finally, the change in bone density resulting from the broaching operation and uncemented implantation was assessed by μCT scanning of three cadaveric femur samples at intermediate surgical stages. An increase in bone density was observed following the broaching operation and implantation compared to the pre‐surgery density of the femurs, with an average increase of 0.137 g/cc. The commercially available DCPs are expensive and are often designed to be used with either medical‐CT or μCT scanners, mainly due to the dimensions and the base material of the DCP. Furthermore, the inserts of commercial DCPs often contain hydroxyapatite to mimic bone composition, which raises the cost of the DCP. Therefore, in this study, an in‐house DCP was developed specifically tailored for use in both medical‐CT and μCT scanners, with polymer insert densities ranging from 1.15 g/cc to 2.25 g/cc, in a very cost‐effective way. The validation and sensitivity studies demonstrated the robustness of the density prediction using the in‐house DCP. Therefore, developing an in‐house DCP tailored for specific purposes, by validating insert densities and following the density mapping procedure described in this study, is feasible and cost‐effective compared to purchasing a commercial DCP, which tends to be orders of magnitude more expensive. Furthermore, the inclusion of hydroxyapatite in the DCP inserts might not be necessary, as observed from the results of the validation study. Using polymer inserts does not limit the maximum insert density, allowing more accurate density predictions through interpolation. This improves upon commercially available DCP inserts, whose maximum density (approximately 1.6 g/cc to 1.8 g/cc) is usually below that of cortical bone (typically between 1.6 g/cc and 2 g/cc), making extrapolation necessary. Accurate volumetric bone density could then be used as an additional parameter for evaluating bone quality, alongside the DEXA scan, which provides areal bone density and is considered the gold standard for this measurement and for assessing fracture risk using FRAX .
Furthermore, incorporating patient‐specific volumetric bone density as an input parameter in FEA would enable personalised evaluations, helping predict outcomes such as peri‐prosthetic fractures (PPF) and bone ingrowth. A high degree of similarity in bone density between the medical‐CT and μCT scans was observed in both qualitative and quantitative comparisons across different bone structures and in various Gruen zones. The only noticeable difference was attributed to the resolution limitation of the medical‐CT scan, which prevented differentiation of the trabecular bone microstructure. This discrepancy was particularly evident in quantitative comparisons within trabecular bone, where the average difference was ±0.147 g/cc, compared to ±0.054 g/cc in cortical bone. It can be concluded that for applications in which trabecular bone plays a crucial role, such as evaluating the primary stability of implants, the density predicted by medical‐CT scans may provide misleading information. Consequently, in applications like finite element modelling of bone, where inhomogeneity and bone density are critical for mapping material constants, the trabecular bone density predicted by the medical‐CT scanner might lead to less accurate results . The density predicted from the medical‐CT scan would provide a better estimate of the apparent density of the bone, defined as hydrated tissue mass divided by total specimen volume (bone + soft tissue + voids) , as the voxel volume from the medical‐CT scan would also encompass soft tissue and voids due to its lower resolution. On the other hand, the density predicted from the μCT scan would offer a more accurate estimate of the real density, defined as hydrated tissue mass divided by bone tissue volume , since the voxel volume from the μCT scan primarily encompasses only bone and no soft tissue, benefiting from its higher resolution. An increase in bone density was observed as a result of the broaching operation and implantation for the three femur cases in most of the ROIs, with the average increase in bone density being 0.137 g/cc across the three cases. This increase was within a similar range to that reported in the literature, which indicated an increase ranging from 0.16 g/cc to 0.30 g/cc . However, the densities reported in the literature consistently appeared to be slightly higher. The higher densities reported at the bone‐implant interface in the literature could be attributed to the use of low‐resolution medical‐CT scans for quantifying the bone densification caused by the accumulation of trabecular bone debris during broaching. Medical‐CT scans have limitations in properly resolving trabecular bone, as the debris size falls below the minimum resolution achievable by the medical‐CT scan. Consequently, the bone appears denser in the medical‐CT scan, as each voxel near the bone‐implant interface is filled with more bone debris. This finding was corroborated by the current study, especially when comparing the change in bone fraction (BV/TV) due to the surgical intervention using μCT scans. An increase in bone fraction ranging from 3.31% to 20.69% was observed in the ROI near the bone‐implant interface, which is attributed to the accumulation of trabecular bone debris. This increase in bone volume fraction could potentially enhance the primary stability of the implant by acting as an autograft.
Additionally, the accumulation of bone debris would increase bone‐implant contact, thereby promoting osseointegration through bone ingrowth after surgery . As a result, this might discourage surgeons from flushing the bone debris after the broaching operation, potentially improving bone fixation. However, this claim has not been substantiated in this study, and further research is needed to account for other contributing factors. Furthermore, the density of the femur among the three cases predicted from either of the CT scan modalities using the in‐house DCP was 1.842 ± 0.276 g/cc. This finding aligns with previously reported femoral densities in the literature, which typically range between 1.1 g/cc and 2.0 g/cc . The study presented has a few limitations. First, THA was performed on extracted femurs with a mechanical set‐up, which may not fully represent the actual surgical process. However, during broaching and uncemented implantation, the femurs were oriented to closely mimic actual surgery. Second, only one type of broach and implant was used, potentially making the results specific to this particular orthopaedic implant. Different sizes of broaches and implants were used in the three femur samples to minimise this limitation. Third, only three femur samples were used in this study; therefore, no statistical conclusions can be drawn on subject‐related variations. However, we do not expect significant changes in the results or conclusions with the inclusion of more samples, as the findings from the three femur samples were consistent. Future studies should consider these limitations to better understand the impact of surgical intervention on bone density.
Conclusion The change in bone density due to the broaching operation and uncemented implantation, two major surgical steps of uncemented THA, was investigated for the first time using cadaver hip specimens and µCT scans. An in‐house DCP was developed cost‐effectively and validated against the density of lamb bone predicted using a commercial DCP. This yielded a bone density prediction accuracy of ±0.097 g/cc and a precision of ±0.052 g/cc for the in‐house DCP compared with the results predicted using the commercial DCP. The sensitivity of the density measurement to the µCT scan parameters was ±0.022 g/cc. Density prediction of the cadaver femur using medical‐CT and µCT scans showed excellent agreement, particularly in cortical bone. However, the average difference in trabecular bone measurements was nearly three times higher, primarily due to the limitations of medical‐CT scans in resolving the trabecular microstructure. The broaching and implantation processes resulted in an increase in bone density in the cadaveric femur, with an average increase of 0.137 g/cc. This increase was attributed to the accumulation of bone debris around the bone‐implant interface, leading to a rise in bone volume fraction ranging from 3.31% to 20.69%.
Vineet Seemala: conceptualisation, formal analysis, investigation, methodology, visualisation, original draft writing. Mark A. Williams: conceptualisation, methodology, supervision, review and editing. Richard King: conceptualisation, investigation, methodology, supervision, review and editing. Sofia Goia: investigation, review and editing. Paul F. Wilson: investigation, review and editing. Arnab Palit: conceptualisation, investigation, methodology, supervision, review and editing. All authors approved the final submitted manuscript.
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
|
Assessment of public literacy in TB prevention and control in the National 13th Five-Year plan for Tuberculosis Prevention and Control (2016–2020) in China | 7c24fb2f-bc77-4be3-a51a-bf44cb1e2541 | 11721279 | Health Literacy[mh] | Tuberculosis is a chronic infectious disease caused by Mycobacterium tuberculosis, which poses a serious threat to human health . China was a country with a high TB burden, with an estimated 748,000 new cases of TB in 2022 . China usually puts forward a national five-year plan to map out major issues concerning the overall economic and social development of the country in the coming five years. On the basis of this plan, to accelerate progress towards the Health China vision and further minimize the public health impact of tuberculosis, the Chinese government issued three national tuberculosis prevention and control plans: National Tuberculosis Prevention and Control Plan (2001–2010), National Tuberculosis Prevention and Control Plan (2011–2015), National 13th Five-Year plan for Tuberculosis Prevention and Control (2016–2020) . The three plans clearly emphasized the important role of public education and public TB literacy in TB prevention and control. The third plan proposed that Centers for Disease Control and Prevention (CDC) of all levels need to strengthen public education on TB, and set a target of more than 85% public awareness rate of TB health literacy . The goal of this study was to evaluate the public TB literacy through TB key information, public education methods, and public education materials. Meanwhile, this study provided the basis for the development and implementation of health promotion strategies and for achieving the goal of ending TB.
Study design The cross-sectional study was conducted in all provinces of China (31 provinces and the Xinjiang Production and Construction Corps). Participants were urban and rural permanent residents aged 15 years and above (including non-local residents who had lived in the survey area for more than 6 months). Survey methods and data collection A multi-stage stratified cluster sampling method was used, assuming an infinite population . Based on the national survey results of the same category in 2015, the lowest awareness rate for a single information item in 2020 was estimated at 60% (compared with 49.8% in 2015), with a relative error of 0.2 and α set at 0.05. The target group size was approximately 100 respondents per survey point, with an intra-cluster correlation coefficient of 0.3 and an effective response rate of 85% . Calculations showed that approximately 1200 people needed to be sampled per province, with sampling conducted in 12 survey sites, each consisting of 100 participants. The provincial CDCs collected the proportions of urban and rural residents in each province in 2020. The proportion of urban and rural survey sites in each province was set to match the proportion of urban and rural residents, with a total of 12 survey sites per province. The provincial CDCs randomly selected townships with probability proportional to their population aged 15 years and above (a sketch of such probability-proportional-to-size selection is given at the end of this section). Within the selected townships, streets and villages were further sampled, again with probability proportional to population size. In the selected villages or streets, each county (district) CDC used a combination of household registration and field investigation to establish a complete list of all residents. The population aged 15 and above was assigned numbers, and each county (district) CDC used random sampling to select at least 100 people. Face-to-face surveys were conducted by trained staff of the county (district) CDCs. After the survey was completed, the questionnaires were reviewed by the municipal and provincial CDCs, and then reviewed, collated, and analyzed by China CDC. In total, 49,020 people were selected as survey subjects, yielding 47,728 valid questionnaires, an effective rate of 97.36%. Questionnaire design The questionnaire was developed by health education experts of the China Center for Disease Control and Prevention on the basis of the National Tuberculosis Prevention and Control Plan (2011–2015) . It mainly covered four aspects: demographic characteristics, TB key information, public education methods, and public education materials. Firstly, the demographic characteristics included gender, urban or rural residence, age, education, and occupation. Secondly, the TB cognitive key information comprised: TB is a chronic infectious disease; tuberculosis is transmitted mainly through the respiratory tract, and anyone can be infected; if the full course of standardized treatment is completed, the vast majority of patients can be cured and can avoid infecting others. The TB behavioral key information comprised: if you have had a cough with sputum for more than 2 weeks, you should suspect tuberculosis and seek medical attention promptly; avoiding spitting, covering your mouth and nose when coughing and sneezing, and wearing a mask can reduce the spread of TB.
Thirdly, the public education methods included the ways in which participants learned about health literacy, the ways in which they preferred to learn about health literacy, and the ways in which they searched for health-related information on the Internet. Fourthly, the public education materials included the preferred materials for TB health literacy public education and the preferred materials when searching for health-related information on the Internet. The elderly, people with low education levels, and students were high-risk groups for TB, so this study analyzed their public education methods and materials to explore how to deliver public education to these groups. A supplementary questionnaire file shows this in more detail (see Supplementary file 1). Quality control Before the survey began, county CDCs organized trained professionals to explain the purpose of the survey to participants. After obtaining informed consent, a standardized survey questionnaire was used for data collection and entry. To ensure the quality of the survey data, the county CDCs were responsible for reviewing the survey data entered on the same day, including checking the number of questionnaires, completeness, logical consistency, and the presence of errors or omissions. If any issues were identified (for example, age out of range or missing questionnaire answers), they would promptly contact the surveyors for correction. The municipal and provincial CDCs provided necessary technical guidance and quality control during the survey implementation, such as on-site supervision and data quality sampling. China CDC conducted unified training for the staff of all CDCs involved in the study and provided technical guidance and consultation during the study. Statistical analysis Epidata 3.1 was used to establish the database, and R 4.2.1 was used for statistical analysis. Descriptive statistics such as frequency (n) and percentage were used to analyze sociodemographic characteristics, awareness of TB key information, public education methods, and public education materials. The total awareness rate of TB key information referred to the percentage of TB key information items answered correctly, calculated as follows:

$$\text{Total awareness rate of TB key information} = \frac{\sum \text{TB key information items correctly answered by participants}}{\sum \text{TB key information items answered by participants}} \times 100\%$$

The overall awareness rate referred to the percentage of participants who answered all TB key information items correctly, calculated as follows:

$$\text{Overall awareness rate of TB key information} = \frac{\text{Number of participants who answered all TB key information items correctly}}{\text{Total number of participants}} \times 100\%$$

Chi-square analysis was used for comparing differences, with P < 0.05 indicating statistical significance . Binary logistic regression analysis was used to identify factors influencing the awareness of overall TB key information . Multicollinearity and outliers were checked using VIF and box plots, respectively (VIF < 10). Variables with statistical significance in the univariate analysis were included in the multivariate analysis, with a significance level of α = 0.05 and a two-tailed test.
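To illustrate the two awareness rates and the group comparison, the short sketch below computes them from a hypothetical 0/1 response matrix (rows = participants, columns = TB key information items); the data and group counts are invented for demonstration only.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical 0/1 matrix: rows = participants, columns = 5 TB key information items
answers = np.array([
    [1, 1, 1, 1, 1],
    [1, 0, 1, 1, 0],
    [1, 1, 1, 0, 1],
    [0, 1, 1, 1, 1],
])

# Total awareness rate: items answered correctly / items answered
total_rate = answers.sum() / answers.size * 100
# Overall awareness rate: participants answering every item correctly / all participants
overall_rate = (answers.sum(axis=1) == answers.shape[1]).mean() * 100
print(f"Total awareness rate: {total_rate:.1f}%")
print(f"Overall awareness rate: {overall_rate:.1f}%")

# Chi-square comparison of overall awareness between two groups (e.g., urban vs rural)
table = np.array([[320, 180],   # urban: aware of all items / not aware
                  [250, 250]])  # rural: aware of all items / not aware
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")
```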
The cross-sectional study was conducted in all provinces of China (31 provinces and the Xinjiang Production and Construction Corps). Participants were urban and rural permanent residents aged 15 years and above (including non-local residents who lived in the survey place for more than 6 months).
A multi-stage stratified cluster sampling method was used, assuming an infinite population . Based on the national survey results of the same category in 2015, the estimated lowest single information awareness rate items in 2020 were 60% (compared to 49.8% in 2015), with a relative error of 0.2 and α set at 0.05. The target group size was approximately 100 respondents per survey point, with an intra-cluster correlation coefficient of 0.3 and an effective response rate of 85% . Calculations showed that approximately 1200 people needed to be sampled per province, with sampling conducted in 12 survey sites, each consisting of 100 participants. The provincial CDC collected the proportion of urban and rural residents in each province in 2020.The proportion of urban and rural survey sites in each province is the same as the proportion of urban and rural residents, with a total of 12 survey sites in each province. The provincial CDC conducted equal proportion probability sampling according to the population of 15 years old and above in the townships and randomly selected the townships. In the selected townships, the provincial CDC also further randomly sampled streets and villages according to the probability of population equal proportion. A combination of household registration and field investigation was used to establish a complete list of all residents by each county (district) CDC in the selected villages or streets. The population aged 15 and above was assigned numbers and a random sampling method was used to select at least 100 people by each county (district) CDC. Face-to-face surveys were conducted by trained staff of county (district) CDC. After the survey was completed, the questionnaire was reviewed by municipal and provincial CDC, and then reviewed, collated and analyzed by China CDC. In total, 49,020 people were selected as survey subjects, resulting in 47,728 valid survey questionnaires with an effective rate of 97.36%.
The questionnaire used in this study was developed by health education experts of the Chinese Center for Disease Control and Prevention (China CDC) on the basis of the National Tuberculosis Prevention and Control Plan (2011–2015). It covered four aspects: demographic characteristics, TB key information, public education methods, and public education materials. Firstly, the demographic characteristics included gender, urban or rural residence, age, education, and occupation. Secondly, the cognitive TB key information comprised the following items: TB is a chronic infectious disease; tuberculosis is transmitted mainly through the respiratory tract, and anyone can be infected; and if the whole course of treatment is standardized, the vast majority of patients can be cured and can avoid infecting others. The behavioral TB key information comprised: when a cough with sputum lasts for more than 2 weeks, tuberculosis should be suspected and medical attention sought promptly; and avoiding spitting, covering the mouth and nose when coughing and sneezing, and wearing a mask can reduce the spread of TB. Thirdly, the public education methods included the ways in which participants learned about health literacy, the ways in which they preferred to learn about it, and the ways in which they queried health-related information on the Internet. Fourthly, the public education materials included the preferred materials for TB health literacy public education and the preferred materials when searching for health-related information on the Internet. Because the elderly, people with low education levels, and students are high-risk groups for TB, this study analyzed their public education methods and materials to explore how public education can best reach these groups. A supplementary questionnaire file shows this in more detail (see Supplementary file 1).
Before the survey began, the county CDCs organized trained professionals to explain the purpose of the survey to participants. After obtaining informed consent, a standardized questionnaire was used for data collection and entry. To ensure the quality of the survey data, the county CDCs were responsible for reviewing the data entered on the same day, including checking the number of questionnaires, completeness, logical consistency, and the presence of errors or omissions. If any issues were identified (for example, an age out of range or missing answers), they promptly contacted the surveyors for correction or completion. The municipal and provincial CDCs provided the necessary technical guidance and quality control during survey implementation, such as on-site supervision and data quality sampling. The China CDC conducted unified training for the staff of all participating CDCs and provided technical guidance and consultation throughout the study.
Epidata 3.1 was used to establish the database, and R 4.2.1 was used for statistical analysis. Descriptive statistics such as frequency (n) and percentage were used to describe sociodemographic characteristics, awareness of TB key information, public education methods, and public education materials. The total awareness rate of TB key information referred to the percentage of TB key information items answered correctly, and was calculated as follows:

$$\text{Total awareness rate of TB key information} = \frac{\sum \text{TB key information items answered correctly by participants}}{\text{TB key information items answered by participants}} \times 100\%$$

The overall awareness rate referred to the percentage of participants who answered all TB key information items correctly, and was calculated as follows:

$$\text{Overall awareness rate of TB key information} = \frac{\text{Number of participants who answered all TB key information items correctly}}{\text{Number of all participants}} \times 100\%$$

Chi-square tests were used to compare differences, with P < 0.05 indicating statistical significance. Binary logistic regression was used to identify factors influencing awareness of the overall TB key information. Multicollinearity and outliers were checked using the variance inflation factor (VIF < 10) and box plots, respectively. Variables that were statistically significant in the univariate analysis were entered into the multivariate analysis, with a significance level of α = 0.05 and two-tailed tests.
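As an illustration of how these rates and analyses can be computed in R (the software stated above), the following sketch uses a small hypothetical data frame; the column names (item1–item5, coded 1 = correct, 0 = incorrect) and values are invented for demonstration and do not reflect the actual survey data.

```r
# Minimal sketch of the awareness-rate calculations and analyses in R.
# The data frame, column names, and values below are hypothetical.
items <- paste0("item", 1:5)
df <- data.frame(
  age_group = c("15-44", "45-59", "60+", "15-44", "60+", "45-59"),
  item1 = c(1, 1, 0, 1, 1, 1), item2 = c(1, 0, 0, 1, 1, 1),
  item3 = c(1, 1, 1, 1, 0, 1), item4 = c(0, 1, 1, 1, 1, 1),
  item5 = c(1, 1, 0, 1, 1, 1)
)

# Total awareness rate: correctly answered items / all answered items x 100%
total_rate <- sum(as.matrix(df[, items])) / (nrow(df) * length(items)) * 100

# Overall awareness rate: participants answering every item correctly / all participants
df$all_correct <- as.integer(rowSums(df[, items]) == length(items))
overall_rate <- mean(df$all_correct) * 100

# Chi-square comparison of overall awareness across groups
chisq.test(table(df$age_group, df$all_correct))

# Binary logistic regression on overall awareness (univariate example);
# with several predictors, car::vif() could be used to check multicollinearity
fit <- glm(all_correct ~ age_group, data = df, family = binomial)
summary(fit)
```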
A total of 47,728 participants were included in this study. Among them, 24,831 (52.03%) were female. The largest age group was 15–44 years (38.10%). A junior middle school education was the most common level (34.22%), while only 7.60% of participants had an undergraduate degree or above; 7,789 participants were both aged 60 and above and had a primary school education or below. The majority of participants were engaged in agricultural labor (43.32%), and rural participants (55.00%) outnumbered urban participants (Table ).
The total awareness rate of TB key information was 82.51%. Awareness was poorest for the items stating that TB is a chronic infectious disease and that, if the whole course of treatment is standardized, the vast majority of patients can be cured and can avoid infecting others. Participants who had received public education on TB had better awareness of TB key information (Table ).
Participants who were over 60 years old, had a primary school education or below, were students, or had not received public education on TB were less likely to know all the TB key information (Table ).
Overall, participants most commonly received public education on TB through television or radio (67.93%) and the Internet (33.85%); however, 12,027 participants had not received any public education. Compared with participants aged below 60, those aged 60 and above were more likely to receive public education on TB through relatives or friends (16.80%) (P < 0.001) (Table ).
Participants most commonly queried health-related information on the Internet through self-media platforms (41.55%) and search engines (31.41%). Compared with participants aged below 60, those aged 60 and above were more likely not to query health-related information at all (P < 0.001) (Table ).
Participants preferred to receive public education on TB through television or radio (65.39%) and the Internet (54.60%). Compared with participants aged below 60, those aged 60 and above preferred to receive public education on TB through television or radio (68.15%), doctor consultation (42.19%), and relatives or friends (16.15%) (P < 0.001) (Table ).
Participants preferred audiovisual public education materials on TB (40.69%) (Table ).
Participants preferred video formats (60.12%) for health-related information on the Internet. Compared with participants aged below 60, those aged 60 and above showed a stronger preference for video-based health-related information on the Internet (P < 0.05) (Table ).
The National 13th Five-Year Plan for Tuberculosis Prevention and Control (2016–2020) is the guideline for TB prevention and control in China, and TB health promotion is an important component of its prevention and control measures. By enhancing prevention awareness and self-protection ability, the occurrence and transmission of TB can be effectively reduced. TB health literacy is the ability of individuals to acquire and understand basic TB health information in order to maintain their own health; in this study, it was used to evaluate the effectiveness of TB health promotion. The survey results show that the total awareness rate of TB key information was 82.51%, a significant increase from the 74.45% reported under the National Tuberculosis Prevention and Control Plan (2011–2015). People who received public education on TB had better awareness of TB key information than those who had not. This indicates that the various public education activities carried out in China during the National 13th Five-Year Plan for Tuberculosis Prevention and Control (2016–2020) played a positive role, such as the "Millions of Volunteers Action", nationwide health check-ups, and the application of Internet-based smart information technology. The "Millions of Volunteers Action" is a nationwide initiative launched by the National Health Commission in 2012 to spread knowledge about TB prevention and control by recruiting volunteers at the national, provincial, and city (county) levels; by 2020, more than 1 million volunteers had joined. However, the awareness target (85%) was not reached. CDCs at all levels still need to mobilize the whole society to actively carry out public education and strive to achieve the planned target as soon as possible, working toward ending tuberculosis by 2030. Among the cognitive TB key information items, 76.46% of participants knew that TB is a chronic infectious disease, suggesting that the definition of TB has not been well publicized. In addition, 89.10% knew that tuberculosis is transmitted mainly through the respiratory tract and that anyone can be infected, showing that Chinese people have a good understanding of how TB is transmitted, which helps promote TB prevention. Furthermore, 71.19% knew that, if the whole course of treatment is standardized, the vast majority of patients can be cured and can avoid infecting others. For TB patients, this key information directly affects confidence in recovery, and therefore treatment compliance and rehabilitation outcomes; standardized treatment can also effectively prevent the emergence of drug-resistant TB. When designing public education campaigns, information about the definition of TB should be emphasized, and more innovative campaigns should be used to promote it in the future. Among the behavioral TB key information items, 85.99% of participants knew that avoiding spitting, covering the mouth and nose when coughing and sneezing, and wearing a mask can reduce the spread of TB, and 89.80% knew that a cough with sputum lasting more than 2 weeks should raise suspicion of tuberculosis and prompt timely medical attention. This is the most critical information: because tuberculosis spreads easily and everyone is susceptible, and the BCG vaccine has limited preventive effect in adults, early detection and active preventive behavior are very important for the prevention and control of tuberculosis.
Therefore, CDCs at all levels should continue to strengthen public education on TB, innovate the publicity of suspicious TB symptoms, and promote the transformation of scientific knowledge into healthy behavior. Older people, people with low education, and students had poor awareness of the key information, suggesting that public education on TB has not covered all communities and populations. Older people received less TB health promotion and had a poorer understanding of TB control information owing to reduced learning capacity and limited Internet access. This suggests that public education materials should be as simple as possible and that delivery methods should be more convenient, so that older people who receive public education on TB can actually understand the prevention and control information. People with low education had less understanding of the disease, limited Internet access, and potentially limited comprehension, making it difficult for them to appreciate the necessity and feasibility of TB prevention and control. This suggests that public education on TB needs to address not only TB prevention and treatment but also the dangers of TB, so that everyone recognizes the importance of TB education. Students had poor TB health literacy: owing to academic pressure and young age, it was difficult for them to recognize the harm of TB, and they rarely took the initiative to learn about TB prevention and control. Schools should therefore remain a focus of TB health promotion, and teachers should regularly take the initiative to teach students about TB in their spare time. CDCs at all levels should strengthen community-centered public education, such as holding themed activities on World TB Day and community volunteer activities. Because the elderly are a high-incidence group for TB, CDCs at all levels should adopt targeted public education methods, such as health check-ups for the elderly, education in nursing homes, and publicity through the media favored by the elderly. CDCs at all levels should also organize more interactive and recreational activities to help people with low education understand the dangers of TB, develop healthy behaviors, and seek medical care when they notice suspicious symptoms. For students, regular on-campus education on TB should be considered, and public education can be carried out during the annual physical examination of new students. Audiovisual media (television, radio, etc.) and the Internet were the main channels through which Chinese people received public education on TB, as well as their preferred channels. Compared with previous studies, Internet dissemination was used and preferred by more people. On the Internet, Chinese people most often queried health-related information through self-media platforms and search engines. Audiovisual media are mainstream media led by local governments, and Chinese people rely on and trust the information they carry; accordingly, audiovisual media were the most popular and preferred method of public education on TB both in previous studies and in this study. The number of Internet users in China has grown rapidly, reaching 1.092 billion as of December 2023.
As more people are able to use the Internet to learn about TB and appreciate its convenience, Internet publicity has gradually become a main and preferred channel for Chinese people compared with earlier research. Self-media platforms and search engines also have high usage rates among Chinese Internet users, and it has become more convenient for Chinese people to learn health knowledge online. However, there is an overwhelming amount of information on the Internet, including misinformation, so people who have not received public education on TB may be unable to obtain correct TB knowledge online. This indicates that the government needs to publish TB knowledge regularly in an official capacity and expand public education, so that more people can learn correct TB prevention and treatment. Among people over 60 years old, relatives, friends, and doctor consultation were also highly trusted, and older people were more likely not to query health-related information online. These people are more skeptical about the authenticity of online information and prefer to obtain information face-to-face. Relatives and friends are more familiar to the elderly and can gain their trust, while doctors have a deep medical background and interact with the public during diagnosis and treatment. Therefore, CDCs at all levels should publish as much health-related information as possible through official accounts on self-media platforms and search engines to enhance credibility, and they need to correct misinformation on the Internet. When conducting public education on TB among the elderly, CDCs at all levels should publicize the advantages of querying health-related information on the Internet and teach older people how to do so correctly. CDCs at all levels should also conduct more health consultation and rehabilitation activities with designated TB hospitals and encourage people to disseminate TB prevention and control knowledge to the elderly in daily life. Audiovisual materials were the preferred format for public education on TB and for health-related information in China. In previous studies, text was the main form of TB publicity material, but with the development of the Internet in recent years, Chinese people can access audiovisual materials more quickly online. Audiovisual formats make content easier to read and understand, are more convenient for learning in fragmented time, and lend themselves to engaging materials. The elderly also liked to receive health-related information online in audiovisual form, but they actively searched for and obtained less of it because they are older and less adept at using web search engines, which may be one important reason for the lack of TB awareness among the elderly. Therefore, audiovisual formats should be chosen more often for public education materials and health-related information, and the CDC should teach the elderly how to query health-related information on the Internet rather than merely publishing information online. The main strength of this study is the use of large-sample data to analyze the improvement in the TB health literacy of Chinese people after the National 13th Five-Year Plan for Tuberculosis Prevention and Control (2016–2020) and the preferences of different populations for public education methods.
However, this study also has some limitations. Because of its simple form, logistic regression may not fit the true distribution of the data well, and its results show only the influence of individual independent variables on the dependent variable, limiting the depth of the findings; the large sample size, however, helps alleviate these limitations. In addition, when a selected person could not respond, field staff randomly selected a replacement with similar characteristics at the survey site to reduce the impact of non-response bias.
The overall public TB health literacy was relatively high, but awareness of some TB key information did not reach the target. The elderly, people with low education, and students were less likely to know all the TB key information. In the future, audiovisual media and the Internet should be the main methods of public education on TB for the general population, while dissemination by relatives or friends and doctor consultation are also suitable methods for older people. More health-related information should be promoted on the Internet, especially on self-media platforms and search engines, and public education materials and health-related information should make greater use of audiovisual formats.
Supplementary Material 1.
Enhancing dental education: integrating online learning in complete denture rehabilitation
A lack of clinical experience and reliance on traditional teaching methods can leave students without the confidence to complete CDR. Familiarity with online learning platforms equips interns with the technological skills that are essential for modern dental practice. Online assessments and quizzes provide immediate feedback, allowing interns to gauge their understanding and identify areas for improvement; this timely feedback contributes to ongoing skill enhancement. Bahanan et al. evaluated dental students' perceptions and overall experiences with e-learning and found that most students considered e-learning a positive experience. Furthermore, substantial progress has been made in educational methodologies, especially in the integration of virtual reality (VR) technologies and artificial intelligence (AI). These innovations are proving transformative in online education, especially in specialized areas such as CDR. Loka et al. studied the effect of reflective thinking on academic performance among undergraduate dental students and recognized that self-directed learning is a vital principle promoted in health professions education, particularly with the increasing use of online learning methods. Furthermore, Linjawi et al. conducted a cohort study in Saudi Arabia to assess students' perceptions, attitudes, and readiness toward online learning in dental education; the results indicated that interns' attitude toward and understanding of online education are crucial to its development and effectiveness. When online learning is incorporated into CDR education during the internship year, dental interns can benefit from a comprehensive and adaptable learning environment that addresses their unique needs, supports continuous skill development, and prepares them for successful clinical practice in CDR. Therefore, this study aims to evaluate dental interns' background in CDR and assess their attitudes toward online learning of CDR.
This questionnaire-based online survey was conducted via the Universal Questionnaire Designer platform (www.wjx.cn) to assess dental internship students' backgrounds in and attitudes toward online learning of CDR. The survey comprised three parts with 20 structured questions covering students' online learning experiences, knowledge background about CDR, and attitudes toward online learning for CDR. The elements of the questionnaire are illustrated in Table . The study received ethical approval from the Academic Affairs Office of the West China School of Stomatology, Sichuan University (WCHSIRB-NR-2022-005). A total of 63 dental interns (19 male and 44 female undergraduate dental students) participated; their privacy was safeguarded, and personally identifiable information was kept confidential. The interns who participated in this survey were all fifth-year dental students who had just started their clinical internship; they had the relevant theoretical knowledge but lacked practical clinical experience. Participants were required to respond to all questions to ensure completion of the electronic forms, and informed consent was obtained from all participants.
The data analysis involved descriptive statistics, with findings presented as percentages. Response percentages were calculated as the number of respondents selecting a specific response divided by the total number of responses to that question. This approach allowed a comprehensive picture of the participants' experiences, knowledge, and attitudes related to online learning of CDR.
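As a minimal illustration of this descriptive approach, the sketch below tabulates hypothetical responses to a single questionnaire item in R; the response labels and data are invented for demonstration purposes and do not reproduce the actual survey data.

```r
# Hypothetical responses to one survey item (labels are illustrative only)
responses <- c("Agree", "Agree", "Unsure", "Disagree", "Agree", "Unsure")

# Frequency (n) and percentage for each response option
tab <- table(responses)
pct <- round(100 * prop.table(tab), 2)
data.frame(n = as.vector(tab), percent = as.vector(pct), row.names = names(tab))
```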
In this survey, 63 undergraduate dental students participated, with a gender distribution of 19 males (30.2%) and 44 females (69.8%). The findings revealed that 22.22% of the students preferred online learning, whereas the majority (60.32%) favored traditional face-to-face teaching; 17.46% expressed uncertainty about their preference (Fig. a). The survey indicated a high participation rate in online learning, with 93.65% of students engaging in online educational activities and only 6.35% not participating (Fig. b). In terms of the perceived necessity of online learning, 76.19% of students believed it is essential, whereas 6.35% held different opinions (Fig. c). Furthermore, 80.95% of students were willing to participate in online learning, whereas only 4.76% clearly indicated unwillingness (Fig. d). With respect to readiness for online learning of CDR, 71.42% of students considered themselves prepared, whereas 12.70% felt unprepared (Fig. e).
The evaluation of the students' knowledge background about CDR yielded noteworthy insights. Only 7.94% considered their knowledge of CDR to be good, with a substantial 63.49% rating it as average and 28.57% as poor (Fig. a). With respect to confidence in clinical performance, a mere 11.1% expressed confidence, 65.08% lacked confidence, and 23.81% were uncertain (Fig. b). In terms of readiness for participation, 44.4% felt prepared, 28.57% believed they were not ready, and 26.98% were unsure (Fig. c). Regarding familiarity with the CDR treatment plan, 26.98% claimed to be familiar with it, an equivalent percentage were not, and 46.03% were uncertain (Fig. d). There were marked disparities in students' perceptions of appointment management for CDR patients: 22.22% felt confident, 46.03% felt uncertain, and 31.75% had no idea (Fig. e). Communication confidence also varied: 34.92% of participants felt confident, 28.57% lacked confidence, and 36.51% were unsure (Fig. f). These findings reveal diverse levels of knowledge, confidence, and readiness, pointing to areas for targeted educational interventions and support to enhance students' understanding and skills in this critical aspect of dentistry. In specific clinical tasks related to CDR, students exhibited varying levels of self-confidence and uncertainty. Notably, 30.16% believed they could handle impression-taking independently, 38.10% thought they could not, and 31.75% were unsure (Fig. a). Similarly, only 15.87% of students felt confident in achieving occlusal relationships alone, whereas 50.79% believed they could not and 33.33% were unsure (Fig. b). With respect to selecting the correct artificial teeth for patients, only 7.94% felt capable, 53.97% believed they could not, and 38.1% were unsure (Fig. c). In the CDR try-in stage, 31.75% thought they could perform the task independently, while 46.03% believed they could not and 22.22% were uncertain (Fig. d). In addition, 39.68% believed they knew how to instruct patients on wearing complete dentures, while 30.16% did not and 30.16% were unsure (Fig. e). Finally, approximately 31.75% thought they knew how to provide postoperative guidance, 36.51% did not, and 31.75% were unclear (Fig. f). These findings indicate diverse self-perceptions and potential areas for targeted educational support in specific clinical competencies related to CDR.
When students' attitudes toward online learning for CDR were explored, significant insights emerged. A substantial percentage (60.90%) of the students enjoyed participating in online CDR learning, with only 6.35% expressing dislike and 31.7% being unsure (Fig. a). In addition, a majority (71.43%) expressed a strong desire to continue online learning for CDR, only 7.94% declined, and 20.63% remained undecided (Fig. b). When attitudes toward online learning in general were assessed, 82.54% of the students believed it was helpful, 6.35% held a contrary view, and 11.11% were uncertain (Fig. c). These findings underscore a positive inclination toward online learning for CDR among students, suggesting its perceived effectiveness and acceptance within the academic context.
Ensuring that dental students master CDR during their internship is crucial because this stage is pivotal in the transition to clinical practice. The internship provides a vital opportunity for students to apply their theoretical knowledge in real-world settings, refine their technical skills, and gain practical experience. Effective mentoring, hands-on training, and exposure to a variety of cases are essential for building confidence and competence in CDR. Integrating both traditional and innovative teaching methods, including online learning tools, can enhance this learning process. The flexibility of online platforms, which are accessible through smart devices and apps, enables students to review lessons at their convenience, making them a viable choice. Despite the challenges, some scholars predict that online learning for dentures will become mainstream by 2025. Thus, actively promoting and participating in online learning for CDR has become imperative under the current circumstances. The original purpose of this study was to assess dental students' knowledge of and attitudes toward online learning in CDR, with the aim of enhancing their understanding and clinical practice. The data indicated high engagement in online learning among students, with a majority recognizing its necessity and planning future participation. This inclination may be attributed to the increased emphasis on digital education and the use of online platforms during the COVID-19 pandemic, as supported by Wang et al.'s findings on the widespread adoption of online courses in dental education. However, it is important to point out that the finding that more students preferred face-to-face teaching does not conflict with the popularity of online learning. Online learning is not a substitute for face-to-face learning but a necessary complement: it readily resolves scheduling conflicts and low individual learning efficiency, making it especially suitable for students who wish to personalize intensive study and fully master the relevant knowledge. Conversely, the limited efficiency of teacher-student interaction online is effectively addressed by traditional face-to-face learning. This study also revealed gaps in students' readiness and confidence with respect to CDR. Although a significant proportion felt prepared for online CDR learning, only a small percentage rated their CDR knowledge as good, and even fewer felt confident performing CDR clinically. This may be due to the complexity of oral rehabilitation with complete dentures, which requires extensive theoretical and practical knowledge. Nonetheless, the literature suggests that students' practical abilities and confidence can improve significantly during their internship, emphasizing the importance of clinical experience. The survey also explored students' familiarity with various aspects of CDR, such as treatment planning, appointment scheduling, patient communication, and specific procedural skills. The results revealed that students had limited confidence in these areas, highlighting the need for targeted improvements in online CDR education to address these gaps. This necessity is underscored by the ongoing relevance of CDR, especially for older patient populations. Overall, this study highlights the potential of online learning to improve dental students' proficiency in CDR while also identifying specific areas for educational focus.
In higher education, the acceptance of online learning is due primarily to its time and cost efficiency. As physical and online classrooms increasingly merge, dental students must adapt to online learning environments. Prosthodontics is a comprehensive subject, and internships are necessary for dental graduates to develop clinical, communication, and teamwork competencies. Internships are pivotal in cultivating patient-centered attitudes and behaviors and thus significantly enhance students' future clinical performance. Therefore, assessing students' attitudes and performance is crucial in evaluating the success and value of online learning. Our survey revealed interns' strong optimism about online learning for complete dentures: 61.90% enjoyed online learning, 71.43% were motivated to continue, and 82.54% found it beneficial for CDR (Fig. a-c). However, dental students still require hands-on training and opportunities to apply their skills clinically. We advocate early clinical exposure and active preclinical prosthodontic teaching methods; sole reliance on the internship year for the acquisition of procedural skills is inadequate. Dental students across all specialties need efficient access to educational materials. Research has indicated that the effectiveness of online learning is on par with, or exceeds, that of face-to-face methods. Chang et al. reported a 5–10% improvement in learning efficiency when blended learning was used compared with traditional methods. The positive perception of online learning among students and lecturers suggests its potential integration into post-COVID-19 curricula. However, given that stomatology focuses on clinical practice, the lack of practical experience may impede the enhancement of clinical skills, necessitating further research to evaluate the effectiveness of online learning in such practice-oriented subjects. In dental education, online learning can leverage advanced digital resources to significantly enrich the learning experience, particularly in CDR techniques. For example, on-demand, enhanced videos equipped with real-time subtitles that capture the presenter's dialogue, along with concise text bullet points and summary pages, offer a robust platform for augmenting knowledge acquisition, enhancing perceptual skills, and improving clinical performance in dentistry. In addition, custom-built simulation models for impression-taking and tooth arrangement exercises can substantially improve online learning outcomes for dental students, fostering practical skills in complete denture impression making and tooth positioning. Furthermore, multimedia learning applications, such as video demonstrations of artificial tooth placement and patient case studies, can uphold or even increase the quality of dental education, effectively bridging the gap left by the absence of face-to-face instruction. Finally, the deployment of AI-driven e-learning tools, such as the Generative Pre-trained Transformer 4 (GPT-4) model by OpenAI, exemplifies a forward-thinking approach to training. Such models facilitate an immersive learning environment in which students can engage in realistic diagnostic conversations with virtual patients, thereby honing their diagnostic capabilities in a controlled yet lifelike setting.
Furthermore, virtual reality (VR) has emerged as a transformative force in dental education, heralding a new era characterized by immersive training environments and immediate feedback mechanisms. This innovation facilitates the acquisition of standardized skills among students, bridging the gap between theoretical knowledge and practical expertise. The advent of VR simulation-based pedagogy marks a significant shift in educational paradigms, encompassing both undergraduate and postgraduate training. It acts as a complement, or in certain contexts an alternative, to conventional training methodologies across various dental specialties. Although VR simulators cannot entirely supplant traditional hands-on training, their utility and effectiveness in specific educational scenarios are undeniable. In particular, VR technologies, in conjunction with three-dimensional computer models and simulators, are proving to be invaluable assets in the comprehensive management of edentulous patients. Research by Mansoory et al. highlights the efficacy and utility of VR in facilitating learning related to the neutral zone and tooth arrangement for edentulous patients, thereby fostering a dynamic, engaging, and successful educational experience. The integration of VR simulators with advanced technological frameworks, such as big data analytics, cloud computing, the proliferation of 5G networks, and deep learning algorithms, promises to further revolutionize preclinical dental training. Numerous dental colleges and universities have already begun integrating VR-based experimental teaching into their curricula, underscoring the feasibility and adaptability of such innovative teaching modalities. Educational researchers now have a responsibility to rigorously evaluate these novel online and VR-assisted teaching methodologies, with the objective of ascertaining their efficacy in comparison with traditional educational approaches and ensuring that the quality and efficiency of dental education are not only maintained but significantly enhanced. VR technologies provide immersive and interactive experiences that are particularly beneficial for dental education, where hands-on practice is essential. In complete denture rehabilitation, VR can re-create clinical environments, enabling students to practice crucial techniques such as impression-taking, jaw relation recording, and denture fitting in a controlled and highly realistic setting. This immersive experience allows students to hone their motor skills and decision-making abilities in a safe environment where mistakes can be made without compromising patient safety. VR also enhances the visualization of complex anatomical structures and of the interaction of dentures with oral tissues, deepening students' understanding of denture design and function, a critical aspect that is often challenging to master through traditional methods. Artificial intelligence (AI) complements VR by offering personalized learning experiences, assessing student performance in real time, and providing instant feedback. AI-driven platforms can identify areas where students struggle, such as achieving proper occlusion or understanding material properties, and offer tailored resources to address these gaps. AI also enhances the accessibility and scalability of education by adapting content to different learning styles and paces, which is particularly valuable in online learning environments.
Integrating VR and AI in complete denture rehabilitation education also offers new opportunities for dynamic and interactive assessments of practical skills and clinical decision-making; however, adopting these technologies requires significant investment and careful planning to ensure effective integration into educational curricula . Despite these challenges, VR and AI hold promise for revolutionizing dental education by making it more immersive, personalized, and effective in preparing students for clinical practice.
The manuscript discusses the importance of CDR training for dental interns, emphasizing the role of online learning in enhancing education and highlighting the need for comprehensive understanding, skill development, and the integration of innovative teaching methods. The study explores dental students’ attitudes toward and knowledge of online learning of CDR, revealing not only engagement but also gaps in readiness and confidence, and suggesting the necessity of improving online CDR education. This study also highlights the ongoing relevance of CDR, in particular for older populations, and the need to integrate theoretical knowledge with innovative technologies such as virtual reality into dental education. The results suggest that a balanced approach to online and traditional learning is crucial for equipping future dental professionals with the necessary skills and confidence to succeed in clinical practice.
|
Equity culture in pediatrics | f02a827c-656d-453f-b8d7-b4f6b7f228f1 | 7682686 | Pediatrics[mh] | |
Predictors of Seizure Outcome after Repeat Pediatric Epilepsy Surgery: Reasons for Failure, Sex, Electrophysiology, and Temporal Lobe Surgery | 4e98ced8-bfaf-4015-bd84-bae1392ce5aa | 8918369 | Physiology[mh] | Epilepsy surgery is an accepted treatment option for patients with drug-resistant epilepsy. Long-term seizure freedom is achieved in 60%–80% of selected patients with surgically remediable etiologies, such as mesial temporal sclerosis and developmental tumors. , The chance of seizure freedom is reported to be as low as 50% for non-lesional epilepsy or extra-temporal surgery. , A certain fraction of the operated patients continue to experience drug-resistant seizures, which significantly impairs their quality of life. Further adjustment of anti-seizure medications and palliative surgery such as vagus nerve stimulation are usually considered for those patients. Repeat resective surgery is another treatment option to control seizures, as seizure freedom is the most important outcome in patients with epilepsy, significantly improving quality of life and possibly decreasing seizure-related mortality. , The current evidence on repeat epilepsy surgery remains limited. One systematic review revealed that the chance of seizure freedom was 47% after repeat epilepsy surgery and that congruent electrophysiology, lesional epilepsy, and the reasons for surgical failure were predictive of seizure freedom. The chance of seizure freedom is known to decrease following every subsequent resective epilepsy surgery. The above evidence was mainly obtained from studies of adult patients. It is important for surgeons to identify the predictive factors for seizure outcomes when considering repeat interventions with limited efficacy. This retrospective study aimed to elucidate the predictive factors for seizure outcomes after repeat pediatric epilepsy surgery.
This was a retrospective descriptive study, and the manuscript was prepared in accordance with the Strengthening the Reporting of Observational Studies in Epidemiology guidelines. This study was approved by the ethics committees at the National Center of Neurology and Psychiatry in Tokyo, Japan (No. A2018-049). The requirement for written informed consent was waived due to the retrospective design.

Patients
This study included 39 pediatric patients (<18 years of age) who underwent repeat epilepsy surgery for drug-resistant epilepsy between June 2008 and June 2020 at our institution with a minimum 1-year postoperative follow-up. Only curative epilepsy surgeries were included, and palliative procedures such as corpus callosotomy and vagus nerve stimulator implantation were excluded. A total of 323 curative epilepsy surgeries were performed for pediatric patients during this period. Among them, 48 procedures were performed as repeat epilepsy surgery, including 39 procedures as a second surgery, 7 procedures as a third surgery, and one procedure each as fourth and fifth surgeries. Adjustment of anti-seizure medication was generally attempted before the indications of repeat surgery were considered. A comprehensive pre-surgical evaluation, including 3.0-T magnetic resonance imaging (MRI) and long-term video-electroencephalography (EEG) monitoring, was performed in all patients before surgery. At least one of the following additional examinations was performed before repeat surgery in all except two cases who had apparent residual lesions on MRI: magnetoencephalography (MEG), ictal single-photon emission computed tomography (SPECT), or fluorodeoxyglucose positron emission tomography (FDG-PET). Among the 48 presurgical evaluations before repeat surgeries, MEG, ictal SPECT, and FDG-PET were performed on 41 (85.4%), 30 (62.5%), and 31 (64.6%) occasions, respectively. Surgical indications were determined at the patient management conference attended by neurologists, pediatric neurologists, neurosurgeons, and certified epileptologists. Repeat surgery was generally indicated when localizing information was concordant between two or more modalities of evaluation.

Seizure outcome
Postoperative follow-up of the patients for evaluation was achieved through outpatient visits or at admission. The same anti-seizure medication as before surgery was generally continued for 1 year postoperatively. Postoperative seizure outcome was assessed using the International League Against Epilepsy (ILAE) classification.

Data collection
Candidate patients were first identified from the National Center of Neurology and Psychiatry Neurosurgical database. The following data were retrospectively collected from medical records: date of surgery, side and type of surgery, intracranial EEG evaluation, histopathological diagnosis of the surgical specimen, etiology of epilepsy, age of epilepsy onset, date of seizure recurrence, MRI findings, reasons for surgical failure, and postoperative seizure outcome at the last follow-up. The etiology of epilepsy was determined based on the histopathological diagnosis, neuroimaging findings, and other clinical information at the initial surgery. The reasons for surgical failure were determined based on the clinical course of the patients and patient management discussions that occurred before the repeat surgeries. The timing of seizure recurrence was classified as acute postoperative when the drug-resistant seizures recurred within 1 week after surgery.
Postoperative seizure outcome was categorized as seizure freedom (ILAE class 1) or no seizure freedom at the last follow-up.

Statistical analysis
Univariate analysis
Descriptive statistics were used to summarize the patient characteristics. The above clinical data were categorized, and Fisher’s exact test was used to examine the association with postoperative seizure freedom.

Multivariate analysis
Logistic LASSO regression analysis (logistic regression with L1 regularization) was used to identify clinical predictors for postoperative seizure freedom. , Because L1-regularized regression shrinks unnecessary coefficients strictly to zero, it allows for feature selection without any manually set threshold parameters, such as the significance level. The following preoperative clinical variables were included in the regression analysis: sex, side of surgery, etiology of malformation of cortical development, reasons for surgical failure, acute seizure recurrence, location of surgery limited to the temporal lobe, lobar or larger surgery, intracranial EEG evaluation, lesional MRI, and congruent EEG findings as categorical variables; age at surgery, duration of epilepsy, age at onset, and time to seizure recurrence were included as integer values. The hyperparameter of the logistic LASSO model was determined by leave-one-out cross-validation (LOOCV). The predictive ability of the logistic LASSO regression model was evaluated by drawing a receiver operating characteristic (ROC) curve, and the area under the ROC curve was calculated. R version 4.0.0 (The R Foundation for Statistical Computing) and the glmnet package version 4.1.2 were used for statistical analysis. Statistical significance was accepted at p <0.05.
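As an illustration, the univariate and multivariate analyses described above can be sketched in R, matching the reported environment (R 4.0.0 with the glmnet package). This is a minimal sketch rather than the authors’ actual script; the data frame `dat` and its column names are hypothetical placeholders for the variables listed in the Methods.

```r
library(glmnet)

# dat: hypothetical data frame with one row per patient; seizure_free is
# the binary outcome (ILAE class 1 vs not), and the remaining columns are
# the preoperative predictors listed above.

# Univariate screening: Fisher's exact test on a 2 x 2 contingency table.
fisher.test(table(dat$sex, dat$seizure_free))

# Design matrix for glmnet (categorical variables expand to dummy codes).
x <- model.matrix(seizure_free ~ ., data = dat)[, -1]  # drop intercept
y <- dat$seizure_free

# Logistic LASSO: alpha = 1 gives a pure L1 penalty; the penalty strength
# (lambda) is tuned by leave-one-out cross-validation, i.e., nfolds equal
# to the sample size (grouped = FALSE is needed for binomial LOOCV).
cvfit <- cv.glmnet(x, y, family = "binomial", alpha = 1,
                   nfolds = nrow(x), grouped = FALSE)

# Predictors with nonzero coefficients at the selected lambda are the
# features retained by the model; the rest are shrunk strictly to zero.
coef(cvfit, s = "lambda.min")
```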
Patient characteristics
The clinical characteristics are summarized in . The first surgery was performed before the age of three in 14 (35.9%) patients. The main seizure types were tonic seizure in 16, focal impaired awareness seizure in 13, epileptic spasm in 5, clonic seizure in 3, and focal to bilateral tonic-clonic seizure (FBTCS) in 2 patients. Five patients had FBTCS as a part of their habitual seizures. The majority of the patients had daily seizures before the first surgery. Five patients had a previous history of West syndrome and one of Ohtahara syndrome. The average interval between the first surgery and seizure recurrence was 6.4 months. Acute postoperative seizure recurrence was observed in 14 patients (35.9%). The type of first surgery included focal resection in 28, frontal lobectomy in 2, temporal lobectomy in 2, posterior quadrant disconnection in 3, other multilobar resection/disconnection in 3, and vertical hemispherotomy in 1 patient. The most frequent location of surgery was the frontal lobe. The etiology of epilepsy was malformation of cortical development in 33 patients (84.6%).

Seizure outcome
The postoperative course after repeat surgery is summarized in . In all, 16 patients achieved seizure freedom after the second surgery (41.0%). Among the remaining 23 patients, seven underwent a third surgery, and three of them achieved seizure freedom (42.9%). One patient underwent fourth and fifth surgeries, but their seizures did not improve (ILAE class 5). Overall, 19 patients achieved seizure freedom after repeat surgeries (48.7%). The postoperative seizure outcomes were ILAE class 2 in 1, class 3 in 2, class 4 in 10, and class 5 in 7 patients. The mean follow-up period after the last surgery was 54.2 ± 34.0 months (12–132).

Reasons for surgical failure
The reasons for surgical failure were roughly divided into technical limitations in surgery (n = 6) and diagnostic limitations in the identification of the epileptogenic zone (n = 33). Technical limitations included incomplete disconnection during hemispherotomy (n = 1) or during posterior quadrant disconnection (n = 2), and residual epileptogenic lesion that was recognized postoperatively on MRI (n = 3). Diagnostic limitations were further categorized into larger epileptogenic zones (n = 30), mislocalization of the epileptogenic zone (n = 2), and the emergence of a new epileptogenic zone (n = 1). The larger epileptogenic zones were classified as such because the epileptogenic zone was suspected from repeat evaluation in the same or a contiguous area to the first diagnosis, and the second surgery was performed in the area next to the first surgery. This occurred near the eloquent area in six cases. Mislocalization was determined based on minimal improvement after the first surgery and if the second surgery was performed in an area distant from the first surgery. The emergence of a new epileptogenic zone was observed in a patient with tuberous sclerosis complex. The left occipital tuber became epileptogenic 7 years after the removal of the right central tuber in this patient.

Univariate analysis
The univariate analysis of postoperative seizure freedom after repeat surgeries is summarized in . Female sex, congruent EEG findings, and surgical failure due to technical limitations were associated with postoperative seizure outcomes ( p <0.05).
Multivariate analysis
Logistic LASSO analysis revealed that six clinical factors were expected to be predictive of postoperative seizure outcome: female sex, surgical failure due to technical limitations, surgery limited to the temporal lobe, congruent EEG findings, lesional MRI, and right-sided surgery. The estimated coefficients are presented in . Female sex, surgical failure due to technical limitations, congruent EEG findings, lesional MRI, and right-sided surgery were predictive of seizure freedom. Surgery limited to the temporal lobe was predictive of residual seizures. This result was consistent with the univariate analysis. The coefficients of lesional MRI and right-sided surgery were small compared with other factors, suggesting a minor contribution to the outcome. The ROC curve of the regression model is shown in . The area under the ROC curve was 0.91, suggesting sufficient performance as a predictive model.
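Continuing the hypothetical sketch from the Methods, the ROC curve and its area could be computed from the fitted model as below; pROC is one commonly used package, not necessarily the one the authors used, and evaluating on the same data used for fitting tends to give an optimistic AUC.

```r
library(pROC)

# Fitted probabilities of seizure freedom at the CV-selected lambda,
# using the cvfit, x, and y objects from the earlier sketch.
prob <- as.numeric(predict(cvfit, newx = x, s = "lambda.min",
                           type = "response"))

# ROC curve and area under the curve (the paper reports an AUC of 0.91).
roc_obj <- roc(response = y, predictor = prob)
plot(roc_obj)
auc(roc_obj)
```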
This institutional retrospective study of repeat pediatric epilepsy surgery revealed that the chance of seizure freedom was 41.0% after the second surgery and 42.9% after the third surgery. Cumulatively, 48.7% of patients achieved seizure freedom after repeat epilepsy surgery. Female sex, surgical failure due to technical limitations, first surgery limited to the temporal lobe, congruent EEG findings, lesional MRI, and right-sided surgery were predictive of postoperative seizure outcome in the multivariate analysis. The contribution of the first four factors was larger than the others based on the magnitude of the coefficients .

The postoperative seizure outcome after repeat epilepsy surgery was below 50%. The chance of seizure freedom after repeat surgery was 48.7% at the last follow-up in this study. This figure is in line with those of previous studies. One meta-analysis including 782 patients from 36 studies reported that the overall rate of an Engel I outcome after repeat resective epilepsy surgery was 47%. The largest retrospective study, from the Cleveland Clinic, showed that 42% of the patients with one prior surgery and 33% of those with two or more prior surgeries had Engel I outcomes 2 years after repeat surgery, suggesting that the chance of seizure freedom decreases after every subsequent surgery. Reoperation for failed epilepsy surgery is challenging, although a successful outcome is expected in a certain proportion of patients. Repeat epilepsy surgery should be carefully performed in selected patients. Advanced neuroimaging studies play an important role in identifying the residual epileptogenic zone. The majority of our patients underwent MEG, ictal SPECT, and FDG-PET in the repeat presurgical evaluation.

Predictive factors for seizure freedom after repeat epilepsy surgery were similar to those for the initial surgery. Congruent electrophysiological findings and lesional pathology were reported from the meta-analysis as predictive of better outcomes. Tumors, cysts, and vascular malformations were categorized as lesional pathologies, but focal cortical dysplasia (FCD) and hippocampal sclerosis were not included in that meta-analysis. The initial pathology of FCD and mesial temporal sclerosis was associated with poor seizure outcome after repeat epilepsy surgery. Lesional MRI at the initial surgery was predictive of seizure outcome in our study, but the majority of our patients had cortical dysplasia. FCD and other malformations of cortical development can be categorized as lesional in some studies. , Abnormal MRI findings prior to the initial surgery showed trends related to seizure freedom; however, they were not predictive in multivariate analyses. , In contrast, congruent EEG findings are frequently reported as predictors of seizure freedom. , – The presence of remote, multifocal, or generalized epileptiform discharges is indicative of poor seizure outcomes. , ,

Female sex is a possible predictive factor for better seizure outcomes after repeat epilepsy surgery. Sex differences have been reported in the electrophysiological and metabolic presentation of mesial temporal lobe epilepsy with hippocampal sclerosis (MTLE). However, sex has never been raised as a factor related to postoperative outcome after epilepsy surgery. , Interestingly, one recent study found female sex to be a predictive factor for seizure freedom after “repeat” epilepsy surgery. Sex is considered an inherent biological marker of surgical refractoriness, together with the tendency for secondary generalization.
Men tend to have more secondarily generalized tonic-clonic seizures than women with MTLE. One study with a two-hit rat model of MTLE showed that males were more vulnerable to epileptogenesis than females. Thus, sex may be a biological factor for epileptogenesis after surgical intervention.

First surgery limited to the temporal lobe was associated with worse seizure outcomes in this study. No consistent relationships were found between the location of surgery and outcome in previous studies, partly due to the different patient populations studied. , , Temporal lobe surgery is most frequent in studies that include adult patients. , A meta-analysis of repeat epilepsy surgery found a nonsignificant relationship between temporal lobe surgery and better outcomes. Our study focused on pediatric epilepsy surgery. The most frequent surgical location was the frontal lobe, and the majority of surgeries included the extra-temporal region in this study. Part of the failure after temporal lobe surgery is attributed to epileptogenic foci in the extra-temporal limbic system, the so-called “temporal-plus epilepsy.” We have reported that postoperative seizure outcomes were paradoxically worse in patients who underwent invasive presurgical exploration limited to the temporal lobe.

Reasons for failure of the initial surgery are important predictors of seizure outcomes after repeat surgery. However, reasons for failure are difficult to formulate: a retrospective review of the records may not contain the information needed to correctly infer the reason, and there could be multiple reasons behind the failure. Resection is often deliberately limited for surgeries close to functionally important areas, so whether a reason is a technically incomplete resection or an inaccurate estimation of the epileptogenic zone can be difficult to determine in these cases. “Surgery-related” factors for failed initial surgery, which include extension of the epileptogenic zone into functional areas, missed lesions, incomplete resections, lesional recurrences, and improperly categorized epileptogenic areas, were reported to be predictors of seizure freedom after repeat epilepsy surgery. This is in comparison to “disease-related” factors, which include the emergence of a new epileptogenic zone and diffuse or bilateral epileptogenic zones. Repeat surgery for the cases with technical limitations, such as obvious residual lesions and incomplete disconnection during hemispherotomy/posterior quadrant disconnections, resulted in postoperative seizure freedom in this study. It is important to check whether the planned surgery was performed completely when seizures recur.

Right-sided surgery was predictive of seizure freedom in this study, although the contribution may be small. This is partly explained by the fact that a more reserved surgery would be performed on the dominant hemisphere, or that a more radical surgery would be performed on the non-dominant hemisphere.

A retrospective study design with a relatively small number of subjects provided only low-level evidence in this study. Statistical comparison was performed mostly on two categorical variables, and the study might not have sufficient power to detect other meaningful factors. Our study only included pediatric patients, reflecting the characteristics of our institution. Careful interpretation is necessary to generalize our findings.
Seizure freedom was observed in 48.7% of patients after repeat epilepsy surgery, in line with previous observational studies. Female sex, surgical failure due to technical limitations, congruent EEG findings, lesional MRI, and right-sided surgery were predictive of postoperative seizure freedom, and first surgery limited to the temporal lobe was predictive of unfavorable outcomes. Reoperation after failed epilepsy surgery is challenging. Consideration of the above predictive factors can be helpful in deciding whether to reoperate on pediatric patients whose initial surgical intervention failed.
|
Assessing knowledge of herbal medicine course for dental students | f2c81d32-1f25-4f38-b2f2-d348989cdf59 | 9719615 | Pharmacology[mh] | The use of herbal medicine/supplements is popular throughout the world, including developed countries . In 2007, 17.7% of the United States’ adult population had used some form of herbal supplement , spending about $15 billion on them . These supplements are also known as “non-vitamin, non-mineral, natural products” . Many individuals take them for disease treatment and pain related to the back (17.1%), neck (5.9%), joints (5.2%), and arthritis (3.5%). Most users were females and above 40 years old . Herbal medicine is part of the complementary and alternative medicine (CAM) field , which focuses on the treatments and procedures administered alongside or instead of conventional therapy . CAM is divided into three groups: (a) natural products; (b) mind and body practices; and (c) other complementary health approaches . Thus far, most research on knowledge and perception has focused on adults or students . Several studies have also been conducted on medical students . However, very limited research has been done among dental students, and the usage, potential benefits, and side effects of herbal medicines have not been confirmed. Hence, it was suggested to conduct a short course about herbal medicine based on the optimal available evidence on this topic. The aim of this study is therefore to present another effort to improve dentists’ knowledge through the introduction of a specialized course in dental herbal medicine at the Faculty of Dentistry at King Abdulaziz University (KAUFD). The course’s goal is to increase dental students’ knowledge, resulting in better dental care, and to produce dentists equipped with the necessary awareness about dental herbal medicine.

Hypothesis: After the course, the dental students will have expanded their knowledge of the uses of herbal medicine in dentistry compared to before the course.

Intervention: King Abdulaziz University Faculty of Dentistry (KAUFD) developed a herbal medicine course to improve students’ knowledge about herbal medicine uses, impacts, and side effects. The course, which lasts a total of four hours, is held over two days in the Fall semester. It is optional and has a didactic component only, in terms of lectures and discussion. The predoctoral program lasts six years in total; the herbal medicine course was introduced for fourth-year students, which is their first year of clinical practice.

Study Design: The study consisted of two groups: fourth-year students, who were invited to take the course as a test group, and sixth-year students, who did not take the course, as a control (Fig. ). Neither group was informed at the beginning whether they were in the test or control group. The study was approved by the ethical committee at KAUFD (# 295-10-21). The lecture was created according to the evidence-based dentistry concept of what has been published in herbal medicine . In other words, more credit was given to high-quality meta-analyses, systematic reviews, randomized clinical trials, and cohort studies, while less credit was given to low-quality cross-sectional studies and reviews.

Survey validation and reliability: A survey containing 16 multiple-choice questions, which included an “I don’t know” option for several questions, was distributed before and one month after the course. Survey questions were established based on the materials to be covered in the lecture.
The survey was tested for content validity and distributed to 8 individuals with expertise in some aspect of the subject matter. These individuals included herbal medicine experts and dental consultants in oral medicine, restorative dentistry, periodontics, and maxillofacial surgery. They were asked to check and rate the importance of each question using a five-point Likert scale (from 1 = very important to 5 = not important) and determined whether an item should be included in the questionnaire. For face validity, the same survey was reviewed by four different fifth-year students to confirm the clarity of the questions before distribution. They repeated the survey a second time after a one-week period, and the results were compared for consistency by calculating kappa statistics, which ranged between 0.78 and 0.86. The internal consistency (Cronbach’s alpha coefficient) was 0.738 (95% confidence interval [CI]: 0.671–0.795) for periodontal disease and 0.806 (95% CI: 0.757–0.848) for caries. We modified the web survey based on the results of face and content testing, as well as the results of the reliability testing. Questions were related to: (1) demographics, such as gender, level, and marital status (4 questions); (2) personal opinion and use of herbal medicine (7 questions); (3) herbal medicine and dentistry (5 questions). Participants understood that the course was voluntary and would not be graded. The participating students also completed an electronic consent form. The student ID number was used as the link between the two surveys.

Sample size and data analysis: This is a pilot study, as we did not find a similar study design in the dental field. It was a convenience sample of fourth- and sixth-year students who voluntarily participated in this study. All data were entered and analyzed using SPSS version 21. Responses to each question were summarized to create an overall frequency and percentage. The sum score of knowledge for each participant was calculated based on the ability to identify the potential use of herbs in dentistry with high-quality evidence (correct answers) or the total answers for periodontal disease and caries. To do this, we first identified all the systematic review, clinical trial, and observational study (cohort and case-control studies only) articles that linked herbs and dentistry. We then identified whether a herb could be potentially beneficial in periodontal disease or caries, or not at all; these answers were considered the correct answers. We analyzed all the students’ answers (total), including incorrect answers. Pre- and post-course scores were compared using the Wilcoxon signed-rank test. A significant difference is described as a p value less than 0.05.
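The reliability statistics and the pre/post comparison described above map onto standard functions; the authors report using SPSS, so the R sketch below is only an illustrative equivalent, and all object names (ratings, perio_items, caries_items, pre_score, post_score) are hypothetical.

```r
library(psych)

# Test-retest agreement for the face-validity check: two ratings of the
# same survey taken one week apart, as a two-column matrix of responses.
cohen.kappa(ratings)$kappa            # reported range: 0.78-0.86

# Internal consistency (Cronbach's alpha) of the knowledge items,
# computed separately for the periodontal disease and caries domains.
alpha(perio_items)$total$raw_alpha    # reported: 0.738
alpha(caries_items)$total$raw_alpha   # reported: 0.806

# Pre/post comparison of each student's summed knowledge score using the
# paired Wilcoxon signed-rank test, with significance at p < 0.05.
wilcox.test(post_score, pre_score, paired = TRUE)
```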
Description of the sample: The pre-course survey was sent to 316 students (152 fourth-year and 164 sixth-year students), and the response rate was 100%. However, 114 of the fourth-year respondents (74.0%) agreed to take the course. Of the total fourth- and sixth-year students, 112 fourth-year students (73.7%) and 64 sixth-year students (39.0%) answered the post-course survey (Fig. ). There were 37 females and 27 males among the sixth-year students and 61 females and 53 males among the fourth-year students. Only two students were married; both were female, one from each year group.

Overall personal opinion and use of herbal medicine: The fourth-year students (test group) displayed evidence of a higher overall knowledge score after the course in herbal medicine related to periodontal disease in total and correct answers (mean 4.48 ± 4.13 and 3.73 ± 3.31, respectively) compared to before the course (mean 0.84 ± 1.23 and 0.74 ± 1.16, respectively) ( p -value < 0.001). The post-course periodontal disease total (4.48 ± 4.13 vs. 0.59 ± 0.87) and correct answers (3.73 ± 3.31 vs. 0.52 ± 0.79) differed significantly between fourth-year (test) and sixth-year (control) students ( P -value < 0.001).

Individual personal opinion and use of herbal medicine: The majority of the participants believed that it is beneficial to use herbal medicine (85.4%) and had not noticed any side effects during personal or other use (93.7%) (Table ). However, only 69.6% of them had previously used herbal medicine. Among those who had not yet used it, the main reasons for not using it were “do not know much about it” (77.1%) followed by “no reason” (55.2%). The main sources of information regarding herbal medicine were elderly relatives (77.8%), followed by the Internet (51.6%) and friends (44.3%). Most agreed that their patients should tell the physician if they used it (90.5%). However, some barriers were identified, such as insufficient scientific evidence (62.7%) and lack of trained professionals (37.0%) (Table ). Among those who had used herbal medicine before, ginger was the most commonly used remedy (63.7%), followed by green tea (54.0%), black seed (43.3%), and cinnamon (42.7%) (Table ).

Herbal medicine and dentistry: More than half of the participants were unsure about the importance of herbal medicine in dentistry (52.5%) (Table ). However, they mentioned that the most common herbs used in dentistry were clove (62.9%), followed by curcuma turmeric (54.7%) and meswak (43.0%) (Table ). Meswak and commiphora myrrha were the most commonly mentioned remedies for periodontal disease before and after the lecture, while clove and meswak were the most common herbs mentioned for use in caries prevention among the test group (Table ). The answers were almost identical for the control group; however, the percentages were much lower (Table ).

Sum score of knowledge: A significant improvement was seen in all subjects’ total knowledge regarding periodontal herbs (mean of answers was 0.83 before the program and 3.09 one month after) ( p < 0.001) (Table ). The mean of correct answers also increased significantly (0.70 and 1.58, respectively, p < 0.001). The sum scores of knowledge of total and correct answers regarding caries were higher after the program.
However, it was not statistically significant (Table ).

Several studies have been conducted to assess the perceptions and knowledge of different population groups regarding herbs . However, few studies have been conducted in the dental field . In the medical field, Yeo et al.
found that 92% of medical students believed that conventional medicine would be more effective than herbs . In our study, 85.4% of participating dental students believed that herbs are beneficial. Their main sources of information were the elderly, the Internet, and friends. In the dental field, only 25% believed that herbal medicine is beneficial, and the remainder were either unsure or did not think so. The main reasons were insufficient scientific evidence and lack of trained professionals, which is in agreement with Harris et al., who found that 53.0% of students did not prefer to use herbal therapies if there is no scientific evidence to support them .

Most oral health issues are associated with bacterial plaque, the removal of which will improve oral health . Mechanical or chemical removal of plaque is most common. One chemical method is the mouth rinse, which can prevent plaque formation or facilitate its removal; these treatments are an adjunct method, and herbal medicine has been introduced and tested in different forms to prevent the development of plaque chemically . Several published articles demonstrate the potential benefits of herbal medicine against dental caries and periodontal disease . These treatments could be safer, owing to reduced toxicity, and cheaper than chemical drugs. For example, neem, eucalyptus, tulsi, and clove have antibacterial properties, which can help to treat forms of gingival inflammation . Herbal medicine is useful in several cases such as caries prevention, toothache, mouth ulcers, gingivitis, oral thrush, and hairy tongue . Some of these herbs have been recommended even for titanium implant coating, such as Malus domestica (apple), and for periodontal filler in periodontal regeneration, such as Cissus quadrangularis (veldt grape) and Carthamus tinctorius (safflower) .

Arabic culture overall has a strong belief in herbal medicine, due to its significant historical background in traditional Arab medicine . However, current research in this field in the Arabic region is limited, with little up-to-date knowledge on the Arabic forms of herbs . According to a survey, out of the 2,600 plant species in the Middle East, more than 700 plants are known to be used as medicinal herbs. Currently, traditional Arab medicine uses only 200–250 plant species for the treatment of multiple diseases . The current status of Arab herbal medicine is concerning, because it is not part of the curriculum nor supported by any specific academic programs. In other countries, India has 57 universities and research institutes that focus on traditional medicine, while South Korea has established 12 universities and institutes focusing on traditional Korean medicine .

A possible limitation of this research is that only about 75% of the participants took the course. We are not sure about the grade level of the students who did not participate, and we did not evaluate whether the course would cause any burden to their studies. Another problem is that the students knew the course was not graded and thus may not have taken it seriously. Moreover, we compared fourth-year students (test) with sixth-year students (control), which could create bias because the sixth-year students have been exposed to more dental materials/patients and may be more aware of herbal medicine. However, the baseline analysis did not reveal any significant difference between them. This study also did not evaluate the family background of the participants, which might affect the results.
Another limitation is the dropout in the control group (60%), which might affect the results; there was also a chance for the control group to read more about herbal medicine and educate themselves before the follow-up. However, they did not know the actual date of the follow-up, and there were no significant results when we compared knowledge changes in the control group. In addition, as this was the first run of the course, we expected to face these difficulties and tried to encourage and remind the students to participate in this research.

Herbal medicine has a potentially positive impact on dentistry. However, these effects have not been fully investigated and have received insufficient attention from academic institutions. A short educational program on medicinal herbs provided to dental students can improve their knowledge of the field. This will help increase awareness about the use and potential side effects of herbal medicine. However, further investigation is necessary to assess the long-term effects of the program.
Codesigning a Digital Type 2 Diabetes Risk Communication Tool in Singapore: Qualitative Participatory Action Research Approach | dbe37edd-bf3a-40a9-8223-d051602b1b56 | 11576603 | Health Communication[mh] | Accelerating rates of diabetes incidence have given rise to a global public health epidemic. Diabetes imposes a large burden of morbidity and mortality, as well as an economic burden, on society. Lifestyle changes, such as weight management, physical activity, and healthy eating, can reduce the risk of developing type 2 diabetes (T2D) by up to 53% . Singapore’s government has recognized the magnitude of T2D as a public health problem, declaring a “War on Diabetes.” Screening has been prioritized, and resources have been allocated to promote physical activity, including the “National Steps Challenge.” In a Singapore National Health Survey, 70% of respondents who were unaffected by diabetes reported having gone for screening within the recommended time, and 87% strongly agreed that exercise and healthy eating can control the risk of diabetes . However, the uptake of recommended behavior change remains limited; a 2019 study found that only 28% of Singaporeans met the Health Promotion Board’s physical activity recommendation of 150 minutes per week, and only 37% ate 5 daily servings of fruits and vegetables . This illustrates the tenuous link between knowledge of disease and the adoption of preventative behaviors. Effective risk communication is essential in public understanding of their health status and promoting positive behavior change . Successfully and accurately increasing risk perception has been demonstrated to increase behavioral intention . However, presenting risk information alone has small effects on cognitive processes for behavior adoption unless actionable information that enhances autonomy and self-efficacy is also given . Protection Motivation Theory incorporates these 2 elements and suggests that 2 pathways influence the subsequent intention of practicing health-promoting behaviors: threat and coping appraisals . Threat appraisal considers the risk perception of T2D through perceived severity and perceived vulnerability. Coping appraisal considers if the recommended behavior is actionable and efficient given the perceived costs and benefits. Our previous qualitative work in exploring lay perceptions of T2D suggests that both the perceived threat of T2D and the coping appraisal related to prevention did not provide sufficient motivation to undertake lifestyle changes . Perceived threat is low as complications of T2D, such as limb amputations and blindness, were seen downstream of T2D onset and can be prevented with management of T2D after diagnosis. The centrality of food in Singapore’s culture also resulted in a high perceived response cost, resulting in a negative impact on coping appraisal. Messaging to inform individual risk and promote preventative measures are needed to influence these gaps in threat and coping appraisal accordingly. One way to identify individuals at increased risk of diabetes is to measure blood sugar or hemoglobin A 1c (HbA 1c ; an estimate of mean glucose). The population is often dichotomously classified as having prediabetes if they have levels of these parameters above a threshold of 5.7% HbA 1c . This approach is problematic in 2 ways. First, dichotomous messaging that is used in Singapore and in many other places may give rise to a false sense of security in those who fall just below the threshold. 
Given that the risk of T2D is continuous, we may miss opportunities to motivate behavioral change in those at lower, but nevertheless elevated, risk of T2D and who would benefit from lifestyle change. Second, even though diabetes is characterized by elevated blood glucose, T2D is a multidimensional disease, and the current approach has been criticized for being overly gluco-centric . Risk prediction based on multiple variables, on top of blood glucose, has been shown to provide a better estimate of risk. Recent developments that include additional clinical parameters, such as body mass index, systolic blood pressure, triglyceride, and high-density lipoprotein cholesterol, have been shown to have good accuracy in predicting the risk of developing T2D . Adopting such a strategy can move away from being gluco-centric and shift preventative efforts upstream instead of waiting for individuals to become prediabetic. However, the interpretation of the output from these multivariate predictive functions can be challenging. Communicating risk through percentage risk over the next 10 years (absolute risk) has been shown to be falsely reassuring, as the absolute risk tends to be numerically quite small. Generally, when faced with scales (like 0-100), people do not find percentages below 50 concerning . Visual imagery or analogies are examples of relevant and meaningful risk presentations that can increase the intention for behavioral change . Following risk information, demonstration of how the risk can be reduced is critical, as the components of coping appraisal have been shown to be the strongest predictors of practiced behavior change . We sought to design and develop a novel risk communication tool to enhance threat appraisal while positively influencing coping appraisal. Leveraging participatory action research, we engaged and involved members of the public to identify feasible risk communication tools so that they are customized for the prevention of T2D in Singapore. To support future scalability, as well as to allow the risk communication tool to be dynamic and interactive, the tool was intended to be delivered digitally, likely as a website, to promote ease of access. The study objectives were to (1) identify key characteristics that contribute to an effective risk communication tool and (2) test and iterate to develop a culturally sensitive and meaningful risk communication tool that can motivate T2D preventative behaviors.
However, the eventual aim is to incorporate the multifactorial risk prediction model and convert its output into the framing and/or analogy based on the messaging concept that tests as the best received. Hence, to ensure these messaging concepts were realistic, we checked the feasibility and validity of the data with the experts who built the multifactorial risk prediction model. This left us with 3 designs for risk result presentation prototypes:
“Diabetes Onset”: the estimated age of onset of T2D based on one’s risk factors, designed as a fear appeal to elevate threat appraisal. The more elevated the risk factors, the sooner the estimated age of diabetes onset. This leverages the concept of rate advancement periods , which translates the impact of certain risk factors into the timing of chronic disease occurrence.
“Relative Risk”: the relative risk of T2D, presented using a 1-10 scale indicating where one’s risk score stood in relation to others. This translated absolute risk into standardized risk percentiles , making it more relevant and appropriately positioned for understanding one’s risk. To demonstrate that this risk increases over time with no action, the relative risks for now, in 5 years, and in 10 years’ time were presented.
“Metabolic Age”: the median age of the risk category based on one’s risk factors, presented for comparison against one’s chronological age. Metabolic age was reflected as older the more the risk factors were elevated; this is a form of relative risk measurement presented in a different way. Risk communicated through “age” tools has been shown to be effective in changing patient behavior, given the strong desire for delayed aging and continued youthfulness .
The risk functions were not integrated in this phase of development, so the prototypes presented a dummy risk results page according to the 3 designs outlined above. Each prototype also had an introduction, a data input page, and an intervention page. The introduction page was designed to address the constructs of threat appraisal by addressing perceived vulnerability, as 1 in 3 Singaporeans are diagnosed with T2D. The data page was prefilled with the parameters required for the multifactorial risk model (eg, age, BMI, parental history, hypertension, triglycerides, and HbA1c). The intervention page provided the opportunity for users to observe the impact of preventative action on their risk of T2D to increase self-efficacy and autonomy, facilitating their coping appraisal.
Study Design
To gather a diverse and wide set of input for our objectives, we used the “Patient and Public Involvement Hawker” (PPI Hawker) method . Hawker centers are open, noninstitutional public spaces with food stalls, where a large proportion of the Singapore population purchase their meals. The unstructured, short, and informal nature of this method facilitates engagement with those who do not usually participate in health research. To plan and conduct the “PPI Hawker” sessions, we followed the step-by-step guide described in the original publication . The study team was assisted by 4 lay facilitators. These facilitators were recruited from the ideation workshops and via a snowballing approach. The recruitment strategy for facilitators was driven by ensuring that at least one of the facilitators could speak each of the local languages: Mandarin, Malay, and Tamil, in addition to English. To interact with participants, we went to 6 hawker centers across Singapore.
We approached hawker center patrons who appeared to be between 30 and 60 years of age. Purposive sampling was used to engage groups of the population not involved in previous discussions to ensure demographic diversity. For example, if we had interacted with mostly females of Chinese ethnicity, we would try to approach males and other ethnicities to diversify the perspectives we captured. Upon approaching potential participants, the facilitators would briefly introduce themselves and the study and ask if the patrons would be willing to engage in a 5- to 10-minute discussion. If the participants gave verbal consent, facilitators would provide more context and pose the questions. To avoid overwhelming participants and to moderate the time needed, we began by showing only 1 of the 3 paper-based prototypes during each interaction. Once the prototypes and prompts became more refined, in the latter sessions, we were able to show 2 prototypes during each interaction for comparison.
Data Collection
The lay facilitators refined the discussion prompts produced by the study team to reduce jargon and increase relatability. The initial discussion prompts were guided by 4 categories of inquiry: comprehensibility, relatability, usability, and impactfulness. These prompts evolved throughout the study as the prototypes were iterated. Since no identifiable data were collected, only the perceived age and ethnicity of each participant were noted. We began by presenting an A3-sized paper-based prototype to participants during each interaction. In the latter sessions, we presented the clickable prototypes on an iPad for participants to interact with. An A4 copy of the prototype presented in each encounter was used by a study team member to note feedback and suggestions, allowing circling and annotation of the different features participants may have been referring to. At the end of each session, notes and insights were discussed with the lay facilitators to ensure that all the comments and feedback were accurately captured and to determine whether we had reached thematic saturation and sample diversity for that particular iteration. Any discrepancies in understanding the data collected, or contradictory insights among interactions, were discussed and noted accordingly.
Prototype Iteration and Analysis
After each hawker session, notes were consolidated and summarized for each prototype, and the team assessed whether consensus had been reached regarding thematic saturation and sample diversity. The next session was scheduled once the prototypes and prompts were revised according to the identified insights. We revised the prototypes and prompts twice, which produced 3 iterations (sessions 1 and 2 as the first iteration, sessions 3 and 4 as the second iteration, and sessions 5 and 6 as the third iteration). At the end of all the sessions, JH and LLP revisited all the data collected using an inductive thematic analysis to identify the key characteristics that contributed to an effective risk communication tool and shaped the various prototype iterations.
Ethical Considerations
This study received approval from the Nanyang Technological University institutional review board (approval number IRB-2021-01-041). A waiver for written consent was approved; hence, verbal consent was obtained from each participant before each interaction. No identifiable information was collected, and each interaction was anonymously annotated using an arbitrary participant number. After each interaction, the facilitators offered to buy a beverage for the participant as a token of appreciation.
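To make the 3 risk presentation concepts described above concrete, the sketch below shows, in Python, one way a 10-year absolute risk produced by a multifactorial model could be translated into the “Relative Risk” (1-10 decile) and “Metabolic Age” framings; the “Diabetes Onset” framing would analogously search for the age at which modeled risk crosses a threshold. The reference risk distribution and the age-to-median-risk table here are hypothetical placeholders for illustration only; they are not the study’s actual model, categories, or cut-offs.

# Hypothetical reference data (NOT from the study's risk model): 10-year
# absolute T2D risks for a reference population, and the median modeled
# risk at each age in that population.
REFERENCE_RISKS = sorted([0.02, 0.03, 0.04, 0.05, 0.06, 0.08, 0.10, 0.13, 0.18, 0.25])
MEDIAN_RISK_BY_AGE = {30: 0.03, 35: 0.04, 40: 0.06, 45: 0.09, 50: 0.13, 55: 0.18, 60: 0.24}

def relative_risk_decile(risk):
    """Map an absolute risk onto a 1-10 scale: the decile of the reference
    distribution in which the score falls (1 = lowest, 10 = highest)."""
    percentile = sum(r <= risk for r in REFERENCE_RISKS) / len(REFERENCE_RISKS)
    return min(10, int(percentile * 10) + 1)

def metabolic_age(risk):
    """Return the reference age whose median risk is closest to this score,
    one plausible reading of the 'median age of the risk category' framing."""
    return min(MEDIAN_RISK_BY_AGE, key=lambda age: abs(MEDIAN_RISK_BY_AGE[age] - risk))

# Example: a 40-year-old whose modeled 10-year risk is 12%.
risk = 0.12
print(relative_risk_decile(risk))  # 8 -> risk sits in the 8th decile of peers
print(metabolic_age(risk))         # 50 -> displayed "metabolic age" of 50

Under these placeholder numbers, a 40-year-old with a 12% modeled risk would sit in decile 8 of the reference distribution and be shown a metabolic age of 50, ten years above their chronological age.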
Across the 6 hawker centers, we engaged with 112 participants, where 59 (56%) were perceived to be female. In total, 50 (45%) participants were identified as Chinese, 33 (29%) as Malay, and 25 (22%) as Asian Indian. The accompanying table contains the breakdown of participants across the different iterations. The key characteristics that shaped the iterations emerged as four main themes: (1) appeal and user experience, (2) trust and validity, (3) threat appraisal: salience of risk information, and (4) coping appraisal: facilitators for behavior change. Based on these findings, we were able to rapidly iterate and refine the prototypes. The different iterations of the prototypes are demonstrated in the accompanying figures.
Appeal and User Experience
Comments on first impressions across all the prototypes were often related to readability and appeal, especially in the first iteration. All participants expressed the need for more concise and accessible text and the inclusion of visual and interactive elements to increase sophistication and enhance the appeal of the risk communication prototypes. With the introduction of color in the second iteration, participants commented spontaneously on how intuitive their understanding of risk was. The risk communication tool’s availability only in English, rather than all 4 of the official languages of Singapore, was perceived as a limit to accessibility. Some participants experienced difficulty reading and suggested audio, video, or diagrams. There was a consensus that typing should be minimized and replaced with drop-down lists, checkboxes, or sliders. For the third iteration, clickable prototypes were presented on an iPad for participants to have a realistic experience of the risk communication tool. We observed that people interact with digital media copies and print copies differently. Participants rapidly clicked and scrolled through without taking the time to read each screen, whereas with the print copies, participants went through each section more carefully.
Trust and Validity
Many participants suggested the public’s perception of trustworthiness in risk communication would be increased if health care providers promoted the use of the tool. Furthermore, some noted it may help to track their risk over time and to incentivize them to maintain healthy behavior between appointments or screenings.
Some participants noted that the requirement to input health screening information (ie, blood pressure, triglycerides, HbA1c, etc) instead of just self-reported survey questions made the risk score generated more “real.” A participant positively referred to it as “based on the body and not just guessing from how many sugary drinks [someone] had.” Some suggested data integration between existing electronic medical records and the tool to observe risk trends over time. At the end of the first iteration, there was an overwhelming consensus that the “Diabetes Onset” prototype was too negative and inaccurate. Their perception was that predictions of when one would get diabetes were not grounded in evidence, and this made them distrustful of the tool. One participant questioned how we could know when he would get diabetes and asked, “Do you have a crystal ball?” Hence, we did not test this prototype in further iterations. The credibility of the institution providing the tool was important to participants. They shared preferences for the tool to be managed by governmental bodies or known health care institutions. They noted that the tool would need to include information on how the information entered would be managed, protected, and stored. Participants emphasized that without this, people may be reluctant to share their personal or health information. Some justified their concerns using examples of the increasing number of scams and false information and the potential for insurance companies to gain access to their data and adjust premiums where there was higher risk.
Threat Appraisal: Salience of Risk Information
Many participants appreciated the simplicity of the risk score presented in the “Relative Risk” prototype, as it helped them understand “how far off they were from either end.” The first 2 iterations also included the risk of diabetes in the next 5 and 10 years, but these were “too far off” to be meaningful for some participants. Instead, they suggested reflecting a single score accompanied by a reminder that risk increases with age. Reflecting on the term “Metabolic Age,” participants expressed concern that the tool may be intimidating and inaccessible, as it may not be clear how it relates to the risk of diabetes. However, upon probing, participants of all ages were able to explain the “metabolic age” concept and accurately perceive how it relates to one’s risk of diabetes. Participants anticipated that the “Metabolic Age” prototype would motivate behavior change because of the urgency generated by observing a metabolic age older than their chronological age. Both “Relative Risk” and “Metabolic Age” being relative, whether to others or to one’s chronological age, was noted to be helpful in answering the questions “Where am I?” and “Where should I be?” Participants joked about their desire to “beat (their) past self, and to beat others,” reflecting the local kiasu (fear of losing) mentality. In the third iteration, both of these prototypes were shown to participants to assess which of the 2 risk presentations they preferred, and reactions were mixed.
The familiarity of “Relative Risk” was perceived as the safe choice but had limitations in motivating behavior change, as it may be too “technical” and “boring.” In contrast, “Metabolic Age” required probing to be understood but was perceived as impactful and motivating for behavior change, helping “[their] body to get younger.”
Coping Appraisal: Facilitators for Behavior Change
On the intervention page, observing how the recommended behaviors could impact their risk of T2D was anticipated to be a powerful incentive to commit to change. The demonstrated impact of prospective behavior change on their risk scores was described as the “good news that followed the bad news.” Many participants referred to this as the most important part of the tool, as it showed what one can do to reduce their risk of T2D and the potential magnitude of this change. Feedback included the desire for personalization of the tool, with action items tailored to individual circumstances and focusing on areas where they were not “doing well enough.” For example, a participant shared that if their BMI is already low, they did not wish to see suggestions to eat a salad. Participants indicated that they would prefer the recommended behaviors to be small and specific steps that they could take in the context of their usual day-to-day lives. Several participants mentioned that their interest in the risk communication tool would be sustained if the risk results could be integrated with health and exercise trackers, such as Fitbit or Apple Watch. There were suggestions to follow models of rewarding accomplished goals to enhance motivation, as the “National Steps Challenge” does with financial incentives.
Principal Findings
We evaluated “Relative Risk,” “Metabolic Age,” and “Diabetes Onset” to assess which messaging would make the most effective risk communication tool. The predictive nature of “Diabetes Onset” was received poorly, likely due to its negative connotation. Future research evaluating predictive risk framing and messaging could consider shifting the concept into a positive frame, such as “T2D-free life-years,” to assess whether it is received differently. While “Relative Risk” was understood well due to its simplicity, “Metabolic Age” performed better in creating urgency to undertake preventative behavior. These 2 prototypes will require further testing to determine which will be more effective in motivating uptake of T2D preventative behavior and to inform future implementation. Compared with Heart Age and Lung Age , Metabolic Age refers to the complex process of metabolism rather than a single organ, which may require a greater level of health literacy to understand. This may explain why participants perceived the concept of Metabolic Age as less accessible. In addition, the terms “metabolic age” and “metabolic risk” have been used in multiple contexts and allude to many diseases. If such risk communication is implemented in the health system, appropriate education and awareness for the population will be necessary to avoid any misunderstandings. The differing urgency to undertake preventative behavior could be explained by the present bias heuristic, where potential future benefits are undervalued . The “Relative Risk” prototype presents a risk of developing diabetes in the next 10 years, whereas “Metabolic Age” presents the current status of metabolic functioning. Hence, the salience of the current status of the body creates a more relatable feeling of being at risk and a perception of immediate benefits, influencing the urgency of engaging in T2D preventative behavior. The intervention page was received as the most important part of the tool, which aligns with existing literature, where coping appraisal is the strongest predictor of subsequent behavior change .
The demonstration of potential improvement of their risk from preventative behavior was well-received and seen as the “good news following the bad news.” This perception can be explained by the preference for gain framing over loss framing in the context of disease prevention research . The desire to improve and regain control of one’s health status could be a nod towards Self-Regulation Theory (SRT) . A risk communication tool could act as an external stimulus or trigger to motivate the intention for behavior. To support sustainable behavior change, it is likely that more comprehensive interventions enhancing key components of SRT, like goal-setting, self-monitoring, and self-efficacy, will be needed as a follow-up strategy . Trust in the institution providing the tool was a crucial factor influencing the way the prototypes were received. The desire for the risk assessment tool to be linked to official health records, and hence protected securely by governmental institutions, illustrates the need for the intervention to be embedded within the larger health care context. Such implementation can provide an opportunity to leverage “pre-accumulated” interagency trust, enabling key players to coordinate the dissemination of information and interventions efficiently . However, integrating with official health care records may limit access to only those who have access to health care. This element of trust also materialized in the credibility of the risk result, which influences threat appraisal. Studies investigating risk communication during recent infectious disease outbreaks also found correlations between trust in the messenger and the public’s threat and coping appraisal, which impacted hygiene practices and physical distancing measures . The digital delivery of such a tool will need to pay close attention to user experience and concise messaging. As noted in the findings, appeal and ease of use often shaped users’ first impressions. A high barrier to use could negate the impact of the tool regardless of how effective the risk framing may be. The difference in engagement with the digital prototypes versus the paper-based prototypes is likely due to the shortened attention spans and demand for quicker gratification associated with current internet and technology use . The length, format, and core message of the tool will need to grab the user’s attention quickly and make the message very easy to digest. Throughout the development process, we adopted various participatory action research methods to design, test, and iterate culturally sensitive and meaningful risk communication tools for T2D.
The ideation workshops allowed us to codesign with participants based on specific insights, which allowed the evidence translation to occur through a lay perspective, reducing assumptions and preconceived biases. It has been recognized that when developing interventions, researchers and clinicians may fail to include significant design and content elements or may propose an incorrect design, especially when it comes to addressing the needs of minority groups . However, often, those who are already health-seeking are the ones to participate in traditional health research or in efforts in which we ask participants to come to us. Hence, the PPI Hawker method allowed us to bring the research to the public and engage with those who may not be as health-seeking, which appropriately reflects the users who would benefit from such a tool. Guiding decision-making based on public opinion and working closely with our lay facilitators increases the potential impact and accessibility for users with different levels of health literacy and cultural traits . The study’s strong focus on Singaporean culture and values may limit its generalizability. However, with Singapore’s diverse and multi-ethnic population, findings from the study could be used within the larger Asian context. Further, the key characteristics of risk communication tools identified, as well as the process of codesign, can be transferable to similar developments and contexts. In this study, the risk communication tool prototypes were prefilled with a dummy character’s risk profile. The reactions gathered were therefore anticipated, rather than actual, perceptions. Subsequent testing will benefit from having these prototypes programmed with the risk prediction model so that participants can enter their own health data and react to personalized data. Participants can then assess and experience their actual risk results for a more accurate assessment of the intention of behavior change. Further, gathering empirical evidence on the constructs of Protection Motivation Theory (PMT) as intended, namely threat and coping appraisals, in the different prototypes can provide a better understanding of how the theory translates to practice.
Conclusion
In this study, we used ideation workshops with key stakeholders to develop potential risk communication tools that address the gaps in threat and coping appraisals in regard to T2D risk and its preventative behaviors. We applied the “PPI Hawker” method to test and iterate on the prototypes. Participants were split between the “Relative Risk” and “Metabolic Age” prototypes as the preferred risk messaging. Further testing using functional tools will be conducted to accurately assess the efficacy of the risk communication tools in influencing the intention of positive behavior change. The insights on the design process and the valued characteristics of a risk communication tool can inform the future development of such interventions.
Implementation and Utility of the Da Vinci SP (Single Port) in Pediatric Urology | 7639aec5-4134-43e3-bc23-23ddd24dcbb9 | 11449982 | Pediatrics[mh] | The da Vinci SP® (Intuitive Surgical Inc., Sunnyvale, California) is the novel fourth-generation model of the da Vinci robotic surgical platforms. The SP is designed to perform complex procedures through a single 2.5 cm cannula. This cannula contains four instrument lumens, permitting three 6 mm EndoWrist® instruments, each with a wrist and elbow joint, and an 8 mm three-dimensional, high-definition endoscope. The endoscope articulates from 0 to 30 degrees from either a top or bottom position, improving visualization in challenging-to-reach areas. It must be stated from the outset that use of the SP in children remains controversial. Some pediatric urologists have been outspoken against use of the system in children. One common criticism has been that the U.S. Food and Drug Administration (FDA) has not approved the SP for pediatric surgical indications. Importantly, the FDA has not approved the da Vinci Xi® for pediatric surgery indications, either. The da Vinci Si® held pediatric indications for pyeloplasty and ureteral reimplantation; however, this system is being phased out of production. As such, all pediatric robotic surgery, whether performed with the SP or the Xi, is presently considered “off-label”. It should be acknowledged that single port (SP) and single-incision minimally invasive surgery is not new to pediatric urology. Laparoendoscopic single site (LESS) surgery has been published from both periumbilical incisions and low transverse incisions . Surgeons pioneering these approaches have highlighted the ergonomic challenges of operating in confined pediatric spaces with cross-handed instrumentation, especially without the use of articulating instruments. Recently, Liu and colleagues in Wuhan, China, reported using the da Vinci Xi to perform an infant pyeloplasty via a single periumbilical incision with a laparoscopic gel port . A key to understanding use of the SP is that at least 10 cm of working distance is necessary between the tip of the robotic cannula and the target anatomy. This distance ensures the instruments have enough room to exit the cannula, flex at the elbow, and triangulate at the target anatomy. If there is less than 10 cm of working distance between the trocar tip and target anatomy, surgeons will not be able to fully articulate the instruments, instrument clashes will increase, and the surgeon will find instrument movements to be jerky and unsafe. As working space within the pediatric abdomen and pelvis can be limited, some authors have adopted a “floating dock” approach. The floating dock concept describes a technique in which the robotic cannula is directed through the gel cap of a wound access sleeve already placed within the body. The robotic cannula “floats” outside the body but within the access sleeve, deploying the robotic instruments outside the body and effectively increasing and optimizing the distance from cannula tip to target anatomy. The floating dock also preserves sterility and maintains insufflation pressure. In this review, we provide an update on published reports of use of the SP in pediatric urology. The role for SP robotic surgery in pediatric urology is evolving. Some have reported longer operative times and difficulty in adapting the technology to pediatric patients compared to traditional multiport (MP) robotic surgery.
Though reported techniques and experiences remain sparse, there are attributes that make SP robotic surgery inherently attractive in children: improved instrumentation compared to laparoscopy, a cosmetically attractive incision compared to conventional MP robotic surgery, and the ability to perform upper tract and lower tract reconstructive and extirpative procedures, as well as specimen extraction, through a single, 3 cm incision. In 2021, Granberg and colleagues at Mayo Clinic published the first use of the SP in pediatric urology . The report primarily detailed a pyeloplasty in a 10-year-old girl. The authors cited that the SP had been utilized in 6 additional pediatric patients, aged 23 months to 14 years. The authors utilized a single incision for all cases and directly inserted the SP cannula through the abdominal wall and into the peritoneal cavity. With the exception of mean operative time, reported at 120 min, operative details were largely omitted. The group republished these 7 cases in a video report featuring an SP robotic appendicovesicostomy in a 14-year-old patient. This case was particularly novel in that it utilized a previous gastrostomy scar for placement of the SP cannula . The authors reported the same cohort of patients in a third publication in 2023 . Across the 3 publications of the 7-patient cohort, the authors noted the necessary 10 cm working distance and the loss of insufflation with introduction of assistant laparoscopic instruments through the instrument sleeve lumens as limitations of the SP in children. Notably, the authors directly inserted the robotic trocar into the abdomen and did not utilize the floating dock concept published in subsequent experiences. The authors reported a “minimal learning curve” in utilizing the SP, different from observations noted in subsequent publications from other groups . Kang et al. published the world’s first SP case series in pediatric urology in July 2021 . The South Korean series compared surgical outcomes between SP robot-assisted laparoscopic (RAL) pyeloplasty (S-RALP) and conventional MP robot-assisted laparoscopic pyeloplasty (M-RALP). The authors compared 15 S-RALP patients to 31 M-RALP patients. For the S-RALP group, the authors utilized a periumbilical approach with a gel access sleeve to achieve a floating dock. Median operative time was shorter for S-RALP at 2.4 h compared to 3.0 h for M-RALP. Console times were also reduced (1.5 h for S-RALP versus 2.2 h for M-RALP). Conversion to open surgery, analgesic use, estimated blood loss (EBL), postoperative pain scores, postoperative complications, and hospital stay duration were comparable between groups. Smith and colleagues at the University of Florida (UF) published their experience with the SP in August 2023 . Unique to this experience was the reported evolution of surgical technique: initial cases were performed with a low transverse floating dock approach and a separate periumbilical assistant trocar. Middle cases were performed with a periumbilical floating dock that included an assistant port. Later cases were performed with a 3 cm incision and floating dock directly over the pubic tubercle. The authors compared outcomes from their initial 11 S-RALP cases to 5 M-RALP cases during the same timeframe. The 11 patients who underwent S-RALP were older, ranging from 8 months to 17 years, while the 5 who underwent M-RALP ranged from 3 months to 14 months. S-RALP procedures had a longer overall operative time than M-RALP, with S-RALP taking a median 384 min versus 299 min for M-RALP.
There was improvement in the latter five S-RALP cases. In response to criticism of the operative times, the authors pointed out that their reported times accounted for the learning curve of a new technology, the evolution in technique, and interrupted anastomoses, and included all operative time, from initiation of cystoscopy through repositioning, incision, and robotic docking, to completion of all skin closure(s). All other outcomes, including hospital stay, opioid administration, and surgical success, were comparable between groups. The most recent adaptation of the SP in pediatric urology comes from the Cleveland Clinic, where Chavali and colleagues reported their extraperitoneal pyeloplasty technique applied to 6 patients, ranging from 12 months to 16 years . The authors utilized an off-midline, low transverse approach to access the ipsilateral retroperitoneum and affected kidney. Total operative time ranged from 178 to 240 min. Median hospital stay was 1 day, with 2 of the patients discharged on the same day as surgery. The authors reported no conversions to open surgery, readmissions, or complications, and reported surgical success in all 6 cases. The team acknowledged a potentially steep learning curve to using the SP in the retroperitoneal space of children. There are particular considerations in which an SP approach may provide unique value to children undergoing robotic surgery. One area of value may be the cosmetic outcome of the surgical scar. An SP transperitoneal approach to the kidney involves a 2.7–3 cm incision directly over the pubic tubercle, hidden well beneath virtually all bathing suits and undergarments. With a floating dock and positioning considerations, the renal anatomy is readily accessible . Preliminary data on patient experience and scar perception following pediatric SP transperitoneal pyeloplasty compared to open and MP pyeloplasty have been promising . A pilot study at UF compared validated patient-centered outcome surveys of 16 families whose children underwent SP surgery to those whose children underwent open and MP pyeloplasty during the same time span. For children 10 years and older, the patients themselves were surveyed as well. Data suggest families and patients view the SP experience similarly to those who undergo MP surgery. In the domains of scar perception, families and patients view their SP scars at least as favorably, if not more favorably, than do the open and MP patients. More patient-centered data are needed to draw firm conclusions. Beyond cosmetic considerations, an SP approach to pediatric urological robotic surgery may offer technical advantages in unique clinical scenarios. To date, these unique scenarios have included (1) utilizing previous open incisions and minimizing the scar footprint, (2) using a single low, hidden transverse incision for combined nephrectomy and specimen extraction, (3) combining upper tract renal or ureteral surgery with open lower tract surgery, all performed through the same low, hidden transverse incision, (4) accessing the renal retroperitoneal space via an offset transverse incision beneath the anterior superior iliac spine, and (5) accessing the deep pelvis via a transvesical approach for complex reconstruction demands. As Parikh and colleagues have shown, an SP approach may be particularly useful in the setting of previous open incisions . These authors utilized a previous gastrostomy closure incision for SP appendicovesicostomy creation.
The authors were able to complete the procedure without making any additional incisions for assistant trocars. At UF, surgeons have previously described using a single, low transverse incision for combined robotic upper tract and open lower tract urinary reconstruction and extirpative extraction . When a nephrectomy is necessary in combination with distal ureteral or bladder reconstruction, a single incision makes intuitive sense given the need for a specimen extraction site. Though the authors only performed the upper tract surgery via a transperitoneal approach, one could imagine a retroperitoneal approach using a low transverse single incision if both upper and lower tract surgery are needed (e.g., proximal ureteroureterostomy and distal ureterectomy in the case of complex complete ureteral duplication surgery). A final demonstrated application of an SP approach is the ability to work transvesically within the deep pelvis. The Cleveland Clinic group has reported the SP to be uniquely useful for a transvesical approach to a vesicovaginal fistula repair in a 9-year-old female with an extensive abdominal surgical history and limited transvaginal access . The anatomic and technical challenges to using the SP in children center on the 2.5 cm trocar width, the necessary ≥ 10 cm working distance between trocar tip and target anatomy, and the limited working space within the pediatric patient. As noted earlier, the floating dock helps increase working distance and feasibility for SP surgery in children. Perhaps counter-intuitively, locating the robotic trocar outside the body has introduced other working space difficulties: flattening the angle of the robotic boom in its approach to the target anatomy, which may on occasion create positional clashing between the robotic boom or instrument drives and the patient. When docking the SP from a low, transverse incision directly over the pubic tubercle, special positioning precautions can help minimize clashing between instrument drives and the lower extremities . The most notable limitation to widespread use of the SP in pediatric urology is the availability of the platform. It is unlikely that free-standing children’s hospitals can justify the cost of the SP given the versatility of the Xi. Until later generations of the SP can accommodate the unique anatomic challenges of operating in young children, the only pediatric urologists likely to have access to the SP will be those associated with primarily adult health systems. Given the increased operative times and steep learning curve demonstrated by some early adopters of the SP, continued use of the SP in pediatric urology should proceed alongside further research into two specific domains. The first domain is patient-centered outcomes research to investigate whether the single incision provides patients an advantage regarding pain, perceived surgical experience, and scar perception over the conventional multiport robotic approach. Future research in this area ought to compare the single SP incision to a multiport hidden incision endoscopic (HIdES) approach, as it is expected patients would be more satisfied with HIdES incisions than with conventional transperitoneal robotic trocars. The second research domain of interest, to demonstrate the value of the SP in pediatric urology, concerns surgical complications and length of hospital stay.
A panacea in pediatric urology would be same-day discharge, without sacrificing surgical success or complication rates, for complex upper tract reconstruction, all performed through a single, concealed incision. Retroperitoneal reconstruction may represent an opportunity to achieve this triad. To date, the Cleveland Clinic group is the only one to have shown these surgeries with same-day discharge. The next reasonable step would be to lower the incision until it is well beneath the anterior superior iliac spine and bathing suit line. The SP has been used safely and effectively in pediatric urology at four institutions: Mayo Clinic, Yonsei University in South Korea, University of Florida, and Cleveland Clinic. The technology has not been utilized uniformly across institutions. As such, the experiences reported in broad series are not directly comparable. The hallmark benefit of the SP robot has been a singular incision. The potentially improved cosmetic outcome and post-operative pain scores have only been preliminarily investigated at one center, and patient volumes were small. Operative times may be longer, and some have reported a steep learning curve, in transitioning from multiport to SP surgery. Unique applications for the SP include single-incision upper and lower tract surgery (e.g., nephroureterectomy or combined robotic upper tract and open lower tract surgery) and same-day retroperitoneal surgery. These unique applications, in combination with the potential for improved scar perception in the context of equivalent operative outcomes, justify continued use and investigation in appropriate settings.
Parikh N, Boswell T, Findlay B, Gargollo P, Granberg C. Single-port robotic pyeloplasty in a pediatric patient. Videourology 2021:53. 10.1089/vid.2020.011. First published use of SP in pediatric urology from the Mayo Clinic.
Granberg C, Parikh N, Gargollo P. And then there was one … incision. First single-port pediatric robotic case series. J Pediatr Urol. 2023;19(4):426.e1-.e4. 10.1016/j.jpurol.2023.03.038. Case series from the Mayo Clinic that included the earliest published use of SP in pediatric urology.
Smith JM, Hernandez AD, DeMarco RT, Bayne CE. Early experience with pediatric single-port robotic pyeloplasty compared to multiport robotic cohorts. J Urol. 2023;210(2):236-8. 10.1097/ju.0000000000003551. First North American case-control series from the University of Florida detailing the evolution of surgical technique up to the time of publication and comparing the SP and MP experience during that timeframe.
Chavali JS, Frainey B, Ramos R, et al. Single-port robotic extraperitoneal pediatric pyeloplasty using low anterior access: description of technique and initial experience. J Pediatr Urol. 2024. 10.1016/j.jpurol.2024.01.009. Latest series on SP use in pediatric urology from the Cleveland Clinic detailing an entirely extraperitoneal approach.
Kang SK, Jang WS, Kim SH, Kim SW, Han SW, Lee YS. Comparison of intraoperative and short-term postoperative outcomes between robot-assisted laparoscopic multi-port pyeloplasty using the da Vinci Si system and single-port pyeloplasty using the da Vinci SP system in children. Investig Clin Urol. 2021;62(5):592-9. 10.4111/icu.20200569. World's first SP case-control series in pediatric urology, comparing the SP to the MP experience, from Yonsei University in South Korea.
Uses of Molecular Docking Simulations in Elucidating Synergistic, Additive, and/or Multi-Target (SAM) Effects of Herbal Medicines

1.1. Synergistic, Additive, and/or Multi-Target (SAM) Effects

The overall aim of drug discovery is to obtain highly effective and safe drugs, with a low level of undesirable or toxic side effects. The introduction, and more widespread use, of the target-based strategy for drug discovery seems to have coincided with a period of steady decline in the productivity of new drugs. The target-based strategy usually involved identifying, based upon genetic analysis or biological observations, that a single gene, gene product, or molecular mechanism (e.g., a particular enzyme) was the underlying basis of a disease state, and then targeting it specifically. This approach did have some early notable successes. However, many disease states are generally much more complicated than this and involve multiple processes occurring at one, or more, of the gene, cell, organ, and organism levels. This is because evolution has generated deeply integrated biological systems with many feedback interactions between levels. Supposed single-target drugs may also not have just the anticipated effect, due to interactions between various pathways in the disease network and unexpected promiscuous binding to many targets. Large-scale gene knock-out experiments in model organisms have shown that, because the evolved redundancy in biological networks makes them resistant to even strong perturbation or interruption at single points, interventions may actually be required at multiple sites to achieve the desired overall effect. This is because, as will be seen below, there are often compensatory signaling (or other) routes that bypass the inhibition of single-target proteins. These considerations help to explain the recent failures of the single-target-based strategy and highlight the need for alternative strategies to find effective drug-based therapies. One such alternative strategy is the "multi-target" therapeutic concept, whereby multiple sites within a biological system are targeted together. Further, while unintended interactions with other drugs were typically seen as detrimental to the single "magic bullet" compound approach of the single-target strategy, the multi-target approach can employ a mixture of different compounds whose effects are intended to interact. The key characteristic of herbal medicines is that they contain an often complex mixture of active compounds. The multi-target approach is therefore consistent with the underlying philosophy of herbal medicines, because the efficacy of herbal medicines is thought to be due to the action of many components with weak to medium biological activity toward a range of targets. Obviously, there is still a need to check for side effects. The various underlying mechanisms of interaction between components of the mixture may involve a variety of processes. For example, one compound can be protected from enzymatic degradation by the presence of another substance inhibitory to the same enzymes. Alternatively, one substance can modify the transport of another substance across a key barrier, such as various cell membranes. In addition, one compound may act upon a signaling system within the host's cells in a way that changes the efficacy of another compound compared to the latter alone.
The necessity for the various interactions of many compounds within a mixture making up a herbal medicine to achieve an overall therapeutic effect may explain why the conventional single-target strategies that fractionate and individually test the various components of a herbal mixture often fail to find any significant evidence of the therapeutic effect claimed on the basis of the traditional use of the herb. The overall observational consequences of drug interactions are classified using a number of terms, including potentiating, additive, synergistic, and antagonistic effects. Efferth and Koch defined these observed effects as follows: "Positive interactions that enhance the potency of a bioactive compound by an inactive adjuvant substance" are called "potentiation". If the "combined outcome [of two or more drugs] is equal to [just] the sum of the effects of the individual components", then this is denoted as an "additive" effect. However, when such a combination of bioactive components results in "an effect that is greater than the sum of [that for] the different substances", this is known as a "synergistic" effect. In addition, Spinella proposed that interactions of a synergistic type can be further sub-divided into two sub-types depending on the nature of the interaction, namely, pharmacodynamic or pharmacokinetic. For example, pharmacodynamic synergy results from combinations of allosteric modifiers at the gamma-aminobutyric acid A (GABA) receptor, while pharmacokinetic synergy results from interactions during the processes of drug absorption, distribution, biotransformation, or elimination. Conversely, where the effect of the combination of interactions is less than the additive effect of the individual components, this is called "antagonism". This work will concentrate on synergistic and additive interactions involving multi[ple] targets, according to the definitions above, henceforth referred to as SAM effects. However, it is noted that some differences in definitions for the above terms do exist in the literature. In addition, negative interactions called "interferences" can also be of a pharmacokinetic type if they reduce the stability or bioavailability, or increase the metabolism, of the bioactive compound. A common method to highlight drug interactions experimentally is to construct an isobologram. This involves plotting, as x and y co-ordinates, the potential combinations of the individual doses of two drugs that result in a desired overall effect level. If the interaction between the two drugs is purely additive, then joining up the aforementioned points will result in a contour (the "isobole") that is a straight line; but, if the interaction is synergistic, the contour will have a concave form (i.e., lie below the straight line on the plot joining the two points on the axes corresponding to the respective doses of just one of those drugs that achieves the desired effect). Conversely, if the interaction is antagonistic, the contour will have a convex form. However, this method, and similar ones that only consider "two-dimensional" interactions, cannot be used to study complex mixtures with multiple components.
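To make the additivity baseline underlying the isobologram concrete, the following minimal sketch (with purely hypothetical doses and potencies) computes points on the additive isobole and a Loewe combination index (CI) for an observed effective dose pair; under this model, CI < 1 indicates synergy and CI > 1 antagonism.

```python
# Minimal sketch of a Loewe-additivity isobole and combination index (CI).
# All doses and potencies below are hypothetical illustration values.

def combination_index(d1, d2, D1, D2):
    """Loewe CI for a combination (d1, d2) achieving the same effect that
    drug 1 alone achieves at dose D1 and drug 2 alone achieves at dose D2.
    CI < 1 suggests synergy, CI = 1 additivity, CI > 1 antagonism."""
    return d1 / D1 + d2 / D2

# Doses of each drug alone that give the desired effect (e.g., 50% inhibition).
D1, D2 = 10.0, 40.0  # hypothetical units, e.g., uM

# The additive isobole is the straight line d1/D1 + d2/D2 = 1.
additive_isobole = [(D1 * (1 - f), D2 * f) for f in (0.0, 0.25, 0.5, 0.75, 1.0)]
print("Additive isobole points:", additive_isobole)

# Suppose an experiment finds that the pair (2.0, 10.0) already achieves
# the same effect; it lies below the additive line, i.e., synergy.
ci = combination_index(2.0, 10.0, D1, D2)
print(f"CI = {ci:.2f} ->", "synergy" if ci < 1 else "additive/antagonism")
```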
An alternative experimental approach that can cope with multi-component mixtures involves conducting biological assays or clinical studies that compare the outcomes from administering the same dose of an isolated, individual component on its own versus the mixture from which it comes as a whole. If the response is different in the two cases, then a SAM effect may be occurring. However, a suitable assay may not be available, and such experimental studies may be too expensive. Further, this type of study tends not to reveal the mechanistic reason for any effect. In contrast, in silico methods, such as molecular docking, can be much cheaper, more readily available, and also, potentially, reveal more concerning the mechanism of a SAM effect. Hence, this work will consider molecular-docking approaches to finding and studying SAM effects.

1.2. Systems Biology Approach

The increasing consciousness of the problems with the single-target strategy has led to the development of the so-called "systems biology" approach. The aim of systems biology is "to understand physiology and disease from the level of molecular metabolic pathways, regulatory networks, cells, tissues, organs and ultimately the whole organism". Methodologies that constitute systems biology approaches include networks (the modeling of flows and pathways within cellular networks using graph-like mathematical structures), systems modeling (modeling whole organ and tissue systems), cell modeling (the mathematical and computational simulation of whole cells), and target prioritization and drug development (applying the network, cell, and system models to target search and selection to assist the drug discovery process). These methodologies promise ways to make sense of, and discern useful structure in, the enormous, and thus otherwise overwhelming, genomic, proteomic, and dynamic datasets being generated via high-throughput techniques. In particular, "omics" can be used to "ask what genes, proteins or phosphorylation states of proteins are expressed or upregulating". Systems biology approaches can be classified into either "top-down" or "bottom-up"-type methods. Top-down methods, such as statistical analyses and static networks, are frequently applied to interpreting "omics" data in order to determine the underlying organization of a system or to mine information specific to a particular biological process. Alternatively, bottom-up methods, such as metabolic networks, model the continuous dynamic aspects of biological systems. These latter models thus require sufficient mechanistic knowledge and quantitative kinetic parameters not needed for the top-down models. Hence, the number of components that can be modeled with the bottom-up models is generally less than is possible with the top-down methods. As will be seen below, molecular docking can aid with constructing both types of methods. Issues have arisen with developing systems biology approaches. It remains infeasible to produce exhaustive models fully integrating the molecular, cellular, organ, and organism levels due to limitations on current computing power. Further, even if computing power were up to the job, the requisite level of information on the whole structure of the systems often does not yet exist. However, one practical way to still make progress in the face of these limitations is to adopt a modular approach and assume that the whole system can eventually be built up of constituent modules. Below, it will be seen what can be achieved using the de facto construction of a module for the arachidonic acid (AA) metabolic network in different types of cells. It will also be seen how molecular-docking simulations can supply the necessary model parameters that are not easily available via experimental measurements.
1.3. Aims and Objectives

The main aim of this work is to review the uses of molecular-docking simulations in the detection and/or elucidation of synergistic, additive, and/or multi-target (SAM) effects in herbal medicines. It will first describe the basic principles of molecular docking, and then it will go on to describe how molecular docking is incorporated into a common approach for screening for activity of multiple components of molecular mixtures from herbal medicines, which can detect SAM effects. It will then present a survey of examples from the literature where this approach has been applied to a range of disease states. This work will then discuss how molecular docking can be combined in more complex methodologies involving various types of network-based approaches from systems biology, and also be combined with pharmacophore methods, which can greatly help to understand the mechanism of SAM effects.
2.1. Basic Principles of the Docking Approach

Protein–ligand docking is now a widely used tool for drug discovery and is already described in great detail elsewhere, so only key relevant points will be summarized here. The "central dogma" of the docking approach "is that compounds that dock correctly into the receptor are more likely to display biological activity than those that do not dock". The application of docking requires the availability of sufficiently realistic representations of the relevant chemical entities that can be understood by a computer program. If a target structure and binding site are known, docking can be used to establish where, and how, exactly a ligand will bind and predict the strength of the interaction. Docking software generally utilizes protein structures, previously obtained via X-ray crystallography, downloaded from a database such as the Protein Data Bank (PDB). If the binding site of a protein is not known, it can be predicted based on the primary structure using homology models of a known structure. The database protein structure is then often cleaned (of other molecules included with the recorded structure). The docking algorithm itself typically involves the generation of an ensemble of 3D conformers of a complex, starting from the known structures of its free components. The ways this is performed are reviewed extensively elsewhere and so will not be covered in detail here. For protein–ligand docking, this involves searching through different conformations and orientations (known as the "pose") of the ligand within the target protein and measuring the binding affinity corresponding to each alternative.
The set of various poses to attempt is, itself, generated by an optimizer algorithm, which, ideally, should sample the complex search space made up of the degrees of freedom of the protein–ligand complex sufficiently exhaustively to include the true binding mode. These degrees of freedom can be just the six degrees of translational and rotational freedom, if the ligand and target are both treated as rigid bodies, or many more if either the ligand, or both ligand and target, are also allowed to be flexible. The greater the degrees of freedom, the greater the complexity of the search space and, hence, the greater the demands on computational power. Machine learning methods, such as genetic algorithms, are often used to improve accuracy and to speed up the optimization process. In order to identify the "true" pose, the various candidates must be evaluated and ranked by means of a scoring function of some sort that can distinguish even similar poses sufficiently to find the "true" binding mode. Scoring functions come in three major types, namely, force field based, empirical, and knowledge based, and must be fast in implementation to allow the rapid screening of many conformers. Force-field-based scoring functions calculate interaction energies, incorporating terms such as van der Waals forces and electrostatic interactions. In contrast, knowledge-based scoring functions use statistical potentials derived from contact frequencies, while empirical scoring functions estimate binding free energies by combining several different terms, frequently fitted using linear regression methods. However, the "best" combination of search algorithm and scoring function, to identify the true pose, is often very system (ligand and target) specific. The choice of the detail of the physical descriptions and internal parameters for the algorithms can also affect the accuracy of the final "best" pose found. These are reviewed extensively elsewhere and so will not be detailed here. More recently, Artificial Neural Network (ANN) techniques have also been used to automate and optimize the docking process, reducing the time and resources required for molecular docking. Some docking methodologies use multiple algorithms in parallel to try to get a consensus, but this adds to computational time. Typically, the pose with the highest binding affinity is the one for which the free energy is the most negative. However, there is a key difference, in the accuracy and precision necessary, between the use of docking in single-target strategies and in searching for herbal medicines with SAM effects. This is because, as has been mentioned above and will be discussed further below, herbal medicines are often composed of mixtures of both strong and weak ligands, or even just mixtures of compounds with only middling binding affinity. Hence, a high accuracy of binding affinity estimation is not so vital to distinguish definitively the paramount best hit(s).
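As a toy illustration of the force-field-based type of scoring function mentioned above (not any production docking score), the sketch below sums Lennard-Jones and Coulomb pair terms over hypothetical protein and ligand atoms for a single pose; more negative totals correspond to more favourable poses, as with real docking scores.

```python
import math

# Toy force-field-style score for one protein-ligand pose: a sum of
# Lennard-Jones (van der Waals) and Coulomb (electrostatic) pair terms.
# Atoms are (x, y, z, partial_charge); all parameters are illustrative only.

EPS, SIGMA, COULOMB_K = 0.2, 3.4, 332.0  # kcal/mol-style toy constants

def pair_energy(a, b):
    dx, dy, dz = a[0] - b[0], a[1] - b[1], a[2] - b[2]
    r = math.sqrt(dx * dx + dy * dy + dz * dz)
    lj = 4 * EPS * ((SIGMA / r) ** 12 - (SIGMA / r) ** 6)
    coulomb = COULOMB_K * a[3] * b[3] / r
    return lj + coulomb

def score_pose(protein_atoms, ligand_atoms):
    """More negative = more favourable, as for real docking scores."""
    return sum(pair_energy(p, l) for p in protein_atoms for l in ligand_atoms)

# Hypothetical coordinates/charges for a tiny binding site and ligand.
site = [(0.0, 0.0, 0.0, -0.4), (3.5, 0.0, 0.0, 0.3)]
ligand = [(1.8, 2.8, 0.0, 0.4)]
print(f"Toy pose score: {score_pose(site, ligand):.2f}")
```

In a real engine, such a term-by-term evaluation would be repeated for every pose the optimizer proposes, which is why scoring functions must be cheap to compute.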
2.2. A Common Screening Approach Involving Docking

A common basic approach has frequently been adopted in the literature for testing for, and/or understanding the mechanism of, synergistic and/or multi-target effects for herbal medicines. A particular herb, or herbs, is selected based upon traditional use, or previous suggestive findings in the scientific literature, for a certain disease state. The potential active molecules in the herb(s) are identified either from existing databases or by chemical means (e.g., LC–MS analysis of the extract of the herb). The set of targets for addressing the chosen disease state is identified from existing databases or via a literature search, and the structure of the relevant binding site is obtained from a database (e.g., the PDB) or otherwise constructed, such as by homology modeling. The set of test molecules may be pre-screened for "drug-like" character via the application of Lipinski's rule of five. However, recent work suggests that good candidate molecules may be discarded based upon this rule. Indeed, plants have provided successful oral drugs that violate the rule of five. These molecules tend to be of high complexity, rich in stereogenic centers, and relatively lacking in nitrogen compared to synthetic drugs. They may have been optimized by evolution to take advantage of active transport, while the rule of five only applies to compounds absorbed by passive mechanisms. Furthermore, machine learning can also be used to determine pharmacokinetic profiles of molecules and extend the range of molecules accepted beyond those that meet the rule of five.
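As an illustration of how such a rule-of-five pre-screen might be scripted, the sketch below uses the open-source RDKit toolkit; the SMILES inputs are example molecules, and, in line with the caveats above, the filter is best treated as a soft flag rather than a hard rejection for natural products.

```python
# Sketch of a Lipinski rule-of-five pre-screen using the open-source RDKit
# toolkit. The SMILES strings below are illustrative test inputs.
from rdkit import Chem
from rdkit.Chem import Descriptors, Lipinski

def passes_rule_of_five(smiles):
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:          # unparseable input: skip rather than crash
        return None
    violations = sum([
        Descriptors.MolWt(mol) > 500,
        Descriptors.MolLogP(mol) > 5,
        Lipinski.NumHDonors(mol) > 5,
        Lipinski.NumHAcceptors(mol) > 10,
    ])
    return violations <= 1   # a single violation is commonly tolerated

candidates = {
    "quercetin": "C1=CC(=C(C=C1C2=C(C(=O)C3=C(C=C(C=C3O2)O)O)O)O)O",
    "aspirin": "CC(=O)Oc1ccccc1C(=O)O",
}
for name, smi in candidates.items():
    print(name, "passes" if passes_rule_of_five(smi) else "flagged")
```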
Each set of test molecules is then docked with each target to obtain the docking score. In order to get a consensus, and to avoid false positives, two or more docking tools are often used in parallel. The synergistic and/or multi-target potential of a given herb is then assessed based upon the scores. The molecular-docking simulation is also often followed up with molecular dynamics simulations to validate the docking score. In the simplest version possible for this type of study, one molecule can be docked with one molecular target, such as in the study of jensenone from eucalyptus essential oil as a potential inhibitor of the main viral proteinase of COVID-19. However, that study does not consider additive or synergistic effects between herbal constituents. Since the basic principles described above are adopted in many studies, the results of these studies have been summarized in the accompanying table, with details of how the studies were selected included in the accompanying materials. However, the nature and depth of the reporting of the findings of molecular-docking studies varies, especially where large numbers of molecule–target pairs are involved. A study is classed as demonstrating an additive or synergy effect if such is explicitly claimed or was apparent from the published findings. Many docking studies now augment the findings with systems biology tools, such as some form of network, and the use of such a tool is also recorded in the table. Some studies involving combined docking and network approaches will be discussed in more detail below. The tabulated studies show that a variety of herbal medicines, when applied to a range of disease states, contain several active components. In some cases, multiple molecules from the mixture from one herb have affinity for the same target, and, in other cases, several different molecules have affinity for a range of targets related to a particular disease state. SAM effects have been detected in most cases. It has also been seen that there are a few studies where molecular docking has been combined with systems biology approaches, particularly static networks, as will be discussed in more detail below. The basic molecular-docking study described above is often just a component of a much broader study that also involves experimental work. The docking can be used to suggest potential trial systems to validate via experiment, or the molecular docking can be used to elucidate the mechanism of a SAM effect already discovered via experiments. It has been pointed out that virtual screening using simpler molecular-docking approaches has some issues. Some of the issues highlighted in previous studies include limitations in the protein structures available in the PDB, a high false positive rate, difficulties in considering target flexibility, the inaccuracy of scoring functions for estimating target–ligand binding free energy, and the limitations in inferring the wider physiological impact(s) of a particular ligand–target binding. Since the docking method requires the target protein structure, an alternative approach is needed when this is not available. For example, structure–activity analysis may be used for the prediction of biological activity and other properties of organic compounds based on their structural formulas. Further, while virtual screening alone using molecular docking may have a low hit rate of only ~30% for initial hits, Wang et al. have suggested that molecular-docking simulations can be used as a preliminary screen to determine candidate herbs to submit for more effective screening with experimental affinity mass spectrometry. The TCM database was used to identify 2920 compounds with known anti-tumor activity. This set of test compounds was screened using multiple docking software types to identify hits for the GTP-binding pocket involved in the GTPase activity of the Ras protein, since this protein is an intracellular guanine nucleotide-binding protein that regulates cell proliferation, survival, differentiation, and apoptosis. Analysis of the docking scores for the compounds showed that most of the high-scoring compounds came from 11 particular herbs, and scaffold cluster analysis showed that most of the high-docking-score compounds were isoamylene-containing flavonoids and 20(S)-protopanaxadiol saponins. Affinity MS screening was then used to verify that the related crude mixture of compounds derived from each herb had the expected affinity for the target protein. Ultimately, the affinity MS testing showed that, of 18 hits unique to the virtual screening, 11 of them could not be verified using the affinity MS technique and, thus, were likely to be false positives. However, the key structural aspects of the hit compounds identified in the virtual screening were confirmed by the affinity MS experiments.
2.3. The Reverse-Docking Approach

The so-called "forward docking" approach screens many potential drug (ligand) compounds against each single target, whereas the so-called "reverse docking" approach screens multiple potential targets against each ligand molecule; otherwise, the basic principles of the latter are very similar to the former. Hence, in reverse docking, the ratio of the number of potential targets tested per individual test molecule is greater than unity. In order to find new anti-cancer medicines, Zhang et al. considered 902 distinct protein targets against 13 constituents of the herb Brucea javanica, thereby giving rise to 7119 possible constituent–target interactions. The targets were selected from known therapeutic targets of currently marketed commercial drugs. Since the screening of molecule–target interactions using previously reported experimental data and databases proved ineffectual, 52 of the 902 targets were selected for screening with the reverse docking against the 13 herb ingredients, involving a total of 492 target–ingredient interactions. Of the tested herb constituent–protein target interactions, 145 (covering 42 targets) had similar binding modes and comparable binding affinities to controls consisting of the current drug–target interactions. Zhang et al. suggested that the so-called "promiscuity" of the herbal ingredients against multiple targets that they found in the docking study means that the herbal medicine is likely to be effective regardless of potential genetic variations between patients.
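The bookkeeping behind forward versus reverse docking can be sketched as follows. The dock() function below is a placeholder for a real docking engine (e.g., a call out to AutoDock Vina), and the ligand–target scores are invented for illustration; the ligand names are two known Brucea javanica quassinoids, but their pairing with the targets shown is hypothetical, not a result from the Zhang et al. study.

```python
# Sketch of all-to-all forward/reverse docking bookkeeping. The dock()
# function stands in for a real engine; here it returns stored hypothetical
# scores in kcal/mol (more negative = stronger predicted binding).
HYPOTHETICAL_SCORES = {
    ("brusatol", "DHFR"): -8.9, ("brusatol", "COX2"): -6.1,
    ("bruceine_d", "DHFR"): -7.4, ("bruceine_d", "COX2"): -8.2,
}

def dock(ligand, target):
    return HYPOTHETICAL_SCORES.get((ligand, target), -5.0)

ligands = ["brusatol", "bruceine_d"]
targets = ["DHFR", "COX2"]

# Forward docking iterates ligands per target; reverse docking iterates
# targets per ligand, as below.
for ligand in ligands:
    ranked = sorted(targets, key=lambda t: dock(ligand, t))  # best first
    print(ligand, "->", [(t, dock(ligand, t)) for t in ranked])

# A ligand scoring below a cut-off against several targets hints at the
# kind of "promiscuity" discussed above.
CUTOFF = -7.0
promiscuous = {l: [t for t in targets if dock(l, t) <= CUTOFF] for l in ligands}
print("multi-target candidates:", promiscuous)
```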
3.1. Static-Network-Based Approaches

Docking can be used as part of a network-based approach for an in silico prediction of the efficacy of compounds. This section will describe some examples. A network-based approach using docking was used to assess the efficacy of compounds for impacting the platelet aggregation pathways. Initially, a network was constructed from literature databases, where the enzymes important in the process of platelet aggregation were the nodes; these included proteinase-activated receptor-1 (PAR1), PAR4, and phospholipase A2 (PLA2).
The connections between nodes (edges) were arrows, whose direction indicated downstream flow in the network, and each edge was given a weighting, initially a default value. Overall, the network consisted of 64 nodes and 91 edges (arrows). Nineteen of the enzymes in the network were chosen as targets for docking. Docking was performed on 413 compounds derived from Chinese herbs. Where a docking score was available for a particular target, it was used to determine the weight attached to all immediate connections arising from the corresponding node in the network, if it exceeded the initial default value. The length of a path between nodes in the network was determined based on these weights, and the network efficiency was defined as the sum of the reciprocal lengths of the shortest path between each pair of nodes in the network (a minimal sketch of this calculation is given below). The network efficiency reflects the multi-target interaction of drugs. The impact of each compound on the network was assessed by the change in network efficiency when the docking scores for that compound were used to determine the edge weights. Overall, the effect of a compound on the network is considered more potent the more that the network efficiency decreases. The 40 compounds with the largest decreases in network efficiency were selected for experimental testing, and 19 of these were found to have antiplatelet aggregation activities; the compounds silybin and papaverine were found to be the most potent and compared favorably with tirofiban, the then-standard drug treatment for myocardial infarction. Further, the linear correlation coefficient between the decrease in network efficiency for a compound and the experimental results for antiplatelet aggregation activity was 0.67. However, if the impact on the network downstream of the target was included via the use of the network flux parameter, this correlation coefficient was improved to 0.73. However, the accuracy of the docking program, in determining compound affinity for a target, was found to affect the degree of correlation found. The importance of the network effect, where compounds bind to multiple inter-connected targets, was demonstrated by the fact that the correlation coefficients for single docking scores for test compounds and key protein targets versus experiment were lower than those for the network-based parameters. Network-based approaches have also been coupled with docking simulations to elucidate the mechanism of the action of TCM formulations for type II diabetes (T2D). Consideration of the composition of the 11 herbs comprising the formulation for T2D was made using the Beilstein and Chinese Herbal Drug databases, and 676 molecules were retrieved. Principal component analysis was used to show that these molecules were widely distributed in chemical space, and some were similar in structure to known drugs for T2D. These were then docked with 37 T2D-related proteins, such as the insulin receptor. Given that T2D is a complex disease involving many genes and gene products, the impact of targeting multiple proteins was assessed through network analysis. A drug–target (D–T) network was assembled where links were made between a given test molecule and a target protein if the docking score was in the top 3%. A drug–drug (D–D) network was assembled where links were made between test compounds if they shared one or more target proteins. In the D–T network, it was found that most molecules target only a few proteins. The structure of the network was then assessed using several analysis methods.
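A minimal sketch of the network-efficiency calculation described above, using the networkx library, is given below; the three-edge pathway and its weights are hypothetical stand-ins for the 64-node platelet network.

```python
# Sketch of the network-efficiency measure: the sum of reciprocal weighted
# shortest-path lengths over ordered node pairs. In the study described
# above, edges leaving a docking target were re-weighted from the docking
# score (if above the default), lengthening paths and lowering efficiency.
import itertools
import networkx as nx

def network_efficiency(g):
    eff = 0.0
    for u, v in itertools.permutations(g.nodes, 2):
        try:
            d = nx.shortest_path_length(g, u, v, weight="weight")
            eff += 1.0 / d
        except nx.NetworkXNoPath:
            pass  # unreachable pairs contribute nothing
    return eff

def build_network(par1_weight):
    g = nx.DiGraph()
    g.add_edge("PAR1", "PLA2", weight=par1_weight)  # drug acts here
    g.add_edge("PLA2", "aggregation", weight=1.0)
    g.add_edge("PAR4", "aggregation", weight=1.0)
    return g

baseline = network_efficiency(build_network(par1_weight=1.0))
# A strong docking score against PAR1 raises that edge weight; the larger
# the resulting efficiency drop, the more potent the predicted effect.
drugged = network_efficiency(build_network(par1_weight=4.0))
print(f"efficiency drop: {baseline - drugged:.3f}")
```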
Returning to the T2D networks: the k-means method, for example, was used to show that the network had three major clusters and one small cluster. The smallest major cluster consisted only of the protein glucokinase and its drugs. However, a larger major cluster linked the two proteins glycogen synthase kinase-3 beta and protein kinase C, which are both important proteins in glycogen synthesis. A further larger cluster linked the glucagon-like peptide-1 receptor (GLP1R) and insulin-degrading enzyme (IDE). This is probably because, when GLP1R binds its agonist glucagon-like peptide-1, it increases insulin secretion, while IDE is a protease that cleaves insulin to maintain insulin homeostasis. Both the D–T and D–D networks were analyzed to determine the degree (number of interconnections) of each node corresponding to a test molecule. The nodes with the highest degree correspond to the most important molecules in the network, which are also likely to have the greatest activity, and about 10–12 known active compounds were found to be amongst the 20 molecules with the highest degree. The networks assembled through the analyses described above are often too complex for simple visual inspection to be useful, and so analysis algorithms are essential to extract the useful information contained therein.

3.2. Metabolic-Network-Based Approaches

Metabolic network models of biological systems consist of a set of ordinary differential equations that describe the enzymic catalysis in the network and the feedback inhibition or activation of the enzyme catalysts by their metabolites. Metabolic networks are dynamic models that can simulate the perturbation of the network arising from the addition of exogenous compounds, such as those from herbs. The feedback regulations and other pathways in the network mean that the effect of a particular molecule on the network as a whole may be very different from the effect of the reaction of that molecule at only a single point in the network. A key issue with the use of metabolic networks is the ability to obtain values for the various kinetic parameters. In a metabolic network model of a disease, the disease is represented as a particular state of the network in which the production of disease-related molecules is abnormal. The normal state is the state of the network desired after treatment. The aim of therapy is to shift the network back into the normal state. Algorithms, such as the Multi-Target Optimum Intervention (MTOI) method, have been invented to identify the key set of several targets within a network, and whether each needs inhibition or activation, for a successful intervention. This identification is achieved through testing the impacts of various perturbations to the activities of potential targets suggested by a search algorithm, such as a genetic algorithm, that ultimately aims to minimize the difference between the starting (disease) network state and the desired (normal) state. The optimal solution can involve relatively mild impacts on individual enzyme activities made at multiple locations, leading to a greater overall effect on the whole network than a much larger single impact imposed at just one location. Side effects can be prevented through having multiple targets that can, between them, control the overall network balance. The metabolic network approach has been used to assess the efficacy and mechanism of the action of herbal medicines for several diseases, namely, inflammation, HIV, and cancer.
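A toy illustration of the MTOI idea is sketched below: a random search over mild, multi-point enzyme-activity perturbations that minimizes the distance between the model's output state and the normal state. The two-output steady-state function is a hypothetical stand-in for solving the full network ODEs, not any published model.

```python
# Toy illustration of the MTOI idea: search for mild, multi-point activity
# perturbations that move a network from its "disease" state towards the
# "normal" one. The steady-state model below is purely hypothetical.
import random

NORMAL = {"PGE2": 1.0, "LTB4": 1.0}

def steady_state(activity):
    """Hypothetical stand-in for integrating the network ODEs to steady state."""
    return {"PGE2": 3.0 * activity["COX2"] * activity["PGES"],
            "LTB4": 2.5 * activity["5LOX"] * activity["LTA4H"]}

def distance(state):
    return sum((state[k] - NORMAL[k]) ** 2 for k in NORMAL)

random.seed(0)
best, best_d = None, float("inf")
for _ in range(5000):
    # Candidate intervention: each enzyme only mildly inhibited (0.3-1.0).
    cand = {e: random.uniform(0.3, 1.0)
            for e in ("COX2", "PGES", "5LOX", "LTA4H")}
    d = distance(steady_state(cand))
    if d < best_d:
        best, best_d = cand, d

print({e: round(a, 2) for e, a in best.items()}, round(best_d, 4))
```

Note that the search typically lands on moderate inhibition at several enzymes rather than strong inhibition at one, echoing the point above that mild multi-point interventions can outperform a single large one.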
Gu and Pei have suggested a general workflow for testing herbal medicines using the computerized metabolic network method. First, the metabolic network is constructed using literature information and databases, such as the Kyoto Encyclopedia of Genes and Genomes (KEGG). This search is used to specify a group of ordinary differential equations (ODEs) that describe the network. It is then necessary to collect kinetic parameters for the ODEs describing the dynamics of the network. Where these are not directly obtainable from the literature, the set of ODEs can be used to predict the concentration curves of components in the network, and these can be fitted to experimental data . Docking simulations can also be used to quantify the interactions between compounds and proteins, using a predicted dissociation constant for each protein–ligand complex. Hence, the relevant protein structures must be found beforehand. For example, inflammation processes are controlled by the arachidonic acid (AA) metabolic network (shown in ), and Lei and co-workers constructed a model of it consisting of a set of ODEs . These equations simulate the time-dependent concentrations of the important enzymes and molecules in the network using a set of kinetic parameters collected from assays and computational prediction . The original AA network model was that found in human polymorphonuclear leukocytes (PMNs), but this has been extended to AA metabolism in blood vessels as a whole, including three cell types, not just PMNs . The models for the AA networks in the PMN, endothelial, and platelet cell types had 24, 29, and 11 ODEs, respectively, involving a total of 117 characteristic kinetic parameters . The ODEs for the PMN represent 24 feedback loops, thus demonstrating the complexity of the network . In outline, the AA metabolic network consists of two main pathways with five key enzymes, namely, cyclooxygenases 1 and 2 (COX1/2), 5-lipoxygenase (5LOX), microsomal prostaglandin E synthase-1 (PGES), and leukotriene A4 hydrolase (LTA4H). Inflammatory syndromes can result from the overproduction of two metabolites, namely, prostaglandin E2 (PGE2) and leukotriene B4 (LTB4), within this network. For example, PGE2 is strongly associated with arthritis, while LTB4 is associated with coughs and asthma. Hence, the anti-inflammatory efficacy of a drug was judged by its ability to reduce the production of PGE2 and LTB4. Further, side effects from drugs for inflammatory syndromes are linked to the ratio of the concentrations of prostacyclin (PGI2) and thromboxane A2 (TXA2), with the normal ratio being 0.68. If the ratio is too high, then the risk of gastrorrhagia is increased, as happens for aspirin (ratio ~5.2). If this ratio is too low, then cardiovascular risks are increased, as happens in the case of Vioxx (ratio ~0.28) . The network model has been validated by comparing its predictions of the actions of a single COX-1 or 5-LOX inhibitor with observations . The model can be used to simulate the action of inhibitors of different strengths, acting at different locations in the network. Simulations of the impact of single-target anti-inflammatory drugs (such as COX-1 inhibitors) have shown that these cannot stop the production of all inflammatory mediators . However, intervention at both LTA4H and COX can augment the 12/15-LOX and 15-LOX pathways, which produce endogenous anti-inflammatory agents .
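The flavor of such simulations can be conveyed by a stripped-down sketch in R with the deSolve package. The two-branch "network", rate constants, docking score, and 10 nM inhibitor concentration below are hypothetical toy values (the published models contain dozens of ODEs and feedback loops); the sketch also previews the docking-score-to-inhibition-constant conversion (ΔG = RT ln K_I) used in the workflow described below, applied here as a simple noncompetitive-style scaling of one branch rate.

```r
library(deSolve)

# Docking-score-to-KI conversion (dG = RT ln KI); the docking score below is
# a hypothetical value for an inhibitor of the branch-1 enzyme
RT <- 0.593                      # kcal/mol at ~298 K
KI <- exp(-11 / RT)              # inhibition constant, ~9 nM
I  <- 1e-8                       # assumed plasma concentration (10 nM)
inhib <- 1 / (1 + I / KI)        # simple (noncompetitive-style) rate scaling

# Toy two-branch network: substrate AA is converted by a COX-like enzyme to
# "PGE2" (branch 1, inhibited) and by a LOX-like enzyme to "LTB4" (branch 2)
pars <- c(v_in = 1.0, Vmax1 = 2.0, Km1 = 0.5, Vmax2 = 1.5, Km2 = 0.8)

model <- function(t, y, p) {
  with(as.list(c(y, p)), {
    r1 <- inhib * Vmax1 * AA / (Km1 + AA)   # inhibitor acts only on branch 1
    r2 <- Vmax2 * AA / (Km2 + AA)
    list(c(v_in - r1 - r2, r1, r2))         # dAA, dPGE2, dLTB4
  })
}

out <- ode(y = c(AA = 1, PGE2 = 0, LTB4 = 0), times = seq(0, 20, by = 0.1),
           func = model, parms = pars)
tail(out, 3)   # inhibiting branch 1 raises AA and diverts flux into LTB4
```

Running the sketch shows the behavior discussed above: partially inhibiting one branch raises the steady-state substrate level and diverts flux into the other branch, so the second mediator is produced faster, not slower.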
Molecules that are more “promiscuous” and that target multiple locations in the network have a wider therapeutic window, even if they have only milder effects than molecules more specific for a single target, and thus the former are effective at lower plasma concentrations . Further, simulations with two inhibitors used in combination showed that the mixing ratio of the two makes a big difference to the efficacy and safety of the mixture, and the relative inhibition constants of the two for each enzyme determine the overall therapeutic effect. In addition, a dual-functional single inhibitor molecule has been found to be more efficacious at a lower concentration than the combination of two separate, mono-functional inhibitors . Hence, these findings suggested that herbal medicines containing more promiscuous compounds would be effective at lower doses. A single, multi-functional inhibitor also has a lower risk of drug–drug interactions that might cause side effects and will also be more robust against variations in plasma concentrations . The presence of relatively promiscuous (and thus multi-functional) molecules in herbal medicines may also explain why they still work despite the range of concentrations of active ingredients that arises from harvesting at different times of year. The AA metabolic network model was used to understand the efficacy and mechanism of action of anti-inflammatory TCM formulae . It was assumed that the efficacy and side effects of a particular herb could be understood based upon its constituent molecules. Since the inhibition coefficients of most test molecules from TCM formulations for enzymes in the AA network are unknown, an all-to-all molecular-docking approach was used to obtain them. The overall workflow was as follows: Various TCM books were used to select 28 herbs that were recommended for use with inflammation-related syndromes, such as asthma and fever. The TCM database was used to identify all known chemical compounds in these herbs. Then, steroid and glycoside compounds were removed from the list because, first, steroids are hormones that do not function in the AA network and thus may cause false positives, and, second, glycoside compounds are likely to be metabolized in the human body to remove glucose residues. This sifting left 237 remaining test molecules. Docking simulations of all the test molecules to the five key enzymes were used to obtain the “docking score” (Gibbs free energy) for each potential combination, and this was converted to the corresponding inhibition constant ( K I ). Then, the inhibitory effect of a given herb could be modelled as the sum of the effects of all its constituent test molecules using a variant of the Michaelis–Menten equation. However, it is difficult to know the likely plasma concentration that each molecule will achieve; so, it was assumed that each molecule would reach a value of 10 nM, which was set to be lower than expected for most drugs, and is thus a conservative estimate. The impact of plasma concentration was tested in a sensitivity study of this unknown parameter by randomly varying the plasma concentrations of the various components of a given herb mixture between values of 1 and 100 nM. It was found that this perturbation made little difference to the overall impact of the herb on the key pathways.
This was interpreted as showing how robust the final therapeutic effect of a herbal formulation is to variations in the mixture composition or in the concentrations of active ingredients in herbs due to temporal variation in harvesting, etc. As mentioned above, the AA network consists of the PGE2- and LTB4-producing pathways. An individual herb was ranked according to its ability (assessed by multiplying the [I]/ K I values for enzymes in the same pathway) to eliminate PGE2 or LTB4. Via this assessment, the (mixture of compounds corresponding to the) herb Glycyrrhiza uralensis was found to have the best inhibition of both PGE2 and LTB4, which is consistent with its traditional reputation as being applicable to many inflammatory syndromes . However, most of the herbs tested preferentially reduced LTB4 production rather than PGE2 production. Meng et al. suggested that this may be because most of the herbs had traditionally been selected most often to treat asthma or coughs. In general, it was found that different test compounds in the same herb or herb formulation tended to have different targets, with the possibility of covering almost the whole AA network to achieve a superior therapeutic effect. Further, some combinations of herbs also had a synergistic effect. For example, the combination of Forsythia suspensa and Scutellaria baicalensis had a total inhibition of PGE2 (of 27%) that was higher than the sum of their individual inhibitions (20%). In addition, the same overall therapeutic effect (inhibition level) could be obtained with lower plasma concentrations of test compounds when these were in combinations corresponding to multi-herb formulations rather than individual herbs. This may suggest how formulations of several herbs can lead to lower side effects than single-herb medicines, because lower doses of the former are needed. It has been suggested that a lack of the required quantitative information can lead to the failure of metabolic network models and that, in such circumstances, Boolean network modelling may be an alternative. Wang et al. suggested that Boolean networks might be used when the large wealth of quantitative kinetic data needed for metabolic network modeling is not available from experiment and/or docking. In the absence of quantitative kinetic data, a Boolean model can still model some dynamic aspects of biological systems, such as state transitions. A Boolean network consists of a set of nodes whose states are binary and are determined by other nodes in the network. Hence, such a network model lies between static networks and continuous dynamic (metabolic) models in complexity. A Boolean network may be suitable when the activity level of a biological entity varies in a more stepwise fashion with concentration. Since Boolean models do not explicitly incorporate the potentially wide-ranging individual kinetics of separate entities, the resultant dynamics can be highly sensitive to the more abstract updating scheme used in Boolean network operation. For a system where a suitable updating scheme is not feasible, a metabolic network model is thus required. In addition, a compromise model consisting of a combination of Boolean elements with differential equations is possible for some systems and requires fewer kinetic parameters than the full metabolic model .
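For contrast with the ODE-based models above, a synchronous Boolean update can be written in a few lines of R; the three nodes, their logical rules, and the initial state below are invented for illustration and do not correspond to any published network.

```r
# Synchronous Boolean update for a hypothetical three-node toy network
rules <- list(
  A = function(s) s[["C"]],              # A is switched on by C
  B = function(s) s[["A"]] && !s[["C"]], # B needs A on and C off
  C = function(s) !s[["B"]]              # C is repressed by B
)

step <- function(s) {
  vapply(rules, function(f) f(s), logical(1))  # all nodes updated at once
}

s <- c(A = TRUE, B = FALSE, C = FALSE)
for (i in 1:6) {            # iterate until a fixed point or cycle appears
  s <- step(s)
  print(s)
}
```

Iterating this toy system from the given initial state settles into a two-state cycle, illustrating the kind of qualitative dynamics (fixed points and cycles) that Boolean models can capture without any kinetic parameters.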
However, Boolean models may be less applicable to modeling the impacts of herbal medicines because their effects are often continuous, partial, or middling, rather than the more discrete step changes associated with crossing defined thresholds, and because the influences of herbal medicines can arise over many quite different time scales, with short-term effects contrasting with those that build up over long-term treatment. 3.3. Combination of Molecular Docking with Common Pharmacophore Matching The development of software for automating the construction of 3D pharmacophores has enabled a general approach that can be used for screening for multi-target inhibitors, both from synthetic sources and from herbal medicines, involving the combination of molecular docking with common pharmacophore matching, as shown in . The inclusion of pharmacophore methods allows the docking simulations to be directed with pharmacophore templates and can thus speed up the overall screening of a set of compounds via docking . Further, while pharmacophore screening alone can end up with a mixture of both weak and strong ligands, the combination with docking enables the strong ligands to be identified. In particular, the combined approach can make screening many test molecules against multiple targets a feasible goal. While the combined method speeds up screening, it can potentially unduly limit the number of molecules identified, since the more complex the set of pharmacophores, the more restricted will be the set of compounds with the required features. One version of the combined method involves, first, finding the sets of molecular structural features that are recognized at each binding site responsible for the biological activity of the relevant target protein(s) in order to develop pharmacophore models for each target. Second, the approach identifies common pharmacophores, if these exist, by comparing the individual models for each site. Third, a rapid docking algorithm is used to predict the binding conformation of all test molecules in one of the target proteins, and then those molecules whose binding configuration can accommodate the common pharmacophore identified in the second step are selected out. Fourth, the binding configurations of these initially selected compounds in the other target proteins are found with a more rigorous docking simulation, and the set of selected compounds is further refined to those molecules whose binding configurations in all target proteins can accommodate the common pharmacophore model. The resultant, further refined set of molecules may each have relatively low affinities across all of the targets, but, as already mentioned above, the combined therapeutic effect of a given molecule at several locations across biological networks may be cumulatively larger than that of a single molecule with a greater affinity at just one point in the network. The combined docking and pharmacophore approach has been validated on synthetic drugs but might also be applied to compounds occurring in herbal medicines. Ehrman et al. conducted a combined pharmacophore and docking screening for multi-target anti-inflammatories in Chinese herbs and their combined formulations. The multiple protein targets were cyclo-oxygenases 1 and 2 (COX1/2), p38 MAP kinase (p38), c-Jun N-terminal kinase (JNK), and type 4 cAMP-specific phosphodiesterase (PDE4). These proteins had previously been found in the literature to play roles in a variety of inflammatory syndromes .
Further, it had previously been found that the PDE4 inhibitor roflumilast also prevents the phosphorylation of both p38 and JNK, thus blocking the production of inflammatory mediators such as TNF-α and interleukin (IL)-1β. Ehrman et al. proposed that this finding suggests that molecules with the ability to inhibit more than one of these targets have greater potential for treating complex inflammatory syndromes. These workers also suggested that multi-target therapy is easier to achieve with a mixture of molecules, rather than with a single scaffold, since greater chemical diversity is possible with the former. Ehrman et al. used multiple pharmacophore models of the four protein targets to screen 5978 compounds from their database of constituents of Chinese herbs that had passed initial screening for drug-like properties via the Lipinski "rule-of-five". The resulting suggested hits were then submitted for screening with docking software. The phytochemical classes found to be most involved in inhibiting inflammatory targets were phenolics, including lignans and flavonoids, and smaller terpenoids, such as monoterpenes, iridoids, and sesquiterpenes. Overall, it was found that 48% of the 100 herbs tested are likely to have inhibitors for two or more targets, and 14% of the herbs had more than one inhibitor for a single target, with those inhibitors also coming from different ones of the aforementioned phytochemical classes. The reverse-docking study of Zhang et al. used a parallel pharmacophore approach to independently validate the findings obtained from docking.
It was found that of the 52 herb constituent–protein target pairs highlighted by the docking study, all contained at least one common pharmacophore feature, and 37 of the target proteins shared at least three common pharmacophores. The underlying philosophy of herbal medicines, namely that they contain multiple active components that often have only middling affinity but act at multiple targets relevant to a given disease state, has been seen to be consistent with the recent multiple-target strategy for developing effective new drug treatments for complex diseases. The realization of the need to target multiple sites in biological networks to perturb a disease state back into the healthy one has hence spawned many studies of SAM effects due to herbal medicines, including both single herbs and multi-herb formulae. Molecular docking enables some understanding of the underlying mechanisms of SAM effects to be discerned, as multiple molecules from a given herbal mixture can be docked with various potential targets to determine affinities. However, molecular docking has also been shown to be a critical component of several of the new systems biology approaches. Molecular docking can provide quantitative weightings for the connections within static (molecule–target) networks, or supply estimates of kinetic parameters for use in dynamic metabolic networks. Molecular docking can also be combined with pharmacophore modeling to provide a hybrid method that greatly improves the efficiency of screening. Overall, molecular docking has been shown to be a highly useful tool to aid in the provision of evidence for the efficacy of herbal medicines, previously supported only by traditional usage.
Occupational Disease as the Bane of Workers' Lives: A Chronological Review of the Literature and Study of Its Development in Slovakia. Part 1
Occupational medicine is unique among medical fields because it focuses on the interface of the workplace and health. A healthy working environment is very important for economic and social development at the global and national levels. The occurrence of occupational diseases is a very important indicator of the quality of working conditions and the working environment. The aim of occupational hygiene is to ensure safety, health and well-being in the workplace and also to evaluate, prevent and control the risks related to the performance of work. Important occupational health problems that need to be addressed at the global level include inherent chemical, biological, physical, ergonomic and psychosocial risks. Health protection at work is a multidisciplinary and cross-sectoral area that needs to be seen in the context of a country's history and development. Occupational medicine has undergone a long and complex development. The history of its development has been studied previously . The development of occupational diseases has been monitored and evaluated by a large number of authors . In their articles, they presented retrospective studies that analyzed the structure, causes, occurrence and trends in the development of occupational diseases over a certain period of time in a given country. In 2019, Bentham Science Publishers published the e-book Introduction to Occupational Health Hazards , in which it was stated that "The study of the cause-effect relationship of occupational diseases will contribute towards reducing cases of work-related disorders" . The book Environmental and Occupational Medicine (2007) offers information on the history, causes, prevention and treatment of occupational diseases. Quick J.C. and Tetrick L.E. (2011) point out in their guide that work-related stress, along with other factors, can affect job productivity, satisfaction, safety, absence from work, etc. . Carder M. et al. (2015) published an overview of occupational disease reporting systems in EU countries participating in the Modernet consortium . The evaluation of occupational diseases in the EU was addressed by Nikolson P.J. . The global burden of occupational diseases was tracked by Lesley Rushton (2017) , who found major gaps in data on exposure to dangerous factors, especially in developing countries. Most emerging economies in Africa still face a huge challenge in the area of occupational health and safety . In most European countries, occupational diseases are underreported. The extremely low Hungarian figures are not a reassuring sign, but rather an alarming one . A major objective of the EU is to ensure a safer working environment for European workers. To this end, the EU issues directives that Member States implement into national law . In 2017, the Chinese government issued a National Plan for Preventive and Treatment Procedures at Work to further protect health. The plan focuses on the urgent need to promote health at work . The beginnings of workplace psychology are strongly related to the name Münsterberg H. , who published the book Psychology and Industrial Efficiency in 1913. The pioneers in the psychology of business management include Taylor F.W. (1856–1915) , the founder of "scientific management", as well as Gilbreth F.B. (1868–1924) and his wife Gilbreth L.M.
(1878–1932) . The sociocentric approach is associated with the name Mayo G.E. (1880–1949). Mayo helped lay the foundation for the human relations movement and was known for his industrial research, including the "Hawthorne Studies", and for his books The Human Problems of an Industrialized Civilization (1933) and The Social Problems of an Industrial Civilization (1945) . He proved that social relationships and informal social groups in the workplace are important factors in the performance (and satisfaction) of workers. The results of a labor market analysis published by Grafton Slovakia at the end of 2020 show that more than half of those employed consider their workload to be excessive. Up to 60% of Slovaks feel stress at work. According to statistics from Everest College in the U.S., up to 83% of employees experience stress in the workplace. In the UK, 79% of employees face work-related stress, according to the UK Workplace Stress Survey 2020. According to the parallel survey done by Grafton in the Czech Republic, up to 70% of Czechs experience stress at work. In Slovakia, 12% of employees are stressed often, and 48% experience stress only at times, but on a regular basis. There is also positive stress, which works on the basis of adrenaline, is short-lived and is considered motivating. Twenty-four percent of respondents in Slovakia experience this type of stress . This article offers a chronological overview of the literature in the field of occupational diseases, from the first mention of lung disease in stonemasons and metalworkers (4th century BC) to the present day. The aim of the article is a systematic examination of the history of occupational diseases in the world. The article also addresses the initial monitoring of the development of the incidence of occupational diseases in Slovakia. Using the method of exponential smoothing, a prediction of the number of diseases in Slovakia is made for the next five years. 2.1. General Overview The occurrence of occupational diseases and poisoning at work is one of the most important indicators in caring for the health of employees carrying out risky work. It reflects not only the state of primary prevention of clinical manifestations of occupational harm to health but also the efforts of specialized professional health services in their diagnosis and reporting . The ILO Employment Injury Benefits Recommendation, 1964 (No. 121), defines occupational diseases in the following terms: "Each Member should, under prescribed conditions, regard diseases known to arise out of the exposure to substances and dangerous conditions in processes, trades or occupations as occupational diseases". Under the Protocol of 2002 to the Occupational Safety and Health Convention, 1981 (No. 155), the term occupational disease covers "any disease contracted as a result of exposure to risk factors arising from work activity" . According to WHO, an occupational disease is "Any disease contracted primarily as a result of exposure to risk factors arising from work activity." . Work-related diseases have multiple causes, where factors in the work environment may play a role, together with other risk factors, in the development of such diseases. An occupational illness (or disease) is defined by the Occupational Safety and Health Administration (OSHA) as "any abnormal condition or disorder, other than one resulting from an occupational injury, caused by exposure to factors associated with employment." .
The European Agency for Safety and Health at Work (EU-OSHA) provides the definition that a work-related disease "is any illness caused or made worse by workplace factors" . Occupational diseases are characterized by the fact that the causal relationship between the pollutant and the disease is clear and indisputable. Under Section 8(2)(a) of Act no 461/2003 on social insurance: "An occupational disease under this Law is a disease recognized by the competent health establishment, included in the list of occupational diseases set out in Annex 1, if it has arisen under the conditions set out in that Annex to an employer's employee under Section 16 in the performance of work tasks or duties or in direct connection with the performance of work tasks or duties." . The list of occupational diseases in Slovakia contains 47 entries; in we list selected items from the list of occupational diseases, namely those that we examined when analyzing the development of the number of occupational diseases in Slovakia from 1987 until 2019 (see ). The SK ISCO-08 national classification of occupations, issued by Decree of the Statistical Office of the Slovak Republic No. 286/2007, is fully compatible with the International Standard Classification of Occupations ISCO-08, as recommended by Commission Recommendation No 200/824/EC of 29.10.2009. The SK NACE Rev. 2 statistical classification of economic activities is designed for categorizing data on all work activities performed by economic operators. SK NACE Rev. 2 is issued by Decree of the Statistical Office of the Slovak Republic No. 306/2007, and it is fully compatible with the European classification for the countries of the European Community established by Regulation (EC) No 1893/2006 of the European Parliament and of the Council of 20 December 2006. A complete treatment of the whole area of the protection of health at work can be found in European Framework Directive 89/391/EEC . In the legislation of the Slovak Republic, the area of risk assessment in the workplace is specified in the Labour Code No. 311/2011 and Act No. 355/2007 . Slovakia (the Slovak Republic) is a landlocked country in Central Europe with a total area of 49,035 km². Approximately 5.45 million inhabitants live there, and the capital is Bratislava. Since 2004, it has been part of the European Union. Based on data from the Statistical Office of the Slovak Republic, there were 2.53 million working people registered in 2020. 2.2. Data Sources and Evaluation Methods In preparing the chronological overview of the literature in the field of occupational diseases, and of the occupational diseases recommended by the International Labour Organization (ILO), we relied on electronic information sources, namely full-text databases (EBSCO, IEEE, Science Direct, PubMed), bibliographic and citation databases, digital libraries (Google Scholar, JSTOR, Semantic Scholar) and commercial research-sharing sites (ResearchGate). The literature review relevant to occupational disease is based on a thorough review of the work published in those sources. When we conducted the review of the literature on occupational disease, we categorized the data by subperiods for the 18th, 19th and 20th centuries with regard to the most important doctors, reformers, innovators and visionaries in the field in question. The chronology of progress in care for occupational health with regard to the above-mentioned is given in .
The historical development of the ILO list of occupational diseases was based on processing the data available in the NORMLEX information system, which brings together information on international labor standards as well as national labor and social security legislation. The evaluation of the development of the incidence of occupational diseases in Slovakia in the period 1997–2019 was based on data documented by the National Health Information Centre (NHIC), which belongs to the Ministry of Health of the Slovak Republic. The status and tasks of the NHIC are regulated by Act no. 153/2013 on the National Health Information System. At the international level, the NHIC cooperates with the World Health Organization (WHO), the Organisation for Economic Co-operation and Development (OECD) and EUROSTAT. Basic statistical methods and time series analysis methods were used to analyze and evaluate the numbers of occupational diseases in Slovakia. By a time series, we understand a sequence of factually and spatially comparable observations that are unambiguously arranged in chronological order from the past to the present . Time series forecasting enables quantitative estimates of future values of the series over a horizon h, arising from a prolongation of past developments, provided that these developments do not change. In this article, we used the ExponenTial Smoothing (ETS) method to predict the development of the numbers of occupational diseases. ETS is a forecasting method that predicts future values based on existing (historical) values using the exponential smoothing algorithm. The method is based on all previous observations, with the weights of older observations declining exponentially. Each model consists of three components: Error, Trend and Seasonal. The Error component can be described as "Additive = A" or "Multiplicative = M". The Trend component can be described as "None = N", "Additive = A", "Additive damped = Ad", "Multiplicative = M" or "Multiplicative damped = Md". The Seasonal component can be "None = N", "Additive = A" or "Multiplicative = M" . There are 15 prediction models with additive errors and 15 models with multiplicative errors. Akaike's Information Criterion (AIC) can be used to determine the best model; in general, a model with a lower AIC value is preferred to one with a higher AIC value. The time series prediction model is created in R using the package forecast.
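A minimal sketch of this model-selection and forecasting step, in R with the forecast package, is given below. The annual case counts are invented placeholder values standing in for the NHIC series; ets() searches the admissible Error/Trend/Seasonal combinations, selects a model by the requested information criterion (AIC here), and forecast() then produces the five-year-ahead prediction with intervals.

```r
library(forecast)

# Hypothetical annual counts of occupational diseases, 1997-2019 (placeholder
# values; the real series is documented by the NHIC)
cases <- ts(c(980, 940, 901, 870, 845, 812, 790, 761, 744, 718, 701, 684,
              660, 642, 618, 601, 589, 574, 570, 551, 544, 520, 512),
            start = 1997, frequency = 1)

# ets() fits the candidate Error/Trend/Seasonal combinations and selects the
# best one by the chosen information criterion; with annual data the seasonal
# component is "N" by construction
fit <- ets(cases, model = "ZZZ", ic = "aic")
summary(fit)             # reports the selected model, e.g., ETS(A,Ad,N)

# Five-year-ahead prediction with 80% and 95% prediction intervals
fc <- forecast(fit, h = 5)
print(fc)
plot(fc)
```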
When we conducted the review of the literature on occupational disease, we categorized the data by subperiods for the 18th, 19th and 20th centuries with regard to the most important doctors, reformers, innovators and visionaries in the field in question. The chronology of progress in care for occupational health with regard to the above-mentioned is given in .

3.1. Chronological Review of the Literature on Occupational Diseases

Occupational diseases have been with us since time immemorial and they have developed together with occupational medicine. As the nature of working activity has changed, new diseases have come along, and it has taken several decades for people to begin to associate them with the work they were doing. These diseases have been named "occupational diseases" . Here, we present an overview of the most important scientists, doctors, reformers and visionaries in the field of occupational medicine, who shaped this field as we know it today ( and ).
Already at the earliest historical stages of the development of society, many important representatives of medicine were interested in the social aspects of health care; for example, Hippocrates (460–370 BCE), who provides the first recorded mention of occupational diseases, describing dust in the lungs of stoneworkers and metalworkers; Aristotle (384–322 BCE); and Avicenna (980–1037 CE). "When you come to a patient's house, you should ask him what sort of pains he has, what caused them, how many days he has been ill, whether the bowels are working and what sort of food he eats", according to Hippocrates. The history of occupational medicine proper began to be written by Paracelsus (1493–1541) and Georg Agricola (1494–1555) in the 16th century, who in particular noticed the health problems of workers in manufactories and mines .

3.1.1. The 18th Century

Bernardino Ramazzini (†1714, Italian doctor) is considered to be "The Father of Occupational Medicine". In 1700, he published the work "De Morbis Artificum Diatriba" ("Diseases of Workers"), in which he examined occupational diseases. This manuscript is considered to be a key work in the field of occupational medicine and has played an essential role in its development . He described analytical and methodological approaches to the diagnosis and prevention of occupational diseases . His successors, for example, Smith A., Marx K.H. and Mather C., and many other authors relied on this manuscript. He introduced two causes of occupational illnesses . The first was the harmful effect of the materials that employees handle at work. He found that many of them release harmful fumes and very fine particles into the air when processed, which adversely affect workers and cause serious illnesses. As a precautionary measure, he advised workers to wash their hands and face frequently and even to stop working when they had difficulty breathing. He was of the opinion that insufficient ventilation and poor temperature control contribute significantly to the development of disease. The second cause was attributed to intense and irregular movements, which are unnatural for proper posture. He claimed they caused such a disturbed physiological state of posture that serious occupational diseases could gradually develop. He advocated rest, the need for exercise and changes in posture. Other important reformers of the 18th century include Lind J. (1753) , Scopoli G.A. (1761) , Pott P. (1775) , Parés y Franqués J. (1778) and others.

3.1.2. The 19th Century

Charles Turner Thackrah (†1833, British doctor and reformer) drew attention in 1832 to the unsuitable working conditions at the Bean Ing Mills wool processing mill . He described risks in various working sectors and pointed out that the dust affecting the lungs of miners, metalworkers and other workers in dusty trades is linked to the development of tuberculosis. He warned of the long hours worked by child laborers in linen mills. In pottery, he recommended replacing lead glazes with others or completely changing working practices . Several publications examining diseases of specific groups of workers already existed in the UK during this period. Pott P. (1775) , Bell B. (1794) and Harrison E. (1827) wrote about the incidence of cancer in chimney sweeps. A year after Thackrah's research, Kay-Shuttleworth J.P. (1832) published the book The Moral and Physical Condition of the Working Classes Employed in the Cotton Manufacture in Manchester .
Benjamin William McCready (†1892, American doctor) published "On the Influence of Trades, Professions, and Occupations in the United States, in the Production of Disease" (1837). This document is considered to be the first US study in the field of occupational medicine.

Heinrich Hermann Robert Koch (†1910, German doctor) examined the bacterium Bacillus anthracis , the causative agent of anthrax. Based on his observations, he was able to determine the life cycle of the anthrax bacterium and demonstrate a causal relationship between this microorganism and the development of the disease. In 1876, he published a study entitled "The Etiology of Anthrax Disease, Based on the Developmental History of Bacillus Anthracis" . He also examined tuberculosis, cholera and other diseases . He is considered the Father of Microbiology. In 1905, he was awarded the Nobel Prize in Physiology or Medicine for his research and discoveries concerning the treatment of tuberculosis.

Louis Pasteur (†1895, French doctor, chemist and biologist), known for his work on vaccines, was the first scientist to use live viruses in vaccination. He worked to create vaccines against anthrax and rabies. Although Pasteur became famous because of his public demonstrations in 1881 and took credit for the creation of an anthrax vaccine , it is now believed that Jean Joseph Henri Toussaint (†1890, French veterinarian) was actually behind the creation of this vaccine. Pasteur's nephew Adrien Loir (†1941, French bacteriologist) was aware of Toussaint's work on vaccine development and therefore published an account in 1938 entitled "À L'ombre de Pasteur". Other important reformers of the 19th century include McCready B.W. (1837) , Chadwick E. (1842) , Engels F. (1845) , Virchow R. (1848) , Ireland G.H. (1886) and others.

3.1.3. The 20th Century

Thomas Morison Legge (†1932, British doctor and inspector) was the first factory inspector to focus on improving hygiene in industry . In 1921, he participated in the Geneva Convention on the prohibition of painting interiors with white lead. Lead poisoning was the most common occupational disease at the time. Legge is also known for his work on anthrax, which often occurred in wool workers exposed to contaminated leather and wool. The mortality rate was high, with up to one in four of all people suffering from pulmonary anthrax, "wool-sorters' disease", dying . Legge summarized a wide range of industrial diseases, including cataracts, skin cancer, liver diseases and metal poisoning. A very important step in his life was the introduction of occupational medicine into the curriculum of the Faculty of Medicine. He was the author of several works, among the most important of which are those listed in (see ).

John Bertram Andrews (†1943, American economist) became executive secretary of the American Association for Labor Legislation in 1909, which under his leadership was involved in drafting legislation in the area of labor law. He was the author of the books Principles of Labor Legislation (1916) , Anthrax as an Occupational Disease (1917) and History of Labor in the United States (1918) .

Alice Hamilton (†1970, American doctor and industrial toxicology innovator). If Ramazzini is considered the Father of Occupational Medicine, Hamilton can be considered the "Mother". She was a pioneer in the field of occupational epidemiology and industrial hygiene. She began her long career in public health and workplace safety in 1908. She is the author of the first American guide on the subject, "Industrial Poisons in the United States" (1925) .
Her specialization was mainly industrial toxicology and health at work, reflected in her further publication "Industrial Toxicology" (1934), which she revised in cooperation with Hardy H.L. in 1949 . Her research focused on the action of toxic substances (aniline dyes, mercury, carbon monoxide, tetraethyl lead, benzene, etc.) in the working environment, and she investigated their effects on the body. She wrote about carbon monoxide poisoning among American steelworkers, mercury poisoning in milliners and the appearance of pulmonary tuberculosis among carvers in granite mills. Hamilton remained active in retirement, when she released her autobiography entitled Exploring the Dangerous Trades (1943) .

In 1942, a work titled "Occupational Tumors and Allied Diseases" was published, which is considered the first medical textbook containing information about various types of cancers and the causes of their formation. In the introduction, the author pointed out the severity of chronic diseases . This work was published by Wilhelm C. Hueper (†1979, German-American doctor), a central figure in the field of occupational medicine and toxicology in the mid-20th century. He was one of the first scientists to attempt to educate the public about asbestos as a carcinogen in the working environment. In 1955, he published another of his studies, "Silicosis, Asbestosis and Cancer of the Lung" . He focused his attention on the effects of asbestos and coal tar. However, he erred in downplaying the contribution of smoking to occupational disease. This was before Selikoff I.J. et al. (1968) provided evidence of a synergistic effect of smoking and working with asbestos in insulation workers, resulting in lung cancer. Throughout his career, Hueper sought to draw attention to the reluctance of businesses to acknowledge the fact that chemicals used in industry cause different types of cancer in employees. He later published the book Chemical Carcinogenesis and Cancers (1964) , and the culmination of his career was the book Occupational and Environmental Cancers of the Urinary System (1969) .

Robert A. Kehoe (†1970, American toxicologist) was an expert in toxicokinetics. He focused his attention on monitoring the clinical manifestations of lead poisoning. From 1925 to 1965, he was the leading US expert on lead. In 1930, he became director of the Kettering Laboratory of Applied Physiology at the University of Cincinnati, the first university laboratory focused on toxicological problems in industry . In 1953, he published the research "Experimental Studies on the Inhalation of Lead by Human Subjects" , in which he argued that the presence of lead in humans is normal and that exposure to it at low levels is not harmful. With this claim, he reassured the Ethyl Corporation that it did not have to worry about lead, not only in the workplace but also in the environment, which contradicted the findings of Hamilton .

Irving J. Selikoff (†1992, American doctor and researcher) created an extensive body of work documenting the high incidence of asbestos-related diseases over his four decades of work . He found that workers exposed to asbestos still had scar tissue 30 years after their work ended. His research put enormous pressure on OSHA. New York subsequently banned the use of sprayed asbestos during construction work in Manhattan. His study of insulation workers drew attention to the synergy between asbestos and tobacco smoking .
In 1966, he founded the Department of Environmental Medicine at Mount Sinai Hospital in New York. He was one of the founders of the Collegium Ramazzini, an independent international academy.

Jean Spencer Felton (†2003, American academic and general practitioner) created the basis for a residency program in occupational medicine in 1958. In 1968, he became director of the health service in Los Angeles and later in Long Beach, where he researched the effects of asbestos. He published countless books and articles on the history and practice of occupational medicine, upon which researchers around the world rely to this day.

Thomas F. Mancuso (†2004, American doctor and epidemiologist) "changed the standards of occupational health protection" . He published a series of articles highlighting the toxicological effects of materials such as asbestos, beryllium, chromium, cadmium, manganese, mercury and many other toxins. Working together with Hueper, in 1951 he published research on the connection between chromium and lung cancer . He revised these papers in 1997 and published them under the title "Chromium as an industrial carcinogen: Part I and II" . In 1965, he was asked by the Atomic Energy Commission (AEC) to conduct a study on the effects of low-level radiation on a sample of half a million workers at the Hanford Nuclear Complex. He argued that a long-term study was needed to examine the cumulative effects accurately. The study showed that workers developed an increased risk of cancer at radiation levels that were considered safe at the time. Mancuso was later joined by the doctor Stewart A. and the statistician Kneale G. In 1977, they jointly published an article stating that workers at a nuclear weapons complex were dying of cancer induced by radiation at levels well below the norm .

Kivimaki, Mika (h-index 125; 55,255), Shipley, Martin J. (h-index 92; 27,536), Ferrie, Jane (h-index 75; 11,391), Donhal, Kelley J. (h-index 39; 2755), Soteriades, Elpidoforos S. (h-index 22; 2640), Franco, G. (h-index 13; 394) and Balmes, John R. (h-index 12; 515) are all authors who can be considered 21st-century experts in this field, based on the h-index and the number of citations, not including self-citing articles.

3.2. Chronological Review of the List of Occupational Diseases Recommended by the International Labour Organization

The list of occupational diseases established by international and national legal systems plays an important role both in prevention and treatment and in the compensation of workers' diseases. This list is a set of officially recognized occupational diseases caused by exposure to hazards during working activity. The list contains a definition of each occupational disease, and it is based on basic legislation on occupational health and safety . The first compensation schemes began to appear in the early 19th century. A number of factors (e.g., rapidly growing industrialization) contributed to their development. Since the introduction of occupational diseases as compensable diseases in the Act for the Compensation of Workers in Germany (1871), and subsequently in Switzerland (1877) and England (1880), legislation of this type was introduced in rapid succession across Europe : Austria (1887), Norway (1895), Denmark (1897), Finland and Italy (1898) and France, Spain and Switzerland (1899). In the USA, a compensation scheme was only adopted in full in 1911 .
The International Labour Organisation (ILO) is a UN agency whose mandate is to promote social and economic justice by setting international labor standards. It was founded in France in 1919, after the end of the First World War. On 29 January 1919, the Commission on International Labour Legislation was established by the Peace Conference with a view to drawing up the ILO Constitution. As a result of its work, a recommendation was made to create the ILO as a tripartite organization that would bring together representatives of member states' governments, employers' representatives and workers' representatives. The Labour Commission drafted a text entitled "Labour" , which became Part XIII of the Treaty of Versailles. In particular, the Labour Commission promoted the principles applicable to the conditions of work to be followed by the policy of the ILO Member States, which were incorporated into Part XIII of the Treaty of Versailles—the Preamble to the ILO Constitution .

3.2.1. The Era of Industrial Poisoning

Following the ILO Conference held on 29 October 1919 in Washington, anthrax and lead poisoning were declared occupational diseases. On 28 November 1919, the first two recommendations on the prevention of these diseases—R003, the Anthrax Prevention Recommendation, 1919 (No. 3), and R004, the Lead Poisoning (Women and Children) Recommendation, 1919 (No. 4)—were adopted . Although anthrax was first described in 1250 CE, it was the industrial revolution that was responsible for the dangers it created, and it was therefore the center of attention in those years. Maret (1752), Dym (1769) and Fournier (1769) made the first mentions of cutaneous anthrax . Anthrax disease often occurred in English wool sorters and was known as "wool-sorters' disease"; its mortality rate was high . In addition to anthrax, the first industrial revolution brought with it a huge increase in demand for lead. Women and children were employed in all stages of the process, including very dangerous work in glazing ceramics, melting lead ores and producing lead compounds . This topic was examined by Ellenbog U. (1473), Thackrah C.T. (1832) and Kehoe R.A. (1953) (see ).

After 1900, as a result of studies of industrial hygiene, many countries adopted legislation relating to the protection of workers' health . At its seventh session, which opened on 19 May 1925 in Geneva, after resolving to accept the proposal for workmen's compensation, the ILO adopted, on 10 June 1925, C018—the Workmen's Compensation (Occupational Diseases) Convention, 1925 (No. 18). Mercury poisoning was added to the list of diseases . Like anthrax and lead, mercury experienced its biggest boom in the mid-19th century with the development of industry. Hamilton A. (1943) wrote about mercury poisoning in milliners. The production of hats at that time was dependent on mercury, which was used in the form of a solution to accelerate the production of felt. An employee who entered a hat factory did not live for more than 3–5 years. Scopoli G.A. (1761) and Parés y Franqués J. (1778) wrote about mercury poisoning among miners as early as the second half of the 18th century (see ). In 1934, C018 was amended and a new convention, C042—the Workmen's Compensation (Occupational Diseases) Convention (Revised), 1934 (No. 42)—was adopted. Another seven items were added to the list; see .

3.2.2. Expansion of the ILO List of Occupational Diseases

The list of occupational diseases with 10 items was used unchanged for 30 years, until it was revised on 17 June 1964 and a new convention, C121—the Employment Injury Benefits Convention, 1964 (No. 121) —was adopted.
It continued to contain only a limited number of diseases, such as those identified by Stockhausen S. (1656) and Hoffmann F. (1716) , and skin cancer, first described by Pott P. in 1775 (see ). The compensation system was very difficult to regulate. While the causal link in cases of poisoning was obvious, it was difficult to distinguish hearing loss or various bronchopulmonary and infectious diseases from those occurring in the general population . In the US in the early 20th century, only hearing loss caused by immediate injury—explosion rather than gradually developed hearing loss—was considered compensable . In his book Effects of Noise on Man , Kryter K.D. (1950) states that most published information on the effects of noise on humans is an "unsubstantiated expression" or is justified by "poorly designed experiments" . The inclusion of noise-induced hearing loss in the list of diseases was therefore very difficult, and it only succeeded in 1980. This was also the case for asthma in the textile industry. Already in the early 18th century, Ramazzini B. had described a special form of asthma in those who processed cotton, flax and hemp, noting that the dust he observed during their processing "causes workers to cough constantly". While many authors during the 19th and early 20th centuries described respiratory manifestations of occupational diseases in textile factories with increasing frequency, in the US these diseases remained unnoticed until the mid-20th century, when Schilling R.S.F. (1956) published the study "Byssinosis in Cotton and other Textile Workers" .

In 1980, with growing public awareness of occupational diseases, Convention C121 was revised . It now reflected the lessons learned over the previous 70 years. During this period, there had been several fundamental changes, not only in the structure of industry (the transition from heavy industry to services) but also in workplace risks (the use of new industrial chemicals) and in compensation policy. The revised version of C121 extended the original list with not only seven more types of poisoning but also respiratory, skin and infectious diseases and disturbances caused by physical factors, and several types of work-related cancer were added. These were subsequently incorporated into the various compensation systems of different states . In general, the lists were designed to identify specific diseases for which there was evidence of a causal link with one or more specific exposures at the workplace. The Employment Injury Benefits Convention, 1964 (No. 121), has so far been ratified by 24 countries around the world . Many countries have their own equivalent of this convention. On 22 May 1990, the Commission of the European Communities in Brussels approved Recommendation 90/326/EEC on the adoption of the European Schedule of Occupational Diseases, which was revised 13 years later, on 19 September 2003 (2003/670/EC) . This list was more comprehensive than the ILO C121 list. Already in 1990, the European list contained a further 24 diseases caused by chemicals not listed in Convention C121. It also contained nine causes of skin diseases, including skin cancer, and 10 diseases caused by physical factors, including eight musculoskeletal disorders.
This situation prompted the ILO in 1990–1991 to agree to amend Annex 1 to Convention C121, taking into account all legislation and practice; the most significant extension was the introduction of a detailed description of the procedures for diagnosing, reporting and evaluating occupational diseases for the purpose of compensating them . Among other things, the ILO prepared a list of occupational diseases that took into account the lists then in force and national practice in 76 countries. However, Article 31 of Convention No. 121 provides a specific procedure for amending the list of occupational diseases set out in Annex 1, requiring a minimum two-thirds majority. Due to the competing priorities of the tripartite parties, the revision of the list could not be placed on the agenda. At the 90th session of the International Labour Conference on 3 June 2002 in Geneva, the process of changing the notification, diagnosis and identification of occupational diseases for the purpose of compensating them was approved.

3.2.3. Further Updates to the ILO List Appended to R194

The adoption of these changes was helped by the drafting and adoption of a new list appended to Recommendation R194—the List of Occupational Diseases Recommendation, 2002 (No. 194)—which entered into force on 20 December 2002. This recommendation introduced a new format for the list of occupational diseases, consisting of three basic categories for diagnosing diseases: diseases by causative agent (chemicals, biological agents, physical factors), diseases by target organ (respiratory tract, skin diseases, musculoskeletal disorders and behavioral disorders) and cancer-type occupational diseases. Sixteen chemicals, two physical agents, four pulmonary disorders and one skin disease were added to the list. The category of cancer-type occupational diseases consisted of 14 carcinogens, the classification criterion being inclusion in category 1 of the International Agency for Research on Cancer list. Musculoskeletal disorders are also listed, with a general definition of work-related diseases. The category "other diseases" is a flexible category that includes diseases not listed elsewhere. Recommendation R194 emphasizes its role as a tool for notification, the introduction of preventive measures, the improvement of the compensation procedure and the identification of the causes of occupational diseases . Moving the list of occupational diseases from the Compensation Convention (C121) to Recommendation R194 provided greater flexibility in drawing up a more comprehensive list.

At its 279th session (November 2000), the Governing Body of the ILO recommended that the International Labour Conference, at its 90th session, consider the development of a new mechanism for regularly updating the list of occupational diseases . R194 was revised through two tripartite meetings, in 2005 and 2009 . The Governing Body of the ILO convened a meeting of experts on 13–20 November 2008 in Geneva to update the list of occupational diseases . In preparing the meeting, the ILO analyzed the 50 most up-to-date national lists of occupational diseases, including the recommended European Schedule of Occupational Diseases (2003/670/EC), and prepared a questionnaire with 34 questions related to changes, replacements, additions, the re-categorization of occupational diseases, etc. Eighty Member States responded, 17 of which indicated that their responses had been prepared after consultation with employers' and employees' representatives.
Although most of the responses confirmed the proposed list with a few additional comments, some items, such as diseases caused by radiofrequency radiation, cancer caused by formaldehyde and silica, and psychosomatic syndromes caused by bullying, were not accepted onto the final list . New entries included four diseases caused by chemicals, one caused by physical agents, five diseases caused by biological agents, two skin diseases, seven musculoskeletal disorders, two psychiatric and behavioral disorders and eight carcinogenic substances . A further meeting on the revision of the Recommendation (No. 194), involving 21 experts, was held on 20–30 October 2009 . The new list was approved at the 307th session in March 2010. It replaced the previous list approved in 2002, as set out in Annex 1 to the Recommendation (No. 194). The new list included a total of 106 entries divided into three basic categories: diseases by causative agent (41 chemicals, 9 biological agents, 7 physical factors), diseases by target organ (12 respiratory tract diseases, 4 skin diseases, 8 musculoskeletal disorders and 2 behavioral disorders) and 21 cancer-type occupational diseases . All revisions to those conventions and recommendations were influenced not only by the modernization of industry but also by international organizations and the European Union, and by the development and revision of each state's lists, which reflect the social, cultural and technological background of the time or country.

3.3. Development of Occupational Diseases in Slovakia

In the second half of the 16th century and in the 17th century, a number of important Central European scholars appeared; these were humanist scholars from various fields of science who came from Slovakia. Juraj (Georgius) Henisch (†1618, Slovak-German doctor, poet and polyhistor) was born on 24 April 1549 in Bardejov. He worked as a doctor in Augsburg, Germany. His scientific study "Arztney-Buch" was one of the most popular medical works of the time. Karol Rayger (1641–1709) and Karol Oto Moller (1670–1747) made significant contributions to the development of the medical sciences through their discoveries. In 1721, the Prešov-born doctor and pharmacist Ján Adam Raymann (1690–1770) entered world medical history with his research. The Kežmarok doctor Daniel Perlitzi (1705–1778) prepared a proposal for the establishment of a university of medicine based in Banská Štiavnica. This proposal met with resistance from the Hungarian rulers of the time, who did not want to raise the educational level, even among children, of the Slovak nation they ruled over, so this and many other attempts were unsuccessful.

The beginnings of occupational health and safety care in our country date back to the 19th century, to the Austro-Hungarian period. One of the pioneers of occupational medicine in Slovakia was František Xaver Schillinger (†1892, doctor) , who wrote a paper on cholera and first aid for miners. Gustáv Kazimír Zechenter-Laskomerský (†1908, doctor, writer and natural scientist) studied the hygiene of the life and work of forest and mining workers and studied their diseases. Imre Tóth (†1928, doctor) was the chief mining doctor in Banská Štiavnica. He wrote articles on the need to improve the living and working environment of miners. In the fight against infectious and mining diseases, he contributed to reducing the incidence of lead poisoning among miners, which was very widespread among the metallurgical workers in Banská Štiavnica producing silver with lead.
He proposed a range of measures to prevent this disease, directed at personal hygiene (handwashing, the cleaning of workplaces and the use of respiratory protection). He also proposed technical measures to remove fumes from metallurgical furnaces. He likewise contributed to curbing the spread of tuberculosis and typhoid, and he publicly fought alcoholism. These authors understood health education as an integral part of medical activity.

Later, in 1932, the Czechoslovak Republic (CSR) adopted the Act on the Compensation of Occupational Diseases on the basis of the Workmen's Compensation (Occupational Diseases) Convention (No. 18). In the same year, under the leadership of J. Teissinger, the Occupational Diseases Advisory Board was created, which was transformed into the Occupational Medicine Advisory Board after 1942. After 1945, there was a strong development of occupational medicine institutes across the country. In 1949–1953, three institutes were established in Slovakia: in Bratislava, Martin and Košice. Their work was concerned with occupational hygiene, the physiology of work and occupational diseases. In 1952, a Slovak branch of the society for occupational medicine was established within the J. E. Purkyně Czechoslovak Medical Society; it became independent in 1968 and still operates as an organizational component of the Slovak Medical Society.

In the 1970s and 1980s, the issue of coal and ore mines came to the fore. In view of the occurrence of work-related diseases such as noise-related hearing loss, vibration disease, dust deposition in the lungs from dust containing silica (silicosis) and other respiratory diseases, these problems needed to be addressed without delay. The gradual reinforcement of the field with qualified personnel made it possible to develop and apply new methods of work and procedures in the field and in the laboratory. Industrial production in Slovakia was also focused on the extraction and processing of minerals, including coal and wood, iron and steel, heavy engineering and chemicals, posing high health risks to employees. These were large state-owned enterprises employing thousands of employees. With the adoption of Act No. 20/1966 on Care for Human Health, the requirements for the quality of the working environment and the conditions of work were further regulated and specified. Limits were set for harmful factors in the working environment. Directive 17/1970 of the Slovak Ministry of Health on the Assessment of Medical Fitness for Work laid down requirements for employers regarding the content, scope and frequency of preventive medical examinations and identified the categories of workers required to undergo these examinations. In 1989, the Czechoslovak government ratified Convention No. 155 of 1981 on Occupational Safety and Health. In 1997, the National Reference Centre for Personal Exposure and Health Risk Assessment, today's NHIC, was established.

3.3.1. Legislation

The values of the determining variables help to answer the question of the extent to which the physical factors of work and the working environment pose a risk to employees' health, and of how effective the measures taken are. Whether these values are maintained or exceeded indicates not only the level of risk but also the level of protection of employees' health.
Within the Slovak Republic, the basis for assessing the fulfillment of these requirements is the result of direct or indirect measurement and comparison with the values of the determining variables laid down in decrees, government regulations and STN standards (adopted from international standards). The objective measurement of the physical factors of the environment and the working environment is carried out under Guideline OOFŽP-7674/2010 . This guideline is used for the measurement of noise and vibration, daylight and artificial lighting, electromagnetic fields, the thermal-humidity microclimate and other physical factors to be determined or evaluated at their place of occurrence. A complete treatment of the whole area of health protection at work can be found in European Framework Directive 89/391/EEC . This Directive addresses the fact that employees may be exposed to dangerous environmental factors at the workplace during their working life. Since our legislation is now harmonized with that of the EU, the notion of risk assessment and other concepts related to this procedure have also entered the legal norms of the Slovak Republic. In the legislation of the Slovak Republic, the area of risk assessment in the workplace is specified in Act No. 311/2011, the Labour Code , and in Act No. 355/2007 . Details of the factors of work and the working environment underlying the classification of work into categories are given in Annex 1 to Decree No. 448/2007 . The method of reporting and registering occupational disease and threatened occupational disease in the Slovak Republic is laid down by Act No. 355/2007, Section 31b(1) and (2) . The general principles of prevention and the basic conditions for ensuring health at work are laid down by Act No. 124/2006 , and the requirements for the provision and use of personal protective equipment are laid down in Regulation No. 395/2006 .

3.3.2. Development of the Incidence of Occupational Disease in Slovakia from 1987 to 2019

The basic tasks of clinical occupational medicine and clinical toxicology in Slovakia include the comprehensive diagnosis, treatment and assessment of diseases arising in connection with adverse and health-damaging factors of work and the working environment. This includes the reporting of occupational diseases and threatened occupational disease. A total of 21,025 new occupational disease cases were reported in Slovakia between 1987 and 2019, based on data documented by the National Health Information Centre (NHIC). A graphical representation of the development of the number of occupational diseases in Slovakia for the period 1987 to 2019 is shown in . The average annual number of recognized occupational diseases in the given period was almost 637. A significant decrease in the number of reported occupational diseases was recorded up to 1995, from 1262 reports (1987), with a temporary rise to 1331 reports (1991), down to 601 reports (1995). Between 1995 and 2019, the number of newly acquired occupational diseases roughly halved, with slight fluctuations, to 347 reports (2019), with an all-time low in 2013 (301 reports). In the long term, we are seeing a downward trend in the number of occupational diseases. The graph also shows the development of employment in Slovakia (1987–2019); the average annual number of workers over the period is 2,262.5 thousand persons.
The assessment of occupational diseases reported over the last 32 years (1987–2019) shows a more pronounced decrease in the second half of the reference period (2003–2019): 6971 cases, roughly half (49.76%) the level of the first half. The most commonly reported occupational diseases include those listed in (item 22, items 24–26, item 28, item 29, items 33–34 and item 38). Over the period considered, 19,142 new cases related to these diseases were reported, representing almost 91% of the total number of reported occupational diseases. The development of the number of occupational diseases in terms of selected diseases is shown in . For the sake of clarity, only those diseases for which the average percentage of the total number of occupational diseases over the given period exceeded 10% are plotted in the graph. The percentage of selected occupational diseases out of the total number of reported cases in each year is shown in . Compared to the first half of the period (1987–2002), we can see a decrease in almost all the selected types of occupational diseases in the second half (2003–2019) ( and ). The only exceptions are diseases affecting the musculoskeletal, vascular and nervous systems of employees exposed at work to prolonged excessive and one-sided loads on the upper limbs (item 29).

Despite the significantly decreasing trend in the incidence of reported occupational diseases overall, limb disease from long-term, excessive and one-sided loads (item 29) is not developing very favorably . The annual incidence of reported diseases of the limbs from long-term, excessive and one-sided loads began to increase significantly from 1991. The largest number of reports (230 cases) was recorded in 2006, representing almost 46% of the total number of cases (504) in that year. In 2016, the proportion had increased to 55% . Compared to 1987, there was an 885% increase in reported limb diseases from long-term, excessive and one-sided loads in 2006. Between 2003 and 2019, the incidence of these diseases (item 29) was 55.58% higher than in the previous period, amounting to 3065 cases, overwhelmingly in women.

Vibration disease (item 28) has long been one of the most common occupational diseases in Slovakia. After limb disease from prolonged excessive and one-sided loads, vibration disease has consistently come second in the numbers of annually reported occupational diseases over the last two decades (with the exception of 2011, when noise-related hearing loss temporarily took second place). The high numbers of 1987–2007 gave way to a significant decrease over 2008–2019, with the lowest incidence in 2011 (40 cases); in the following years, the numbers increased slightly . Between 2003 and 2019, there was a very significant decrease in the incidence of skin diseases (excluding skin cancer) and communicable skin diseases (item 22) compared to the previous period (1987–2002), by 79.29%, a decrease of 1685 cases . Almost the same percentage decrease (79.57%) was also seen in cases of infectious and parasitic diseases and diseases communicable from animals to humans (items 24–26). Noise-related hearing loss (item 38) repeatedly ranks fourth or fifth in the frequency of annually reported occupational diseases. The annual incidence of reported noise-related hearing damage decreased significantly between 1987 and 2008. In 2009–2014, a rise in these diseases was again noted; they subsequently decreased from 2015, with slight fluctuations.
The lowest incidence was recorded in 2008 and 2019, with 17 cases. Cancer-type occupational diseases listed under (items 21 and 23) were reported in 177 employees. The number of annual reports fell by 77.08% between 2002 and 2019, a decrease of 111 cases . The highest incidence was recorded in 1993, with 15 cases. The average annual number of reports was five cases. The average annual incidence of lung-related occupational diseases (items 33–34) is 27 cases, representing 3% of the total number of occupational diseases over the whole reporting period. In the case of (item 46), there was a negligible number of reported occupational diseases over the whole period under review (1987–2019), namely 37 cases. According to archive records, the disease was not diagnosed until 2003.

3.3.3. Analysis of the Development of Occupational Diseases in Slovakia over the Last 20 Years

Available data show that a total of 8883 new cases of occupational diseases were reported in the last 20 years (from 2000 to 2019). The average annual number of recognized occupational diseases in the given period is 444 cases. The trend in the incidence of occupational diseases in Slovakia is decreasing in nature. The average annual decrease in the number of occupational diseases is 16 cases, representing an annual decrease of about 3%. For example, the calculated dynamics of the number of occupational diseases show that in 2005 the number decreased by 200 cases compared to the previous year, a drop to around 67.4% of the previous year's figure. On the other hand, there was an increase of 91 cases in 2006, representing an increase of around 22% in the number of occupational diseases compared to 2005. In 2019, 347 cases of occupational diseases were reported, corresponding to an incidence of 13.4 cases per 100,000 workers. Compared to the situation as of 31 December 2018, the number of reported occupational diseases increased by 39 cases (11.24%). Compared to 2000, there were 313 fewer cases of occupational diseases in 2019; the 2019 figure represents only about 53% of the number reported in 2000.

When analyzing the number of occupational diseases, we selected three indicators : the gender of workers (two subcategories), the age category (five subcategories) and the sectoral classification of economic activities (four subcategories). A graphical representation of the development of the number of occupational diseases by workers' gender is shown in . Men are more heavily represented in the total number of diseases: men were diagnosed with occupational diseases 1.8 times more often than women. In 2007, men were diagnosed with 422 cases of occupational diseases (as much as 73% of the total number of reported diseases), representing almost 2.8 times more cases than in women. The data show that over the 20-year period there is a significantly decreasing trend in the number of occupational diseases in men. Since 2008, the most commonly reported cases have been in the 50–59 age group . The average representation of this age group in the total number of occupational diseases is almost 42%, compared with 52% in 2019. The second most common age category is the 40–49 group, whose average share of the total number of diagnosed diseases is almost 34%. In recent years, the number of reported cases in the over-60 category has increased slightly. On the other hand, the number of reported occupational diseases in the 30–39 age group is on a downward trend. A graphical representation of the development of the number of cases of diseases by age category is shown in .
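The "dynamics" quoted above (year-on-year changes, the average annual decrease of 16 cases and the roughly 3% average annual decline) follow from standard time series characteristics. As a brief sketch in our own notation (the article reports only the resulting figures): for a series \(y_1, \dots, y_n\), the absolute year-on-year change, the chain growth coefficient, the average absolute change and the average growth coefficient are

\[
\Delta_t = y_t - y_{t-1}, \qquad
k_t = \frac{y_t}{y_{t-1}}, \qquad
\bar{\Delta} = \frac{y_n - y_1}{n-1}, \qquad
\bar{k} = \sqrt[n-1]{\frac{y_n}{y_1}} .
\]

Using only values quoted in the text, \(y_{2000} = 660\) (since 2019, with 347 cases, had 313 fewer than 2000) and \(y_{2019} = 347\), this gives \(\bar{\Delta} = (347 - 660)/19 \approx -16.5\) and \(\bar{k} = (347/660)^{1/19} \approx 0.967\), i.e., an average annual decrease of about 16 cases, or roughly 3% per year, consistent with the figures reported above.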
A graphical representation of the development of the number of diseases by sector of economic activity is shown in . The highest incidence of occupational diseases based on the sectoral classification of economic activities was in the industrial production sector (Sector 3). Over the 20 years, 3748 cases were reported in this sector, representing 42.2% of the total number reported during the period. The lowest number of recognized occupational diseases in the period was in construction (Sector 4; 340 cases, 3.8% of the total number of diagnosed diseases). In 2007, the number of diseases in the mining and quarrying professions (Sector 2) increased sharply: an increase of as much as 38% compared to the previous year and of 139% compared to 2005. In almost all sectors, we see a downward trend in the number of diseases. The only sector that maintains a constant trend is construction (Sector 4), where the average proportion of diagnosed occupational diseases is 4%.

We used the ETS (ExponenTial Smoothing) method to determine the time-series model for the number of occupational diseases for the period 2000–2019 and to produce forecasts for the coming period. The resulting time-series prediction model consists of three components: Error, Trend and Seasonal. We took into account several models with different suitable combinations of the types of all three components. The ETS(M,A,N) model, with multiplicative errors, an additive trend and no seasonality, represents Holt's linear method with multiplicative errors. The ETS(A,A,N) model is Holt's linear method with additive errors, ETS(A,N,N) is simple exponential smoothing with additive errors, and so on. We compared the fitted models using the AIC criterion, the best model being the one with the lowest AIC value. The best model was found to be of the form ETS(M,Md,N), which means a damped multiplicative trend (Md) with multiplicative errors (M) and no seasonality (N). The damping parameter is 0.97. A graphical representation of the original and smoothed time series obtained using the ETS method is shown in . The graph shows a forecast of the development of the number of occupational diseases over the next five years. In addition to the point forecast, prediction intervals are also constructed: the grey and blue areas display the 95% and 80% prediction intervals, respectively, for the forecasts obtained with the ETS(M,Md,N) model. The projection of the development of the number of occupational diseases in Slovakia over a period of 5 years obtained with the best model, ETS(M,Md,N), is shown in .

We can state that the development of the number of occupational diseases diagnosed in Slovakia has been on a downward trend over the 20 years of monitoring. This favorable trend may be related to a number of factors, including, in particular, the increased responsibility of employers and employees who comply with the statutory principles of occupational health and safety.
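To make the procedure concrete, the following is a minimal sketch in R of the workflow described above and in the methods section: fitting ETS models with the forecast package, comparing them by AIC and producing a five-year forecast with 80% and 95% prediction intervals. Recall that exponential smoothing weights an observation k periods old by \(\alpha(1-\alpha)^k\) for a smoothing parameter \(0<\alpha<1\), so the weights decline exponentially with age. The case-count series below is a hypothetical placeholder: only the 2000 value (660 cases) and the 2019 value (347 cases) are quoted in the text, and the intermediate values are purely illustrative stand-ins for the NHIC data.

library(forecast)

# Annual numbers of recognized occupational diseases, 2000-2019.
# Hypothetical placeholder series: only 660 (2000) and 347 (2019)
# are quoted in the article; the intermediate values are illustrative.
cases <- ts(c(660, 635, 610, 588, 566, 545, 525, 506, 487, 469,
              452, 435, 419, 404, 389, 375, 361, 353, 349, 347),
            start = 2000, frequency = 1)

# Automatic search over admissible Error/Trend/Seasonal combinations;
# ets() selects the model with the lowest information criterion.
fit_auto <- ets(cases)

# Explicit fit of the model form reported in the article: multiplicative
# errors (M), damped multiplicative trend (Md), no seasonal component (N).
fit_mmdn <- ets(cases, model = "MMN", damped = TRUE)

# Compare candidate models: the lower AIC value indicates the better model.
fit_auto$aic
fit_mmdn$aic

# Damping parameter of the trend (reported as 0.97 in the article).
fit_mmdn$par["phi"]

# Five-year forecast with 80% and 95% prediction intervals, as in the figure.
fc <- forecast(fit_mmdn, h = 5, level = c(80, 95))
plot(fc)

Because the data are annual (frequency = 1), ets() considers only non-seasonal models, so the Seasonal component is N by construction; explicitly requesting model = "MMN" with damped = TRUE corresponds to the ETS(M,Md,N) form reported above.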
Already at the earliest historical stages of the development and life of society, many important representatives of medicine were interested in the social aspects of health care; for example, Hippocrates (460–370 BCE), who provides the first recorded mention of occupational diseases, describing dust in the lungs of stoneworkers and metalworkers; Aristotle (384–322 BCE); and Avicenna (CE 980–1037). “When you come to a patient’s house, you should ask him what sort of pains he has, what caused them, how many days he has been ill, whether the bowels are working and what sort of food he eats”, according to Hippocrates. The history of occupational medicine began to be written by Paracelsus (1493–1541) and Georg Agricola (1494–1555) in the 16th century, who particularly noticed the health problems of workers in manufactories and mines . 3.1.1. The 18th Century Bernardino Ramazzini (†1714, Italian doctor) is considered to be “The Father of Occupational Medicine”. In 1700, he published the work “De Morbis Artificum Diatriba” “Diseases of Workers”, in which he examined occupational diseases. This manuscript is considered to be a key work in the field of occupational medicine and has played an essential role in its development . He described analytical and methodological approaches to the diagnosis and prevention of occupational diseases . His successors, for example, Smith A., Marx K.H. and Mather C., and many other authors relied on this manuscript. He introduced two causes of occupational illnesses . The first was the harmful effect of materials that employees handle at work. He found that many of them release harmful fumes and very fine particles into the air when processed, which adversely affects workers and causes serious illnesses. As a precautionary measure, he advised them to wash their hands and face frequently and even to stop working when they have difficulty breathing. He was of the opinion that insufficient ventilation and poor temperature control contribute significantly to the development of the disease. The second was attributed to intense and irregular movements, which are unnatural for proper posture. He claimed they caused such a disturbed physiological state of posture that serious occupational diseases could gradually develop. He supported rest, the need for exercise and a change in posture. Other important reformers of the 18th century include Lind J. (1753) , Scopoli G.A. (1761) , Pott P. (1775) , Parés y Franqués J. (1778) and others. 3.1.2. The 19th Century Charles Turner Thackrah (†1833, British doctor, reformer). In 1832, he drew attention to unsuitable working conditions at the Bean Ing Mills wool processing mill . He described risks in various working sectors and pointed out that dust affecting the lungs of miners, metalworkers and other workers in dusty trades is linked to the development of tuberculosis. He warned of the long hours of child laborers in linen mills. In pottery, he recommended replacing lead glazes with others or completely changing working practices . Several publications examining diseases of specific groups of workers already existed in the UK during this period. Pott P. (1775) , Bell B. (1794) and Harrison E. (1827) wrote about the incidence of cancer in chimney sweeps. A year after Thackrah’s research, Kay-Shuttleworth J.P. (1832) published the book The Moral and Physical Condition of the Working Classes Employed in the Cotton Manufacture in Manchester . 
Benjamin William McCready (†1892, American doctor) published “On the Influence of Trades, Professions, and Occupations in the United States, in the Production of Disease” (1837). This document is considered to be the first US study in the field of occupational medicine. Heinrich Hermann Robert Koch (†1910, German doctor) examined the bacterium Bacillus anthracis , which is the causative agent of anthrax. Based on his observations, he was able to determine the life cycle of anthrax bacteria and demonstrate a causal relationship between this microorganism and the development of the disease. In 1876, he published a study entitled “The Etiology of Anthrax Disease, Based on the Developmental History of Bacillus Anthracis” . He also examined tuberculosis, cholera and other diseases . He is considered the Father of Microbiology. In 1905, he was awarded the Nobel Prize in Physiology and Medicine for research and discovery in the treatment of tuberculosis. Louis Pasteur (†1895, a French doctor, chemist and biologist), known for his work on vaccines, was the first scientist to use live viruses in vaccination. He was working to create a vaccine against anthrax and rabies. Although Pasteur became famous because of his public speeches in 1881 and took credit for the creation of an anthrax vaccine , it is now believed that Jean Joseph Henri Toussaint (†1890, French veterinarian) was actually behind the creation of this vaccine. Pasteur’s nephew Adrien Loir (†1941, French bacteriologist) was aware of Toussaint’s work on vaccine development and therefore published a debate in 1938 entitled “À L’ombre de Pasteur”. Other important reformers of the 19th century include McCready B.W. (1837) , Chadwick E. (1842) , Engels F. (1845) , Virchow R. (1848) , Ireland G.H. (1886) and others. 3.1.3. The 20th Century Thomas Morison Legge (†1932, British doctor, inspector). He was the first factory inspector to focus on improving hygiene in industry . In 1921, he participated in the Geneva Convention on the Prohibition of Painting Interiors with White Lead. Lead poisoning was the most common occupational disease at the time. Legge is also known for his work on anthrax. Anthrax disease often occurred in wool workers who were exposed to contaminated leather and wool. The mortality rate was high, with up to 1/4 fatality in all people suffering from pulmonary anthrax “wool-sorters’ disease” . Legge summarized a wide range of industrial diseases, including cataracts, skin cancer, liver diseases and metal poisoning. A very important step in his life was the introduction of working medicine into the curriculum of the Faculty of Medicine. He was the author of several works, among the most important are (see ). John Bertram Andrews (†1943, American economist) in 1909 became executive secretary of the American Association for Labor Legislation, which under his leadership was involved in drafting legislation in the area of labor law. He was the author of the books Principles of Labor Legislation (1916) , Anthrax as an Occupational Disease (1917) and History of Labor in the United States (1918) . Alice Hamilton (†1970, American doctor, industrial toxicology innovator). If Ramazzini is considered the Father of Occupational Medicine, Hamilton can be considered the “Mother.” She was a pioneer in the field of epidemiology of work and industrial hygiene. She began her long career in public health and workplace safety in 1908. She is the author of the first American guide on “Industrial Poisons in the United States” (1925) . 
Her specialization was mainly industrial toxicology and health at work, which was related to her further publication “Industrial Toxicology” (1934), which she revised in cooperation with Hardy H.L. in 1949 . Her research focused on the action of toxic substances (aniline dyes, mercury, carbon monoxide, tetraethyl lead, benzene, etc.) in the working environment; she investigated their effects on the body. She wrote about carbon monoxide poisoning among American steelworkers, mercury poisoning in milliners and the appearance of pulmonary tuberculosis in carvers in granite mills. Hamilton remained active in retirement, when she released her autobiography entitled Exploring the Dangerous Trade (1943) . In 1942, a paper titled “Occupational Tumors and Allied Diseases” was published, which is considered the first medical textbook containing information about various types of cancers and the causes of their formation. In the introduction, the author pointed out the severity of chronic diseases . This work was published by Wilhelm C. Hueper (†1979, a German-American doctor), who was a central figure in the field of occupational medicine and toxicology in the mid-20th century. He was one of the first scientists to attempt to educate the public about asbestos as a carcinogen in the working environment. In 1955, he published another one of his studies, “Silicosis, Asbestosis and Cancer of the Lung” . He focused his attention on the effects of asbestos and coal tar. However, he erred in claiming that smoking contributes to occupational diseases to a lesser extent. This was before Selikoff I.J. et al. (1968) provided evidence that in insulation workers there was a synergistic effect of smoking and working with asbestos, resulting in lung cancer. Throughout his career, Hueper sought to draw attention to the reluctance of businesses to acknowledge the fact that chemicals used in industry cause different types of cancer in employees. Twenty years later, he published the book Chemical Carcinogenesis and Cancers (1964) . The culmination of his career was the book Occupational and Environmental Cancers of the Urinary System (1969) . Robert A. Kehoe (†1970, an American toxicologist) was an expert in toxicokinetics. He focused his attention on monitoring the clinical manifestations of lead poisoning. From 1925 to 1965, he was a senior expert on lead in the US. In 1930, he became director of the Kettering Laboratory of Applied Physiology at the University of Cincinnati, the first university laboratory focused on toxicological problems in industry . In 1953, he published research on “Experimental Studies on the Inhalation of Lead by Human Subjects” , in which he argued that the presence of lead in humans is normal and that exposure to it at low levels is not harmful. With this claim, he convinced the Ethyl Corporation that it did not have to worry about lead not only in the work area, but also in the environment, which contradicted the study by Hamilton . Irving J. Selikoff (†1992, American doctor, researcher) created an extensive body of work documenting the high incidence of asbestos-related diseases during his four decades of work . He found that workers exposed to asbestos also had scar tissue 30 years after their work ended. His research has put enormous pressure on OSHA. New York subsequently banned the use of sprayed asbestos during construction work in Manhattan. His study on insulation workers drew attention to the synergy between asbestos and tobacco smoking . 
In 1966, he founded the Department of Environmental Medicine at Mount Sinai Hospital in New York. He was one of the founders of the Collegium Ramazzini, an independent international academy.

Jean Spencer Felton (†2003, American academic, general practitioner) created the basis for a residency program in occupational medicine in 1958. In 1968, he became director of the health service in Los Angeles and later in Long Beach, where he researched the effects of asbestos. He published numerous books and articles on the history and practice of occupational medicine, upon which researchers around the world rely to this day.

Thomas F. Mancuso (†2004, American doctor, epidemiologist) “changed the standards of occupational health protection” . He published a series of articles highlighting the toxicological effects of materials such as asbestos, beryllium, chromium, cadmium, manganese, mercury and many other toxins. Working together with Hueper, in 1951 he published research on the connection between chromium and lung cancer . He revised these papers in 1997 and published them under the title “Chromium as an industrial carcinogen: Parts I and II” . In 1965, he was asked by the Atomic Energy Commission (AEC) to conduct a study on the effects of low-level radiation on a sample of half a million workers at the Hanford Nuclear Complex. He suggested that a long-term study was needed to accurately examine the cumulative effects. The study showed that workers developed an increased risk of cancer caused by radiation levels that were considered safe at the time. Mancuso was later joined by the physician Alice Stewart and the statistician George Kneale. In 1977, they jointly published an article stating that workers at a nuclear weapons complex were dying of cancer induced by radiation at levels well below the permitted norm .

Kivimäki, Mika (h-index 125; 55,255 citations), Shipley, Martin J. (h-index 92; 27,536), Ferrie, Jane (h-index 75; 11,391), Donham, Kelley J. (h-index 39; 2755), Soteriades, Elpidoforos S. (h-index 22; 2640), Franco, G. (h-index 13; 394) and Balmes, John R. (h-index 12; 515) are authors who can be considered 21st-century experts in this field, based on their h-index and number of citations, excluding self-citations.
The list of occupational diseases established by international and national legal systems plays an important role both in prevention and treatment and in compensation for workers’ diseases. This list is a set of officially recognized occupational diseases caused by exposure to hazards during working activity. The list contains a definition of each occupational disease, and it is based on basic legislation on occupational health and safety .

The first compensation schemes began to appear in the early 19th century. A number of factors (e.g., rapidly growing industrialization) contributed to their development. Since the introduction of occupational diseases as compensable diseases in the Act for the Compensation of Workers in Germany (1871), and subsequently in Switzerland (1877) and England (1880), legislation of this type was introduced in rapid succession across Europe : Austria (1887), Norway (1895), Denmark (1897), Finland and Italy (1898) and France, Spain and Switzerland (1899). In the USA, a compensation scheme was only adopted in full in 1911 .

The International Labour Organisation (ILO) is a UN agency whose mandate is to promote social and economic justice by setting international labor standards. It was founded in France in 1919, after the end of the First World War. On 29 January 1919, the Commission on International Labour Legislation was established by the Peace Conference with a view to drawing up the ILO Constitution. As a result of its work, a recommendation was made to create the ILO as a tripartite organization bringing together representatives of member states’ governments, employers and workers. The Labour Commission drafted a text entitled “Labour” , which became Part XIII of the Treaty of Versailles. In particular, the Labour Commission promoted principles applicable to working conditions to be followed in the policies of ILO Member States; these were incorporated into Part XIII of the Treaty of Versailles, the Preamble to the ILO Constitution .

3.2.1. The Era of Industrial Poisoning

Following the ILO Conference on 29 October 1919 in Washington, anthrax and lead poisoning were declared occupational diseases. On 28 November 1919, the first two recommendations on the prevention of these diseases were adopted: R003, the Anthrax Prevention Recommendation, 1919 (No. 3), and R004, the Lead Poisoning (Women and Children) Recommendation, 1919 (No. 4) . Although anthrax was first described as early as 1250 CE, it was the industrial revolution that created its occupational dangers, and it was therefore the center of attention in those years. Maret (1752), Dym (1769) and Fournier (1769) made the first mentions of cutaneous anthrax . Anthrax often occurred in English wool sorters and was known as “wool-sorters’ disease”; its mortality rate was high . In addition to anthrax, the first industrial revolution brought with it a huge increase in demand for lead. Women and children were employed in all stages of the process, including very dangerous work in glazing ceramics, melting lead ores and the production of lead compounds . This topic was examined by Ellenbog U. (1473), Thackrah C.T. (1832) and Kehoe R.A. (1953) (see ). After 1900, as a result of studies of industrial hygiene, many countries adopted legislation relating to the protection of workers’ health .
At its seventh session, which opened on 19 May 1925 in Geneva, after resolving to accept the proposal for workmen’s compensation, the ILO adopted C018, the Workmen’s Compensation (Occupational Diseases) Convention, 1925 (No. 18), on 10 June 1925. Mercury poisoning was added to the list of diseases . Like anthrax and lead, mercury experienced its biggest boom in the mid-19th century with the development of industry. Hamilton A. (1943) wrote about mercury poisoning in milliners. The production of hats at that time was dependent on mercury, which was used in the form of a solution to accelerate the production of felt. A worker entering a hat factory often did not survive more than 3–5 years. Scopoli G.A. (1761) and Parés y Franqués J. (1778) had written about mercury poisoning among miners as early as the second half of the 18th century (see ). In 1934, C018 was amended and a new convention, C042, the Workmen’s Compensation (Occupational Diseases) Convention (Revised), 1934 (No. 42), was adopted. Another seven items were added to the list; see .

3.2.2. Expansion of the ILO List of Occupational Diseases

The ten-item list of occupational diseases was used unchanged for 30 years, until it was revised on 17 June 1964 and a new convention, C121, the Employment Injury Benefits Convention, 1964 (No. 121) , was adopted. It continued to contain only a limited number of diseases, such as those identified by Stockhausen S. (1656) and Hoffmann F. (1716) and the skin cancer first described by Pott P. in 1775 (see ). The compensation system was very difficult to regulate. While the causal link in cases of poisoning was obvious, it was difficult to distinguish hearing loss or various bronchopulmonary and infectious diseases from those occurring in the general population . In the US in the early 20th century, only hearing loss caused by an immediate injury, such as an explosion, rather than gradually developed hearing loss, was considered compensable . In his book Effects of Noise on Man , Kryter K.D. (1950) states that most published information on the effects of noise on humans is an “unsubstantiated expression” or is justified by “poorly designed experiments” . The inclusion of noise-induced hearing loss in the list of diseases was therefore very difficult, and it only succeeded in 1980. The same was true of asthma in the textile industry. As early as the beginning of the 18th century, Ramazzini B. described a special form of asthma in those who processed cotton, flax and hemp, noting that the dust released during their processing “causes workers to cough constantly”. While many authors during the 19th and early 20th centuries described the respiratory manifestations of occupational diseases in textile factories with increasing frequency, in the US these diseases remained unnoticed until the mid-20th century, when Schilling R.S.F. (1956) published the study “Byssinosis in Cotton and other Textile Workers” . In 1980, with growing public awareness of occupational diseases, Convention C121 was revised . It now reflected the lessons learned over the previous 70 years. During this period, there had been several fundamental changes, not only in the structure of industry (the transition from heavy industry to services) but also in workplace risks (the use of new industrial chemicals) and in compensation policy.
The revised version of C121 extended the original list with not only seven more types of poisoning but also respiratory, skin and infectious diseases and disturbances caused by physical factors, and several types of work-related cancer were added. These were subsequently incorporated into the various compensation systems of different states . In general, the lists were designed to identify specific diseases for which there was evidence of a causal link with one or more specific exposures at the workplace. The Employment Injury Benefits Convention, 1964 (No. 121), has so far been ratified by 24 countries around the world . Many countries have their own equivalent of this convention. On 22 May 1990, the Commission of the European Communities in Brussels approved recommendation 90/326/EEC on the adoption of the European Schedule of Occupational Diseases, which was revised 13 years later, on 19 September 2003 (2003/670/EC) . This list was more comprehensive than the ILO C121 list. Already in 1990, the European Schedule contained a further 24 diseases caused by chemicals not listed in Convention C121. It also contained nine causes of skin diseases, including skin cancer, and 10 diseases caused by physical factors, including eight musculoskeletal disorders. This situation prompted the ILO in 1990–1991 to agree to the addition of Annex 1 to Convention C121, taking into account existing legislation and practice; the most significant extension was the introduction of a detailed description of the procedures for diagnosing, reporting and evaluating occupational diseases for compensation purposes . Among other things, the ILO prepared a list of occupational diseases that considered the lists then in force and national practice in 76 countries. However, Article 31 of Convention No. 121 provides a specific procedure for amending the list of occupational diseases set out in Annex 1, requiring at least a two-thirds majority. Due to the competing priorities of the tripartite parties, the revision of the list could not be placed on the agenda. At the 90th session of the International Labour Conference on 3 June 2002 in Geneva, the process of changing the notification, diagnosis and identification of occupational diseases for the purpose of compensation was approved.

3.2.3. Further Updates to the ILO List Appended to R194

The adoption of these changes was helped by the drafting and adoption of a new recommendation, R194, the List of Occupational Diseases Recommendation, 2002 (No. 194), which entered into force on 20 December 2002. This recommendation proposed a new format for the list of occupational diseases, consisting of three basic categories for diagnosing diseases: the causative agent of the disease (chemicals, biological agents, physical factors), diseases by target organ (respiratory tract, skin diseases, musculoskeletal disorders and behavioral disorders) and cancer-type occupational diseases. Sixteen chemicals, two physical agents, four pulmonary disorders and one skin disease were added to the list. The category of cancer-type occupational diseases consisted of 14 carcinogens, the classification criterion being the category 1 list of the International Agency for Research on Cancer. Musculoskeletal disorders are also listed, with a general definition of work-related diseases. The category “other diseases” is a flexible category that includes diseases not listed elsewhere.
Recommendation R194 emphasizes its role as a tool for notification, the introduction of preventive measures, the improvement of the compensation procedure and the identification of the causes of occupational diseases . Moving the list of occupational diseases from the Compensation Convention (C121) to Recommendation R194 provided greater flexibility in drawing up a more comprehensive list. At its 279th session (November 2000), the Governing Body of the ILO recommended that the International Labour Conference, at its 90th session, consider the development of a new mechanism for regularly updating the list of occupational diseases . R194 was revised through two tripartite meetings, in 2005 and 2009 . The Governing Body of the ILO convened a meeting of experts on 13–20 November 2008 in Geneva to update the list of occupational diseases . In preparing the meeting, the ILO analyzed the 50 most up-to-date national lists of occupational diseases, including the recommended European Schedule of Occupational Diseases 2003/670/EC, and prepared a questionnaire on 34 issues related to changes, replacement, addition and re-categorization of occupational diseases. Eighty Member States responded, 17 of which indicated that their responses had been prepared after consultation with employers’ and employees’ representatives. Although most of the responses confirmed the proposed list with a few additional comments, some items, such as diseases caused by radiofrequency radiation, cancers caused by formaldehyde and silica, and psychosomatic syndromes caused by bullying, were not accepted onto the final list . New entries included four diseases caused by chemicals, one caused by physical agents, five diseases caused by biological agents, two skin diseases, seven musculoskeletal disorders, two psychiatric and behavioral disorders and eight carcinogenic substances . A further meeting on the revision of the Recommendation (No. 194) was held on 20–30 October 2009, involving 21 experts . The new list was approved at the 307th session in March 2010. It replaced the previous list approved in 2002, as set out in Annex 1 to the Recommendation (No. 194). The new list included a total of 106 entries divided into three basic categories: disease agents (41 chemicals, 9 biological agents, 7 physical factors), target organ diseases (12 respiratory tract, 4 skin diseases, 8 musculoskeletal disorders and 2 behavioral disorders) and 21 cancer-type occupational diseases . All revisions to these conventions and recommendations were influenced not only by the modernization of industry but also by international organizations and the European Union, and by the development and revision of each state’s lists, which reflect the social, cultural and technological background of the time and country.
In the second half of the 16th century and in the 17th century, a number of important Central European scholars appeared, humanists from various fields of science who came from Slovakia. Juraj (Georgius) Henisch (†1618, Slovak-German doctor, poet and polyhistor) was born on 24 April 1549 in Bardejov and worked as a doctor in Augsburg, Germany.
His scientific study “Arztney-Buch” was one of the most popular medical works of the time. Karol Rayger (1641–1709) and Karol Oto Moller (1670–1747) made significant contributions to the development of the medical sciences through their discoveries. In 1721, the Prešov-born doctor and pharmacist Ján Adam Raymann (1690–1770) entered world medical history with his research. The Kežmarok doctor Daniel Perlitzi (1705–1778) prepared a proposal for the establishment of a university of medicine based in Banská Štiavnica. This proposal met with resistance from the Hungarian rulers of the time, who did not want to raise the educational level of the Slovak nation they ruled over, even among children, so this and many other attempts were unsuccessful.

The beginnings of occupational health and safety care in Slovakia date back to the 19th century, during the Austro-Hungarian period. One of the pioneers of occupational medicine in Slovakia was František Xaver Schillinger (†1892, doctor) , who wrote a paper on cholera and first aid for miners. Gustáv Kazimír Zechenter-Laskomerský (†1908, doctor, writer, natural scientist) studied the hygiene of the life and work of forest and mining workers and studied their diseases. Imre Tóth (†1928, doctor) was the chief mining doctor in Banská Štiavnica. He wrote articles on the need to improve the environment and working conditions of miners. In the fight against infectious and mining diseases, he contributed to reducing the incidence of lead poisoning, which was very widespread among the metallurgical workers in Banská Štiavnica who produced silver from lead-bearing ores. He proposed a range of measures to prevent this disease, directed at personal hygiene (handwashing, cleaning of workplaces and the use of respiratory protection). He also proposed technical measures to remove fumes from metallurgical furnaces. He further contributed to curbing the spread of tuberculosis and typhoid, and he publicly fought alcoholism. These authors understood health education as an integral part of medical activity.

Later, in 1932, the Czechoslovak Republic (CSR) adopted the Act on the Compensation of Occupational Diseases on the basis of the Workmen’s Compensation (Occupational Diseases) Convention (No. 18). In the same year, the Occupational Diseases Advisory Board was created under the leadership of J. Teissinger; after 1942, it was transformed into the Occupational Medicine Advisory Board. After 1945, occupational medicine institutes developed strongly across the country. In 1949–1953, three institutes were established in Slovakia: in Bratislava, Martin and Košice. Their work was concerned with labor hygiene, the physiology of work and occupational diseases. In 1952, a Slovak branch of the society for occupational medicine was established within the J. E. Purkyně Czechoslovak Medical Society; it became independent in 1968 and still operates as an organizational component of the Slovak Medical Society.

In the 1970s and 1980s, the problems of coal and ore mines came to the fore. In view of the occurrence of work-related diseases such as noise-related hearing loss, vibration disease, silicosis caused by the inhalation of silica-containing dust and other respiratory diseases, these problems needed to be addressed without delay. The gradual reinforcement of the field with qualified personnel made it possible to develop and apply new methods of work and procedures in the field and in the laboratory.
Industrial production in Slovakia was also focused on the extraction and processing of raw materials, including coal and wood, iron and steel, heavy engineering and chemicals, posing high health risks to employees. These were large state-owned enterprises employing thousands of employees. With the adoption of Act No. 20/1966 on Care for Human Health, the requirements for the quality of the working environment and working conditions were further regulated and specified. Limits were set for harmful factors in the working environment. Directive 17/1970 of the Slovak Ministry of Health on the Assessment of Medical Fitness for Work laid down requirements for employers regarding the content, scope and frequency of preventive medical examinations and identified the categories of workers required to undergo them. In 1989, the Czechoslovak government ratified the Occupational Safety and Health Convention, 1981 (No. 155). In 1997, the National Reference Centre for Personal Exposure and Health Risk Assessment, today’s NHIC, was established.

3.3.1. Legislation

The values of determining variables help answer the question of the extent to which the physical factors of work and the working environment pose a risk to employees’ health and the extent to which the measures taken are effective. Whether these values are observed or exceeded indicates not only the level of risk but also the level of protection of employees’ health. Within the Slovak Republic, the basis for assessing the fulfillment of these requirements is the result of direct or indirect measurement and comparison with the values of determining variables laid down in decrees, government regulations and STN standards (adopted from international standards). The objective measurement of the physical factors of the environment and the working environment is carried out under Guideline OOFŽP-7674/2010 . This guideline is used for the measurement of noise and vibration, daylight and artificial lighting, electromagnetic fields, the thermal-humidity microclimate and the other physical factors to be determined or evaluated at their place of occurrence.

A comprehensive treatment of the whole area of health protection at work can be found in European Framework Directive 89/391/EEC . This Directive addresses the fact that employees may be exposed to dangerous environmental factors at the workplace during their working life. Since Slovak legislation is harmonized with that of the EU, the notion of risk assessment and other concepts related to this procedure have also entered the legal norms of the Slovak Republic. In the legislation of the Slovak Republic, risk assessment in the workplace is specified in Act No. 311/2001, the Labour Code , and in Act No. 355/2007 . Details of the factors of work and the working environment under the classification of work into categories are given in Annex 1 of Decree No. 448/2007 . The method of reporting and registering occupational diseases and threatened occupational diseases in the Slovak Republic is laid down by Act No. 355/2007, Section 31b(1) and (2) . The general principles of prevention and the basic conditions for ensuring health at work are laid down by Act No. 124/2006 , and the requirements for the provision and use of personal protective equipment are laid down in Regulation No. 395/2006 .
3.3.2. Development of the Incidence of Occupational Diseases in Slovakia from 1987 to 2019

The basic tasks of clinical occupational medicine and clinical toxicology in Slovakia include the comprehensive diagnosis, treatment and assessment of diseases arising in connection with adverse and health-damaging factors of work and the working environment. This includes the reporting of occupational diseases and threatened occupational diseases. A total of 21,025 new occupational disease cases were reported in Slovakia between 1987 and 2019, based on data documented by the National Health Information Centre (NHIC). A graphical representation of the development of the number of occupational diseases in Slovakia for the period 1987 to 2019 is shown in . The average annual number of recognized occupational diseases in the given period was almost 637. A significant decrease in the number of reported occupational diseases was recorded up to 1995: from 1262 reports (1987), with a temporary rise to 1331 reports (1991), down to 601 reports (1995). Between 1995 and 2019, the number of newly reported occupational diseases roughly halved, with slight fluctuations, to 347 reports (2019), with an all-time low in 2013 (301 reports). In the long term, we are seeing a downward trend in the number of occupational diseases. The graph also shows the development of employment in Slovakia (1987–2019); the average annual number of workers over the period was 2262.5 thousand persons. The assessment of occupational diseases reported over the last 32 years (1987–2019) shows a more pronounced decrease in the second half of the reference period (2003–2019), representing 49.76%, i.e., 6971 cases.

The most commonly reported occupational diseases include those listed in (item 22, items 24–26, item 28, item 29, items 33–34 and item 38). Over the period considered, 19,142 new cases related to these diseases were reported, representing almost 91% of the total number of reported occupational diseases. The development of the number of occupational diseases for selected diseases is shown in . For the sake of clarity, only those diseases whose average share of the total number of occupational diseases over the given period exceeded 10% are plotted in the graph. The percentage of selected occupational diseases out of the total number of reported cases in each year is shown in . Compared to the first half of the period (1987–2002), a decrease can be seen in almost all the selected types of occupational diseases in the second half (2003–2019) ( and ). The only exceptions are diseases affecting the musculoskeletal, vascular and nervous systems of employees exposed at work to prolonged excessive and one-sided loads on the upper limbs (item 29). Despite the significant decreasing trend in the overall incidence of reported occupational diseases, limb disease from long-term, excessive and one-sided loads (item 29) is not developing favorably . The annual incidence of these reported diseases began to increase significantly from 1991. The largest number of reports (230 cases) was recorded in 2006, representing almost 46% of the total number of cases (504) in that year. By 2016, the proportion had increased to 55% . Compared to 1987, reported limb diseases due to long-term excessive and one-sided loads had increased by 885% in 2006.
Between 2003 and 2019, the incidence of these diseases (item 29) increased by 55.58%, i.e., 3065 cases, overwhelmingly in women.

Vibration disease (item 28) has long been one of the most common occupational diseases in Slovakia. After limb disease from prolonged excessive and one-sided loads, vibration disease has consistently come second among the annually reported occupational diseases over the last two decades (with the exception of 2011, when noise-related hearing loss was temporarily in second place). The high numbers of 1987–2007 were followed by a significant decrease over 2008–2019, with the lowest incidence in 2011 (40 cases); in the following years, the numbers increased slightly . Between 2003 and 2019, there was a very significant decrease in the incidence of skin diseases (excluding skin cancer) and communicable skin diseases (item 22) compared to the previous period (1987–2002), by 79.29%, a decrease of 1685 cases . An almost identical percentage decrease (79.57%) was also seen in cases of infectious and parasitic diseases and diseases communicable from animals to humans (items 24–26). Noise-related hearing loss (item 38) repeatedly ranks fourth or fifth in the frequency of annually reported occupational diseases. The annual incidence of reported noise-related hearing damage decreased significantly between 1987 and 2008. In 2009–2014, a rise in these diseases was again noted; they subsequently decreased from 2015 with slight fluctuations. The lowest incidence was recorded in 2008 and 2019, with 17 cases each. Cancer-type occupational diseases (items 21 and 23) were reported in 177 employees. The number of annual reports fell by 77.08% between 2002 and 2019, a decrease of 111 cases . The highest incidence was recorded in 1993, with 15 cases; the average annual number of reports was five cases. The average annual incidence of lung-related occupational diseases (items 33–34) was 27 cases, representing 3% of the total number of occupational diseases over the whole reporting period. In the case of item 46, the number of occupational diseases reported over the whole period under review (1987–2019) was negligible, namely 37 cases; according to archive records, the disease was not diagnosed until 2003.

3.3.3. Analysis of the Development of Occupational Diseases in Slovakia over the Last 20 Years

Available data show that a total of 8883 new cases of occupational diseases were reported in the last 20 years (2000 to 2019). The average annual number of recognized occupational diseases in this period was 444 cases. The trend in the incidence of occupational diseases in Slovakia is downward. The average annual decrease in the number of occupational diseases is 16 cases, representing an annual decrease of about 3%. For example, the calculated year-on-year dynamics show that in 2005, the number decreased by 200 cases compared to the previous year, representing a decrease of around 67.4%; on the other hand, there was an increase of 91 cases in 2006, around 22% more than in 2005. In 2019, 347 cases of occupational diseases were reported, corresponding to 13.4 cases per 100,000 workers. Compared to the situation as of 31 December 2018, the number of reported occupational diseases increased by 39 cases (11.24%).
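For clarity, the per-100,000 rate quoted above follows from the standard incidence formula; the workforce figure used here is back-calculated from the published rate and is therefore only an approximation:

$$\text{incidence per } 100{,}000 \text{ workers} = \frac{\text{reported cases}}{\text{number of workers}} \times 100{,}000 = \frac{347}{\approx 2.6 \times 10^{6}} \times 100{,}000 \approx 13.4$$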
Compared to 2000, 313 fewer cases of occupational diseases were reported in 2019, almost 53% fewer than in 2000. When analyzing the number of occupational diseases, we selected three indicators : the gender of workers (two subcategories), the age category (five subcategories) and the sectoral classification of economic activities (four subcategories).

A graphical representation of the development of the number of occupational diseases by workers’ gender is shown in . Men are more heavily represented in the total number of diseases, being diagnosed with occupational diseases 1.8 times more often than women. In 2007, 422 cases of occupational diseases were diagnosed in men (as much as 73% of the total number of reported diseases), almost 2.8 times more than in women. The data show a significantly decreasing trend in the number of occupational diseases in men over the 20-year period. Since 2008, the most commonly reported cases have been in the 50–59 age group . The average share of this age group in the total number of occupational diseases is almost 42%, rising to 52% in 2019. The second most common age category is 40–49, with an average share of almost 34% of the total number of diagnosed diseases. In recent years, the number of reported cases in the over-60 category has increased slightly, while the number of reported occupational diseases in the 30–39 age group is on a downward trend. A graphical representation of the development of the number of cases by age category is shown in .

A graphical representation of the development of the number of diseases by sector of economic activity is shown in . The highest incidence of occupational diseases based on the sectoral classification of economic activities was in the industrial production sector (Sector 3). Over 20 years, 3748 cases were reported in this sector, representing 42.2% of the total number reported during the period. The lowest number of recognized occupational diseases in the period was in construction (Sector 4; 340 cases, 3.8% of the total number of diagnosed diseases). In 2007, the number of diseases in mining and quarrying professions (Sector 2) increased sharply: an increase of as much as 38% compared to the previous year and of 139% compared to 2005. In almost all sectors, we see a downward trend in the number of diseases. The only sector that maintains a constant trend is construction (Sector 4), where the average proportion of diagnosed occupational diseases is 4%.

We used the ETS (ExponenTial Smoothing) method to determine a time-series model for the number of occupational diseases for the period 2000–2019 and forecasts for the coming period. The resulting time-series prediction model consists of three components: Error, Trend and Seasonal. We took into account several models with different suitable combinations of the types of all three components. The ETS(M,A,N) model, with multiplicative errors, additive trend and no seasonality, represents Holt’s linear method with multiplicative errors; ETS(A,A,N) denotes Holt’s linear method with additive errors; ETS(A,N,N) denotes simple exponential smoothing with additive errors; and so on. We compared the fitted models using the AIC criterion, the best model being the one with the lowest AIC value.
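To make the model-selection step concrete, the sketch below shows how such an AIC-based comparison of ETS specifications could be carried out. It is an illustrative reconstruction, not the authors' actual code: the annual series is hypothetical placeholder data rather than the real NHIC counts, and the ETS implementation from the statsmodels library is assumed.

```python
# Illustrative sketch of ETS model selection by AIC (not the authors' code).
# The `cases` series is hypothetical placeholder data, not the real NHIC counts.
import pandas as pd
from statsmodels.tsa.exponential_smoothing.ets import ETSModel

# Hypothetical annual numbers of reported occupational diseases, 2000-2019.
cases = pd.Series(
    [660, 640, 600, 580, 615, 415, 506, 480, 460, 440,
     425, 400, 370, 301, 380, 360, 330, 320, 308, 347],
    index=pd.date_range("2000", periods=20, freq="YS"),
)

# Candidate (error, trend, damped) combinations; seasonality is omitted (N),
# since annual data cannot exhibit a seasonal component.
candidates = [
    ("add", None, False),   # ETS(A,N,N): simple exponential smoothing
    ("add", "add", False),  # ETS(A,A,N): Holt's linear method, additive errors
    ("mul", "add", False),  # ETS(M,A,N): Holt's linear method, multiplicative errors
    ("mul", "mul", True),   # ETS(M,Md,N): damped multiplicative trend, multiplicative errors
]

fits = {}
for error, trend, damped in candidates:
    model = ETSModel(cases, error=error, trend=trend, damped_trend=damped)
    res = model.fit(disp=False)
    fits[(error, trend, damped)] = res
    print(f"ETS(error={error}, trend={trend}, damped={damped}): AIC = {res.aic:.2f}")

# Pick the specification with the lowest AIC and produce a 5-year point forecast.
best = min(fits.values(), key=lambda r: r.aic)
print(best.forecast(steps=5))
```

Prediction intervals such as the 80% and 95% bands described below would then be obtained from the fitted model's prediction interface (in statsmodels, typically by simulating future paths), though the exact call depends on the library version.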
It was found that the best model is ETS(M,Md,N), i.e., a damped multiplicative trend (Md) with multiplicative errors (M) and no seasonality (N). The damping parameter is 0.97. A graphical representation of the original and smoothed time series obtained using the ETS method is shown in . The graph shows a forecast of the development of the number of occupational diseases over the next five years. In addition to the point forecast, prediction intervals are also constructed: the grey and blue areas display the 95% and 80% prediction intervals for forecasts obtained by the ETS(M,Md,N) model. The projection of the development of the number of occupational diseases in Slovakia over a period of 5 years obtained with the best model, ETS(M,Md,N), is shown in . We can state that the number of occupational diseases diagnosed in Slovakia has been on a downward trend over the 20 years of monitoring. This favorable trend may be related to a number of factors, including, in particular, the increased responsibility of employers and employees who comply with the statutory principles of occupational health and safety.
The basic tasks of clinical occupational medicine and clinical toxicology in Slovakia include the comprehensive diagnosis, treatment and assessment of diseases arising in connection with adverse and health-damaging factors of work and the working environment. This includes the reporting of occupational diseases and threatened occupational disease. A total of 21,025 new occupational disease cases were reported in Slovakia between 1987 and 2019, based on data documented by the National Health Information Centre (NHIC). A graphical representation of the development of the number of occupational diseases in Slovakia for the period 1987 to 2019 is shown in . The average annual number of recognized occupational diseases in the given period was almost 637. A significant decrease in the number of reported occupational diseases was recorded up to 1995: from 1262 reports (1987), with a temporary rise to 1331 reports (1991), down to 601 reports (1995). Between 1995 and 2019, the number of newly acquired occupational diseases roughly halved, with slight fluctuations, to 347 reports (2019), reaching an all-time low in 2013 (301 reports). In the long term, we are seeing a downward trend in the number of occupational diseases. The graph also shows the development of employment in Slovakia (1987–2019); the average annual number of workers over the period is 2262.5 thousand persons.

The occupational diseases reported over the last 32 years (1987–2019) show a more pronounced decrease in the second half of the reference period (2003–2019), a decrease of 49.76%, i.e., 6971 cases. The most commonly reported occupational diseases include those listed in (item 22, items 24–26, item 28, item 29, items 33–34 and item 38). Over the period considered, 19,142 new cases of these diseases were reported, representing almost 91% of the total number of reported occupational diseases. The development of the number of occupational diseases in terms of selected diseases is shown in . For the sake of clarity, only those diseases for which the average percentage of the total number of occupational diseases over the given period exceeded 10% are plotted in the graph. The percentage of selected occupational diseases out of the total number of reported cases in each year is shown in . Compared to the first half of the period (1987–2002), we can see a decrease in almost all the selected types of occupational diseases in the second half (2003–2019) ( and ). The only exceptions are diseases affecting the musculoskeletal, vascular and nervous systems of employees exposed at work to prolonged excessive and one-sided loads on the upper limbs (item 29).

Despite a significant decreasing trend in the incidence of reported occupational diseases, limb disease from long-term, excessive and one-sided loads (item 29) is not developing very favorably. The annual incidence of reported diseases of the limbs from long-term excessive and one-sided loads began to increase significantly from 1991. The largest number of reports (230 cases) was recorded in 2006, representing almost 46% of the total number of cases (504) in that year. In 2016, the proportion had increased to 55%. Compared to 1987, reported limb diseases due to long-term excessive and one-sided loads had increased by 885% in 2006. Between 2003 and 2019, the incidence of these item 29 diseases amounted to 3065 cases, an increase of 55.58% compared with the previous period, overwhelmingly in women.
Vibration occupational disease (item 28) has long been one of the most common occupational diseases in Slovakia. After limb disease from prolonged excessive and one-sided loads, vibration disease has consistently come second among the numbers of annually reported occupational diseases in the last two decades (with the exception of 2011, when noise-related hearing loss was temporarily in second place). The high numbers of 1987–2007 were followed by a significant decrease over 2008–2019, with the lowest incidence in 2011 (40 cases); in the following years, the numbers have increased slightly. Between 2003 and 2019, there was a very significant decrease in the incidence of skin diseases (excluding skin cancer) and communicable skin diseases (item 22) compared to the previous period (1987–2002), by 79.29%, a decrease of 1685 cases. Almost the same percentage decrease (79.57%) was also seen in cases of infectious and parasitic diseases and diseases communicable from animals to humans (items 24–26). Noise-related hearing loss (item 38) is repeatedly in fourth or fifth place in the order of frequency of annually reported occupational diseases. The annual incidence of reported noise-related hearing damage decreased significantly between 1987 and 2008, rose again in 2009–2014, and then declined from 2015 with slight fluctuations. The lowest incidence was recorded in 2008 and 2019, with 17 cases each. Occupational cancers (items 21 and 23) were reported in 177 employees. The number of annual reports fell by 77.08% between 2002 and 2019, a decrease of 111 cases. The highest incidence was recorded in 1993, with 15 cases; the average annual number of reports was five cases. The average annual incidence of lung-related occupational diseases (items 33–34) is 27 cases, representing 3% of the total number of occupational diseases over the whole reporting period. In the case of item 46, the number of reported occupational diseases over the whole period under review (1987–2019) was negligible, at 37 cases; according to archive records, the disease was not diagnosed until 2003.

Available data show that a total of 8883 new cases of occupational diseases were reported in the last 20 years (from 2000 to 2019). The average annual number of recognized occupational diseases in this period is 444 cases. The trend in the incidence of occupational diseases in Slovakia is downward: the average annual decrease in the number of occupational diseases is 16 cases, representing an annual decrease of about 3%. For example, the calculated dynamics show that in 2005 the number decreased by 200 cases compared with the previous year, falling to around 67.4% of the 2004 figure (a decrease of roughly 32.6%). On the other hand, there was an increase of 91 cases in 2006, around 22% more than in 2005. In 2019, 347 cases of occupational diseases were reported, corresponding to 13.4 cases per 100,000 workers. Compared to the situation as of 31 December 2018, the number of reported occupational diseases increased by 39 cases (11.24%). Compared to 2000, there were 313 fewer cases of occupational diseases in 2019; the 2019 figure is only about 53% of the 2000 level.
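To make the chain-index arithmetic behind these dynamics explicit, the following minimal Python sketch reproduces the 2005 and 2006 figures; the absolute counts are back-solved from the percentages quoted above and are illustrative, not official NHIC values.

```python
# Chain indices and absolute year-over-year changes; counts are back-solved
# from the percentages quoted in the text (illustrative, not NHIC data).
counts = {2004: 614, 2005: 414, 2006: 505}

for prev, cur in [(2004, 2005), (2005, 2006)]:
    diff = counts[cur] - counts[prev]          # absolute year-over-year change
    chain = 100 * counts[cur] / counts[prev]   # chain index in percent
    print(f"{cur}: {diff:+d} cases, chain index {chain:.1f}%")
# 2005: -200 cases, chain index 67.4%  -> a decrease of ~32.6%
# 2006: +91 cases,  chain index 122.0% -> an increase of ~22%

print(f"2019 vs 2000: {347 - 660:+d} cases, {100 * 347 / 660:.1f}% of the 2000 level")
```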
When analyzing the number of occupational diseases, we selected three indicators: the gender of workers (two subcategories), the age category (five subcategories) and the sectoral classification of economic activities (four subcategories).

A graphical representation of the development of the number of occupational diseases by workers' gender is shown in . Men are more heavily represented in the total number of diseases and were diagnosed with occupational diseases 1.8 times more often than women. In 2007, men were diagnosed with 422 cases of occupational diseases (as much as 73% of the total number of reported diseases), almost 2.8 times more cases than women. Over the 20-year period, the data show a markedly decreasing trend in the number of occupational diseases in men.

Since 2008, the most commonly reported cases have been in the 50–59 age group. The average share of this age group in the total number of occupational diseases is almost 42%, reaching 52% in 2019. The second most common age category is 40–49, for which the average share of the total number of diseases diagnosed is almost 34%. In recent years, the number of reported cases in the over-60 category has increased slightly. On the other hand, the number of reported occupational diseases in the 30–39 age group is on a downward trend. A graphical representation of the development of the number of cases of diseases by age category is shown in .

A graphical representation of the development of the number of diseases by sector of economic activity is shown in . The highest incidence of occupational diseases based on the sectoral classification of economic activities was in the industrial production sector (Sector 3). Over 20 years, 3748 cases were reported in this sector, representing 42.2% of the total number reported during the period. The lowest number of recognized occupational diseases in the period was in construction (Sector 4; 340 cases, 3.8% of the total number of diseases diagnosed). In 2007, the number of diseases from mining and quarrying professions (Sector 2) increased sharply, by as much as 38% compared to the previous year and by 139% compared to 2005. In almost all sectors, we see a downward trend in the number of diseases; the only sector that maintains a constant trend is construction (Sector 4), where the average proportion of occupational diseases diagnosed is 4%.

We used the ETS (ExponenTial Smoothing) method to determine the time-series model for the number of occupational diseases for the period 2000–2019 and the forecasts for the coming period. The resulting time-series prediction model consists of three components: Error, Trend and Seasonal. We considered several models with different suitable combinations of the types of all three components. The ETS(M,A,N) model, with multiplicative errors, an additive trend and no seasonality, represents Holt's linear method with multiplicative errors; the ETS(A,A,N) model is Holt's linear method with additive errors; ETS(A,N,N) is simple exponential smoothing with additive errors; and so on. We compared the fitted models using the AIC criterion, the best model being the one with the lowest AIC value. The best model was found to be ETS(M,Md,N), i.e., a damped multiplicative trend (Md) with multiplicative errors (M) and no seasonality (N). The damping parameter is 0.97.
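For readers who wish to reproduce this model-selection step, the following is a minimal Python sketch (assuming statsmodels 0.12 or later; the yearly counts are approximate values reconstructed from figures quoted in the text, not the official NHIC series): candidate ETS specifications are fitted, the model with the lowest AIC is selected, and a 5-year forecast with 80% and 95% prediction intervals is produced.

```python
# Minimal sketch of ETS model comparison by AIC and 5-year forecasting.
# Yearly counts 2000-2019 are illustrative placeholders, NOT official data.
import numpy as np
from statsmodels.tsa.exponential_smoothing.ets import ETSModel

y = np.array([660, 620, 590, 560, 614, 414, 505, 578, 520, 480,
              460, 430, 410, 301, 380, 360, 350, 330, 308, 347], dtype=float)

candidates = {
    "ETS(A,N,N)":  dict(error="add", trend=None),                      # simple exponential smoothing
    "ETS(A,A,N)":  dict(error="add", trend="add"),                     # Holt, additive errors
    "ETS(M,A,N)":  dict(error="mul", trend="add"),                     # Holt, multiplicative errors
    "ETS(M,Md,N)": dict(error="mul", trend="mul", damped_trend=True),  # damped multiplicative trend
}

fits = {name: ETSModel(y, seasonal=None, **kw).fit(disp=False)
        for name, kw in candidates.items()}

best_name = min(fits, key=lambda n: fits[n].aic)  # lowest AIC wins
best = fits[best_name]
print(best_name, "AIC =", round(best.aic, 1))

# 5-year point forecast with prediction intervals.
pred = best.get_prediction(start=len(y), end=len(y) + 4)
print(pred.summary_frame(alpha=0.05))  # columns: mean, pi_lower, pi_upper (95%)
print(pred.summary_frame(alpha=0.20))  # 80% interval
```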
A graphical representation of the original and smoothed time series obtained using the ETS method is shown in . The graph shows a forecast of the development of the number of occupational diseases over the next five years. In addition to the point forecast, prediction intervals are also provided: the grey and blue areas display the 95% and 80% prediction intervals, respectively, for forecasts obtained with the ETS(M,Md,N) model. The projection of the development of the number of occupational diseases in Slovakia over a period of 5 years obtained with the best model, ETS(M,Md,N), is shown in . We can state that the number of occupational diseases diagnosed in Slovakia has been on a downward trend during the 20 years of monitoring. This favorable trend may be related to a number of factors, including, in particular, the increased responsibility of employers and employees who comply with the statutory principles of occupational health and safety.

During the nineteenth century, the essence of work underwent a major change. Production increased due to the greater efficiency and effectiveness of the means of production, while working conditions in factories, mines and workshops were often unfavorable. We are now at the beginning of the fourth industrial revolution, led by multinationals and information technology. Yet even today, occupational diseases are a society-wide health problem with economic, social and labor-law aspects. It is estimated that up to 2 million people die from occupational diseases per year, and up to 160 million are diagnosed with diseases caused by the work being done.

This study provides a historical overview of the development of occupational diseases in the world and in Slovakia. The study also traces the development of the incidence of occupational diseases in Slovakia in the period 1987–2019 and forecasts the numbers for the next five years. The results from the available data show that the trend in the number of occupational diseases in Slovakia is favorable; i.e., the number of cases of occupational diseases diagnosed is declining in the long term. This favorable development can also be attributed to the activities of regional public health authorities and occupational health services. Their activities include guiding the social and health prevention of diseases and harm to health from work by promoting national strategies, priorities and programs for the protection, promotion and development of the public health of employees. Through long-term improvement in preventive occupational medicine and public health, various preventive measures, the education of workers and awareness raising, these professions have contributed to the reduction of occupational diseases in Slovakia.

In this paper (Part 1), we examined the historical development of occupational diseases and, at the same time, the development of the incidence of occupational diseases in Slovakia. In the next part of our research, we will focus specifically on the group of diseases that arise when working in noisy or dusty environments, when working with vibrating tools, or when working long-term with one-sided loads. The aim of that research will be to identify, through appropriate statistical methods, the extent to which physical factors of work and the working environment, or other input variables (e.g., age, employment, general state of health of the working person), affect the development of an occupational disease.
This research is of great importance for practice, since the occurrence of occupational diseases is one of the important indicators of the level of care for the health of employees and reflects the state of primary prevention of occupational diseases.
Perioperative Management in the Care of Patients with Active Cardiac Rhythm Implants

As a first step in preoperative planning, the indication for the planned procedure should be re-evaluated. In addition to taking the history, a physical examination and a review of the patient's previous records (for revision operations, e.g., operative reports from previous CIED procedures) are essential. The patient's handedness should be ascertained with a view to a possible contralateral implantation. Additional history-related considerations for preoperative planning are presented in Table . Inspection of the surgical site is essential, especially for revision procedures. The skin should be intact and free of infection. Scars may point to previous operations, such as port or bypass surgery. Increased venous markings may indicate stenosis or occlusion of the venous drainage, which is why venography should be performed preoperatively, particularly before revision procedures or upgrades (Fig. ). The relevant preoperative diagnostic workup is summarized in Table .

Informed consent
Written informed consent must be obtained at an adequate interval before the operative procedure. It is undisputed that an early consent discussion makes it possible to clarify open questions in good time. The relevant complications and their frequencies are summarized in Table . Furthermore, patients with statutory health insurance must be informed of their right to obtain a second opinion.

Bleeding
With an incidence of 0.2–16%, bleeding complications are the most frequent complication of CIED procedures, which is why the perioperative management of antithrombotic therapy is of crucial importance. Perioperative continuation of existing therapy with vitamin K antagonists (VKA; ) or NOACs leads to a significant reduction in hematomas compared with heparin bridging. In individual cases, antiplatelet therapy can be paused 3 to 7 days before the operation (Table ; ). In principle, non-urgent procedures should be postponed until the regular course of dual antiplatelet therapy can be completed. In patients with an increased bleeding risk, a pressure dressing or sandbag can help to prevent a relevant hematoma; it should be applied for a maximum of 24 h. Crucially, preventing bleeding complications significantly reduces the risk of CIED infection.

CIED infections
Patients must be informed preoperatively about the risk of early or late CIED infection. Repeat procedures, above all ICD- or CRT-related procedures, carry a higher risk of infection. Patient-related factors also influence the occurrence of CIED infection (Infobox ). A summary of the most important preventive measures is given in Infobox .

Infobox 1: Risk factors for CIED infections
Patient-related risks: history of endocarditis, dialysis-dependent renal insufficiency, diabetes mellitus
Procedure-related risks: reoperation, revision, generator replacement
Operative consequences: leadless pacemaker, subcutaneous or extravascular ICD, antibacterial envelope

Infobox 2: Measures to prevent CIED infections
Dos:
Antibiotic prophylaxis within 1 h before incision
Hair removal with an electric clipper (on the day of surgery)
Surgical skin preparation with alcohol-based chlorhexidine
Irrigation of the wound with sterile saline
Sterile dressing for 2–10 days
Don'ts:
CIED implantation during acute infection (fever within the preceding 24 h)
Instillation of antiseptics or antibiotics into the CIED pocket
Use of braided suture material for the final skin closure
Routine postoperative antibiotic therapy
Temporary transvenous pacemakers and central venous catheters

Lead complications
The most frequent complications leading to reoperation are early lead dislodgements (2.4%; ), in particular dislodgement of the LV lead in CRT procedures. A rise in the pacing threshold or a loss of sensing, as well as insulation defects or lead fractures in the long term, can lead to revision and should be mentioned during the consent discussion.

Pneumothorax and hemothorax
The cephalic vein or the axillary vein should be the preferred access route. Puncture of the subclavian vein cannot always be avoided, especially in revision or CRT procedures, so complications such as pneumothorax or hemothorax, as well as late lead dysfunction due to subclavian crush syndrome, cannot be entirely ruled out. Ultrasound-guided puncture of the axillary vein is a safe alternative to subclavian vein puncture and shows a long-term reduction in complications compared with subclavian puncture. These potential complications must be communicated to patients.

Fitness to drive
Patients should be informed as early as the consent discussion about a possible restriction of their fitness to drive for a certain postoperative period. We refer to the assessment guidelines on fitness to drive.

Informed consent before ICD implantation
Patients receiving an ICD or CRT-D, whether for primary or secondary prophylaxis, should be informed separately about the higher complication rates. In this context, the approach after an ICD shock and the possibility of appropriate as well as inappropriate shocks should also be explained to patients.

Procedure-specific considerations
Depending on the indication, the type of CIED the patient needs should be determined preoperatively. A corresponding decision flowchart, based on the ESC recommendations, is shown in Fig. . Before a planned procedure, several points must be considered when choosing the leads and the generator; these are detailed in Table . In particular, a current echocardiogram with assessment of left ventricular function should be available before a planned generator replacement, which is often performed on an outpatient basis and carries the risk that not all relevant previous findings are available to the operator. In the interim, pacemaker-induced heart failure may have developed, which can be treated during the planned generator replacement by upgrading to biventricular pacing or conduction system pacing (CSP). CSP is becoming increasingly important, particularly in heart failure therapy and after failed implantation of left ventricular leads. The perioperative preparation is comparable to that for conventional CIED therapy and is not discussed in detail in this article.

Structural requirements
Before planning the procedure, it should be clarified that the spatial and technical conditions are suitable for performing a CIED procedure. Appropriate room ventilation technology must be available, and the hygiene requirements must be met. In principle, all CIED procedures can be performed in (hybrid) operating rooms, provided these meet the required ventilation class. Close coordination with the hospital hygiene team should take place. Persons working with X-rays, in particular those with access to the controlled area (X-ray room), must be instructed in accordance with Section 63 of the German Radiation Protection Ordinance (StrlSchV), before starting this work and annually thereafter, on the working procedures, the protective measures to be applied, and possible hazards. Overall, the operators and assisting staff should have the appropriate qualifications and experience to be able to manage possible complications. In addition to the ability to monitor vital signs, emergency equipment such as a defibrillator and emergency drugs (epinephrine, atropine, isoprenaline, etc.) should be available in the operating room.
A team time-out form helps to avoid errors. Immediately before the operation, the patient should be visited to verify personal data, fasting status, and medication intake or pauses. Patients should also be asked, among other things, about being free of fever (> 24 h). Before implantation, a peripheral venous line should be placed so that drugs can be administered. A venous access ipsilateral to the surgical field (e.g., the median cubital vein) is useful so that venography of the axillary/subclavian venous system can be performed early if required. Since in most patients the left side is the non-dominant side with respect to handedness and shows a lower defibrillation threshold (DFT; ) in ICD therapy, it should be the preferred access side (Fig. ).

Intraoperative monitoring
After appropriate positioning of the patient, monitoring is established. It should comprise ECG, blood pressure, and oxygen saturation monitoring; the latter should provide acoustic feedback so that a drop in oxygen saturation or pulse is detected early. In addition, defibrillation patches should be applied and an external defibrillator with transcutaneous pacing capability connected in patients undergoing ICD procedures and, where appropriate, pacemaker procedures.

Antibiotic prophylaxis
Preoperative antibiotics to prevent infection should be given within 1 h before incision. The preferred antibiotic is cefazolin 1–2 g IV or flucloxacillin 1–2 g IV; in the case of allergy or an increased risk profile for resistant organisms, vancomycin (15 mg/kg body weight, infused over 90–120 min; ) is an alternative. Routine postoperative antibiotic therapy is not recommended.

Analgosedation
Preoperative analgesia, e.g., with morphine (3–5 mg IV), combined with antiemetic medication (e.g., ondansetron 4 mg IV), can markedly reduce pain irrespective of the type of local anesthesia (e.g., with xylocaine or lidocaine). Although morphine certainly also has a sedative component, additional sedation or anxiolysis with midazolam can be considered. However, particularly in elderly patients, the associated risk of paradoxical reactions and of respiratory depression must be borne in mind. Continuous oxygen insufflation of 1–2 L/min via a nasal cannula then appears sensible. Deeper analgosedation with continuous administration of propofol or remifentanil is also possible. The medical and assisting staff should be appropriately trained to perform analgosedation and to manage necessary adjustments and complications, and patients should be informed accordingly. Analgosedation is a separate medical service and therefore also requires separate informed consent. The type and depth of sedation must be selected and titrated carefully, especially in view of procedures that may be performed on an outpatient basis (adequate postoperative monitoring!).

Skin antisepsis
As preoperative skin antiseptics for general surgical operations and intravascular catheterization, randomized trials have shown alcohol-based 2% chlorhexidine solutions (increasingly replaced by octenidine hydrochloride) to be superior to povidone-iodine solutions. To date, however, there are no randomized data on their use in CIED procedures. Above all, adequate exposure time (octenidine 60–120 s) and drying time before incision are important.
Regardless of whether the procedure is performed on an outpatient or inpatient basis, further monitoring is indicated. Depending on the type of procedure and the associated risk, monitoring for 1 to 4 h is appropriate, except after lead extractions, for which 12-h monitoring is indicated.

Chest X-ray
Before discharge, a chest X-ray in 2 planes should be obtained to rule out pneumothorax and to document lead position. A comparison image can help to detect lead problems, such as suspected dislodgement, early.

Device interrogation
Before discharge, a physician's visit and an interrogation of the implanted system are mandatory. Programming should be individualized according to the implantation indication, and possible additional functions should be activated. Further dressing and wound management, depending on the type of skin closure, and prompt re-presentation in the event of complaints or abnormalities in the wound area should be discussed. Complete immobilization of the arm on the side of the procedure should be avoided. Before discharge, patients must receive an implant identification card, be informed again about possible restrictions on fitness to drive, and be given a follow-up appointment within 2–12 weeks.

Patient information
Comprehensive information for patients before discharge regarding behavioral measures, ideally also in written form, appears to be a sensible way to convey postoperative recommendations and to allay possible uncertainties.
Since the MDK Reform Act, Section 115b (1a) of the German Social Code Book V (SGB V) has provided for an expansion of the catalog of outpatient operations in hospitals (AOP catalog). The expanded catalog was presented on 01.01.2023 and, after a transition period, has been in force since 01.04.2023. Under it, cardiological procedures, above all pacemaker implantations but also ICD generator replacements, are increasingly to be performed on an outpatient basis. In order to bill services from the AOP catalog as inpatient care, or to admit patients as inpatients, the previous G-AEP criteria have been replaced by context factors: comorbidities are calculated using complex scores, and almost only acute illnesses are included in the calculations. As a result, the recommendations of the DGK on the outpatient feasibility of CIED procedures deviate from the requirements of the current AOP catalog. Patients have no say in whether the procedure is performed on an outpatient or inpatient basis in the sense of shared decision making. Nevertheless, CIED procedures must be reviewed for the possibility of outpatient performance, and this should be clarified with patients during preparation. In this context, the implanting center must have the structures and personnel in place to complete the points listed under postoperative care in good time before discharge from hospital. A structure must also be created that enables patients to present to the implanting center or a cooperating center in an emergency. Patient safety must always come first, however, together with the option of converting an outpatient case into an inpatient one so that patients can be admitted. This must be done independently of economic considerations of service provision.
Good perioperative planning helps to avoid treatment errors, minimize complications, and increase patient safety and satisfaction. Avoiding bleeding complications is essential for reducing the risk of infection. Particularly in view of increasing outpatient care, structures for the postoperative management of patients should be established.
Butorphanol Tartrate Nasal Spray for Post-Cesarean Analgesia and Prolactin Secretion

The number of cesarean sections performed has increased rapidly following the development of medical technology and increasing attention to maternal and infant safety. The cesarean section rate in China increased from 28.8% in 2008 to 36.7% in 2018, and cesarean section has become one of the most common hospitalization procedures in China and worldwide. The increased number of cesarean sections poses a serious challenge to the management of postpartum pain. Patients experience painful uterine spasms due to the substantial uterine trauma caused by the cesarean section and the routine postoperative use of oxytocin. Uncontrolled postpartum uterine contraction pain can affect breast milk secretion and early mother-infant contact, limit early maternal activity, and further increase the risk of thromboembolism; in severe cases, it can even lead to postpartum depression and delay early maternal recovery. The Society for Obstetric Anesthesia and Perinatology has proposed the concept of enhanced recovery after cesarean section, holding that efficient and safe pain management strategies are crucial for rapid postoperative recovery. Good postoperative analgesia can relieve maternal pain, reduce postoperative complications, shorten hospitalization, reduce the burden on society and families, and improve patient satisfaction.

The European Society for Regional Anesthesia and Pain Management recommends that intrathecal injection, patient-controlled intravenous analgesia (PCIA), patient-controlled epidural analgesia (PCEA), and regional analgesia techniques can achieve an ideal analgesic effect in the management of pain after cesarean section under spinal anesthesia. However, patients who use PCEA for a long time risk catheter prolapse and epidural infection due to their own negligence or inappropriate care, and intrathecal injection and regional analgesia techniques are also invasive procedures. At present, research on the effects of multiple confounding factors on breastfeeding, including the type of anesthesia, the dose and type of analgesic drugs, and intrapartum intervention, is lacking, but there is evidence that effective control of postoperative pain is important for breastfeeding. Nonsteroidal anti-inflammatory drugs are usually the first choice after cesarean section and, to manage postoperative breakthrough pain, are often combined with relatively safe opioids such as sufentanil, morphine, or butorphanol. However, excessive opioid intake can delay the start of breastfeeding. To achieve the goals of comfort, non-invasiveness, and minimized opioid use, the search for the best analgesic regimen for postoperative pain in women undergoing cesarean section is still ongoing.

Butorphanol tartrate is a mixed opioid receptor agonist-antagonist. Its relative intensity of action at the 3 opioid receptors (μ: δ: κ) is 1: 4: 25. It is usually used for postoperative analgesia as an injection, administered intravenously, intramuscularly, intrathecally, or epidurally. The most common adverse reactions are somnolence, dizziness, nausea, and vomiting. Compared with other opioids, butorphanol carries low physical dependence and a low incidence of adverse reactions such as skin itching and respiratory depression.
It has a high affinity for kappa opioid receptors, can effectively relieve visceral pain, and is beneficial for managing uterine contraction pain after cesarean section. One study showed that intramuscular injection of butorphanol in men can significantly promote prolactin secretion. Since its market launch, butorphanol tartrate nasal spray has become an emerging option for perioperative pain management because of its non-invasiveness, convenient administration, rapid onset of action, and high bioavailability. Abboud et al found that intranasal butorphanol can be applied safely and effectively for analgesia after cesarean section, with a better and longer-lasting analgesic effect, but the pain assessment did not distinguish between uterine contraction pain and incision pain, nor did the study examine effects on prolactin or breastfeeding. Therefore, the present study aimed to explore its analgesic effect on uterine contraction pain after cesarean section and whether it can promote prolactin secretion after delivery, providing a basis for a new model of pain management and accelerated recovery after cesarean section.
Study Population

This study was a prospective, randomized, controlled, double-blind, single-center clinical trial that included 120 patients who chose to undergo cesarean section under combined spinal-epidural anesthesia at the Western Theater Command General Hospital from August 2022 to December 2022. The study was approved by the Ethics Committee of the General Hospital of the Western Theater Command (Ethic Approval No. 2022EC2-ky050) and registered with the Chinese Clinical Trials Registry (ChiCTR2200063746). All patients signed informed consent forms. The patients were aged 18–45 years, with American Society of Anesthesiologists (ASA) physical status I to II and a singleton pregnancy, planned to breastfeed, and received PCIA postoperatively. The exclusion criteria were: (1) hemolysis, elevated liver enzymes, and low platelet (HELLP) syndrome, cardiovascular disease, or hyperprolactinemia; (2) chronic alcohol, sedative-hypnotic, or analgesic use; (3) mental disorder; (4) any previous abdominal surgery (except cesarean section); and (5) contraindications to neuraxial anesthesia or nasal administration. The criteria for removal from the study were: (1) use of butorphanol tartrate or other analgesics by other routes during the trial; (2) unforeseen adverse events (such as drug allergy or anesthetic and surgical accidents); (3) loss to follow-up or voluntary withdrawal.

Sample Size Calculation and Randomization

Sample size calculations were based on patients' visual analog scale (VAS) scores at 6 h postoperatively in a pilot trial. To achieve 80% power at the α=0.05 (two-tailed) level, we calculated using PASS 15.0 (NCSS, Kaysville, UT) that 36 patients were needed in each group; allowing for a 10% loss to follow-up, we included 40 patients in each group. Before the patients entered the operating room, the researcher distributed 120 opaque sealed envelopes (40 per group) for simple randomization. The anesthesiologists, patients, and assessors were not informed of group allocation, and the researcher did not participate in the anesthesia or assessment process.
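As an illustration of the two steps just described, the following Python sketch (not the authors' PASS 15.0 session) computes a per-group sample size for a two-sided two-sample comparison and generates a simple 1:1:1 randomization list; the effect size d = 0.67 is a hypothetical value chosen so that the calculation reproduces n of approximately 36 per group, and the seed is arbitrary.

```python
# Minimal sketch of the sample-size and randomization steps. The effect size
# d = 0.67 is assumed for illustration; it is not reported in the study.
import random
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(
    effect_size=0.67, alpha=0.05, power=0.80, alternative="two-sided"
)
print(f"required n per group: {n_per_group:.1f}")  # ~36; 40 were enrolled per
                                                   # group to allow 10% dropout

# Simple 1:1:1 randomization list for the 120 opaque sealed envelopes.
envelopes = ["BI", "BV", "Control"] * 40
random.seed(2022)      # arbitrary seed for the sketch
random.shuffle(envelopes)
print(envelopes[:10])  # order of the first 10 envelopes
```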
Administration Method

In the intranasal butorphanol (BI) group, after the fetus was delivered, the patient received 1 mg of butorphanol tartrate into a unilateral nostril. The patient was kept supine with the head tilted back, the nasal secretions were cleared, and the drug was administered with a dedicated butorphanol tartrate nasal spray (Jiangsu Hengrui Pharmaceutical Co., Ltd); after administration, both nasal flanks were gently pinched to bring the drug into full contact with the nasal mucosa and minimize its entry into the pharynx. In the control group, an equivalent dose of saline was administered by the same method. These 2 groups used the same postoperative analgesic pump protocol: 1.5 μg/kg sufentanil (Yichang Renfu Pharmaceutical Co., Ltd) and 10 mg tropisetron hydrochloride (Rui Yang Pharmaceutical Co., Ltd) in a total volume of 100 mL. In the intravenous butorphanol (BV) group, the postoperative analgesic pump contained 5 mg butorphanol tartrate (Jiangsu Hengrui Pharmaceutical Co., Ltd), 1.5 μg/kg sufentanil, and 10 mg tropisetron hydrochloride, in a total volume of 100 mL. After closure of the peritoneum, the patient was immediately started on a PCIA device. The PCIA parameters were set to a continuous background infusion of 3 mL/h, a patient-controlled bolus of 3 mL, and a lockout time of 30 min. After the operation, an anesthesiologist returned the patients to the ward, and the patients and their families were instructed in the use of the PCIA device.

Anesthetic Management

No patient was on any medications before surgery. Upon arrival in the operating room, the patient was monitored with electrocardiography, non-invasive blood pressure, and pulse oximetry, and an intravenous catheter was placed. Non-invasive blood pressure was measured every 5 min during the operation. The patient was placed in the left lateral position with the head and knees bent toward the chest, and the puncture point was selected at the L2–L3 or L3–L4 interspace. The puncture was performed aseptically with an epidural needle, and after confirming that the needle tip was in the epidural space, a spinal needle was advanced. When cerebrospinal fluid outflow was observed, a mixture of 1 mL glucose (10%) and 1.5 mL ropivacaine hydrochloride (1%) was injected into the subarachnoid space, the spinal needle was withdrawn, and the epidural catheter was inserted 3–4 cm cephalad into the epidural space, where it remained until removal at the end of surgery. The patient was returned to the supine position and the block level was tested. Surgery was initiated once a T4–T6 sensory block was achieved.

Data Collection

Patient characteristics and medical history were recorded, including age, body mass index (BMI), gestational age, number of cesarean sections, educational level, operation time, and ASA classification. Professional anesthesia evaluators followed up all patients at 6, 12, and 24 hours postoperatively. Postoperative uterine contraction pain after cesarean delivery was assessed using the VAS, presented as a 10-cm horizontal line with 2 extremes at either end (0, no pain; 10 cm, agonizing pain); patients marked their level of postoperative pain on this line. The RASS scale (1, patient is anxious and agitated or restless, or both; 2, patient is cooperative, oriented, and tranquil; 3, patient responds to commands only; 4, patient exhibits brisk response to light tactile stimuli or loud auditory stimulus; 5, patient exhibits sluggish response to light tactile stimuli or loud auditory stimulus; 6, patient exhibits no response) was used to assess the postoperative level of sedation. In addition, the number of effective postoperative analgesic pump compressions, the consumption of butorphanol tartrate, and adverse effects (nausea and vomiting, dizziness, skin itching, and respiratory depression) were recorded. The time of initiation of lactation was assessed based on maternal sensations (breast fullness, engorgement, or leakage). Furthermore, 3 mL of venous blood was collected preoperatively and 24 h postoperatively, and prolactin levels were determined using an enzyme-linked immunosorbent assay (Wuhan EILerite Biotechnology Co., Ltd).

Presentation of Observations

The primary outcomes were postoperative uterine contraction pain intensity assessed on the VAS at 6 h postoperatively and the patients' preoperative and postoperative prolactin levels. The secondary outcomes were postoperative uterine contraction pain at 12 and 24 h; the level of sedation at 6, 12, and 24 h postoperatively, assessed on the RASS; the effective number of PCIA presses at 6, 12, and 24 h; the amount of butorphanol consumed; and the time of initiation of lactation postoperatively.
Statistical Analysis

The collected data were first analyzed using the Shapiro–Wilk test to determine whether the continuous variables conformed to a normal distribution. Normally distributed continuous data are expressed as mean ± standard deviation and were analyzed using one-way analysis of variance. Non-normally distributed continuous data were analyzed using the Kruskal–Wallis test and are expressed as median (interquartile range). Categorical variables were analyzed using Pearson's chi-squared test or Fisher's exact test. A P value <0.05 was considered statistically significant, and data were analyzed using SPSS 26.0 (IBM, Armonk, NY).
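A minimal Python sketch of this decision path, using SciPy in place of SPSS 26.0 and made-up placeholder data, might look as follows.

```python
# Normality check, then ANOVA or Kruskal-Wallis, then chi-squared test.
# All data below are made-up placeholders (40 patients per group).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
vas_bi, vas_bv, vas_ctrl = (rng.normal(m, 1.0, 40) for m in (2.0, 2.3, 3.1))
groups = [vas_bi, vas_bv, vas_ctrl]

# Shapiro-Wilk per group decides between parametric and non-parametric tests.
normal = all(stats.shapiro(g).pvalue > 0.05 for g in groups)
stat, p = stats.f_oneway(*groups) if normal else stats.kruskal(*groups)
print(f"{'one-way ANOVA' if normal else 'Kruskal-Wallis'}: p = {p:.3f}")

# Categorical outcome (e.g., nausea yes/no per group) as a 2x3 contingency table.
table = np.array([[5, 6, 12],      # events per group
                  [35, 34, 28]])   # non-events per group
chi2, p_cat, dof, expected = stats.chi2_contingency(table)
print(f"Pearson chi-squared: p = {p_cat:.3f}")
```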
This study was a prospective, randomized, controlled, double-blind, single-center clinical trial that included 120 patients who chose to undergo cesarean section under combined spinal-epidural anesthesia at the Western Theater Command General Hospital between August 2022 and December 2022. The study was approved by the Ethics Committee of the General Hospital of the Western Theater Command (Ethics Approval No. 2022EC2-ky050) and registered with the Chinese Clinical Trials Registry (ChiCTR2200063746). All patients signed informed consent forms. Patients were eligible if they were aged 18–45 years, had American Society of Anesthesiologists (ASA) physical status I to II and a singleton pregnancy, planned to breastfeed, and received PCIA postoperatively. The exclusion criteria were: (1) hemolysis, elevated liver enzymes, low platelet (HELLP) syndrome, cardiovascular disease, or hyperprolactinemia; (2) chronic use of alcohol, sedative-hypnotics, or analgesics; (3) mental disorder; (4) any previous abdominal surgery (except cesarean section); and (5) contraindications to neuraxial anesthesia or nasal administration. The withdrawal criteria were: (1) use of butorphanol tartrate or other analgesics by other routes during the trial; (2) unforeseen adverse events (such as drug allergy or anesthetic and surgical accidents); and (3) loss to follow-up or voluntary withdrawal.
The sample size calculation was based on patients' VAS scores at 6 h postoperatively in a pilot trial. To achieve 80% power at a two-tailed α of 0.05, we calculated that 36 patients were needed in each group using PASS 15.0 (NCSS, Kaysville, UT); allowing for a 10% loss to follow-up, we enrolled 40 patients per group. Before the patients entered the operating room, the researcher distributed 120 opaque sealed envelopes (40 per group) for simple randomization. The anesthesiologists, patients, and assessors were blinded to group allocation, and the researcher who performed the randomization did not participate in the anesthesia or assessment process.
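For readers without access to PASS, the short sketch below reproduces this kind of two-group calculation in Python with statsmodels. The pilot effect size is not reported in the text, so the standardized difference d = 0.67 used here is an assumption chosen only to show how 36 patients per group, inflated to 40 for dropout, can be obtained.

```python
# Illustrative two-sample size calculation (the trial used PASS 15.0;
# the assumed effect size d = 0.67 is ours, not reported in the paper).
import math
from statsmodels.stats.power import TTestIndPower

d = 0.67  # assumed standardized difference in 6-h VAS scores
n_raw = TTestIndPower().solve_power(effect_size=d, alpha=0.05,
                                    power=0.80, alternative='two-sided')
n_per_group = math.ceil(n_raw)                    # ~36 patients per group
n_enrolled = math.ceil(n_per_group / (1 - 0.10))  # allow for 10% dropout
print(n_per_group, n_enrolled)                    # 36 40
```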
In the BI group, after the fetus was delivered, 1 mg of butorphanol tartrate was administered into one nostril using a dedicated butorphanol tartrate nasal spray (Jiangsu Hengrui Pharmaceutical Co., Ltd). The patient was kept in the supine position with the head tilted back, and nasal secretions were cleared before administration; after the spray, both nasal alae were gently pinched to bring the drug into full contact with the nasal mucosa and to minimize its entry into the pharynx. In the control group, an equivalent volume of saline was administered using the same method. These 2 groups used the same protocol for the postoperative analgesic pump: 1.5 μg/kg sufentanil (Yichang Renfu Pharmaceutical Co., Ltd) and 10 mg tropisetron hydrochloride (Rui Yang Pharmaceutical Co., Ltd) in a total volume of 100 mL. In the BV group, the postoperative analgesic pump contained 5 mg butorphanol tartrate (Jiangsu Hengrui Pharmaceutical Co., Ltd), 1.5 μg/kg sufentanil, and 10 mg tropisetron hydrochloride, in a total volume of 100 mL. Immediately after closure of the peritoneum, the patient was started on the PCIA device. The PCIA parameters were a continuous background infusion of 3 mL/h, a patient-controlled bolus of 3 mL, and a lockout time of 30 min. After the operation, an anesthesiologist returned the patients to the ward, and the patients and their families were instructed on how to use the PCIA device.
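To make these pump settings concrete, the arithmetic below works through the sufentanil concentration and the maximum hourly delivery implied by the background dose, bolus size, and lockout for a hypothetical 70-kg patient; the body weight and the resulting numbers are illustrative only, not study data.

```python
# Worked example of the PCIA settings above (hypothetical 70-kg patient).
weight_kg = 70
sufentanil_ug = 1.5 * weight_kg        # 105 ug diluted to 100 mL
conc_ug_per_ml = sufentanil_ug / 100   # 1.05 ug/mL

background_ml_h, bolus_ml, lockout_min = 3.0, 3.0, 30
max_ml_h = background_ml_h + bolus_ml * (60 / lockout_min)  # 9 mL/h
print(f"{conc_ug_per_ml:.2f} ug/mL, max {max_ml_h:.0f} mL/h "
      f"= {max_ml_h * conc_ug_per_ml:.2f} ug sufentanil/h")  # 9.45 ug/h
```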
Anesthetic Management

No patient was on any medications before surgery. Upon arrival in the operating room, the patient was immediately monitored with electrocardiography, non-invasive blood pressure, and pulse oximetry, and an intravenous catheter was placed. Non-invasive blood pressure was measured every 5 min during the operation. The patient was placed in the left lateral position with the head and knees bent toward the chest, and the puncture point was selected at the L2–L3 or L3–L4 interspace. The puncture was performed aseptically using an epidural needle, and after confirming that the tip of the needle was in the epidural space, a lumbar puncture needle was advanced through it. When cerebrospinal fluid outflow was observed, a mixture of 1 mL of 10% glucose and 1.5 mL of 1% ropivacaine hydrochloride was injected into the subarachnoid space, the lumbar puncture needle was withdrawn, and an epidural catheter was inserted 3–4 cm cephalad into the epidural space, where it remained until removal at the end of surgery. The patient was then returned to the supine position and the block level was assessed. Surgery was initiated when a T4–T6 sensory block was achieved.
Data Collection

Patient characteristics and medical history were recorded, including age, body mass index (BMI), gestational age, number of cesarean sections, educational level, operation time, and ASA classification. Professional anesthesia evaluators followed up all patients at 6, 12, and 24 h postoperatively. Postoperative uterine contraction pain after cesarean delivery was assessed using the VAS, presented as a 10-cm horizontal line with 2 extremes at either end (0, no pain; 10 cm, agonizing pain); patients marked their level of postoperative pain on this line. The Ramsay sedation scale (1, patient is anxious and agitated or restless, or both; 2, patient is cooperative, oriented, and tranquil; 3, patient responds to commands only; 4, patient exhibits brisk response to light tactile stimuli or a loud auditory stimulus; 5, patient exhibits sluggish response to light tactile stimuli or a loud auditory stimulus; 6, patient exhibits no response) was used to assess the patient's postoperative sedation level. In addition, the number of effective postoperative analgesic pump presses, consumption of butorphanol tartrate, and adverse effects (nausea and vomiting, dizziness, skin itching, and respiratory depression) were recorded. The time of initiation of lactation was assessed based on maternal sensations (breast fullness, engorgement, or leakage). Furthermore, 3 mL of venous blood was collected preoperatively and 24 h postoperatively, and prolactin levels were determined using an enzyme-linked immunosorbent assay (Wuhan EILerite Biotechnology Co., Ltd).
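As a compact reference, the sketch below encodes the two bedside instruments exactly as described above; the names and the example values are ours, not part of the study protocol.

```python
# Hypothetical encoding of the assessment scales described above.
RAMSAY_LEVELS = {
    1: "anxious and agitated or restless",
    2: "cooperative, oriented, and tranquil",
    3: "responds to commands only",
    4: "brisk response to light tactile or loud auditory stimulus",
    5: "sluggish response to light tactile or loud auditory stimulus",
    6: "no response",
}

def vas_score(mark_cm):
    """Read a patient's mark on the 10-cm VAS line as a 0-10 pain score."""
    assert 0 <= mark_cm <= 10
    return round(mark_cm, 1)

print(RAMSAY_LEVELS[2], vas_score(3.4))
```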
Presentation of Observations

The primary outcomes were postoperative uterine contraction pain intensity, assessed using the VAS at 6 h postoperatively, and patients' preoperative and postoperative prolactin levels. The secondary outcomes were postoperative uterine contraction pain at 12 and 24 h; the level of sedation at 6, 12, and 24 h postoperatively, assessed using the Ramsay scale; the number of effective PCIA presses at 6, 12, and 24 h; the amount of butorphanol consumed; and the time at which lactation was initiated postoperatively.
Statistical Analysis

The collected data were first analyzed using the Shapiro–Wilk test to determine whether the continuous variables conformed to a normal distribution. Normally distributed continuous data are expressed as mean ± standard deviation and were analyzed using one-way analysis of variance. Continuous data that were not normally distributed were compared between groups using the Kruskal–Wallis test and are expressed as median (interquartile range). Categorical variables were analyzed using Pearson's chi-squared test or Fisher's exact test. A P value <0.05 was considered statistically significant, and data were analyzed using SPSS 26.0 (IBM, Armonk, NY).
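The sketch below mirrors this decision rule, with scipy standing in for SPSS; the three simulated groups follow the trial's arms, and the data are dummy values.

```python
# Minimal stand-in for the analysis pipeline described above (dummy data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
groups = [rng.normal(3.0, 1, 40), rng.normal(2.5, 1, 40), rng.normal(2.4, 1, 40)]

# Shapiro-Wilk normality check chooses between ANOVA and Kruskal-Wallis
if all(stats.shapiro(g).pvalue > 0.05 for g in groups):
    stat, p = stats.f_oneway(*groups)   # one-way ANOVA
else:
    stat, p = stats.kruskal(*groups)    # Kruskal-Wallis

# Categorical outcome (e.g., nausea yes/no per group) via chi-squared
table = np.array([[5, 35], [3, 37], [4, 36]])
chi2, p_cat, dof, expected = stats.chi2_contingency(table)
print(round(p, 3), round(p_cat, 3))
```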
Patient Characteristics and Clinical Data

This study included 120 patients who underwent combined spinal-epidural anesthesia for cesarean delivery; all received PCIA and breastfed. There were no significant differences in patient characteristics among the 3 groups.
VAS and Ramsay Scores, Number of PCIA Presses, and Butorphanol Consumption

Patients in the BI and BV groups had significantly lower VAS scores and higher Ramsay scores at 6 h postoperatively than those in the control group (P < 0.05), with no significant difference between the BI and BV groups (P > 0.05). The number of effective analgesic pump presses in the BI group was significantly lower than that in the control and BV groups, and the consumption of butorphanol tartrate in the BI group was lower than that in the BV group (P < 0.05). Patients in the BV group had lower VAS scores, higher Ramsay scores, and fewer effective analgesic pump presses at 12 h postoperatively than those in the control and BI groups (P < 0.05). The 3 groups showed no significant differences in VAS scores, Ramsay scores, or the number of effective analgesic pump presses at 24 h postoperatively (P > 0.05).
Time to First Lactation and Serum Prolactin Levels

There were no significant differences among the 3 groups in the time to initiation of lactation, and no significant differences in prolactin levels pre- and postoperatively (P > 0.05).
Postoperative Adverse Maternal Events

There were no significant differences in the incidence of postoperative nausea, vomiting, dizziness, or drowsiness among the 3 groups (P > 0.05), and no patient experienced skin itching or respiratory depression.
This prospective, randomized, controlled, double-blind clinical study explored the analgesic and sedative effects of transnasal administration of butorphanol tartrate in patients undergoing cesarean section under combined spinal-epidural anesthesia and its effect on postpartum prolactin secretion. We found that the VAS score of the BI group was maintained at approximately 2–3 at 6 h. Compared with the control group, the BI group had a significantly lower VAS score at 6 h postoperatively, a significantly higher Ramsay score, and significantly fewer effective presses of the analgesic pump, demonstrating the effective early analgesia and sedation provided by butorphanol tartrate nasal spray after cesarean delivery. The BI and BV groups had comparable VAS scores, Ramsay scores, and numbers of effective analgesic pump presses at 6 h postoperatively, whereas the BI group had weaker sedation and analgesia than the BV group at 12 h postoperatively. However, patients in the BI group consumed less butorphanol tartrate than those in the BV group in the first 6 h postoperatively. There were no significant differences in the time of lactation initiation, prolactin levels, or incidence of postoperative adverse events among the 3 groups. The safety and efficacy of intranasal butorphanol in our study agree with the study by Abboud et al, who reported a longer duration of analgesia and no difference in the incidence of side effects with intranasal compared with intravenous butorphanol, although their study did not differentiate between uterine contraction pain and incisional pain. Our results are also supported by the studies of Zhang et al and Cai et al, who reported that butorphanol tartrate can effectively relieve postoperative uterine contraction pain in patients undergoing cesarean delivery. Butorphanol tartrate nasal spray is simple and easy to use, requires no invasive manipulation, shows good patient compliance, and facilitates postoperative pain management. Intranasal administration avoids drug degradation in gastrointestinal fluids and first-pass elimination by the liver, giving a relatively high bioavailability of approximately 48–70%. The drug is rapidly absorbed through the nasal mucosa and has a rapid onset of action, with analgesic effects generally achieved within 15 min. In our study, patients in the BI and BV groups had better analgesia and sedation than those in the control group at 6 h postoperatively, whereas patients in the BV group had better analgesia and sedation scores than those in the BI group at 12 h. This is consistent with the pharmacokinetics of butorphanol tartrate after intranasal administration reported by Wermeling et al, in which a single nasal spray was effective for approximately 4–6 h. The difference in outcome indicators between the BI and BV groups at 6 h postoperatively was not statistically significant; however, the consumption of butorphanol in the BI group was lower than that in the BV group, and transnasal administration is simple and comfortable, resulting in greater patient acceptance and satisfaction. Breastfeeding is essential for the healthy development of newborns, and its nutritional value is superior to that of all artificial substitutes; prolactin is key to successful breastfeeding. Prolactin secretion is affected by several factors.
Pain after cesarean section causes sympathetic excitation, increased catecholamine secretion, and elevated levels of prolactin-inhibiting factors in the hypothalamus, inhibiting prolactin secretion and delaying colostrum production. Effective postoperative analgesia can reduce sympathetic excitation and catecholamine secretion, which can promote prolactin secretion. In our study, postoperative pain scores were significantly lower in patients using butorphanol; however, there were no differences in the onset of lactation or serum prolactin levels among the 3 groups. Prolactin secretion is also closely associated with early bonding and contact between mother and newborn. Effective sucking by the newborn stimulates sympathetic nerves of the breast and nipple, which promotes prolactin secretion and an earlier onset of breastfeeding. In this study, patients were sedated for a longer period after using butorphanol. Although the sedative effect of butorphanol was within the safe range, it reduced direct communication and effective contact between mother and newborn, and the time of breastfeeding initiation may have been delayed. There were no serious adverse events during the entire study period; however, a longer observation period is still needed to determine the long-term safety of butorphanol. Common postoperative adverse reactions to opioids include nausea and vomiting. In this study, the incidence of nausea and vomiting after intranasal administration of butorphanol was low, and symptoms were relieved immediately after a small dose of antiemetic; there was no case of a serious adverse event such as respiratory depression, consistent with the findings of Zhu et al, who found that, as an agonist–antagonist opioid analgesic, butorphanol has a lower incidence of postoperative adverse events than conventional opioids. The incidence of adverse events, especially dizziness and somnolence, is high after intranasal administration of high-dose butorphanol. The nasal doses used in this trial followed the instruction manual. After a single dose of 1 mg butorphanol, the number of patients with somnolence increased, but the difference was not statistically significant. One study showed that a single nasal spray of 2 mg butorphanol achieved better analgesia but with a significantly higher rate of associated adverse events. In this study's pilot trial, the incidence of dizziness and drowsiness was extremely high after a single 2-mg dose, which improved after the regimen was changed; however, an intermittent single nasal spray of 1 mg butorphanol was not tested for effectiveness and safety. The present study has some limitations. First, we cannot guarantee that the dose delivered by the butorphanol nasal spray device was standard for every patient, because absorption may differ with the condition of the nasal mucosa and the delivered dose may vary with the device. Second, because we abandoned the intermittent 1-mg butorphanol nasal spray design after the pilot study, we observed the effect of the nasal spray only in the early postoperative period, which may be why no difference was found in prolactin levels or the time of breastfeeding initiation; this should be further investigated by optimizing the trial design and increasing the sample size.
Finally, we did not observe lactation quality or breastfeeding duration, nor did we evaluate the content of butorphanol tartrate in breast milk, because we did not have the required experimental equipment. Butorphanol tartrate is lipophilic and can be released into breast milk and acquired by newborns through feeding. Previous studies have reported no adverse events associated with maternal breastfeeding in the early postpartum period after intermittent use of butorphanol tartrate during delivery. Therefore, maternal and infant safety after cesarean section still needs to be investigated in future studies.
Intranasal administration of 1 mg butorphanol tartrate reduces early uterine contraction pain after cesarean section and achieves good analgesia without increasing the incidence of adverse events or perinatal risk, while reducing the dose of opioids consumed. Its administration is noninvasive, and patient comfort is high. The dose and administration method used in this study did not significantly affect the initiation of lactation or postpartum prolactin secretion.
Oral condition of patients hospitalized for Covid-19 and its impact on quality of life

Oral diseases can be conceptualized as a category of non-lethal chronic processes. However, in their acute phase, they manifest as symptomatic consequences, such as pain and discomfort; functional impairments, such as of speech, swallowing, and chewing; and psychosocial consequences with significant implications for people's daily lives. In hospitalized patients, compromised oral health, aggravated by limited oral hygiene, has been observed to affect the general clinical condition of the patient, resulting in prolonged hospital stays and compromised quality of life. Oral health-related quality of life (OHRQoL) is a multidimensional construct that reflects an individual's comfort when eating, sleeping, and engaging in social interactions, as well as their self-esteem, satisfaction with oral health, and other factors. This indicates that when oral health is compromised, it can impact various aspects of life, including function, appearance, and interpersonal relationships. OHRQoL represents the assessment of oral health impairments experienced by individuals, serving as a supplementary measure to the clinical diagnosis made by healthcare professionals. In individuals hospitalized for complications associated with the SARS-CoV-2 virus, in addition to the systemic condition primarily affecting the respiratory system, there are oral manifestations such as decreased salivary flow, soft tissue lesions, opportunistic infections, and taste deficiencies. A lack of oral hygiene and salivary flow, whether due to negligence or other causes, can contribute to the development of co-infections such as oral candidiasis, gingivitis, and dental caries. To date, only a limited number of studies have examined the OHRQoL construct within the context of the Coronavirus Disease 2019 (COVID-19) pandemic. Among these studies, one investigated the impact of pain resulting from temporomandibular disorders (TMD) before and during the COVID-19 pandemic, and another examined the impact of dental pain on people's daily lives during the period of social distancing. However, no studies were found in the databases searched for this work that evaluated the impact of the clinical oral condition on the quality of life of individuals hospitalized for COVID-19. This study addressed this gap by examining the oral health of hospitalized patients with COVID-19 and its impact on their quality of life. Understanding this relationship is crucial, as it can provide insights into the broader implications of oral health on patient outcomes during hospitalization. The findings could inform healthcare practice and policy, with the aim of improving oral hygiene and overall care for hospitalized patients, potentially reducing hospital stays and enhancing their quality of life. Furthermore, this study could pave the way for further research exploring the interconnectedness of oral and systemic health, particularly within the context of infectious diseases such as COVID-19. Therefore, the aim of this study was to assess the oral health of individuals hospitalized for COVID-19 and its impact on their quality of life.
The research was approved by the Research Ethics Committee of Ceuma University (4.610.070) and was conducted as a cross-sectional study involving hospitalized individuals in São Luís, Maranhão, in the northeast region of Brazil. Eligible participants were patients with or without a diagnosis of COVID-19 who were breathing room air, were conscious, and were admitted to the intensive care units (ICUs) and wards of public hospital units in São Luís, Maranhão, Brazil. A sample power calculation was performed considering two groups: a group without COVID-19 with 115 participants and a group with COVID-19 with 167 participants. With an effect size (d) of 0.8 and a significance level (α) of 0.05, the sample power was calculated to be 0.99 using the G*Power 3.1 program. The following data were extracted from the medical and dental records of hospitalized patients with or without a diagnosis of COVID-19: demographic characteristics, hospitalization sector, length of stay, comorbidities, type of diet (oral or enteral), oral condition (measured by the Bedside Oral Exam [BOE]), oral hygiene (tooth hygiene and tongue hygiene [degree of lingual coating]), and salivary flow. The BOE encompasses an assessment of the following areas: swallowing, lips, tongue, saliva, mucosa, gums, teeth or prostheses, and odor. Each item is categorized into one of three levels of dysfunction: normal (score 1), moderate (score 2), and severe (score 3). Based on the total score, oral health is classified as good (total score of 8 to 10 points), moderate (total score of 11 to 14 points), or poor (total score of 15 to 24 points). Dental hygiene was evaluated using the Oral Hygiene Index - Simplified (OHI-S). A total of six teeth were examined: teeth 16, 11, 26, and 31 on the buccal surfaces and teeth 36 and 46 on the lingual surfaces; in the absence of any of these teeth, the adjacent tooth could be used as a substitute. The plaque scores were as follows: 0, absence of plaque; 1, plaque covering less than one-third of the tooth surface; 2, plaque covering more than one-third but less than two-thirds of the tooth surface; 3, plaque covering more than two-thirds of the tooth surface; and X, no index tooth or substitute. The OHI-S was calculated by adding the plaque scores of each tooth and dividing the result by six. The resulting value was classified as follows: scores of 0–1.2 indicate good oral hygiene, scores of 1.3–3.0 indicate fair oral hygiene, and scores above 3.0 indicate poor oral hygiene. Lingual hygiene was evaluated using the Lingual Saburra Degree (LSD), classified as follows: 0, absence of coating; 1, light coating on the posterior third of the tongue; 2, light coating on the posterior and middle thirds; 3, moderate coating on the posterior third; 4, moderate coating on the posterior and middle thirds; and 5, moderate coating on the posterior, middle, and anterior thirds. Light coating was defined as coating through which the lingual papillae remained visible, whereas moderate coating was characterized by lingual papillae covered by the coating and no longer visible. To quantify the salivary flow of conscious patients, they were seated comfortably, and the procedure commenced after they had swallowed once to remove the saliva present in the oral cavity (zero time).
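As a quick reference, the sketch below encodes the BOE and OHI-S scoring rules exactly as described above; the cut-offs follow the text, while the function names and example scores are ours.

```python
# Scoring rules for the BOE (8 items, each 1-3) and OHI-S (6 teeth, each 0-3),
# with category bands as stated in the text.
def classify_boe(item_scores):
    total = sum(item_scores)            # possible range: 8-24
    if total <= 10:
        return total, "good"
    if total <= 14:
        return total, "moderate"
    return total, "poor"

def classify_ohis(plaque_scores):
    index = sum(plaque_scores) / 6
    if index <= 1.2:
        return index, "good"
    if index <= 3.0:
        return index, "fair"
    return index, "poor"

print(classify_boe([2, 1, 2, 3, 2, 1, 2, 2]))  # (15, 'poor')
print(classify_ohis([1, 2, 1, 1, 2, 2]))       # (1.5, 'fair')
```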
The patient was instructed to refrain from swallowing or chewing during the collection period, which lasted seven minutes, and saliva was collected in polypropylene tubes. The salivary volume was then estimated gravimetrically, weighing the tube before and after collection (1 mg ≈ 1 μL), and the salivary flow (mL/min) was calculated over the 7-minute collection period. The results were interpreted in accordance with the criteria established by Flink, whereby a salivary flow rate exceeding 1.0 mL/min is indicative of normal salivary function, a rate between 0.7 and 0.99 mL/min is indicative of hyposalivation, and a rate below 0.7 mL/min is indicative of xerostomia. The Brazilian version of the Oral Health Impact Profile (OHIP-14) was used to assess the impact of oral conditions on the quality of life of patients over the past six months. The instrument consists of seven domains, each comprising two items: functional limitation, physical pain, psychological discomfort, physical disability, psychological disability, social disability, and handicap. The frequency of impact was gauged on a Likert scale with the following response categories: never (0), rarely (1), sometimes (2), often (3), and always (4). The total OHIP-14 score was calculated using the additive method, with a potential range of 0 to 56; a higher score indicates a greater negative impact on OHRQoL. The data were submitted to descriptive and inferential statistical analyses and were evaluated for distribution and homogeneity of variance using the Kolmogorov–Smirnov and Levene tests, respectively. The chi-square test was used to compare the impact of the independent variables between the two groups (with and without a history of SARS-CoV-2 infection), and the Mann–Whitney U test was used to compare the mean score for each domain and the total OHIP-14 score. The significance level adopted was 5%. All statistical analyses were conducted using the Statistical Package for the Social Sciences (SPSS, version 21.0, IBM Corporation, Armonk, USA). Among patients hospitalized for COVID-19, 61.9% were male and 53.7% were admitted to the intensive care unit (ICU). The majority of patients remained in hospital for between one and seven days, and the prevalence of comorbidities was comparable between the groups. Regarding the prevalence of oral conditions among individuals hospitalized with a diagnosis of COVID-19, a significant association was observed between the BOE and a diagnosis of COVID-19 (p < 0.001). Among individuals with COVID-19, 53% exhibited a moderate oral condition, while 9% had a deteriorated oral condition. Among the oral aspects assessed by the BOE, hyposalivation was observed in 81.3% of individuals hospitalized for complications associated with the SARS-CoV-2 virus. Analysis of the impact of moderate/poor oral health (BOE score of 11–24) on the quality of life of hospitalized patients revealed that those with a diagnosis of COVID-19 exhibited more pronounced impairment than those without this diagnosis (p < 0.001). This impact was verified in the domains "psychological discomfort," "social disability," and "handicap," and in the total score (p = 0.001, p = 0.017, p < 0.001, and p = 0.014, respectively). The primary findings of the study indicated a higher prevalence of COVID-19 among male patients and those requiring ICU admission. The BOE assessment revealed a correlation between oral health and the diagnosis of COVID-19: individuals diagnosed with the virus exhibited poorer oral health outcomes.
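A minimal sketch of the two computations just described is given below; the tube weights and questionnaire responses are made up, and the classification bands follow the text.

```python
# Gravimetric salivary flow (1 mg ~ 1 uL) and additive OHIP-14 scoring.
def salivary_flow_ml_min(full_tube_mg, empty_tube_mg, minutes=7):
    volume_ml = (full_tube_mg - empty_tube_mg) / 1000
    return volume_ml / minutes

def classify_flow(flow):
    if flow > 1.0:
        return "normal"
    if flow >= 0.7:
        return "hyposalivation"
    return "xerostomia"            # bands as reported in the study

def ohip14_total(responses):       # 14 items, each 0 (never) to 4 (always)
    assert len(responses) == 14 and all(0 <= r <= 4 for r in responses)
    return sum(responses)          # 0-56; higher = worse OHRQoL

flow = salivary_flow_ml_min(full_tube_mg=7500, empty_tube_mg=3000)
print(round(flow, 2), classify_flow(flow))   # 0.64 -> xerostomia
print(ohip14_total([2] * 14))                # 28
```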
The prevalence of hyposalivation was higher among individuals with a confirmed diagnosis of COVID-19 than among those hospitalized for other medical conditions. With regard to the impact on quality of life, moderate and poor oral health significantly compromised those infected with the virus. With regard to the preponderance of cases among men, a systematic review by Fang et al indicated that male sex was associated with greater disease severity. Bourgonje et al posited that the greater involvement of men may be attributed to differences in exposure to the virus, smoking behavior, lifestyle, and chromosomal expression; expression of angiotensin-converting enzyme 2 (ACE2) in testicular tissue, regulation of the immune system by sex hormones, and sex differences in the regulation of the renin-angiotensin-aldosterone system (RAAS) are among the factors that may contribute to this phenomenon. With regard to comorbidities, the most prevalent in both study groups were type 2 diabetes mellitus (DM-2), systemic arterial hypertension (SAH), and other heart diseases. The higher frequency of these three comorbidities in individuals with COVID-19 is consistent with the findings of a retrospective cohort study of medical records of hospitalized patients in Wuhan, China. These comorbidities were associated with severe forms of the disease; furthermore, they are predictors of mortality when present in conjunction with older age, secondary infection, and elevated blood inflammatory markers. The hospitalization sector significantly associated with COVID-19 was the intensive care unit, in which a notable number of individuals with confirmed or suspected disease received treatment. This further substantiates the role of comorbidities as explanatory factors for the severe form of the disease, which in turn requires more intensive care during hospitalization. Hospitalization, particularly in the ICU, renders patients susceptible to a multitude of external and internal threats to their oral health, and a variety of oral health issues that pose a threat to life or result in long-term complications may be present. Infection with severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) results in oral manifestations affecting the buccal mucosa, salivary glands, and/or sensory elements. These oral manifestations result from the presence of SARS-CoV-2, owing to the high expression of ACE2 receptors in oral epithelial cells, and from interactions between drugs used in the treatment of COVID-19. The presence of SARS-CoV-2 in the epithelial cells of the salivary glands can trigger an inflammatory response by initiating replication and cell lysis, which ultimately leads to destruction of the glandular tissue. Consequently, in an attempt to repair the inflammatory damage, fibroblast proliferation and formation of fibrous connective tissue occur, which results in a decrease in the immune response. This repair process can result in dysfunction of the salivary glands, manifesting as decreased salivary flow, chronic sialadenitis, and infections. In the present study, however, hyposalivation was observed in the majority of individuals with COVID-19, as well as in patients hospitalized for other reasons.
In patients admitted to the ICU, low salivary flow and the reduced natural cleansing of the oral cavity normally provided by mastication and movement of the tongue and cheeks, coupled with poor oral hygiene, facilitate the growth of pathogenic bacterial biofilms on the dental surfaces and the dorsum of the tongue. It is therefore imperative that an oral hygiene protocol be established for ICU patients to prevent oral complications that may worsen the patient's overall clinical condition. A meticulous examination of the oral cavity is essential to forestall the onset of oral infections. The BOE is the instrument recommended for the oral evaluation of patients hospitalized in ICUs; it assesses a range of oral structures and functions, including swallowing, lips, tongue, saliva, mucosa, gums, teeth or prostheses, and odor. In the present study, individuals hospitalized in the units researched were evaluated using the BOE, and the results indicated that all those with poor oral health belonged to the group with confirmed COVID-19. Anosmia and ageusia are symptoms frequently reported in the initial phase of COVID-19, which facilitates its diagnosis. As taste is the primary stimulant of salivary secretion, its absence may indicate the presence of hyposalivation and xerostomia, signifying the need for increasingly efficient oral care. Given that oral alterations such as tongue coating and low salivary flow are known to cause considerable discomfort to hospitalized patients and to predispose them to periodontal disease and dental caries, the OHRQoL of these individuals must be assessed as soon as feasible. In this study, the negative impact on quality of life was significantly higher for patients with moderate or poor oral health who had been diagnosed with COVID-19 than for patients without the infection. This impact was observed in the domains "psychological discomfort," indicating concern and feelings of stress; "social disability," represented by irritation; and "handicap," represented by the feeling that life had become worse. To date, the studies that have evaluated OHRQoL during the pandemic have explored only the impact of the virus on symptoms related to TMD and teeth. The study evaluating the impact of TMD pain on the quality of life of women found that this pain did not worsen with the advent of the pandemic, nor did it influence OHRQoL. Conversely, the duration and intensity of dental pain were identified as significant explanatory factors influencing the quality of life of individuals who sought care at a tertiary dental care center during the pandemic. In contrast to the aforementioned studies, the present study used a clinical diagnosis of the oral condition of individuals hospitalized for complications related to SARS-CoV-2 and assessed its impact on quality of life. One limitation of this study is the relatively small sample size, which may affect the generalizability of the findings to broader populations. Additionally, the cross-sectional design limits the ability to establish causality between COVID-19 and oral health outcomes. The reliance on self-reported measures for some aspects of oral health may also introduce bias. Furthermore, the study was conducted within a single healthcare setting, which may not capture variations in oral health care practices and patient demographics across different regions or healthcare systems.
Despite these limitations, the study has several strengths. It addressed a significant gap in the literature by exploring the impact of oral health on quality of life in patients hospitalized with COVID-19, a topic that has received limited attention. The use of a comprehensive oral health evaluation tool (BOE) allowed for a detailed assessment of various oral health parameters, providing a robust measure of oral health status. By highlighting the prevalence of poor oral health and its association with COVID-19, the study underscored the importance of integrating oral health care into the overall management of hospitalized patients. Moreover, the findings emphasize the need for more intensive oral care protocols in ICU settings, which could improve patient outcomes and quality of life. The study also contributes to the broader understanding of patient-reported outcome measures (PROMs) in this population, potentially influencing patient-centered care practices among dentists and other healthcare professionals. The study found a significant association between oral health and a diagnosis of COVID-19. Furthermore, the study revealed that moderate to poor oral health significantly impaired the quality of life of patients hospitalized with COVID-19 compared with patients without the infection. These results underscore the critical impact of oral health on the overall well-being and quality of life of patients hospitalized with COVID-19, emphasizing the need for comprehensive oral care in this population. The findings of this study reinforce the fundamental role of the dentist in the multidisciplinary team involved in hospital patient care. It is recommended that professional oral clinical evaluation, in conjunction with the patient’s perception of impact, be established as a protocol of conduct. This protocol will serve as the basis for an overall view of the patient’s health status, enabling the health team involved in the care to establish behaviors that may lead to the remission of inflammatory and infectious processes, and the restoration of a general health condition. |
Creating, publishing, and spreading processes of health-related contents in internet news sites: evaluation of the opinions of actors in health communication

Introduction

The World Health Organization (WHO) states, "The extension to all peoples of the benefits of medical, psychological and related knowledge is essential to the fullest attainment of health." In the digital age, providing accurate, clear, unbiased, up-to-date, and evidence-based health information to the public is critical in all aspects of health. The lack of access to essential health information significantly influences morbidity and mortality rates, particularly in low- to middle-income countries and among vulnerable populations worldwide. This condition arises when individuals, healthcare professionals, or policymakers lack the health information necessary to protect their own health or that of others, leading to what is termed "health information poverty." Its detrimental effects include poor levels of health education, challenges in reaching or understanding vital health information, inadequate critical information literacy skills, and an increased susceptibility to misinformation, all of which negatively affect the health of populations. Health information on digital platforms is, more often than not, biased and lacking in credibility, potentially undermining the outcomes of public health interventions. The use of information technology presents a paradox in the context of improving health, as it is both part of the problem and a component of the solution. Currently, 64.4% of the global population uses the Internet, and 59.4% are engaged in social media. Türkiye's digital landscape, where 71.4 million individuals are internet users (83.4% of the population), 62.6 million (73.1% of the population) engage actively on social media, and a staggering 95.4% of the adult demographic use smartphones, represents a critical juncture for examining health communication dynamics. The average time spent on the Internet on any device is 7 h and 57 min a day, while on social media the average is 2 h and 57 min a day, highlighting the pivotal role of digital platforms in both active and passive health information acquisition. Owing to its widespread use, information technology plays an important role in both active and passive information acquisition. Information from these sources can be actively acquired as part of health information-seeking behaviors, for purposes such as obtaining information about a medical condition, medication, testing, or treatment; understanding the cause of health-related changes or symptoms; changing behavior or daily routine; obtaining information on a doctor or health institution; and dealing with an existing medical condition. On the other hand, information on social media and internet news sites can reach individuals by chance or through incidental exposure, causing them to be passively informed. Alongside the lack of quality information, the quantitatively large amount of health-related misinformation spread from internet sources also deepens health information poverty. Today, digital mass media are used with increasing momentum to eliminate the information gap.
As delineated by the Turkish Statistical Institute in its Household Information Technology Usage Survey (2023), over the preceding 3 months, 61.4% of internet users accessed online news, while 66.3% sought health-related information (e.g., on injury, disease, nutrition, or improving health). These figures underscore the internet's role as the preeminent source of news and health information in Türkiye, with an engagement rate for news access reaching 75%. As delineated in the literature, the propagation of health-related misinformation on topics such as vaccines, medications, nutrition, cancer, HIV/AIDS, outbreaks such as Ebola and H1N1, tobacco, and e-cigarettes constitutes a menace to public health. During the COVID-19 pandemic, a significant crisis of trust in information emerged. Individuals, caught in a state of "confusion" due to unclear information and uncertain sources, now approach even reputable sources with skepticism. Despite the vast availability of information, there is a noticeable decline in the acceptance of shared truths, which are crucial for societal decisions. This has led to the fragmentation of society into "truth publics," where parallel realities and narratives proliferate within echo chambers. Consequently, the burden of establishing truth has shifted to organizations with weak bases of transparency and accountability. This tendency toward unethical accountability may in the end breed a long-lasting disinterest or apathy that facilitates alienation from society's norms and values. Other research has shown that, compared with correct health information, misinformation is more likely to spread and diffuse in online contexts, adding to the urgency of countermeasures and the difficulty of controlling it. The "dilemma of trust" around science, with the media as the primary channel for reaching the public, could significantly endanger the diffusion of correct, evidence-based health information. While information and communication technologies (ICT) are essential ingredients of modern societies and economies, they also have the potential to deepen digital inequalities. Indeed, ICTs can be used to exclude particular populations from services based on new technologies, such as e-government, ICT-based health, or education. Socioeconomic inequalities thus influence the type and quality of practical and scientific knowledge acquired by different groups, particularly in the context of public health issues. For instance, communication theories such as the "knowledge gap" hypothesis show that disparities in information access can mirror those in wealth, leading to unequal distributions of knowledge within society. According to this hypothesis, people who continuously access information through the mass media or the internet are often better informed than those who do not, widening the knowledge disparities between social groups. During the development of digital technologies, this gap has not only persisted but widened. The "digital divide" is often segmented into three clearly defined levels in research on this phenomenon: access to technology, use of this access, and information literacy. Each of these levels directly influences the outcomes and effects of internet usage.
Future studies have also been challenged to conduct further in-depth research into the impacts and effects of internet usage, especially in the domain of the health-related digital divide. Furthermore, for this to occur, overall social resources need to be mobilized to ensure the equitable provision of access to information technology and its contents for all persons and to foster the development of crucial information literacy skills. The need for reliable and accurate health communication is more important than ever, given the urgent issues brought to light by the spread of the infodemic and the crisis of trust. The digital divide and associated disparities in access to information exacerbate these challenges, demanding a focused response from both researchers and policymakers. Within this contextual framework, the study is structured around three primary objectives. First, to elucidate the prevailing scenario through content analysis, the initial section evaluates the health-related content featured on designated internet news sites. Second, through quantitative research, the second section gauges the perspectives of chosen stakeholders from diverse sectors, assessing their sociodemographic attributes, competencies in health communication, and views on the reliability and impact of health-related content, standard publishing criteria, resource and medium control to mitigate the infodemic, oversight and sanctions, and content creation, publication, and dissemination processes. Third, the study concludes with a qualitative analysis in its final section, providing a detailed exploration of the significance of health-related content on internet news sites for public health; this section delves into the challenges surrounding the accuracy, reliability, and legitimacy of information, integrating insights from the previous sections to propose solutions.
Methods

Type of research

This research, encompassing three sections, is a descriptive investigation employing a mixed-methods approach, integrating both quantitative and qualitative research methodologies. In the first section, content analysis is conducted on internet news sites to delineate the current scenario. In the second section, quantitative research techniques are utilized, and the views of stakeholders from diverse sectors are captured via an online data collection form. Following the insights garnered from the first and second sections, in-depth interviews with stakeholders from varied sectors were carried out in the third section.

Setting

In the first section, dedicated to content analysis, health-related content was examined on the following internet news sites: Sözcü - sozcu.com.tr, Hürriyet - hurriyet.com.tr, Sabah - sabah.com.tr, Milliyet - milliyet.com.tr, Habertürk - haberturk.com; Voice of America Turkish (VOA TR) - amerikaninsesi.com, BBC News Turkish (BBC TR) - bbc.com/turkce, Sputnik Turkey (Sputnik TR) - tr.sputniknews.com, Deutsche Welle Turkish (DW TR) - dw.com/tr, Bianet - bianet.org, and NTV - ntv.com.tr. In the subsequent sections, namely the Quantitative Research (2nd Section) and Qualitative Research (3rd Section), interviews were administered both in person and online, in line with COVID-19 pandemic precautions.

Quantitative and qualitative research sample

The content analysis section was executed on the 11 internet news sites identified above, selected through purposive sampling.
Results

In the preliminary section, where content analysis was undertaken, 11 online news outlets were assessed against 133 criteria, unearthing that amongst 846 health-related pieces: the author/responsible party was undisclosed in 63%; in 24.5%, solely news agency data was divulged. The transparency concerning the author/agency/responsible entity is markedly lower in mainstream media channels ( p < 0.05). It was discerned that 23.2% of the contents lacked source attribution. In 43.7%, a minimum of one expert viewpoint was incorporated, affirming subject-matter competence via disclosed education and specialization details; in 22.7%, at least one medical practitioner’s opinion was included, and in 16.4%, a scholarly article/report/book was cited as a source. Advisories to the readers were rendered in 71.4%. Merely in 3.5% were there open citations with web links, allowing universal access and appraisal concerning the disseminated information or data. Clickbait terminologies (cure, definitive solution, remedy, etc.) were employed in 4.4% of the headings. In the thematic scrutiny, with respect to the Sustainable Development Goals’ 29 health-related targets, 65.5% related to Communicable Diseases (SDG Target 3.3). Per the GBD-2017 cause hierarchy, 63.3% concerned non-communicable diseases (COVID-19 is not encompassed in this categorization), and 13.6% pertained to communicable, maternal, neonatal, and nutritional diseases. As per the GBD-2017 REI hierarchy, 33.6% were tied to environmental/occupational risks, 15.7% to behavioral risks, and 8% to metabolic risks. In 31.2%, promotion of products and/or services was observed in one or more clusters (clusters: pharmaceutical, therapeutic, or medical merchandise; botanical product or nutritional aid; examination, surgical procedure, investigation, or protocol).
While nearly all promotional contents mentioned objectives and advantages, alternatives were discussed in 49.6%, risks and side effects in 31.1%, and the advisement of “seeking physician consultation prior to utilization” was merely articulated in 14.5% . In the segment encompassing quantitative analysis, the perspectives of 78 respondents hailing from five diverse sectors were appraised, with a staggering 96.2% concurring that the current proliferation of inaccurate health information within digital news platforms poses a palpable threat to public health. The predominant catalysts for this infodemic were identified as influential personalities within the media sphere (78.2%), news agencies (60.3%), groups harboring skepticism toward health services (53.8%), and health journalists and editors (51.3%). Participants pinpointed “Media” (90.9%), content generators (76.6%), internet users (66.2%), and the deficit of coherent and accurate health information disseminated by governmental entities (49.4%) as the fundamental drivers behind the online dissemination of erroneous health insights. A significant 93.5% acknowledged an interruption in the accurate health information generation and dissemination continuum; within this disruption, 67.5% underscored the predilection of “gatekeepers/decision-makers for speculative content driven by economic and political motives over factual information,” while 53.2% accentuated the “inadequacy of adept individuals in generating accurate and publicly comprehensible information.” The realms most plagued by the distribution of incorrect health data, as perceived by 91% of respondents, are “commercial internet platforms,” followed by television productions (60.3%), and print media (51.3%). Education emerged as a paramount instrument in combatting infodemic, as endorsed by 55.1%, with 17.9% advocating for systemic alterations entailing deterrent sanctions by both public and private sectors to curb misinformation. The lack or insufficiency of verification mechanisms within publishing entities was acknowledged by 92.2%. A robust 93.6% championed the imperative of oversight to mitigate incorrect health information dissemination: the Ministry of Health (69.2%), the Turkish Medical Association (51.3%), and subject-specific Medical Specialty Associations (42.3%) were mooted as suitable overseers. The call for sanctions resonated with 92.2%, wherein 77.9% pinpointed the infodemic source, 72.7% the publishers, and 48.1% the sharers as liable entities. Upon a deeper analysis bifurcating media personnel from other stakeholders, a mere 20% of media professionals, contrasting with 54% of other actors, endorsed sanctions for misinformation purveyors, delineating a statistically substantial discrepancy ( p = 0.022) . All participants exhibited consensus on the necessity of adhering to certain standards while generating health-related content on internet news platforms. The percentage of agreement concerning the delineated standards is documented in the . 
In the third section, wherein the qualitative research was undertaken, comprehensive discussions with 15 participants across five distinct groups articulated the need for a “collective responsibility, apportioned among readers, media, public authorities, and the academia.” Within the media spectrum, the onus of responsibility is envisaged to reside within the “editorial chain.” The paramount responsibility is underscored to vest with the “Public Authority” to orchestrate the process on society’s behalf and to ensure the fulfillment of obligations by all societal individuals and establishments. It was highlighted that, given its direct bearing on health, media institutions should harbor a control mechanism imbued with a sense of responsibility. Apprehensions were aired regarding potential encroachments on press freedom in the presence of an external control mechanism, propelling the recommendation for the cultivation of an internal control mechanism. Pertaining to the extant scenario, foundational expectations from academia, media, public establishments, and legislators encompass a holistic approach at every juncture, meticulously delineated boundaries of health rights and press freedom, and engagement with all identified responsible stakeholders in all ensuing steps.

Discussion

A significant 96.2% of participants are of the view that the inaccurate health-related content present in today’s internet news poses a public health risk; a minority of 3 participants (3.8%) acknowledge this assertion to be true in certain scenarios. The quality of health information available online has been substantially impacted by the transformation of the Internet into a participatory and social platform with the emergence of Web 2.0 . Wardle and Derakhshan’s paper offers a framework for analyzing information disorder, classifying it into three types: misinformation, disinformation, and malinformation, based on the accuracy of the information and the intent to harm . In the digital era, which is also defined by the “weaponization of mistrust” and “computational propaganda” , information disorder has become a serious public health concern due to the rapid increase in the speed, scale, and scope of information flow. The widespread use of the internet, social media, and mobile phones has fundamentally disrupted established business models in the news sector. New business models often grapple with budget constraints, infrastructure challenges, and a scarcity of resources, leading to a reduction in “on-the-ground,” real-life news coverage . The pressure to continuously create content to feed the homepage and social media accounts, along with the speed of publication demands, has reduced quality control processes such as verification, diversity of data, and content enhancement. The blending of news and commercial information, along with the risk of eroding reader trust through hidden advertisements and “clickbait” headlines, has increased information disorder. In an increasingly competitive online world, content produced to attract visitors to websites rather than inform the public is promoted to increase digital advertising sales, sometimes at the cost of excellence and viability in journalism practice. The demand for “real-time” content increases the potential for errors, and the merging of all types of media blurs expertise in specialized areas. This pressure often translates into a “publish first, check later” approach .
It is therefore critically important to enforce robust internal controls within media organizations to check the spread of non-factual information. Overcoming these challenges is possible only when media organizations and journalists place transparency at the center of their practice of ethical journalism and pursue evidence-based reporting. Rigorous verification processes to identify misinformation, together with thorough validation of data, sources, and digital images, are necessary. Furthermore, it is essential that the framing of news agendas is consistent with the public’s requirements and benefits, thereby guaranteeing that the media act as a constructive force in society . The digital shift, particularly the move to digital advertising dominated by giants like Google and Facebook, has not fully supported media organizations, compelling them to develop new business models. The research underscores social media corporations as pivotal conduits for the dissemination of health misinformation online, a viewpoint further enriched by Farkas and Schou’s discourse on “digital capitalism” . Delving into the underlying causality with a holistic lens, beyond the “political power” deliberated in ensuing sections, the nexus between advertising revenue distribution and content formulation in media entities warrants scrutiny. In Türkiye, during 2021, a staggering 99.2% of internet users utilized search engines within the preceding month , with a dominant majority (over 80%) opting for Google . Anticipations are rife for Google, the online advertising vanguard, to steer 29% of the global digital ad outlays in 2021, with Facebook trailing at 24% . Peering into the European landscape, notably the UK, a presumed ‘Duopoly’ held by these behemoths commandeers nearly 70% of the market share , while a ‘Digitalization and Competition Policy Report’ initiated by Türkiye’s Competition Authority in January 2021 could shed light on the analogous scenario locally . The year 2020 saw a purported investment of around 7.5 billion TL in digital media ventures in Türkiye. A dissection of the investment spread across ad modalities unveils that paid ad campaigns ensuring prime search engine rankings (37.9%), impression or click-centric ads (35.2%), and video ads (20.5%) are poised to engulf a substantial portion of the nearly 7 billion TL investment . Yet, post the 7.5% digital service tax amendment in March 2020, the revenue accrued from April 2020 to March 2021 stood at 1.66 billion, with the implicated sector boasting a transaction girth of 22 billion TL . A foray by the Reuters Institute, encompassing 234 digital media chieftains across 43 nations, revealed that a hefty 66% and 61% acknowledged impression-based and native ads, respectively, as significant revenue streams . Internet news outlets, in a bid to bolster ad revenues, are veering toward marketing “content” crafted to fuel site traffic over bona fide “news,” employing SEO tactics like clickbait, content pagination, ‘click to continue reading’ prompts, and auto-refresh features . This paradigm of churning out “cheap” content, gauged by metrics like views, clicks, site duration, and shares, compromises quality and undermines public trust in securing timely, accurate, and comprehensible information.
The 2021 Turkey Digital Media Report by the International Press Institute accentuates, through engagements with media moguls, that colossal platforms are swaying the publishing ecosystem by “propagating clickbait” . The prevailing revenue distribution algorithms are propelling large media houses with hefty SEO arsenals to eclipse other media entities in search engine visibility, thereby stifling the distribution share for outlets disseminating alternative viewpoints and local news narratives. Media professionals, influenced by routine media practices, institutional goals, external pressures, and ideological influences - as outlined in the agenda-setting framework , which focuses on how media prioritize issues to shape public perceptions - actively engage in “marketing” health information. The communal benefits of disseminating critical public health information may be overshadowed by the prioritization of content that generates the most clicks, views, and shares. For instance, prevalent and often fatal diseases such as cardiovascular diseases, cancers, chronic respiratory disorders, diabetes, and chronic kidney diseases receive significant attention. Nevertheless, there is a significant inclination among internet news sites to prioritize sensationalist and ambiguous lifestyle advice over clear and actionable guidance on preventable risk factors, including the cessation of tobacco, the reduction of harmful alcohol consumption, the reduction of salt intake, the reduction of trans-fat and sugar-sweetened beverages, and the increase in physical activity . This approach has the potential to diminish the effectiveness of disease prevention and management strategies and weaken the impact of critical public health messaging. The research question on the accountability for the accuracy and reliability of health-related information on internet news platforms introduces the notion of collective responsibility. In many cases, it is posited that responsibility is distributed, to varying degrees, among a number of different stakeholders, such as the reader, the source of the information, media entities, public authorities, and academic institutions. In addition, a sizeable number of respondents emphasized that the public authorities bear the lion’s share of this responsibility because of the role they play in orchestrating the processes involved. In Türkiye, examining the governance of the Internet reveals that the Ministry of Transport and Infrastructure, set up through Decree-Law No. 655, is designated with powers concerning the electronic communication sector under Law No. 5809. Additionally, an Internet Development Board operates under this ministry, is mandated to foster a conducive environment for internet growth through research and assessments, and is entrusted with shaping the national internet policy. The Information and Communication Technologies Authority (ICTA), affiliated with the ministry via Law No. 2813, is tasked with executing the board’s decisions . The ICTA holds the regulatory reins in electronic communication, as outlined in Law No. 5651, which addresses the regulation of online publications and the combat against online crimes . Other pivotal laws in the domain of internet law include Law No. 5369 on Universal Service and Law No. 5809 on Electronic Communication .
At the time of this study, the outdated definitions and responsibilities in the Press Law for internet news sites, along with the lack of adherence to author identification in periodic publications, contribute to legislative gaps fostering information disorder . This research discovered that 63% of the evaluated contents lacked author, agency, or responsible party identification, and some respondents pinpointed anonymous news as a significant misinformation catalyst. Unanimously, participants advocated for a standard requirement of disclosing the author’s name, their subject-matter expertise, and the creation and last update dates of the content. The necessity of standardly presenting an author’s name and credentials in every piece of content is partly driven by concerns around copyright issues. A study engaging news website editors revealed that they unanimously source information from “rival news outlets” and “social media” . The accountability of content providers is defined in Law No. 5651, and Law No. 5846 on Intellectual and Artistic Works extends this definition to digital transmissions in its additional article no.4 . However, the present regulation may fall short in deterrence, as it positions the “Notice-Takedown System” at the forefront, coupled with a 3-day timeframe allocated for the rights holder’s request. Moreover, the practice of amplifying individuals’ visibility—sometimes in sensitive scenarios—by featuring personal opinions from social media on news websites, brings the discussion of “usage permissions” and accurate attribution to the fore, a discourse evident not only in Türkiye but also in broader international dialogs . Participants underscored two key considerations concerning the amendments needed for the current deficiencies: firstly, the necessity of accurately delineating the constitutional boundaries of press freedom, personal rights, and health rights while establishing legal frameworks for publications; secondly, ensuring that these legislative amendments are crafted in a collaborative manner, with extensive engagement from public, private, and civil society entities. Conversely, the global scenario paints a different picture, where many nations have faced criticism for infringing upon freedom of expression and press liberty, often justified by the ongoing pandemic . In the COVID-19 epoch, scrutinizing nations’ legal battles against the surging “disinformation” tide, amplified by the infodemic, reveals a spectrum of responses. For instance, new legislation categorizing disinformation as a criminal offence has emerged in countries like Hungary, Bolivia, South Africa, Botswana, Zimbabwe, and the Philippines. Additionally, instances of detentions have been reported in Kenya, the Philippines, Sri Lanka, and Cambodia, triggered by critiques of governmental approaches toward COVID-19 containment. Meanwhile, Serbia and India have instituted “directive” frameworks permitting only official or government-sanctioned COVID-19 information to be disseminated. Lastly, notable restrictions on COVID-19-related information dissemination have been imposed by authorities in China, Belarus, and Kuwait . The notion of “responsibility” in internet news media naturally leads to the need to define oversight and accountability. 
According to the quantitative research findings, a significant 93.6% of participants believe that oversight is crucial to prevent misinformation related to health; 92.2% mention the lack or inadequacy of a verification mechanism as an internal oversight process in broadcasting institutions. Conversely, when it comes to external oversight mechanisms, participants suggest that the Ministry of Health of the Republic of Türkiye (69.2%), the Turkish Medical Association (51.3%), and relevant medical specialty associations (42.3%) could be responsible for oversight, depending on the subject matter. There is an expectation from the academic community to establish oversight mechanisms, while public institutions are anticipated to organize oversight and regulatory activities. The qualitative research findings collectively emphasize that, due to the direct impact of health news on individual and community health, such reporting should be carried out with particular sensitivity. Therefore, a sense of responsibility throughout all stages of the publication process is vital within media organizations, necessitating an internal oversight mechanism. A heavily stressed point regarding internal oversight is “professional ethics.” The ethical regulations and legislation concerning health professionals who could serve as sources have been defined by professional organizations: Law on the Practice of Medicine and Its Branches (Article 24) , Medical Deontology Regulation (Articles 8–9) , Guide on Shares of Physicians and Health Institutions in Electronic Media , Turkish Medical Association Principles on Physician and Drug Promotion , Guide on Publications of Dentists in All Communication Media , Turkish Dental Association and Chambers of Dentists Discipline Regulation (Article 8/a) and the Regulation on Promotion and Information Activities in Health Services issued in 2023 . A crucial component of internal oversight is the decision-makers at the pinnacle of the editorial chain. Research by Ioannidis highlights a shortfall in media coverage of significant public health issues and their modifiable risk factors, while individualized suggestions are prominently featured . Sezgin, critically examining health discourse in media, bases his assessments on the implications of the neoliberal economy for healthcare systems . The investigation delves into the transformation in biotechnology, the pharmaceutical industry, health insurance, and the cosmetic industry under the banner of “for a healthier society,” alongside the medicalization of everyday life and physiological concepts like birth, death, menopause, and aging. The impact of gatekeepers on content selection is explored in a study by Yalçınkaya (2019) involving news site editors , where it is found that editors’ judgments are influenced by their institution’s political stance, fears of political pressures, the publication policy, and the expectation of high click-through rates. Ayaz’s study unveils the ideological influences on gatekeepers’ decision-making processes, emphasizing the need for revisiting editorial independence ( , p. 278). Reports by the Turkish Journalists Society , Turkish Journalists Union , Turkish Media and Law Studies Association , Freedom House , and European Commission ( , p. 37) have shed light on press freedom violations.
In this context, legal frameworks should uphold press freedom, fostering a transparent structure to mitigate economic and political influences on editorial independence, and encouraging unionization to rekindle a journalist’s primary accountability toward the public and truth. When examined through the lenses of information disorder, responsibility, and oversight, a notable “legal disorder” that potentially infringes on various rights is observed. Consequently, the interviewees frequently expressed reservations about the yet-to-be-defined external oversight and punitive mechanisms under the current legal conditions, fearing they might encroach upon fundamental rights and freedoms. They advocated for the promotion and endorsement of “good practice examples” as corrective measures. Participants are looking to legislators to delineate boundaries concerning the focus of sanctions (information source, publishing institutions, social media and internet service providers, health information communication tools, sharers, advertisements); the limits of sanctions (safeguarding public health for the common good, not impeding personal freedoms, and not hindering scientific advancements); and the conditions under which they will be applied (non-scientific, commercially-driven publications, those without clear references, unethical ones). They underscored the necessity of formulating regulations directed at oversight and of demonstrating steadfastness in implementing these regulations. When comparing Türkiye’s response to the infodemic with global initiatives, certain similarities as well as distinctions become apparent. In recognition of the fact that misinformation is a substantial obstacle in the public health response to the pandemic, WHO has brought attention to the concept of an “infodemic” . It is important to note that WHO has established the WHO Information Network for Epidemics (EPI-WIN) in order to guarantee that communities receive trustworthy, timely, and easily understandable advice and information regarding public health events and outbreaks . A Public Health Research Agenda for Infodemics Management has been developed through the global collaboration of nations under the aegis of the World Health Organization . Contributions shaping this process under the agenda included artificial intelligence tools such as WHO-EARS, used to guide social listening and identify information gaps . As part of this strategy, the European Union has put numerous policies in place to enhance accountability and transparency within digital communications. Some of these policies include the EU Code of Practice on Disinformation, the COVID-19 Disinformation Monitoring Programme, and the Digital Services Act, which is aimed at regulating online platforms to curb the spread of false information through strict monitoring and reporting mechanisms . Some of these measures are echoed in Türkiye’s most recent legislation, although the country’s overall approach deviates significantly from that of the EU. Türkiye, unlike the EU member states, has focused more on legal infrastructure and strict regulations meant to oversee the distribution of content deemed harmful or false. In the EU, independent bodies and NGOs contribute to a multi-stakeholder, decentralized approach to information oversight and verification, as seen, for example, with the European Digital Media Observatory . By contrast, recent legislative changes in Türkiye, such as Law No.
7418 , bring state mechanisms directly into the picture through the monitoring and control of online content . In addition, Türkiye’s regulations place a strong emphasis on the legal ramifications of infractions, including specific criminal penalties for disseminating false information. This goes beyond the administrative and civil remedies that are generally preferred in Western approaches . This divergence highlights a more stringent and controlled method in Türkiye, aiming to quickly stem the dissemination of disinformation, whereas the strategies of the EU and of countries like Canada and the UK often emphasize long-term educational measures and technological solutions to foster a more informed and resilient public .

4.1 Strengths and limitations

Our study is a pioneering investigation into the topic of “infodemic” before the WHO had formally defined the concept, thus laying the groundwork for future research in the critical area of accurate health information during a pandemic. It benefits from the collective insights of diverse fields, enhancing problem-solving and intervention strategies. Nevertheless, it has constraints. The data collection phase coincided with the announcement of the COVID-19 pandemic, overshadowing other health-related topics we intended to analyze. The pandemic also impeded direct access to health-related actors, which could potentially reduce the participation of health professionals. Because we lacked the specialized expertise necessary to verify the factual accuracy or scientific validity of each piece of health content, we relied on practical criteria to assess the reliability of the information. A purposive sampling approach was required due to resource constraints and the pandemic, which restricted the generalizability of our findings. Future research could resolve these limitations by incorporating broader actor participation and expanding the evaluative criteria for health-related content on internet sites.

Conclusion

This research brings forth the critical role of journalists in putting public health at the center of reporting.
To effectively fight the infodemic and ensure the success of health interventions with the population, it is essential to restore trust in journalism as a sector that safeguards the truth. Research shows that such efforts must be undertaken in collaboration with various stakeholders, including media, academic institutions, and regulators, to guide ethical standards and increase transparency. The paper suggests an integrative vision of health communication in which the public health agenda is understood as fundamentally and increasingly interconnected with democratic processes, human rights, and social cohesion. In public health protection, public authorities play a crucial role in ensuring that all people have access to quality internet and accurate and dependable information. provides recommendations to assist public authorities in fighting information disorder. Lastly, it is imperative for the state to undertake positive actions to facilitate the realization of the right to health and the enhancement of public health, thereby creating an environment where all members of the community can fulfill their responsibilities.

The datasets presented in this article are not readily available because the second part of the study (the quantitative research) and the third part (the qualitative research) might reveal the participants’ identities if the data were shared. Therefore, data can only be shared upon a reasonable request. Requests to access the datasets should be directed to [email protected] .

The studies involving humans were approved by the Non-Interventional Clinical Research Ethics Committee of Hacettepe University, Ankara, Türkiye under the decree number GO20/129 (Evaluation Date: 27.01.2020), 2020/03–08. The studies were conducted in accordance with the local legislation and institutional requirements. The participants provided their written informed consent to participate in this study.

EÖ: Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Project administration, Software, Validation, Writing – original draft, Writing – review & editing. ŞB-Ö: Conceptualization, Formal analysis, Methodology, Project administration, Supervision, Validation, Writing – review & editing. BŞ: Conceptualization, Formal analysis, Methodology, Project administration, Supervision, Validation, Writing – review & editing.
Internal Medicine Year in Review 2022 | dae24ddb-a2c5-4bed-b7eb-9deb8dbb7bf5 | 10749803 | Internal Medicine[mh] | Internal Medicine , the official journal of The Japanese Society of Internal Medicine (JSIM), is now in its 62nd volume and is published online twice a month (1st and 15th), 24 times a year. The types of articles accepted are Review Articles, Original Articles, Case Reports, Pictures in Clinical Medicine, Letters to the Editor, and Editorials. The articles are reviewed by experts in the corresponding field. Internal Medicine publishes accepted articles as open-access articles on J-STAGE and PubMed Central.
Last year was a year of great significance for Internal Medicine in two ways. First, while the number of papers submitted to the journal due to the spread of coronavirus disease 2019 (COVID-19) reached a record high in 2020 and remained at a high level in 2021, the number of papers submitted to the journal last year, 2022, showed signs of returning to the prepandemic levels. Second, the impact factor (IF) reached a record high.

(1) Transition in the number of submissions

Looking at the number of article submissions since 2016, we observed that the number of submissions remained steady at approximately 1,500-1,600 per year until 2019 . However, in 2020, the global outbreak of COVID-19 led to a significant increase in the number of research articles related to the virus being submitted worldwide. Similarly, our journal has received a substantial number of research articles related to COVID-19 since 2020 . Due to this background and a significant surge in submissions, we received a record-breaking number of articles (2,325) in 2020.

(2) Acceptance rate in 2022

The acceptance rates according to article type were as follows: Original Articles, 34.1%; Case Reports, 40.6%; Pictures in Clinical Medicine, 29.2%; Review Articles, 46.7%; Letters to the Editor, 77.4%; and Editorials, 100.0% .

(3) Transition in Impact Factor

The Impact Factor (IF) 2021 increased to "1.282" (+0.012 year-on-year). shows the transition of the IF.
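As background for these figures, the sketch below spells out how the two ratios are conventionally computed. The formulas follow the standard definitions (a journal’s IF for year Y is the number of citations received in Y to items published in Y-1 and Y-2, divided by the citable items published in those two years); the counts in the example are hypothetical placeholders, not the journal’s actual underlying numbers.

```python
# Minimal sketch of the two journal metrics discussed above. The counts are
# hypothetical placeholders; only the formulas reflect the standard definitions.

def impact_factor(cites_to_prev_two_years: int, citable_items_prev_two_years: int) -> float:
    """IF for year Y = citations received in Y to items published in Y-1 and Y-2,
    divided by the number of citable items published in Y-1 and Y-2."""
    return cites_to_prev_two_years / citable_items_prev_two_years

def acceptance_rate(accepted: int, submitted: int) -> float:
    """Acceptance rate (%) for a given article type."""
    return 100.0 * accepted / submitted

# An IF of 1.282 corresponds to roughly 1,282 citations per 1,000 citable items:
print(f"IF = {impact_factor(1282, 1000):.3f}")      # -> IF = 1.282
print(f"rate = {acceptance_rate(341, 1000):.1f}%")  # -> rate = 34.1%
```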
3.1. Highly Cited Article (Hot Paper)

A case report of a 57-year-old man with no underlying disease who developed thrombotic thrombocytopenic purpura (TTP) after vaccination with BNT162b2 (Pfizer/BioNTech), a messenger RNA (mRNA) -based vaccine using the spike protein gene of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) (Yoshida K et al. ) , is recognized as a highly cited reference, which is within the top 1% of citations in the field of clinical medicine, according to the Essential Science Indicators data of Web of Science .

3.2. Citation Trends of Published Articles

Using Web of Science, we surveyed the papers published in 2022 in order of the number of citations and found that 23 of 36 articles (64%) with three or more citations were related to COVID-19 (as of April 27, 2023). These included type 1 diabetes mellitus , thrombotic thrombocytopenic purpura , myocarditis , severe immune thrombocytopenia , Miller-Fisher syndrome , pneumonia , IgA nephropathy , and polymyalgia arthritis that developed after vaccination against COVID-19 .
In 2020, our journal received a record-high number of article submissions. The number has now returned to prepandemic levels in 2022. However, highly cited papers and those published in the journal still demonstrate a trend toward a significant number of COVID-19-related articles being published and cited. Despite society returning to prepandemic economic and lifestyle activities, research on COVID-19 continues to expand across various fields, including epidemiology, diagnosis, treatment, vaccine development, and the elucidation of mechanisms associated with the development of severe disease. There are no indications that interest in the disease has waned in the post-pandemic era.
Quality Studies on | 274a9197-f036-49fe-af7f-e02e25333298 | 11173719 | Pharmacology[mh] | Cynometra iripa Kostel., commonly known as “Shingra”, is classified as “Least Concern” in the IUCN Red List of Threatened Species . It is a globally recognized mangrove species that belongs to the Fabaceae ( Leguminosae ) family and to the polyphyletic Cynometra L. genus, which comprises 113 species of shrubs to large trees . C. iripa is a characteristic species of mangrove swamps , exhibiting a scattered distribution. It is present in various regions, including India, Bangladesh, Myanmar, Thailand, Northeast Australia, Papua New Guinea, Eastern Indonesia (such as West Irian, Halmahera, Moluccas, Seram, Ambon, Aru, and Tanimbar Islands), and the Philippines, ranging from Panay Island to Mindanao . C. iripa is a small tree (6–15 m), sometimes multi-stemmed . The leaf ( a) is green in color, 1–2-jugate, and asymmetrical. The flower is aromatic and appears white or a delicate pale pink. The fruit is one-seeded, asymmetrical, with a pronounced beak at the apex of the dorsal suture, extending partially along the dorsal side. It is suborbicular, laterally compressed, deeply wrinkled, woody, and transitions from green to brown as it matures. The bark ( b) is smooth, displays brown-grey and patchy tones, and is finely fissured . Traditionally, in India, a paste of the leaf, seed, and stem of C. iripa is used to heal wounds , and a decoction of the leaf is used to treat ulcers . Tribal people extract oil from the seeds to treat cholera . Although this species is found in Bangladesh and traditionally used by local people, no specific therapeutic indications have been found in the literature. Some chemical studies have already been conducted on different C. iripa plant parts . A total of 10 fatty acids were detected in the leaf oil, while 14 fatty acids were detected in the seed oil . Fifteen compounds were identified by GC-MS from the seed and seed coats of this species . Basak et al. (1996) reported the presence of chlorophyll, carotenoids, proteins, polyphenols, and tannins in the leaf ethanolic extract of this species. The carotenoid, polyphenol, tannin, and protein contents were 0.08, 30.15, 18.34, and 22.58% of the dry weight of the extract, respectively . Methanol, ethyl acetate, and chloroform–methanol (1:1) extracts of C. iripa leaf showed antibacterial activity against two strains of Aeromonas hydrophila , Edwardsiella tarda , Pseudomonas fluorescens , Pseudomonas aeruginosa , and Vibrio alginolyticus by the diffusion method . It has also been reported that ethanolic and methanolic extracts of the C. iripa aerial parts showed in vitro antimicrobial activity against Bacillus cereus , P. aeruginosa , Staphylococcus aureus , and Salmonella typhimurium through the diffusion method. In comparison to the ethanolic extracts, the methanolic extracts of leaf, stem, early seed, mature seed, and seed coat presented higher antimicrobial activity against P. aeruginosa . In addition, the methanol extract of the bark showed antifungal activity against Alternaria alternata and Fusarium moniliforme using the poison food technique . C. iripa is often confused with another species, Cynometra ramiflora (L.), and previously, it was described as a variety of this species. According to the original description made by Linnaeus (1753), C. ramiflora is characterized by unijugate leaves , whereas Kosteletzky (1835) described C. iripa as having bijugate leaves . For this reason, C. ramiflora var.
bijuga has been considered a synonym of C. iripa , although, based on key characteristics like the apex of the leaflets, the length of the inflorescences, the length of the pedicels, the shape of the anther apex, and the position of the fruit beak, this species is considered to be different . As different plant parts of this species are used in Ayurveda and other systems of Indian traditional medicine, monographic quality parameters are essential to allow their use as herbal medicines, which is the main goal of the present work. Concerning identification, macroscopic and microscopic analyses of the whole, fragmented, and powdered plant materials (leaf and bark) are performed, together with the establishment of the chemical fingerprints and the quantification of the main classes of compounds. Additionally, in light of the chemical fingerprint results, the antioxidant potential of the extracts of these medicinal plants is also evaluated.

2.1. Macroscopic and Microscopic Analyses

2.1.1. Leaf

Macroscopic Characteristics

The macroscopic observation ( and ) revealed a dried, grey-colored, asymmetrical leaflet, with a leaf lamina 3.0–6.6 cm in length and 1.4–2.8 cm wide, an emarginate apex ( a) and cuneate base ( c), brochidodromous venation, prominent on the abaxial surface ( b), and an entire margin. Trichomes were observed on the leaf rachis and petiolules (0.2–0.3 cm long) ( d).

Microscopic Characteristics

Light microscopy (LM) analysis of transversal sections of the C. iripa leaf showed rectangular to polygonal upper epidermis cells, smaller (10.26–18.64 µm) than the lower epidermis cells (12.84–34.20 µm), mucilage-containing cells on the abaxial epidermis, and a double-layered palisade parenchyma on the adaxial mesophyll tissue. Hypodermal layers were absent. The presence of adaxial xylem and abaxial phloem surrounded by 3–6 layers of sclerenchyma was observed ( a,b). The paracytic type of stomata (two subsidiary cells parallel to the guard cells) was only detected on the abaxial surface ( c,d). Calcium oxalate prismatic crystals (3.52–8.70 µm) were observed in the veins ( e), and unicellular, pointed, non-glandular trichomes were observed on the petiolules ( f). The leaf powder of C. iripa was greyish-green in color and had a specific odor. By LM, it was possible to identify the presence of characteristic leaf microscopic elements like a palisade parenchyma consisting of two layers of cells, fibers, free calcium oxalate prismatic crystals, and free non-glandular trichomes ( a–d).

2.1.2. Bark

Macroscopic Characteristics

The dried stem bark was nearly flat in the piece, smooth, brown-grey in color, and finely fissured; the thickness is usually 2–3 mm ( a,b).

Microscopic Characteristics

LM analysis of the C. iripa bark transversal sections showed the presence of lenticels, periderm, a narrow phelloderm (composed of tangentially elongated cells), a broad cortex with large elliptical groups of sclereids (heterogeneous in shape and size), parenchyma ( a,b), numerous calcium oxalate prismatic crystals ( c), and secondary phloem with distinct 2–5-seriate medullary rays ( d). No calcium oxalate prismatic crystals were found on the medullary rays ( d). The LM longitudinal section analysis revealed the presence of fibers, some with calcium oxalate prismatic crystals along them, and parenchyma cell layers with numerous irregularly shaped starch granules (1.65–5.96 µm), isolated or conjugated, scattered in the cell layers ( e,f).
The powdered C. iripa bark was greyish brown in color and characterized by the presence of fragments of fibers ( a), fragments of parenchyma and reddish-brown periderm ( b), calcium oxalate prismatic crystals ( c), and occasional starch granules.

2.2. Quantitative Microscopic Analysis

The principal microscopical characteristics of C. iripa leaf and C. iripa bark were quantified to provide additional distinctive elements for quality control purposes of these medicinal plants as possible herbal drugs. The results are presented in . Noticeably, the abaxial epidermal cells were larger in the leaf than the adaxial epidermis cells, and the calcium oxalate prismatic crystals were wider in the bark than in the leaf.

2.3. Chemical Studies

2.3.1. Yield of Extraction

Chemical studies were performed using extracts prepared with botanically characterized raw plant materials. The obtained extraction yields and drug extract ratio (DER, the ratio of the mass of herbal drug used to the mass of extract obtained) are presented in . The extract yield percentage was higher in C. iripa bark (CIB) than in C. iripa leaf (CIL), corresponding, as verified, to a lower DER.

2.3.2. Qualitative Phytochemical Analysis

A portion of each extract (CIL and CIB) was analyzed using characteristic colorimetric methods for secondary metabolites. The results confirm the absence of alkaloids in both plant extracts, whereas the presence of phenolic compounds (bluish-black color formation in the ferric chloride test and red color formation in the acetic acid test) and triterpenoids, namely saponins (stable foam formation), was confirmed in both (CIL and CIB) extracts.

2.3.3. LC-UV/DAD-ESI/MS Fingerprint

The obtained results of the analysis by high-resolution liquid chromatography coupled to a photodiode array and a mass spectrometry detector using electrospray ionization (LC-UV/DAD-ESI/MS) are presented in and and . The tentative identification of the main compounds was assigned by co-chromatography with authentic standards, comparison of their UV spectra and retention times, and mass spectrometric data based on the PubChem database and the scientific literature. Negative ionization data were selected for identification. presents data on the main compounds identified from C. iripa leaf extracts by LC-UV/DAD-ESI/MS. The obtained chromatograms for CIL extracts showed a total of 11 major peaks. Peak a showed a [M − H] − ion at m / z 1154 and fragment ions at m / z 865 [M − H − 289] − , 577 [M − H − 289 − 288] − , and 425 [M − H − 289 − 288 − 152] − , formed through RDA fragmentation (one of the most common fragmentation pathways of B-type proanthocyanidins), and at 287, a monomeric catechin unit formed through quinone methide cleavage . Notably, B-type proanthocyanidin fragments form monomeric ions at m / z 287 or m / z 289 , and according to Karonen et al., 2004, B-type procyanidin oligomers are composed of multiple monomer subunits with interflavonoid C-C linkages that differ by multiples of 288 . Considering the differences between monomer units, the UV spectra, and the fragmentation pattern, this compound was identified as a B-type proanthocyanidin tetramer. Peak b exhibited a [M − H] − ion at m / z 865 and fragment ions at m / z 577 [M − H − 288] − and 289 (a monomeric catechin unit), and was identified as a B-type proanthocyanidin trimer. Peaks c and d showed the [M − H] − ion at m / z 1442 and subsequent fragment ions at m / z 1154 [M − H − 288] − , 865 [M − H − 288 − 289] − , 577 [M − H − 288 − 289 − 288] − , and 289 (monomeric catechin unit), which indicated a molecular weight of 1443.
Based on the differences between monomer units, the UV spectra, and the fragmentation pattern, these compounds were also identified as B-type proanthocyanidin pentamers. Peaks e, f, and h presented a [M − H] − ion at m / z 435, corresponding to a molecular weight of 436, and produced taxifolin aglycone fragment ions at m / z 303 [M − H − 132] − (indicating the loss of a pentose moiety), 285 [M − H − 132 − 18] − (indicating the loss of water), and 151 (a fragment produced by an RDA reaction). The pentose moiety could be attributed to arabinose or xylose. Since arabinose and xylose are monosaccharides with the same molecular formula (C 5 H 10 O 5 ) and molecular weight (150 g/mol), more experiments are needed to establish the correct identity of the sugar moiety in these peaks; based on the UV–Vis and MS spectral data, they have tentatively been identified as taxifolin pentoside isomers . Peak g showed a [M − H] − ion at m / z 463, corresponding to a molecular weight of 464, with respective fragment ions at m / z 435 [M − H − 28] − , indicating the loss of CO, and 301 [M − H − 162] − , indicating the loss of a glycosyl unit. Based on the UV spectrum, the fragmentation pattern, and co-chromatography with standards (quercetin-3- O -glucoside), this compound was assigned as quercetin-3- O -glucoside . Peak i showed a [M − H] − ion at m / z 433 and a characteristic fragment ion at m / z 301 [M − H − 132] − . By comparison of its fragmentation behavior with previous work in the literature, this peak was tentatively identified as quercetin 7- O -pentose/apiose . Peak j showed a [M − H] − ion at m / z 447 and fragment ions at m / z 419 [M − H − 28] − , indicating the loss of CO, and 285 [M − H − 162] − , a kaempferol aglycone formed by the loss of the glycosyl unit, together with a UV spectrum compatible with its flavonol nature, namely kaempferol-7- O -glucoside (MW 448 g/mol). This identity was confirmed by the spectral information for kaempferol 7- O -glucoside in the PubChem database . Peak k showed a base peak at m / z 269 [M − H] − with fragment ions at m / z 89 and a UV spectrum compatible with its flavone nature, namely apigenin (MW 270 g/mol). This identity was also confirmed by LC/UV-DAD co-chromatography with authentic standards (apigenin). presents data on the main compounds identified from C. iripa bark extracts by LC-UV/DAD-ESI/MS. The obtained chromatograms for CIB extracts showed a total of 11 major peaks. Both peaks a′ and c′ showed a [M − H] − ion at m / z 1154 and a fragmentation pattern corresponding to the B-type proanthocyanidin tetramer observed for peak a in CIL. Peak b′ also showed a [M − H] − ion at m / z 865 with the fragmentation behavior of the B-type proanthocyanidin trimer, similar to peak b in CIL. Peaks d′ and e′ exhibited a [M − H] − ion at m / z 561 and a fragment ion at m / z 433 [M − H − 126] − , formed through RDA fragmentation. The other two fragment ions, 287 and 273/271, formed through a quinone methide (QM) reaction and indicated catechin and afzelechin derivatives, respectively. Based on the UV and fragmentation patterns reported in previous work, these peaks were identified as B-type proanthocyanidin dimers . Peak f′ showed a [M − H] − ion at m / z 565 and a fragment ion at 301 [M − H − 264] − , a typical fragment for quercetin derivatives formed by the loss of two pentose units (132 + 132). Concerning the UV and fragmentation behavior, this compound was tentatively identified as quercetin-3- O -pentosyl-pentoside . Peaks g′, h′, and i′ presented a [M − H] − ion at m / z 449 and characteristic fragment ions at m / z 303 [M − H − 146] − , assigned to the [aglycone − H] − ion, 285, and 151, a fragmentation behavior similar to that of taxifolin. Based on the obtained UV–Vis and MS spectral data, these peaks were tentatively identified as a deoxyhexoside (rhamnoside) of taxifolin, namely taxifolin 3- O -rhamnoside . Peak j′ exhibited a [M − H] − ion at m / z 599 and fragment ions at m / z 447 [M − H − 152] − , formed by the removal of the galloyl moiety, and at m / z 301 [M − H − 298 (152 + 146)] − , indicating the removal of the galloyl-rhamnoside moiety. Based on the UV spectrum and the typical fragmentation behavior, this compound was identified as quercitrin 3″- O -gallate . Peak k′ was tentatively identified as apigenin, similar to peak k in CIL, as it exhibited a base peak at m / z 269 [M − H] − with fragment ions at m / z 89, corresponding to a molecular weight of 270. Quercetin-3- O -glucoside ( a) and taxifolin pentoside ( b) were found as the major compounds identified in the CIL, whereas B-type dimeric proanthocyanidins ( c) and taxifolin 3- O -rhamnoside ( d) were the main compounds identified in the CIB extracts.
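All of these peak assignments rest on the same neutral-loss arithmetic between a [M − H] − precursor and its fragment ions. The sketch below illustrates that reasoning in Python; the loss table uses the nominal masses invoked in the text and is an illustrative assumption only, since real assignments, as noted above, also rely on UV spectra, retention times, and authentic standards.

```python
# Minimal sketch of the neutral-loss reasoning behind the peak assignments above.
# Nominal masses only; tolerances are illustrative assumptions.
NEUTRAL_LOSSES = {
    18: "H2O",
    28: "CO",
    132: "pentose (arabinose/xylose)",
    146: "deoxyhexose (rhamnose)",
    152: "galloyl moiety (or RDA fragment)",
    162: "hexose (glycosyl unit)",
    288: "(epi)catechin unit (quinone methide cleavage)",
    289: "(epi)catechin unit (quinone methide cleavage)",
}

def explain_fragment(precursor_mz: float, fragment_mz: float, tol: float = 0.5) -> str:
    """Match the neutral loss between a [M-H]- precursor and one fragment ion."""
    loss = precursor_mz - fragment_mz
    for mass, moiety in NEUTRAL_LOSSES.items():
        if abs(loss - mass) <= tol:
            return f"{precursor_mz} -> {fragment_mz}: loss of {mass} ({moiety})"
    return f"{precursor_mz} -> {fragment_mz}: unassigned loss of {loss:.1f}"

# Worked examples taken from the assignments above:
print(explain_fragment(463, 301))  # quercetin-3-O-glucoside: loss of a hexose (162)
print(explain_fragment(435, 303))  # taxifolin pentoside: loss of a pentose (132)
print(explain_fragment(449, 303))  # taxifolin 3-O-rhamnoside: loss of rhamnose (146)
print(explain_fragment(599, 447))  # quercitrin 3''-O-gallate: loss of a galloyl (152)
```

Running it reproduces the losses invoked in the text: 162 for a hexose, 132 for a pentose, 146 for a rhamnose, and 152 for a galloyl moiety.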
2.3.4. Quantitative Phytochemical Analysis

From the qualitative analysis, phenolic derivatives were identified as the main chemical class. For this reason, the total phenolic content (TPC), total flavonoid content (TFC), and total condensed tannin content (TCTC) were determined in both extracts and are presented in . Gallic acid, catechin, and cyanidin chloride, respectively, were used as standards. TPC was significantly higher (p < 0.05) in CIL than in CIB, whereas TFC and TCTC were higher (p < 0.05) in the CIB extract.

2.4. Antioxidant Activity

2.4.1. DPPH Scavenging Activity

The scavenging activities of the CIL and CIB extracts determined by the DPPH method are presented in . Both extracts showed concentration-dependent scavenging activity that was higher than that of ascorbic acid (ASC). In fact, the IC50 (half-maximal inhibitory concentration) values of CIL and CIB were similar (23.95 ± 0.93 and 23.63 ± 1.37 µg/mL, respectively) and lower than that of ASC (30.75 ± 0.51 µg/mL), indicating a higher antioxidant activity of the extracts in comparison with this recognized antioxidant. The CIB extract showed the highest scavenging percentage, 80.3%, at a concentration of 40 µg/mL.

2.4.2. Ferric Reducing Capability

The ferric-reducing capacities of CIL and CIB determined by the FRAP test are presented in . Both extracts showed a lower ferric-reducing capacity than quercetin and ascorbic acid, used as standards. The FRAP value of the CIL extract was 61.11 ± 2.91 µmol Fe2+/g dry weight and that of the CIB extract was 77.94 ± 2.02 µmol Fe2+/g dry weight, while the FRAP values of quercetin and ASC were 121.51 ± 0.94 and 149.84 ± 1.08 µmol Fe2+/g dry weight, respectively. Comparatively, the CIB extract showed a higher antioxidant potential than CIL.
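The IC50 values in Section 2.4.1 are read from the concentration–inhibition plots (see Section 4.6.1). As a minimal sketch of that estimation — the data points below are hypothetical, except for the 80.3% inhibition at 40 µg/mL quoted above — linear interpolation between the two concentrations bracketing 50% inhibition gives:

# Estimate an IC50 by linear interpolation between the two concentrations
# bracketing 50% inhibition. The data series is a hypothetical example.
def ic50(conc, inhib, target=50.0):
    points = list(zip(conc, inhib))
    for (c1, i1), (c2, i2) in zip(points, points[1:]):
        if i1 <= target <= i2:
            return c1 + (target - i1) * (c2 - c1) / (i2 - i1)
    raise ValueError("target inhibition not bracketed by the data")

conc = [10, 20, 30, 40]             # µg/mL (hypothetical)
inhib = [25.0, 45.0, 62.0, 80.3]    # % inhibition (hypothetical; last value from the text)
print(round(ic50(conc, inhib), 2))  # 22.94 µg/mL for this made-up series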
2.5. Correlation between Phenolic Content and Antioxidant Activity

The results of the statistical analysis of the possible correlation between the phenolic content of the CIL and CIB extracts and their antioxidant potential are presented in . For both the CIL and CIB extracts, a positive and statistically significant correlation (p < 0.05) was found between phenolic content and antioxidant activity. For both extracts, TFC and TCTC showed a strong positive correlation with FRAP activity, with a Pearson correlation coefficient (r) of 0.99. On the other hand, TPC was positively correlated with DPPH activity, with r values of 0.85 and 1.00 for CIL and CIB, respectively . Therefore, the results indicate that different phenolic derivatives, mainly procyanidins, made an outstanding contribution to the antioxidant activity of the CIL and CIB extracts.
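For reference, the Pearson coefficient underlying these r values can be computed directly from paired measurements; the sketch below uses hypothetical values (the actual data are given in the tables):

from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical paired values of phenolic content and DPPH scavenging:
tpc = [1476, 1500, 1521]              # mg GAE/g dry weight (illustrative only)
dpph = [74.0, 77.5, 80.3]             # % inhibition (illustrative only)
print(round(pearson_r(tpc, dpph), 3)) # close to 1.0 for this near-linear made-up series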
3. Discussion

The quality control of medicinal plant materials is essential to allow their use as herbal medicines in human and veterinary practice . Therefore, the botanical macroscopic and microscopic characteristics are essential for identifying whole, fragmented, and powdered samples of C. iripa leaf and C. iripa bark. Considering the external leaf morphology observed, the dried C. iripa leaf has features such as an alternate jugate leaf arrangement, emarginate apex, cuneate base, and petiolules up to 0.3 cm long, similar to those reported by Cooper, W.E. (2015) and Ragavan et al. (2017) for the fresh leaf of this species. In addition, common Leguminosae features, such as paracytic stomata and calcium oxalate prismatic crystals, are reported for the first time in this species. Saenger and West (2016) referred to the presence of a single palisade layer and the absence of a hypodermal layer as characteristics of the C. iripa leaf . Our results differ from their study, except for the hypodermal layer, as we found a double layer of palisade parenchyma on the adaxial side. Other characteristics found in the C. iripa leaf are the presence of a vascular bundle surrounded by layers of sclerenchyma, mucilage-filled cells, and paracytic stomata found only on the abaxial surface. Pan (2010) reported the presence of paracytic stomata only on the abaxial surface in another species, Cynometra chaka . Furthermore, in C. chaka and Cynometra lujae De Wild., multicellular uniseriate trichomes were found on the abaxial surface of the leaf , whereas in C. iripa, non-glandular unicellular trichomes were found only on the petiolule.

In this study, the microscopic features of the dried bark of C. iripa are discussed for the first time. The most distinctive elements proposed for quality control are the presence of lenticels, periderm, a narrow phelloderm, a broad cortex with large elliptical groups of sclereids, calcium oxalate prismatic crystals, secondary phloem with 2–5-seriate medullary rays, and irregularly shaped starch granules in all parenchymatous tissues.

The chemical profile analysis showed that phenolic compounds, mainly condensed tannins and flavonoids, are the main classes identified in the C. iripa leaf and bark extracts. The major compounds identified in the leaf were quercetin-3-O-glucoside and taxifolin pentoside. Quercetin and its glycosides are vital plant flavonoids with neuroprotective, cardioprotective, chemo-preventive, antioxidant, anti-inflammatory, and anti-allergic properties . These compounds have been shown to suppress inflammatory responses by inhibiting the inflammatory enzymes cyclooxygenase (COX) and lipoxygenase , and also by inhibiting the production of pro-inflammatory cytokines, such as interleukin-6 (IL-6), tumor necrosis factor-alpha (TNF-α), and interleukin-1 beta (IL-1β), in various cell types . Quercetin-3-O-glucoside demonstrated strong antioxidant and anti-inflammatory properties in vitro, showing the highest activity against cyclooxygenase (COX)-1, COX-2, and lipoxygenase (LOX-5), with IC50 values of 3.62, 5.66, and 2.31 µg/mL, respectively. Additionally, it exhibited considerable cytotoxic effects on HeLa cells in a dose- and time-dependent manner .
Taxifolin is a potent antioxidant that inhibits the increased activity of NF-κB in rats with cerebral ischemia–reperfusion injury . It also exhibited notable anti-inflammatory effects by reducing the transcription of TNF-α, IFN-γ, IL-10, and TLR-4 in murine RAW 264.7 cells . The other marker compounds detected were B-type trimeric, tetrameric, and pentameric proanthocyanidins, quercetin 7-O-pentoside/apioside, kaempferol 7-O-glucoside, and apigenin. All these compounds are identified for the first time in the C. iripa leaf. Different phenolic derivatives have been reported in other species of Cynometra. For example, proanthocyanidins, taxifolin pentoside, taxifolin 3-O-arabinofuranoside, catechin, apigenin 8-C-glucoside (vitexin), apigenin 6-C-glucoside (isovitexin), kaempferol hexoside, quercetin pentoside, quercetin hexoside, kaempferol–coumaroyl hexoside, isorhamnetin hexoside, and acacetin 7-O-β-glucoside have been isolated from the ethyl acetate and n-butanol fractions of the leaf of Cynometra cauliflora L. .

The major compounds detected in the C. iripa bark were B-type dimeric proanthocyanidins and taxifolin 3-O-rhamnoside. Proanthocyanidins (condensed tannins) are reported to have significant antioxidant, anti-cancer, anti-diabetic, antimicrobial, and immunomodulatory potential . Several studies have reported different biological activities of B-type proanthocyanidins: anti-cancer activity, by decreasing the in vitro growth of androgen-sensitive (LNCaP) and androgen-resistant (DU145) human prostate cancer cell lines ; antimicrobial activity against Candida albicans and Cryptococcus neoformans, with MIC values of 250 to 1000 µg/mL ; and anti-aging activity, by reducing the content of ROS and the mRNA levels of nicotinamide adenine dinucleotide phosphate oxidase 4 (NOX4) in luteinized granulosa cells (hGC) and tumor granulosa cells (KGN) . Taxifolin 3-O-rhamnoside is an important flavonoid that showed anti-tumor activity on the PANC-1 and A-549 cancer cell lines, inhibiting about 30% of cell growth at a 30 µM concentration . The other marker compounds detected were B-type trimeric and tetrameric proanthocyanidins, quercetin 3-O-pentosyl-pentoside, taxifolin 3-O-rhamnoside, quercitrin 3″-O-gallate, and apigenin. As for the C. iripa leaf, all these compounds are identified for the first time in the C. iripa bark; no other studies concerning the compounds of C. iripa bark were found in the literature.

The bark extracts of different Fabaceae species, such as Stryphnodendron adstringens (Mart.), Mimosa tenuiflora (Mart.), Mimosa arenosa (Willd.) Poir., Mimosa caesalpiniifolia Benth., Anadenanthera colubrina var. cebil , and Plathymenia reticulata Benth. , are a potential source of condensed tannins. Besides this, different flavonoids have been detected in the bark of Fabaceae species: quercetin, quercitrin, taxifolin, apigenin, astilbin, and kaempferol have been identified in Hymenaea martiana bark , and another study reported the presence of isoquercitrin, quercetin, and rutin in Dimorphandra gardneriana Tul. bark . However, the biological properties of polyphenols depend on their bioavailability, i.e., their intestinal absorption, metabolism, and subsequent interaction with target tissues or organs . In fact, the metabolism of flavonoid glycosides involves several enzymatic activities and interactions with the gut microbiota, leading to the release of bioactive aglycones .
For instance, glycosylation improves the solubility and bioavailability of quercetin, which can enhance its therapeutic potential, as quercetin itself has relatively low bioavailability due to poor absorption, rapid metabolism, and extensive first-pass elimination in the liver . Imidazole alkaloids have been reported in different plant parts of some Cynometra species: anantine, cynometrine, and cynodine from Cynometra anata Hutch. and Dalziel (leaf) ; N1-demethyl cynometrine, N1-demethyl cynodine, cynometrine, and cynodine from Cynometra hankei Harms (stem bark and seed) ; and anantine, cynometrine, isoanantine, isocynometrine, isocynodine, noranantine, hydroxyanantine, and cynolujine from C. lujae (plant part not specified) . However, in our study, no trace of alkaloids was detected in the C. iripa leaf (CIL) or C. iripa bark (CIB) extracts.

The C. iripa leaf extract showed a higher total phenolic content (TPC), 1521 ± 4.71 mg GAE/g dry weight, than the C. iripa bark extract, 1476 ± 4.09 mg GAE/g dry weight. A higher TPC was reported in C. cauliflora, in which the TPC of an aqueous extract of young leaf was 1831.47 ± 1.03 mg GAE/g, whereas a lower TPC was found in a C. ramiflora stem methanolic extract (96.2 mg GAE/g dry weight), illustrating the influence of species, plant part, and extraction method on the quantification of secondary metabolites in different Cynometra species .

The total flavonoid contents (TFC) obtained for the C. iripa leaf and bark extracts were 64 ± 1.00 and 82 ± 0.58 mg CE/g dry weight, respectively. In one study, an aqueous extract of C. cauliflora leaf exhibited a TFC of 33.63 ± 0.25 mg CE/g dry weight , and a high TFC of 166.4 mg QE/g was reported for a methanol extract of C. ramiflora stem . In our study, the C. iripa bark extract exhibited a higher TFC than the leaf extract. Similarly, hydroethanolic and hydromethanolic extracts of the bark of another Fabaceae species, Pongamia pinnata (L.) Pierre, showed higher TFC values (2.28 ± 0.01 and 3.44 ± 0.04 g CE/100 g dry weight, respectively) than the corresponding leaf extracts . Besides this, a higher TFC was also reported for the methanolic extract of the stem bark (902 ± 0.7 mg quercetin equivalents/g) than for the root and leaf extracts of Rhizophora mucronata, which is also a mangrove species .

The total condensed tannin contents (TCTC) of C. iripa leaf and bark were 755 ± 4.4 and 1021 ± 5.51 mg CCE/g dry weight, respectively. A lower TCTC (80.4 mg GAE/g dry weight) was reported for a methanolic extract of C. ramiflora stem , a difference attributable to the species, plant part, extraction solvent, and methodology; in addition, our results are expressed in cyanidin chloride equivalents (CCE), whereas that result was expressed in gallic acid equivalents (GAE) . No other studies on the tannin content of Cynometra species expressed in cyanidin chloride equivalents were found.

The CIL and CIB extracts showed antioxidant activity in the DPPH assay, with IC50 values of 23.95 ± 0.93 and 23.63 ± 1.37 µg/mL, respectively, and in the FRAP assay, with values of 61.11 ± 2.91 and 77.94 ± 2.02 µmol Fe2+/g dry weight, respectively. In the DPPH assay, both extracts showed a concentration-dependent scavenging activity higher than that of standard ascorbic acid, whereas in the FRAP assay both extracts showed a ferric-reducing capability lower than that of the standards used. Phenolic compounds, including procyanidins, are believed to be involved in the demonstrated antioxidant activity of both extracts.
Comparatively, in both the DPPH and FRAP assays, C. iripa bark (CIB) was shown to possess a higher antioxidant activity than C. iripa leaf (CIL). No previous information was found concerning the antioxidant activity of CIL and CIB hydroethanolic extracts. Ethanolic extracts of the leaf of another species, C. cauliflora, exhibited remarkable antioxidant activity in the DPPH assay, with an IC50 value of 2.88 ± 0.05 µg/mL, in comparison with the standard quercetin . Also, aqueous extracts of the fruit of the same species showed potent antioxidant capacity in both the DPPH assay, with an IC50 value of 0.47 ± 0.03 g dry weight/mL, and the FRAP assay, with a reducing power of 25.07 ± 0.73 µmol Fe2+/g dry weight . A positive and statistically significant (p < 0.05) correlation was observed between the phenolic content and the antioxidant activity, so these phenolic derivatives are mainly responsible for the antioxidant activity of both extracts.

4. Materials and Methods

4.1. Plant Materials

The leaf and bark of C. iripa were collected in April 2019 from the Koromjol and Harbaria eco-tourism center in the Chadpai range of the Sundarbans, Khulna District, Bangladesh. The identity of the collected samples was confirmed by Dr. Fahmida Khanam, director of the Bangladesh National Herbarium, and the corresponding voucher samples were deposited in this herbarium under the voucher number DACB-47644. Copies were also kept in the Laboratory of Pharmacognosy (Department of Pharmacy, Pharmacology, and Health Technologies), Faculdade de Farmacia, Universidade de Lisboa (FFUL), Portugal. After identification, the raw plant material for laboratory studies was dried in the dark at room temperature (±22 °C) in Bangladesh and transferred to the Laboratory of Pharmacognosy at the FFUL.

4.2. Botanical Studies

4.2.1. Samples

To conduct the botanical analysis, fifty samples were randomly selected from the 250 g of collected raw material in accordance with the sampling guidelines set forth in the European Pharmacopoeia for herbal drugs. A representative portion of the total collected plant material was powdered using a mill and then mounted in a 60% chloral hydrate solution, following the procedures outlined in the European Pharmacopoeia .

4.2.2. Macroscopic Analysis

The macroscopic analysis was performed with the naked eye and with an Olympus SZ61 stereo microscope (Switzerland) equipped with a Leica MC170 HD digital camera. Image capture and analysis were facilitated by the Leica Application Suite (LAS) software, version 4.8.0 (Switzerland).

4.2.3. Microscopic Analysis

Transverse sections (midrib, distal part of the blade, and petiolule) and tangential longitudinal sections (leaf surface) were cleared and mounted in a 60% aqueous chloral hydrate solution. Microscopic analysis of the prepared leaf sections and of the powdered plant material was carried out using an Olympus CX31 microscope fitted with a Leica MC170 HD digital camera, with imaging processed via the LAS software, version 4.8.0 (Switzerland). For macroscopic feature determination, observations were made on 15 adult leaves. For microscopic measurements, 30 samples were analyzed (1 mm² per sample). The stomatal index (SI) was calculated using the following formula: SI = (S × 100)/(S + E), where S represents the number of stomata per unit area of the leaf and E the number of epidermal cells in the same area of the leaf .
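As a worked example of this formula (with hypothetical counts, for illustration only), a 1 mm² field containing 12 stomata and 88 epidermal cells gives SI = (12 × 100)/(12 + 88) = 12. The same arithmetic in Python:

# Stomatal index, SI = (S * 100) / (S + E), per the formula above.
def stomatal_index(stomata, epidermal_cells):
    return stomata * 100 / (stomata + epidermal_cells)

print(stomatal_index(12, 88))  # 12.0 for the hypothetical counts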
4.3. Chemical Studies

Plant Extract Preparation

The hydroethanolic (70%) extracts of each herbal substance (leaf and bark) were prepared with ethanol and water in a ratio of 70:30 at room temperature by maceration (a minimum of 3 × 24 h each). This solvent mixture ensures the extraction of both polar and apolar secondary metabolites. After extraction and filtration through a G4 glass filter under vacuum, the solution was evaporated in a rotary evaporator (Buchi R-100, Flawil, Switzerland) at a temperature below 40 °C, frozen (−20 °C), and finally lyophilized at −55 °C (Heto LyoLab-3000, Dietikon, Switzerland) . The drug extract ratio (DER, the ratio of the amount of plant material to the amount of the obtained extract) was evaluated, and the following equation was used to calculate the percentage yield: Yield of extraction (%, w/w) = (Wt1/Wt2) × 100, where Wt1 is the final weight of the dried extract and Wt2 the initial weight of the leaf/bark powder .
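As a worked example of these two quantities (the figures here are hypothetical; the measured values are given in the corresponding table), 100 g of dried powder yielding 20 g of dried extract corresponds to a yield of 20% (w/w) and a DER of 5:1:

# Extraction yield (%, w/w) = Wt1 / Wt2 * 100, per the equation above.
def extraction_yield(extract_g, powder_g):
    return extract_g / powder_g * 100

# DER = mass of plant material : mass of obtained extract.
def drug_extract_ratio(powder_g, extract_g):
    return powder_g / extract_g

print(round(extraction_yield(20, 100), 1))  # 20.0 % (hypothetical)
print(drug_extract_ratio(100, 20))          # 5.0, i.e., a DER of 5:1 (hypothetical)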
4.4. Qualitative Phytochemical Analysis

The hydroethanolic extracts were qualitatively analyzed for different secondary metabolites using conventional procedures and LC-UV/DAD-ESI/MS analysis. Preliminary phytochemical screening was conducted for alkaloids by the Bouchardat/Mayer/Dragendorff tests , for phenolic compounds by the ferric chloride and acetic acid tests , and for saponins by the foam test .

LC-UV/DAD-ESI/MS Analysis

A Waters Alliance 2695 high-performance liquid chromatography (HPLC) system with an autosampler and a photodiode array detector (Waters PDA 2996) was used in conjunction with a MicroMass Quattro Micro™ API triple quadrupole tandem mass spectrometer (Waters, Drinagh, Ireland). The separation module, also from Waters, included a quaternary pump system, degasser, autosampler, and column oven. Chromatograms were captured over a wavelength range of 210–700 nm, and the electrospray ionization (ESI) source was operated in negative mode. Separation was carried out on a LiChrospher® 100 RP-18 column (5 µm, 250 × 4 mm, Merck, Darmstadt, Germany) maintained at 35 °C. The flow rate was set to 0.3 mL/min, with an injection volume of 20 μL. The mobile phase comprised water containing 0.1% formic acid (Phase A) and acetonitrile (Phase B), with a total run time of 90 min; the gradient conditions were 5% Phase B at 0 min, 20% at 20 min, 50% at 60 min, and 100% at 90 min. Peaks were analyzed with MassLynx™ V4.1 software (Waters®, Drinagh, Ireland). Compounds were identified by co-chromatography and by comparison of retention times, UV, and mass spectral data with reference standards (quercetin-3-O-glucoside from Honeywell Fluka, Germany, and apigenin from Extrasynthese, Genay, France), or tentatively identified according to the literature and databases.

4.5. Quantitative Phytochemical Analysis

All values were obtained in three sets of experiments and evaluated in triplicate by spectrophotometry using a Hitachi U-2000 UV–Vis spectrophotometer (Tokyo, Japan).

Total phenolic content. The total phenolic content of each extract was determined using the Folin–Ciocalteu assay : 2 mL of Folin–Ciocalteu reagent (diluted with water, 1:10 v/v) was mixed with 0.4 mL of extract, followed by 1.6 mL of anhydrous Na2CO3 solution (75 g/L). After two hours, the absorbance was measured at 765 nm. Gallic acid was used to obtain the standard calibration curve, and distilled water was used as the blank. Results were expressed as mg of gallic acid equivalents (GAE)/g of dried plant material. Data are presented as the mean ± standard deviation.

Total flavonoid content. The total flavonoid content of each extract was determined using the aluminum chloride colorimetric assay of Oliveira et al. (2008), with some modifications . To 0.5 mL of extract, 2 mL of distilled water and 150 µL of 5% NaNO2 were added, and the mixture was incubated for 5 min. After that, 150 µL of 10% AlCl3 was added and incubated for 6 min; finally, 1 mL of 1 M NaOH was added and the mixture was incubated at 18 °C in the dark for 20 min. The absorbance was measured at 510 nm. Increasing catechin concentrations were used to obtain the standard calibration curve. Results were expressed as mg of catechin equivalents (CE)/g of dried plant material. Data are presented as the mean ± standard deviation.

Total condensed tannin content. The total condensed tannin content of each extract was evaluated using the method of Porter et al. (1986) . To 0.5 mL of plant extract (diluted in 70% acetone), 3 mL of butanol–HCl reagent (butanol–HCl, 95:5 v/v) and 0.1 mL of ferric reagent (2% ferric ammonium sulfate in 2 N HCl) were added. The solution was then mixed and incubated at 97–100 °C for 1 h in a hot water bath, and the absorbance was measured at 550 nm. Increasing cyanidin chloride concentrations were used to obtain the standard calibration curve. The blank for each sample comprised 0.5 mL of the extract, 3 mL of butanol–HCl reagent, and 0.1 mL of the ferric reagent. Results were expressed as mg of cyanidin chloride equivalents (CCE)/g of dried plant material. Data are presented as the mean ± standard deviation.

4.6. Antioxidant Activity

4.6.1. DPPH (2,2-Diphenyl-1-picrylhydrazyl) Free Radical Scavenging Assay

The free radical scavenging activity was determined by the DPPH assay . In this assay, the purple-colored DPPH radical is reduced by a hydrogen or electron donor, and its color changes to yellow. DPPH solution (3.9 mL, 6 × 10−5 M in methanol) was mixed with 100 µL of each extract. After 30 min of incubation at room temperature, the absorbance of the samples and of the standard solution was measured at 517 nm. Ascorbic acid was used as the reference standard. The inhibition ratio (percent) was calculated from the following equation: % Inhibition = [(A0 − A1)/A0] × 100, where A0 is the absorbance of the control and A1 the absorbance of the test sample (extract or standard). The IC50 value, the concentration of sample required to scavenge 50% of the free radicals, was calculated from the plot of % inhibition against the concentration of each extract.
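A quick numeric check of the inhibition equation (with hypothetical absorbances, for illustration only):

# % Inhibition = (A0 - A1) / A0 * 100, per the equation above.
def percent_inhibition(a0, a1):
    return (a0 - a1) / a0 * 100

print(round(percent_inhibition(0.80, 0.20), 1))  # 75.0 % for these hypothetical absorbances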
4.6.2. FRAP Assay

Under acidic conditions, the ferric 2,4,6-tri(2-pyridyl)-s-triazine (Fe3+-TPTZ) complex is reduced to its ferrous form (Fe2+) by antioxidants, resulting in a vivid blue coloration with an absorption peak at 593 nm . To prepare the FRAP reagent, 25 mL of acetate buffer (pH 3.6), 2.5 mL of ferric chloride solution (0.5406 g of ferric chloride dissolved in 100 mL of distilled water), and 2.5 mL of TPTZ solution (0.0781 g of TPTZ dissolved in 40 mM HCl) were combined, and the mixture was then incubated in a water bath at 37 °C for 10 min. For the assay, 300 μL of water and 100 μL of the test sample were added to a cuvette; 3000 μL of the prepared FRAP reagent was subsequently introduced and mixed by inversion. A control assay was performed using water in place of the sample. The absorbance at 593 nm was recorded with a spectrophotometer exactly 4 min after adding the FRAP reagent.

4.7. Statistical Analysis

All the macroscopic and microscopic results were obtained using Excel 365 software (version 2401, Microsoft) and expressed as minimum, maximum, mean ± SD, and median, except for the stomatal index . Pearson's correlation test was used to establish the correlation between TPC, TFC, TCTC, and the antioxidant assays (DPPH, FRAP).
Microscopic analysis of the prepared leaf sections and powdered plant material was carried out using an Olympus CX31 microscope fitted with a Leica MC170 HD digital camera, with imaging processed via the LAS Version 4.8.0 software (Switzerland). For macroscopic feature determination, observations were made on 15 adult leaves. For microscopic measurements, 30 samples were analyzed (1 mm² per sample). The stomatal index (SI) was calculated using the following formula: SI = ( S × 100)/( S + E) where ( S ) represents the number of stomata per unit area of the leaf and (E) the number of epidermal cells in the same area of the leaf . Plant Extract Preparation The hydroethanolic (70%) extracts of each herbal substance (leaf and bark) were prepared using ethanol and water in a ratio of 70:30 at room temperature by maceration (a minimum of 3 × 24 h each). This solvent mixture assures the extraction of polar and apolar secondary metabolites. After extraction and filtration using the G4 glass filter under vacuum, the solution was evaporated by a rotary evaporator (Buchi R-100, Flawil, Switzerland) at a temperature less than 40 °C and then put in the freezer (−20 °C) and finally lyophilized at −55 °C (Heto LyoLab-3000, Dietikon, Switzerland) . The Drug Extract Ratio (the ratio of the amount of plant material to the amount of the obtained extract) was evaluated, and the following equation was used to calculate the percentage of yield: Yield of extraction (%, w / w ) = Wt 1 /Wt 2 × 100% Wt 1 and Wt 2 represent the final weight of the dried extract and the primary weight of the leaf/bark powder . The hydroethanolic (70%) extracts of each herbal substance (leaf and bark) were prepared using ethanol and water in a ratio of 70:30 at room temperature by maceration (a minimum of 3 × 24 h each). This solvent mixture assures the extraction of polar and apolar secondary metabolites. After extraction and filtration using the G4 glass filter under vacuum, the solution was evaporated by a rotary evaporator (Buchi R-100, Flawil, Switzerland) at a temperature less than 40 °C and then put in the freezer (−20 °C) and finally lyophilized at −55 °C (Heto LyoLab-3000, Dietikon, Switzerland) . The Drug Extract Ratio (the ratio of the amount of plant material to the amount of the obtained extract) was evaluated, and the following equation was used to calculate the percentage of yield: Yield of extraction (%, w / w ) = Wt 1 /Wt 2 × 100% Wt 1 and Wt 2 represent the final weight of the dried extract and the primary weight of the leaf/bark powder . The hydroethanolic extracts were qualitatively analyzed for different secondary metabolites using conventional procedures and LC/UV-DAD/ESI-MS analysis. Preliminary phytochemical screening was conducted for alkaloids by the Bouchardat/Mayer/Dragendorff test , phenolic compounds by the ferric chloride test and acetic acid test , and saponins by the foam test . LC-UV/DAD-ESI/MS Analysis A Waters Alliance 2695 high-performance liquid chromatography (HPLC) system with an autosampler and photodiode array detector (Waters PDA 2996) was used in conjunction with a MicroMass Quattro MicroTM API triple quadrupole tandem mass spectrometer (Waters, Drinagh, Ireland). The separation module, also from Waters, included a quaternary pump system, degasser, autosampler, and column oven. Chromatograms were captured over a wavelength range of 210–700 nm. An electrospray ionization source (ESI) was operated in negative mode. 
The hydroethanolic extracts were qualitatively analyzed for different secondary metabolites using conventional procedures and LC/UV-DAD/ESI-MS analysis. Preliminary phytochemical screening was conducted for alkaloids by the Bouchardat/Mayer/Dragendorff test, for phenolic compounds by the ferric chloride test and acetic acid test, and for saponins by the foam test.

LC-UV/DAD-ESI/MS Analysis

A Waters Alliance 2695 high-performance liquid chromatography (HPLC) system with an autosampler and photodiode array detector (Waters PDA 2996) was used in conjunction with a MicroMass Quattro MicroTM API triple quadrupole tandem mass spectrometer (Waters, Drinagh, Ireland). The separation module, also from Waters, included a quaternary pump system, degasser, autosampler, and column oven. Chromatograms were captured over a wavelength range of 210–700 nm. An electrospray ionization source (ESI) was operated in negative mode. Separation was carried out using a LiCrospher® 100 RP-18 column (5 µm, 250 × 4 mm, Merck, Darmstadt, Germany) maintained at 35 °C. The flow rate was set to 0.3 mL/min with an injection volume of 20 μL. The mobile phase comprised water containing 0.1% formic acid (Phase A) and acetonitrile (Phase B), with a total run time of 90 min. The gradient conditions were 5% Phase B at 0 min, 20% Phase B at 20 min, 50% Phase B at 60 min, and 100% Phase B at 90 min. The peaks were analyzed by MassLynx™ V4.1 software (Waters®, Drinagh, Ireland). The compounds were identified by co-chromatography and by comparison of retention time, UV, and mass spectral data with reference standards (quercetin-3-O-glucoside from Honeywell Fluka, Germany, and apigenin from Extrasynthese, Genay, France) or tentatively identified according to the literature and databases.

All values were obtained in 3 sets of experiments and evaluated in triplicate by spectrophotometry using a Hitachi U-2000 UV–Vis spectrophotometer (Tokyo, Japan).

Total phenolic content

The total phenolic content of each extract was determined using the Folin–Ciocalteu assay, where 2 mL of Folin–Ciocalteu reagent (diluted with water 1:10 v/v) was mixed with 0.4 mL of extract and then 1.6 mL of anhydrous Na2CO3 (75 g/L) solution. After two hours, the absorbance was measured at 765 nm. Gallic acid was used to obtain a standard calibration curve, and distilled water was used as the blank. Results were expressed as mg of gallic acid equivalents (GAE)/g dried plant material. Data are presented as the mean ± standard deviation.

Total flavonoid content

The total flavonoid content of each extract was determined using the aluminum chloride colorimetric assay of Oliveira et al. (2008) with some modifications. To 0.5 mL of extract, 2 mL of distilled water and 150 µL of 5% NaNO2 were added, and the mixture was left to incubate for 5 min. After that, 150 µL of 10% AlCl3 was added and incubated for 6 min. Finally, 1 mL of 1 M NaOH was added and the mixture was incubated at 18 °C in the dark for 20 min. Absorbance was measured at 510 nm. An increasing series of catechin concentrations was used to obtain a standard calibration curve. The results were expressed as mg of catechin equivalents (CE)/g dried plant material. Data are presented as the mean ± standard deviation.
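Both colorimetric assays convert a sample absorbance into equivalents through a linear standard curve. The sketch below shows the generic calculation for either the gallic acid (TPC) or catechin (TFC) curve; all numbers are hypothetical, and numpy is assumed to be available.

import numpy as np

# Hypothetical standard curve: concentrations (mg/mL) versus absorbance readings.
std_conc = np.array([0.0, 0.05, 0.10, 0.20, 0.40])
std_abs = np.array([0.00, 0.11, 0.22, 0.45, 0.89])

# Least-squares fit: absorbance = slope * concentration + intercept.
slope, intercept = np.polyfit(std_conc, std_abs, 1)

def equivalents_mg_per_g(sample_abs, extract_volume_ml, plant_mass_g):
    """Convert a sample absorbance to mg equivalents per g of dried plant material."""
    conc_mg_per_ml = (sample_abs - intercept) / slope
    return conc_mg_per_ml * extract_volume_ml / plant_mass_g

# Hypothetical reading: A = 0.52 for 10 mL of extract solution prepared from 0.1 g of material.
print(f"{equivalents_mg_per_g(0.52, 10, 0.1):.1f} mg equivalents/g")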
Total condensed tannin content

The total condensed tannin content of each extract was evaluated using the method of Porter et al. (1986). To 0.5 mL of plant extract (diluted in 70% acetone), 3 mL of butanol–HCl reagent (butanol–HCl, 95:5 v/v) and 0.1 mL of ferric reagent (2% ferric ammonium sulfate in 2 N HCl) were added. The solution was then mixed and incubated at 97 to 100 °C for 1 h in a hot water bath. Absorbance was measured at 550 nm. Cyanidin chloride was used to obtain the standard calibration curve. The blank for each sample comprised 0.5 mL of the extract, 3 mL of butanol–HCl reagent, and 0.1 mL of the ferric reagent. Results were expressed as mg of cyanidin chloride equivalents (CCE)/g dried plant material. Data are presented as the mean ± standard deviation.

4.6.1. DPPH (2,2-Diphenyl-1-picrylhydrazyl) Free Radical Scavenging Assay

The free radical scavenging activity was determined by the DPPH assay. In this assay, the purple-colored DPPH is reduced by a hydrogen or electron donor, and its color changes to yellow. DPPH solution (3.9 mL, 6 × 10−5 M in methanol) was mixed with 100 µL of each extract. After 30 min of incubation at room temperature, the absorbance of the samples and standard solution was measured at 517 nm. Ascorbic acid was used as the reference standard. The inhibition ratio (percent) was calculated from the following equation:

% Inhibition = [(A0 − A1)/A0] × 100

where A0 is the absorbance of the control and A1 is the absorbance of the sample or standard. The IC50 value is the concentration of the sample required to scavenge 50% of free radicals, and it was calculated from the plot of % inhibition against the concentration of each extract.

4.6.2. FRAP Assay

Under acidic conditions, the ferric 2,4,6-tri-2-pyridyl-s-triazine (Fe³⁺-TPTZ) complex is reduced to its ferrous form (Fe²⁺) by antioxidants, resulting in a vivid blue coloration with an absorption peak at 593 nm. To prepare the FRAP reagent, 25 mL of acetate buffer (pH 3.6), 2.5 mL of ferric chloride solution (prepared by dissolving 0.5406 g of ferric chloride in 100 mL of distilled water), and 2.5 mL of TPTZ solution (prepared by dissolving 0.0781 g of TPTZ in 40 mM HCl) were combined. The mixture was then incubated in a water bath at 37 °C for 10 min. For the assay, 300 μL of water and 100 μL of the test sample were added to a cuvette. Then 3000 μL of the prepared FRAP reagent was introduced into the cuvette and mixed by inversion. A control assay was performed using water in place of the sample. The absorbance at 593 nm was recorded with a spectrophotometer exactly 4 min after adding the FRAP reagent.
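The % inhibition and IC50 calculations described for the DPPH assay can be reproduced numerically. In the sketch below, the absorbances are hypothetical and the IC50 is estimated by linear interpolation on the inhibition-versus-concentration data (the article itself reads the value from a plot).

import numpy as np

def percent_inhibition(a_control, a_sample):
    """% Inhibition = [(A0 - A1) / A0] * 100."""
    return (a_control - a_sample) / a_control * 100

# Hypothetical dose-response data: concentrations (ug/mL) and absorbances at 517 nm.
a0 = 0.80  # absorbance of the DPPH control
conc = np.array([25, 50, 100, 200, 400])
a1 = np.array([0.70, 0.58, 0.42, 0.27, 0.12])
inhib = percent_inhibition(a0, a1)

# Concentration at 50% inhibition, by linear interpolation (inhibition values must be increasing).
ic50 = np.interp(50, inhib, conc)
print("% inhibition:", np.round(inhib, 1))
print(f"Estimated IC50 ~ {ic50:.0f} ug/mL")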
All the macroscopic and microscopic results were processed using Excel 365 software (version 2401) from Microsoft and expressed as minimum, maximum, mean ± SD, and median, except for the determination of the stomatal index. Pearson's correlation test was used to establish the correlation between TPC, TFC, TCTC, and the antioxidant assays (DPPH, FRAP).

C. iripa is a medicinally important mangrove species, and its different parts are traditionally used to treat various ailments in Bangladesh and India. Only a few studies concern the quality, safety, and efficacy of Cynometra species. This study is the first to establish quality parameters for C. iripa leaf and C. iripa bark as herbal medicines. C. iripa extracts were found to be a good source of phenolic derivatives, mainly proanthocyanidins, which are believed to be responsible for their antioxidant activity. Our next step will be a deeper phytochemical investigation to identify and isolate additional secondary metabolites, together with pharmacological studies to clarify their traditional use.
Combining Congenital Heart Surgical and Interventional Cardiology Outcome Data in a Single Database: The Development of a Patient-Centered Collaboration of the European Congenital Heart Surgeons Association (ECHSA) and the Association for European Paediatric and Congenital Cardiology (AEPC) | 668780b4-3369-44a7-a01a-57989fa22eb3 | 10411030 | Internal Medicine[mh] | It is well recognized that optimal care of patients with congenital heart disease (CHD) requires a multidisciplinary approach centered around the needs of the patient. Management of the patient may involve various interventional cardiology procedures, surgical operations, or even combined hybrid procedures, and frequently more than once during the life of the patient. Outcomes depend on multiple factors including the complexity of the disease itself, other patient-related factors including concomitant pathologies and comorbidities, the clinical status of the patient, and factors external to the patient that are related to the available resources and organization of the health care team. The complex interactions of all these features render the evaluation of outcomes sometimes challenging. It is also clear that the determination of outcomes increasingly should involve not only tracking mortality but also, perhaps most importantly, various complications (many of which are of a general nature, while others are procedure-specific), and other quality metrics. Efforts to evaluate the benefit to our patients of operations and transcatheter procedures depend on collecting the relevant information in well-organized databases with a high degree of participation and coverage, a task which requires the existence of a common nomenclature to be used by all data contributors and the development of appropriate analytical tools. The European Congenital Heart Surgeons Association (ECHSA) Congenital Database (CD) is the second largest clinical pediatric and congenital cardiac surgical database in the world and the largest in Europe, where various smaller national or regional databases exist. ECHSA-CD has also recently developed powerful artificial intelligence and machine learning–based methodologies that will enhance the art and science of pediatric and congenital cardiac outcomes analysis. On the other hand, despite the remarkable increase in interventional cardiology procedures over recent years, only scattered national or regional databases of such procedures exist in Europe. To date, few national databases combine both cardiac surgical and interventional cardiology data in the same database. Most importantly, however, no congenital cardiac database exists in the world that seamlessly combines both surgical and interventional cardiology data on an international level; therefore, the outcomes of surgical and interventional procedures performed on the same or similar patients cannot easily be tracked, assessed, and analyzed. In order to fill this important gap in our capability to gather and analyze information on our common patients, ECHSA and The Association for European Paediatric and Congenital Cardiology (AEPC) have embarked on a collaborative effort to expand the ECHSA-CD with a new module designed to capture data about interventional cardiology procedures.
The purpose of this manuscript is to describe the concept, the structure, and the function of the new AEPC Interventional Cardiology Part of the ECHSA-CD, as well as the potentially valuable synergies provided by the shared interventional and surgical analyses of outcomes of patients.
The AEPC ( https://www.aepc.org/ ) was founded in 1963. Currently, more than 1,200 members are organized in a network of specialists who are committed to the practice and advancement of Congenital Cardiology and closely related fields. The AEPC members originate from 32 European countries, but there are several members from outside Europe, too. The mission of AEPC is ( https://www.aepc.org/our-mission ): “(a) Knowledge of the normal and diseased heart and circulation in a growing individual and (b) Exchange of expertise between experts from Europe and globally and (c) Continuous medical education (d) Harmonizing training in Paediatric Cardiology and its subspecialties in Europe. This is done by means of creating European recommendations for training and by organizing several Teaching Courses for Fellows in training.” Several working groups represent the different aspects of diagnosis and treatment of congenital cardiac patients from fetal life to geriatric age. The 14 working groups of AEPC are responsible for the development of education, training, and exchange of knowledge within the different subspecialties. One important subspecialty is organized in the Interventional Working Group, where current knowledge and new developments are frequently shared. Apart from pediatric and adult cardiologists, several cardiac surgeons are also members. The interdisciplinary collaboration is also reflected in the relationship with other organizations focusing on the care of patients with congenital heart disease. The ECHSA ( https://www.echsa.org/ ) arose in 2003 following the renaming of its parent society, the European Congenital Heart Surgeons Foundation, which had been established in 1992. The development of the congenital cardiac surgical database began in 1994. ECHSA-CD was initially named the European Congenital Heart Defects Database, was renamed the European Association for Cardio-Thoracic Surgery Congenital Database (EACTS CD) in 1999, and acquired its final name of “ECHSA Congenital Database” in 2015, owned and directed by ECHSA; an accompanying figure documents this history of ECHSA-CD. Over the years, a strong collaboration and harmonization with The Society of Thoracic Surgeons (STS) Congenital Heart Surgery Database (CHSD) was maintained by using a common nomenclature and common data structure and fields. For standardization, the International Pediatric and Congenital Cardiac Code (IPCCC) is used for coding. The translation into various languages enables further integration of the international community of pediatric and congenital cardiac care. Although based in Europe, ECHSA-CD is a worldwide database open to everyone. By May 2023, data pertaining to 303,892 patients and 358,052 operations had been collected. The database functions allow users to create customized online reports on subgroups of patients and procedures. Verification of the completeness and accuracy of the data in ECHSA-CD is performed utilizing “source data verification.” The technical details of “source data verification” have been previously published and include an audit of the data at individual hospitals (for both completeness and accuracy), with comparison of the data in ECHSA-CD to the primary source of the data at the hospital (eg, hospital operative logs and hospital medical records). (This process of “source data verification” that is currently applied to ECHSA-CD will continue to be applied in ECHSA-CD and will also be applied to the AEPC Interventional Cardiology Part of the ECHSA-CD.)
The aims of collecting data with ECHSA-CD across Europe on the outcomes of congenital cardiac surgery procedures are multifactorial and include: measuring and assessing quality; providing a platform for benchmarking individual and programmatic results in comparison to national and international aggregate data (in the domains of mortality and morbidity); determining risk factors; improving quality; generating new knowledge, in other words, research; and enabling predictive statistical analysis according to pathologies and procedures from various centers and countries, helping to define official European standards available for the scientific community and health care. The research publications of ECHSA-CD are summarized on the official website of ECHSA ( https://echsacongenitaldb.org/ ). For transcatheter interventions in patients with CHD, several local and national databases exist. These local and national databases differ markedly, even if they all have the same goals of quality assessment and quality improvement. In some countries, only a minimal dataset (eg, the age of the patient at the procedure and the type of procedure) is collected, while in others, the depth of data collection is remarkable. Many local databases are programmed by single specialists in information technology, and not all of these databases can be considered user-friendly. The reports which can be obtained from these databases differ. While some allow benchmarking of outcomes (ie, National Institute for Cardiovascular Outcomes Research [NICOR] in the United Kingdom [ https://www.nicor.org.uk/ ]), others allow pure counting of procedures, sometimes with, and sometimes without, tracking specified complications. One of these national databases allows a comparison of key quality indicators and key procedural performance indicators about transcatheter interventions with cardiac surgical data. Importantly, none of these databases allows direct comparison of data on an international level about transcatheter interventions with cardiac surgical data, which is especially valuable for diseases that are treatable both by transcatheter and surgical intervention. For several years, the AEPC Interventional Working Group had plans to develop a European database meeting the following criteria:

Data can be entered in a user-friendly manner.

The database will allow both the entry of basic data alone and the entry of in-depth data, according to the needs of each single center or the specific national requirements for quality control.

The database will generate reports of the outcomes of specified procedures.

The database will generate reports of own center data, as well as reports of national and Europe-wide data.

The ECHSA-CD had also long desired to include data about interventional cardiology in their analyses, and several early preliminary discussions of cooperation with the AEPC had taken place. By 2015, cooperation of ECHSA and AEPC had matured, with the ECHSA Secretary General George E. Sarris, MD representing the surgical community in the AEPC Council. ECHSA, under the leadership of Jose Fragata, proposed collaboration on development of an AEPC Interventional Cardiology Part of the ECHSA-CD, in the context of the existing ECHSA-CD, and the AEPC, under the leadership of Gurleen Sharland, officially accepted.
Development of an AEPC Interventional Cardiology Part of the ECHSA-CD in the context of the existing ECHSA-CD has multiple potential advantages:

The legalities of data protection according to different national laws have already been addressed and solved by ECHSA.

The process of data verification is already established.

A large body of surgical data already exists in ECHSA-CD, harmonized with the data in STS CHSD.

Refined data assessment tools have been developed and are available.

Multiple extant scientific publications demonstrate the scientific power of ECHSA-CD.

The collaboration between AEPC and ECHSA and the addition of data about interventional cardiology to ECHSA-CD creates the only congenital cardiac database in the world with combined, detailed data about congenital interventional cardiology and congenital cardiac surgery. The AEPC Interventional Cardiology Part of the ECHSA-CD will create important and unique opportunities for post-market surveillance of implanted devices, which will be especially useful with new European Union initiatives and regulations related to post-market surveillance of medical devices. The addition of an AEPC Interventional Cardiology Part of the ECHSA-CD to the ECHSA-CD represents a European database collaborative effort supported by the two major and well-established European scientific associations (AEPC and ECHSA) working on quality improvement for the treatment of our common patients with CHD. Since a large amount of surgical data spanning more than two decades is already available in ECHSA-CD, outcome assessment, benchmarking, and quality assurance programs will be facilitated. To realize the agreed collaborative goal, an AEPC representative (TK) was selected by the AEPC Council and appointed by ECHSA as a member of the ECHSA Database Committee, as a liaison with the AEPC Interventional Working Group, with the following objectives: to define the specific goals of the project, to select and define the data fields to be collected, to select and define the outcomes to be tracked, and to design the implementation steps.
In 2019, the first meetings took place involving the ECHSA Database Committee with the new AEPC representative. During these initial meetings, the following objectives were completed and the following decisions were made:

The needs of the new AEPC Interventional Cardiology Part of the ECHSA-CD were established.

Mandatory and optional demographic data were identified.

The decision was made to use IPCCC nomenclature for all diagnoses.

Potential interventional treatments to monitor were considered.

The decision was made to assure appropriate linkage between diagnosis and the corresponding potential intervention.

Possible complications associated with these diseases or their associated specific interventions were defined. These complications may cause a deviation from the desired course or may be associated with suboptimal outcome.

Procedure-related data such as radiation dose and time of exposure will be collected.

Both interventional cardiac catheterizations and diagnostic cardiac catheterizations will be recorded.

Each component procedure of multicomponent interventions will be entered into the new AEPC Interventional Cardiology Part of the ECHSA-CD.

Outcome data will consist of intervention success, related morbidity, and mortality. Mortality will continue to be defined in all parts of ECHSA-CD, including the new AEPC Interventional Cardiology Part of the ECHSA-CD, as Operative Mortality, using the standard definition of Operative Mortality currently used in ECHSA-CD and STS CHSD.

In ECHSA-CD, postoperative length of stay is currently calculated as the amount of time between the completion of the operation and discharge from the hospital. In the new AEPC Interventional Cardiology Part of the ECHSA-CD, postprocedural length of stay will be calculated as the amount of time between the completion of the interventional procedure and discharge from the hospital.

Follow-up data can be added. After 30 days and 90 days post-intervention, the interventional team is reminded by a pop-up window to enter these follow-up data.

The Appendix provides a Quick Users' Guide that includes multiple screen captures of the user-friendly data entry interface that was developed for the AEPC Interventional Cardiology Part of the ECHSA-CD. Of note, this user interface is the same user interface that cardiac surgeons have used in ECHSA-CD for 22 years. Demographic data are comparable to the surgical dataset. A specific patient code will be created for each patient, anonymizing the data completely. Only this code is submitted to the server, while identifiable patient specifics remain locally stored (a sketch of one possible client-side scheme of this kind appears after the agreement summary below). An accompanying table documents the preliminary list of fields of data collection in the AEPC Interventional Cardiology Part of the ECHSA-CD. A User Manual to the AEPC Interventional Cardiology Part of the ECHSA-CD will be published by the AEPC Interventional Working Group in collaboration with the ECHSA Database Committee. Feedback reports will be developed collaboratively according to the needs of the AEPC Interventional Working Group and the ECHSA Database Committee. Once the AEPC Interventional Cardiology Part of the ECHSA-CD is operational, all patients with pediatric and/or congenital heart disease at a participating institution undergoing cardiothoracic surgery and/or interventional cardiology will be entered into ECHSA-CD:

Patients can be entered into the AEPC Interventional Cardiology Part of the ECHSA-CD even if they have never had cardiothoracic surgery.
Patients can still be entered into ECHSA-CD even if they have never undergone an interventional cardiology procedure.

Patients who have had both surgery and an interventional cardiology procedure will have data for both their surgical and interventional cardiology procedures entered into ECHSA-CD.

The initial data entry interface was checked by several interventional cardiologists for consistency and ease of data entry using fictitious patients. The feedback from initial data entry was utilized to optimize data entry, reporting structure, and data verification. A key feature of the new AEPC Interventional Cardiology Part of the ECHSA-CD is that own center reports can be obtained with one click, including the following information:

Demographic data

Interventions carried out in different age groups

Types of interventions

Radiation data

Outcome data

Follow-up data

This report should meet all quality requirements imposed locally, regionally, and nationally. Furthermore, procedure-specific reports can benchmark own center results against national data (if more than three centers carry out the specified procedure) and European data. Also, own center results can be compared to all entered data regarding a given procedure on an international level. Importantly, comparison to verified data only will also be possible. For international studies, the Council of AEPC has established a Steering Group, which will also be in dialogue with the ECHSA Database Committee and the ECHSA Research Committee. Thus, high-level quality data can be extracted and lead to high-impact publications. The first version of the data entry software is already functional, and updates and corrections are in the process of being implemented. Based on feedback from users of the new AEPC Interventional Cardiology Part of the ECHSA-CD, the data entry module will be continuously refined, and the structure of the Feedback Reports will be continuously customized. In December 2022, a contract was signed between ECHSA and AEPC that documented that AEPC and ECHSA agree to the following principles:

ECHSA and AEPC agree to develop the capability of ECHSA-CD to store and analyze data pertaining to pediatric and congenital cardiology catheter interventional procedures in the new “AEPC Interventional Cardiology Part of the ECHSA congenital database.” The governance, structure, and background of this collaborative initiative have been approved by AEPC and ECHSA and are detailed in this contract.

The AEPC Interventional Cardiology Part of the ECHSA-CD will provide: appropriate lists and definitions of procedures; relevant preintervention clinical, imaging, and/or pathophysiologic variables and related risk factors; as well as procedure outcomes, including measures of technical success, complications, and possibly follow-up.

The same vocabulary/definitions for encoding diagnoses will be used for cardiology cases as already used for surgical cases.

Within ECHSA-CD, cardiology data will be treated in the same fashion as surgical data and can be accessed by contributing cardiology centers and analyzed by the same rules which apply to the surgical centers: essentially, each cardiology center will have access to its own data and to cumulative anonymized data pertaining to the entire cardiology procedure dataset or to custom selected (“filtered”) subsets thereof.

ECHSA and AEPC will continue to cooperate to maintain and further develop the whole ECHSA-CD and the relevant data analytic tools.
The AEPC agrees to encourage its members to participate in the AEPC Interventional Cardiology Part of the ECHSA-CD.

AEPC will be acknowledged as an official ECHSA-CD partner on the ECHSA-CD website. Bidirectional links will be provided from the AEPC website to the ECHSA-CD website and vice versa.

The yearly fee per participating cardiology center will be the same as the fee for surgical centers, irrespective of the number of patients, admissions, or procedures entered. If a center contributes both surgical and cardiology data, a 10% discount on the annual center participation fee will be applied to each department (Surgery and Cardiology).

Any publications resulting from the database utilizing interventional cardiology data will include recognition of both AEPC and the ECHSA-CD. Authorship involving interventional cardiology publications will be decided by AEPC. Publications involving surgical and interventional data will have balanced authorship of surgeons and interventional cardiologists.

This agreement will be valid for the duration of two (2) years, after which the agreement will be reviewed. Each party shall have the right to terminate the agreement, with six months prior written notice to the other party.

Based on the formal agreement between AEPC and ECHSA that was signed on December 22, 2022, the new AEPC Interventional Cardiology Part of the ECHSA-CD is now operational, functional, and ready for the large-scale enrollment of patients. One can anticipate that this new part of ECHSA-CD will soon lead to important advances in pediatric and congenital cardiac care in the domains of patient care, research, and teaching, and that this new AEPC Interventional Cardiology Part of the ECHSA-CD will generate data that will be used to support multiple presentations at national and international scientific meetings, numerous peer-reviewed scientific publications, and, most importantly, feedback reports that allow benchmarking of individual programmatic results against national and international aggregate data.
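As an illustration of the client-side pseudonymization principle described earlier (only a generated code is transmitted, while identifiable data stay on the local machine), the sketch below shows one way such a scheme could work. It is purely hypothetical: the actual ECHSA-CD software and its internal scheme are not described in this article, and every name in the sketch is an assumption.

import json
import uuid
from pathlib import Path

# Hypothetical local mapping file; it is never uploaded to the central server.
MAPPING_FILE = Path("local_patient_codes.json")

def get_or_create_code(local_patient_id):
    """Return the anonymized code for a patient, creating one on first use.

    Only the returned code is submitted centrally; the mapping from real
    identifiers to codes remains stored locally at the hospital.
    """
    mapping = json.loads(MAPPING_FILE.read_text()) if MAPPING_FILE.exists() else {}
    if local_patient_id not in mapping:
        mapping[local_patient_id] = str(uuid.uuid4())
        MAPPING_FILE.write_text(json.dumps(mapping, indent=2))
    return mapping[local_patient_id]

# Example: the record sent to the server carries only the anonymized code.
record = {"patient_code": get_or_create_code("MRN-000123"), "procedure": "ASD device closure"}
print(record)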
With the addition of the new AEPC Interventional Cardiology Part of the ECHSA-CD to the ECHSA-CD, ECHSA-CD has become the first multi-institutional, multinational database dedicated to pediatric and congenital cardiac care that seamlessly combines data from surgical operations and transcatheter interventional cardiology procedures; therefore, ECHSA-CD provides a previously unavailable platform to improve pediatric and congenital cardiac care across the world. The new AEPC Interventional Cardiology Part of the ECHSA-CD will allow centers to have access to robust outcome data from their own center, as well as robust aggregate outcome data for benchmarking. Each contributing center or department (cardiology or surgery) will have access to their own data, as well as aggregate data from the AEPC Interventional Cardiology Part of the ECHSA-CD. The new AEPC Interventional Cardiology Part of the ECHSA-CD will allow cardiology centers to have access to aggregate cardiology data, just as surgical centers already have access to aggregate surgical data. These data will help to improve the quality of patient care and identify risks related to certain techniques. The ECHSA-CD and the new AEPC Interventional Cardiology Part of the ECHSA-CD are tools for research activities and for the further development of the fields of congenital heart surgery and transcatheter interventions. National and international benchmarking will set the level of the standard of care. The strengths of ECHSA-CD and the new AEPC Interventional Cardiology Part of the ECHSA-CD include the following features:

Use of a standardized international nomenclature (IPCCC),

Use of an established database software platform,

Use of established strategies for risk adjustment,

Use of proven methods of data verification,

Single access to both surgical and catheter interventional data,

The large volume of data in ECHSA-CD, and

The potential to track a single patient as this patient goes through various surgical and transcatheter interventional procedures during life.

Potential limitations and goals of the AEPC Interventional Cardiology Part of the ECHSA-CD include the following challenges:

Strategies of risk stratification and risk adjustment for interventional cardiology procedures will need to be developed, standardized, and matured. Over the course of time, additional pre-procedural factors will likely be added to the AEPC Interventional Cardiology Part of the ECHSA-CD in order to facilitate the development of tools for risk stratification and risk modeling.

Strategies of data entry for hybrid procedures will need to be developed (eg, surgical pulmonary valve replacement and distal pulmonary arterial stent insertion, and hybrid palliation of hypoplastic left heart syndrome).

Strategies of risk stratification and risk adjustment for hybrid procedures will need to be developed (eg, surgical pulmonary valve replacement and distal pulmonary arterial stent insertion, and hybrid palliation of hypoplastic left heart syndrome).

Strategies will need to be developed to determine the primary interventional cardiology procedure when more than one interventional cardiology procedure is performed during the same intervention (eg, combined atrial septal defect device closure and pulmonary arterial balloon dilation or stent insertion, or other combinations of transcatheter procedures).

Neither ECHSA-CD nor the AEPC Interventional Cardiology Part of the ECHSA-CD currently serves as a platform for longitudinal follow-up.
A future goal of both ECHSA-CD and the AEPC Interventional Cardiology Part of the ECHSA-CD is to include a longer-term follow-up module. It is an absolute fact that, of all the information we currently lack, consistent and structured follow-up data are at the top of the list.
The new AEPC Interventional Cardiology Part of the ECHSA-CD will allow centers to have access to robust surgical and transcatheter outcome data from their own center, as well as robust aggregate outcome data for benchmarking. Comparison of surgical and catheter interventional outcomes will strengthen decision processes. A study of the wealth of information collected in the database will also contribute toward improved early and late survival, as well as enhanced quality of life of patients with congenital heart disease treated with surgery and interventional cardiac catheterization across Europe and the world. In the final analysis, the addition of the new AEPC Interventional Cardiology Part of the ECHSA-CD to the ECHSA-CD transforms ECHSA-CD into the first multi-institutional, multinational database dedicated to pediatric and congenital cardiac care that seamlessly combines data from surgical operations and transcatheter interventional cardiology procedures; therefore, ECHSA-CD provides a previously unavailable platform to improve pediatric and congenital cardiac care across the world.
Supplemental Material: sj-docx-1-pch-10.1177_21501351231168829. Supplemental material for this article is available online.
|
70ba5d39-c999-4ede-95e3-91d4335e649e | 9874716 | Anatomy[mh] | INTRODUCTION Medical school curricula, since the time of Flexner, have progressively evolved by adding, reducing, or in some instances eliminating topics and subject matter. The rate at which these changes occur is not steady, but tends to mirror advances in numerous fields of knowledge. Changes to curricula are most evident following the development and introduction of new or better treatments for specific disorders, advances in diagnostic methods and techniques, or more recently, an increased recognition of and sensitivity to personal and cultural issues. Other influences on medical education have arisen from the domain of pedagogy. Changes related not to what is taught, but how it is taught, have also had an effect on medical curricula. Approaches such as problem-based learning, flipped classroom techniques, and other methodologies shift instruction from faculty-directed formats to more student-centered learning environments. These are now part of the curricular landscape. Historically, curricular changes based on scientific advances related to disease origin, mechanism, or therapeutic approaches have been supported by evidence developed in support of those advances. Changes in teaching approaches likewise are typically accepted and implemented based on reports of successful usage in educational settings, some medical and some with other groups of learners. In the past, curricular changes related to teaching approaches have been comparatively gradual, allowing time for review of the evidence presented in support of the change. In some institutions, new approaches are readily and enthusiastically adopted. In others, for a variety of reasons, certain approaches may not be feasible or desirable. The speed with which the recent COVID pandemic spread across the country forced medical school administrators and faculty to implement curricular changes on short notice with comparatively little time to address recommendations and mandates for protecting individuals and limiting the spread of disease. Among the mandates were requirements for adequate face coverings and social distancing. The need for social distancing has been particularly challenging for some schools, particularly those with large class sizes in which lectures are delivered in rooms and lecture halls with seating arrangements in which students are in relatively close proximity. An additional challenge has been the need to provide educational materials and content to students who became infected and were required to enter a period of quarantine. To address this need, additional print and electronic resources were identified, with links to our educational platform. Several ZOOM sessions were developed to provide additional direct access to the faculty. Since the appearance of COVID in 2020, virtually all schools have implemented strategies designed to protect their students and faculty and maintain effective educational offerings. Descriptions of curricular modifications can be found in both the general medical and discipline-specific scholarly literature. Most of these reports (Baptiste; Cheng et al.; Das & Mushaiqri; Flynn et al.; Harmon et al.; Longhurst; Moszkowicz et al.; Patra et al.; Pather et al.; Singal et al.; Srinivasan; Tucker & Anderson; Zarcone & Saverino) are descriptive in nature, outlining specific changes and modifications specific to their particular program.
Only a few describe the effectiveness of their changes in terms of performance data and other measurable outcomes. Of these, Syed et al. found no significant differences between men and women in either stress levels or examination grades in their brain and behavior module. Brakora et al. compared test scores for one histology and one gross anatomy examination before and after a change to online lectures and found essentially no significant differences among groups. Grand et al. found no significant differences in scores in a renal course after switching from a traditional to a remote format. Finally, Smith reported no significant differences in examination scores in a pharmacology course following a shift to a virtual/online curriculum. In contrast, Andersen et al. reported that while 77% of first year students scored above the national average on their first five examinations prior to COVID, only 55% of first year students did so following COVID. These authors noted also that students rated their mental health and relationships lower after COVID than before. These conflicting observations prompted us to examine whether, and to what extent, changes made to our anatomy course at the Virginia Tech Carilion School of Medicine (VTCSOM) might have affected performance on our anatomy examinations. We briefly describe our pre‐COVID anatomy curriculum and our post‐COVID course modifications and then compare student examination scores for 4 years before COVID to those of students during 2 years after the implementation of these changes. We discuss our findings in relation to factors that we believe affected student behavior and examination performance.
MATERIALS AND METHODS

2.1 Pre-COVID anatomy curriculum

Prior to academic year 2020–2021, when changes were implemented to address personal safety and social distancing requirements associated with the COVID pandemic, our anatomy curriculum was delivered during each of four "Blocks" of instruction during the first year. Each Block was 10 weeks in length, with the first 8 weeks composed of instruction in the form of lectures and laboratory sessions. Summative examinations were scheduled during Week 9, with Week 10 reserved for remediation of deficiencies (failures) based on performance on the End of Block anatomy examination. Each Block included a total of 32 scheduled contact hours for instructional purposes during the first 8 weeks and 2 h during Week 9 for the administration of the End of Block summative anatomy examination. A total of 120 h was scheduled for instruction in anatomy over the course of the year. Anatomy sessions were scheduled on Thursdays between 8:00 am and 12:00 noon during academic years 2016–2017 through 2019–2020 and were shifted to Tuesday mornings beginning in academic year 2020–2021. Instructional activities included time allocated for cadaver dissection, traditional live lectures, and dry laboratory sessions, as described below. Students were provided with a VTCSOM Anatomy Guide & Workbook, which included dissection instructions, clinical correlation material, imaging challenges, and daily self-study review questions. A practice examination identical in format to the summative examination was administered during the last week of each Block. The anatomy session content and the non-anatomy basic science session content of each Block were organized to provide topic reinforcement between the two components of the basic science curriculum. For example, when cardiac and pulmonary physiology and pharmacology were being considered during the non-anatomy sessions of Block II, the anatomy sessions during that time were focused on the heart and lungs. The teaching faculty included four core individuals, all of whom are clinicians from different areas of practice (chiropractic, emergency medicine—trauma surgery, physical therapy, and radiology) who had participated in the course for the previous 6 years. Assistance was provided in both the lecture and laboratory components by members of our clinical faculty from Carilion Clinic with expertise in particular areas during the course.

2.2 Pre-COVID cadaver dissection laboratory sessions

During the pre-COVID years, our 42 students were grouped into teams of three or four students, each team being assigned to one of 12 cadaver dissection tables. Each student in the group had an assigned responsibility. One or two performed the dissection, one read the instructions in the VTCSOM Anatomy Guide & Workbook, and another was responsible for finding appropriate images in the atlas and looking up information in reference material, including the recommended textbook. Group membership was changed for each of the four Blocks of the course. The VTCSOM Anatomy Guide & Workbook (the Guide) for each of the four Blocks was written by the faculty to meet the specific requirements and time constraints of our curriculum. It was provided in hard copy to each student and posted on our educational platform at the beginning of each Block. More than 80% of the individual tasks described in the Guide included an associated short answer question printed in italics and referred to as the italics questions.
These questions focused on different but relevant information designed to facilitate a more complete understanding of the body regions under study. The questions required the student to actively seek out information related to the dissection task beyond simple identification of a particular structure. The purpose of the italics questions was to prompt interactive discussion and learning among the group members. One student in the group would be responsible for finding the answers to these questions using print or electronic resources available in the laboratory and for explaining and sharing this information with the others in the group. Frequently, students would call upon a faculty member to help answer the question at the dissection table, thereby turning dissection sessions into brief teaching opportunities. All four core faculty members were present during each laboratory session.

2.3 Pre-COVID lecture and dry laboratory sessions

Fifty-minute live faculty lectures were delivered to the entire class at 8:00 am each class day. Lectures typically focused on structural and functional topics related to the dissection activities scheduled for later that morning. Emphasis was placed on more conceptually difficult aspects of the structures and regions being studied during a particular Block. Other lectures focused on anatomy as encountered using various imaging approaches. Lecture materials (e.g., MS PowerPoint slides and supplementary material) were posted prior to the beginning of each Block to allow for preparation before each session. Lectures were recorded and posted to our educational platform by the end of the day. Dry laboratory sessions included small group, hands-on activities with skeletal material and models, and other sessions designed as applied anatomy workshops. These latter sessions consisted of small group exercises with students using their peers as subjects. In these sessions, students would become familiar with anatomical structures, relationships, and functions as they might be encountered in living individuals. The applied anatomy activities involved learning human anatomy (e.g., the texture and position of the thyroid gland, or the position and relationships of the radial artery at the wrist) by visual inspection, palpation, and auscultation. Many of the applied anatomy exercises were modeled after techniques and procedures used in the typical physical examination, with the focus being placed on anatomical structures and relationships rather than on diagnostic or therapeutic implications of elicited findings (McNamara & Nolan).

2.4 Pre-COVID student assessment in anatomy

2.4.1 Formative assessments

Formative assessments during the pre-COVID years included the italics questions described above and approximately 10–15 short answer questions included in the Guide for each Block. Performance on these questions was not factored into the final Block score. A 1-h practice examination comprising 25 questions similar in format to the summative examination was administered during Week 8 of each Block. All students were invited to attend the practice examination session. Questions were projected in the classroom with time provided to think about each question and arrive at an answer. Correct answers were then revealed, and students were encouraged to ask questions if they answered incorrectly or if they felt unsure about the concept being tested.
2.4.2 Summative assessment

The End of Block summative anatomy examination comprised approximately 50 questions written by the faculty to include a balanced number of questions addressing the stated learning objectives for the Block. The questions included an equal mixture of multiple-choice questions (MCQs) and single answer fill-in-the-blank (FIB) questions. Approximately half of the questions of each type included an anatomical image with a single arrow to direct the student's attention to the focus of the question. Questions with images were of a variety of types, ranging from lower-level questions such as "Name the structure marked by the tip of the arrow." (FIB) or "Which of the arteries listed below perfuses the structure marked by the tip of the arrow?" (MCQ), to higher order questions such as "Which of the following clinical findings would most likely be observed in a patient with injury involving the structure marked by the tip of the arrow?" (MCQ) or "On which side and in which intercostal space is the pulmonary valve best auscultated?" (FIB). Questions without images were likewise formatted as either MCQ or FIB type questions. The majority of both types of questions were constructed as clinical vignettes using NBME guidelines. End of Block anatomy examinations were administered using Exam-Soft© and scored using Exam-Score© technology. Success in the basic sciences component of the curriculum was based on performance on an End of Block examination for which students received a grade of pass or fail. The examination was composed of two parts: a 50-question anatomy examination and an approximately 150-question examination comprising questions obtained through the Customized Examination Program of the NBME focusing on the non-anatomy basic science content of the Block. The two parts of the examination were administered separately. Overall performance on the End of Block basic science examination was calculated with 20% contributed by the anatomy examination and 80% from the non-anatomy basic science examination. This course structure and examination approach was used for 4 years prior to our COVID modifications, from academic year 2016–2017 to 2019–2020.
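As a minimal illustration of the 20%/80% weighting described above, the sketch below combines hypothetical anatomy and non-anatomy scores; the numbers are illustrative only and are not actual student results.

def end_of_block_score(anatomy_pct, non_anatomy_pct):
    """Overall End of Block score: 20% anatomy + 80% non-anatomy basic science."""
    return 0.20 * anatomy_pct + 0.80 * non_anatomy_pct

# Hypothetical examination results for one student.
print(f"End of Block score: {end_of_block_score(84.0, 76.5):.1f}%")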
Pre‐COVID anatomy curriculum Prior to academic year 2020–2021 when changes were implemented to address personal safety and social distancing requirements associated with the COVID pandemic, our anatomy curriculum was delivered during each of four “Blocks” of instruction during the first year. Each Block was 10 weeks in length with the first 8 weeks composed of instruction in the form of lectures and laboratory sessions. Summative examinations were scheduled during Week 9 with Week 10 reserved for remediation of deficiencies (failures) based on performance on the End of Block anatomy examination. Each Block included a total of 32 scheduled contact hours for instructional purposes during the first 8 weeks and 2 h during Week 9 for the administration of the End of Block summative anatomy examination. A total of 120 h was scheduled for instruction in anatomy over the course of the year. Anatomy sessions were scheduled on Thursdays between 8:00 am and 12:00 noon during academic years 2016–2017 thru 2019–2020 and were shifted to Tuesday mornings beginning in academic year 2020–2021. Instructional activities included time allocated for cadaver dissection, traditional live lectures and dry laboratory sessions described below. Students were provided with a VTCSOM Anatomy Guide & Workbook which included dissection instructions, clinical correlation material, imaging challenges and daily self‐study review questions. A practice examination identical in format to the summative examination was administered during the last week of each Block. The anatomy session content and the non‐anatomy basic science session content of each Block was organized to provide topic reinforcement between the two components of the basic science curriculum. For example, when cardiac and pulmonary physiology and pharmacology were being considered during the non‐anatomy sessions of Block II, the anatomy sessions during that time were focused on the heart and lungs. The teaching faculty included four core individuals, all of whom are clinicians from different areas of practice (chiropractic, emergency medicine—trauma surgery, physical therapy, and radiology) who have participated in the course for the previous 6 years with assistance provided in both the lecture and laboratory components by members of our clinical faculty from Carilion Clinic with expertise in particular areas during the course.
Pre‐COVID cadaver dissection laboratory sessions During the pre‐COVID years, our 42 students were grouped into teams of three or four students, each team being assigned to one of 12 cadaver dissection tables. Each student in the group had an assigned responsibility. One or two performed the dissection, one read the instructions in the VTCSOM Anatomy Guide & Workbook and another was responsible for finding appropriate images in the atlas and looking up material in reference material including the recommended textbook. Group membership was changed for each of the four Blocks of the course. The VTCSOM Anatomy Guide & Workbook (the Guide) for each of the four Blocks, was written by the faculty to meet the specific requirements and time constraints of our curriculum. It was provided in hard copy to each student and posted on our educational platform at the beginning of each Block. More than 80% of the individual tasks described in the Guide included an associated short answer question printed in italics and referred to as the italic's questions. These questions focused on different, but relevant information designed to facilitate a more complete understanding of the body regions under study. The questions required the student to actively seek out information related to the dissection task beyond simple identification of a particular structure. The purpose of the italics questions was to prompt interactive discussion and learning among the group members. One student in the group would be responsible for finding the answers to these questions using print or electronic resources available in the laboratory and for explaining and sharing this information with the others in the group. Frequently, students would call upon a faculty member to help answer the question at the dissection table, thereby turning dissection sessions into brief teaching opportunities. All four core faculty members were present during each laboratory session.
Pre‐COVID lecture and dry laboratory sessions Fifty‐minute live faculty lectures were delivered to the entire class at 8:00 am each class day. Lectures typically focused on structural and functional topics related to the dissection activities scheduled for later that morning. Emphasis was placed on more conceptually difficult aspects of the structures and regions being studied during a particular Block. Other lectures focused on anatomy as encountered using various imaging approaches. Lecture materials (e.g., MS Power Point slides and Supplementary Material) were posted prior to the beginning of each Block to allow for preparation before each session. Lectures were recorded and posted to our educational platform by the end of the day. Dry laboratory sessions included small group, hands on activities with skeletal material and models, and other sessions designed as applied anatomy workshops. These later sessions consisted of small group exercises with students using their peers as subjects. In these sessions students would become familiar with anatomical structures, relationships and functions as they might be encountered in living individuals. The applied anatomy activities involve learning human anatomy (e.g., the texture and position of the thyroid gland, or the position and relationships of the radial artery at the wrist) by visual inspection, palpation and auscultation. Many of the applied anatomy exercises were modeled after techniques and procedures used in the typical physical examination with the focus being placed on anatomical structures and relationships rather than on diagnostic or therapeutic implications of elicited findings (McNamara & Nolan, ).
Pre‐COVID student assessment in anatomy 2.4.1 Formative assessments Formative assessment during the pre‐COVID years included the italics questions described above and approximately 10–15 short answer questions included in the Guide for each Block. Performance on these questions was not factored into the final Block score. A 1‐h practice examination comprised of 25 questions similar in format to the summative examination was administered during Week 8 of each Block. All students were invited to attend the practice examination session. Questions were projected in the classroom with time provided to think about each question and arrive at an answer. Correct answers were then revealed and students were encouraged to ask questions if they answered incorrectly or if they felt unsure about the concept being tested. 2.4.2 Summative assessment The End of Block summative anatomy examination was comprised of approximately 50 questions written by the faculty to include a balanced number of questions addressing the stated learning objectives for the Block. The questions included an equal mixture of multiple‐choice questions (MCQs) and single answer fill‐in‐the‐blank (FIB) questions. Approximately half of the questions of each type included an anatomical image with a single arrow to direct the student's attention to the focus of the question. Questions with images were of a variety of types ranging from lower‐level questions such as "Name the structure marked by the tip of the arrow." (FIB) or "Which of the arteries listed below perfuses the structure marked by the tip of the arrow?" (MCQ), to higher order questions such as "Which of the following clinical findings would most likely be observed in a patient with injury involving the structure marked by the tip of the arrow?" (MCQ) or "On which side and in which intercostal space is the pulmonary valve best auscultated?" (FIB). Questions without images were likewise formatted as either MCQ or FIB type questions. The majority of both types of questions were constructed as clinical vignettes using NBME guidelines. End of Block anatomy examinations were administered using Exam‐Soft© and scored using Exam‐Score© technology. Success in the basic sciences component of the curriculum was based on performance on an End of Block examination for which students received a grade of pass or fail. The examination was composed of two parts: a 50‐question anatomy examination and an approximately 150‐question examination comprised of questions obtained through the Customized Examination Program of the NBME focusing on the non‐anatomy basic science content of the Block. The two parts of the examination were administered separately. Overall performance on the End of Block basic science examination was calculated with 20% contributed by the anatomy examination and 80% from the non‐anatomy basic science examination. This course structure and examination approach was used for the 4 years prior to our COVID modifications, from academic year 2016–2017 to 2019–2020.
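To make this weighting explicit, the End of Block basic science score can be written as a weighted sum of its two components. The formulation below is a minimal sketch, assuming both parts are reported on the same 0–100 percentage scale; the example scores of 70 (anatomy) and 85 (non‐anatomy) are hypothetical.

\[ S_{\text{Block}} = 0.20 \cdot S_{\text{anatomy}} + 0.80 \cdot S_{\text{non-anatomy}} \]

\[ \text{Example: } S_{\text{Block}} = 0.20 \cdot 70 + 0.80 \cdot 85 = 14 + 68 = 82 \]

Under this weighting, even a large change in the anatomy score moves the overall result by only one fifth of that amount, a point relevant to the study‐prioritization behavior discussed later in this article.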
CURRICULAR CHANGES IN RESPONSE TO COVID‐19 With the COVID pandemic taking full effect in the spring of 2020, the anatomy faculty undertook a review of the anatomy curriculum during the summer of 2020 to determine how best to implement new federal and institutional mandates for academic year 2020–2021. Our review addressed the lecture and dry laboratory components, our dissection laboratory sessions and our assessment approaches and materials. We also addressed modifications needed to accommodate an increase in class size from 42 students to 49 students. 3.1 Post‐COVID modifications to the cadaver dissection laboratory sessions Institutionally approved laboratory safety and utilization policies, which had been in place since the laboratory was initially opened in 2014, were updated to mandate both face masks and face shields for all users at all times while in the dissection laboratory. Spacing between cadaver tables was increased from approximately 6 feet in prior years to 12 feet for academic years 2020–2021 and 2021–2022. In addition, further distancing measures included usage of every other dissection table in the laboratory. Maximum capacity for the dissecting laboratory, which prior to COVID was 65 individuals, was reduced to 32 persons. To address these space limitations, the class was divided into two groups of 24 students (group A) and 25 students (group B). Scheduling of dissection and dry laboratory sessions was arranged to ensure that students in both groups had identical amounts of time for these activities. Because of the hands‐on nature of the applied anatomy workshops, these sessions were eliminated from the schedule. Time allocated for laboratory activities, which previously had been 3 h per week for all students over an 8‐week Block (24 total hours per Block), was reduced to 90 min per group per week (12 total hours per student per Block). Group A dissected from 8:00 to 9:30 am and group B dissected from 10:00 to 11:30 am. The 30 min between 9:30 and 10:00 am was used to clean areas in the laboratory used by students and to perform routine maintenance of the cadavers. The number of cadavers used was reduced from 12 in prior years to 7 in academic year 2020–2021, and increased to 14 in 2021–2022, with either 3 or 4 students assigned to each cadaver during each of the two dissection sessions. We continued to use prosected specimens prepared by 4th year students as part of our anatomy elective course requirements. Because of the reduction in weekly overall dissection time for each student, it was necessary to review the Guide to ensure that assigned dissection tasks could be accomplished within the time available. Based on this review, some less critical and more time‐consuming dissection activities were eliminated, retaining those judged by faculty consensus to be important for first year medical students. The elimination of some dissection tasks was accompanied by the elimination of the italics questions associated with those tasks, a result we were concerned about in light of the favorable responses we received previously regarding those questions. Among those dissection tasks deleted were those involving the hand, foot and face. These topics were, however, retained in the lecture series and recommended readings for the course. 3.2 Post‐COVID modifications to the lecture and dry laboratory sessions Chief among the challenges were restrictions related to social distancing.
Splitting the class into two groups for dissection left us with the challenge of what to do with those students who were not dissecting during the scheduled class time. Since the faculty would be in the dissection laboratory for two consecutive 90 min periods, they would not be available to deliver lectures or oversee dry laboratory activities. Our solution to this problem was to pre‐record lectures. In previous years, lectures had been delivered live and posted to the educational platform in voiced‐over MS Power Point and MP4 formats. Three new pre‐recorded lectures were added, one each on the hand, the foot and the face, to address those topics previously covered but now deleted from the laboratory schedule. Students were encouraged to view these lecture materials during the part of the morning while not in the dissection laboratory. The dry laboratory session activities were posted and students were asked to complete the exercises on Tuesdays when not in the dissection laboratory, in small group settings, adhering to appropriate social distancing and masking directives. Answers to questions in the Guide were included in an Appendix. The exercises were accomplished without the physical presence of an instructor, although all participating faculty were available by e‐mail to answer inquiries outside of scheduled laboratory time. Not infrequently, questions regarding these activities were raised with the faculty during the dissection laboratory sessions. We were careful to ensure that these activities could be accomplished within the time allocated for them. 3.3 Post‐COVID student assessment in anatomy 3.3.1 Formative assessments The practice examination administered during Week 8 and the italics questions used prior to COVID were continued for each Block during the two COVID years. In addition, we developed weekly quizzes composed of questions focusing on the material covered during the preceding week. These questions were developed to compensate in part for the loss of a number of italics questions, and to provide additional formative opportunities. The weekly quizzes were posted to the learning platform on Tuesday of each week following the laboratory session. The number of questions per week ranged from 8 to 15, resulting in over 330 questions for the entire course. In addition to providing answers to each question, we included explanatory comments indicating why the correct answer was correct and why an incorrect response was incorrect. Students were encouraged to utilize these questions as preparation for the weekly dissection laboratory session, as a review after completion of the session, or both. Performance on these questions was not factored into the final Block grade. 3.3.2 Summative assessment We continued to administer the same End of Block anatomy examinations that we used during the pre‐COVID years. The method for administering End of Block anatomy examinations was not altered as a result of COVID. Students continued to take the examination using their laptop in a room assigned for testing on the assigned day during examination week.
RESULTS Anatomy summative examination mean scores and ranges for all four Blocks for the 4 years prior to COVID and for the 2 years following implementation of COVID related changes to the anatomy curriculum are presented in Table . For Block I, the mean examination score for the 2 years following the implementation of COVID related changes was 70% with a range of 71 percentage points (Table ). For the 4 years immediately preceding COVID, the mean Block I examination score was 78% with a mean range of 44.75 percentage points (Table ). These results reflect an 8-percentage-point decline in mean performance in Block I following COVID, coupled with a roughly 26-percentage-point larger range of scores for these years. For Blocks II, III, and IV, the mean examination scores for the 4 years prior to COVID ranged from 81% for Block IV to 89% for Block III, with the ranges varying from 24.5 percentage points for Block III to 40.5 percentage points for Block IV (Table ). Following COVID, the average examination score ranged from 84% for Block IV to 87% for Blocks II and III, with ranges varying from 37 percentage points for Block III to 45 percentage points for Block IV.
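For clarity, the Block I comparison above can be stated explicitly in percentage points; the second line shows that the 26-point figure is the rounded value of 26.25.

\[ \Delta_{\text{mean}} = 78\% - 70\% = 8 \text{ percentage points} \]

\[ \Delta_{\text{range}} = 71 - 44.75 = 26.25 \approx 26 \text{ percentage points} \]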
DISCUSSION 5.1 Factors related to dissection laboratory sessions that might have affected examination scores During the two COVID years, face‐to‐face interaction during scheduled dissection time was significantly reduced from 24 h per student per Block prior to COVID to 12 h per student per Block for both years following COVID related changes. This reduction not only limited students' opportunities for learning from dissection and from interactions with their peers in this setting, but also eliminated time for live interactions with the faculty. This reduction may have affected the ability of some students to learn the material, particularly for topics that are conceptually difficult. Our data suggest, however, that if these reductions had an effect on examination performance in Block I, this influence of reduced dissection time was effectively overcome for Blocks II, III, and IV. Some dissection tasks, specifically those involving unpaired organs such as opening the chest and extracting the heart or opening the cranium and removing the brain, could be done only once. In these cases, only one group had the opportunity to perform the dissection. For several of these dissections, including brain removal and exposure of the spinal cord, we recorded and posted a video of the dissection, which the students who did not do the dissection could view during the same time frame in which the dissection was occurring. The likelihood that the poorer examination scores for Block I were the result of a situation wherein half of the class was unable to participate in the actual dissections was greatly reduced by the fact that the dissection sessions in Block I involved paired structures. One‐half of the students dissected the right upper and lower limbs while the other half of the class dissected the upper and lower limbs on the left side. Several dissections were eliminated (e.g., the hand, foot and face) in an effort to maintain a reasonable workload for the allocated time. It is possible that despite this reduction in workload, the new workload was greater than could be effectively managed during the assigned time. It is possible also that had we not reduced the number of dissection tasks, the average examination score might have been somewhat lower. 5.2 Factors related to lecture and dry laboratory sessions that might have affected examination scores Anatomy lectures, previously delivered live, were now presented in a pre‐recorded format. The usage‐tracking feature of our educational platform allowed us to determine how many times a particular lecture file was accessed, but it did not permit us to identify which students accessed those lecture files or whether lectures were watched completely from beginning to end. We therefore are not able to directly link use of the pre‐recorded lectures with individual student performance on the examinations. We do know that access to these recordings was much lower than we expected; however, we were not particularly surprised by this observation in light of a reduction in lecture attendance that we have seen over the past several years, including several years prior to the COVID related changes. We are unable therefore to attribute the decline in Block I examination performance to a shift from live to pre‐recorded lectures. Among the adaptations commonly employed in response to COVID was the shift from live lectures and class sessions to remote learning approaches. Live interaction with the faculty was limited to scheduled class time in the dissection laboratory.
Some students appreciate the ability to "attend lectures" without leaving their home or apartment, while others find this non‐traditional learning setting less than ideal, presenting distractions that may have affected their ability to concentrate and focus their efforts. Contemporary student preferences regarding class attendance are well known and range from those who describe themselves as "home schoolers" to those who identify themselves as "class attenders." We believe that some students may have adapted less well than others to limitations in time spent with the faculty, and this factor must be considered when searching for explanations for changes in student performance on examinations. Most students were able to perform the dry laboratory exercises independently, answer the brief associated questions and confirm their answers using a variety of available print and electronic resources. However, a few students who admitted to being less familiar with the material found that the activity would have been more effective had there been faculty present who could help with certain tasks and questions. Since these sessions involve activities similar to those used in the general physical examination, it is likely that students without some familiarity with these skills might have benefitted less from these activities than those with greater familiarity. Our previous experience with these sessions, with faculty present, fully supports this belief. Whether the change from live sessions with faculty guidance affected examination performance is difficult to determine. However, our data from Blocks II, III, and IV again suggest that this change did not affect examination performance. 5.3 Factors related to self‐assessment materials that might have affected examination scores The incorporation of weekly self‐assessment questions was a new addition to our COVID curriculum. Tracking features of our learning platform (Canvas) allowed us to monitor certain aspects of the utilization of these questions over the course of the 8‐week Block. Our data reveal that these questions were accessed only infrequently during the first 6 weeks of a Block, but increasingly during Weeks 7 and 8, suggesting that they were being used not as a means of immediate self‐assessment, but rather as a method for students to assess their cumulative understanding of the material as they approached the summative examination. This level of usage was seen in all four Blocks, suggesting that other approaches to examination preparation were more highly favored. Our data do not provide information on which students accessed the questions or how many times a particular student may have done so. The influence of the use of these questions on student performance is difficult to determine. Our data from Blocks II, III, and IV suggest that students were successful in adapting to the various challenges associated with the COVID modifications; however, we cannot directly attribute this success to the use of these additional formative assessment questions. Considerable time and effort were expended in the writing of the questions and their explanations. In light of the overall reduced time available for face‐to‐face interactions, the explanations provided with these questions may have served as an indirect, though valuable, means of providing instructional guidance for the students. Despite the absence of a measurable effect on student examination performance, we believe our work in developing these questions to have been worthwhile and beneficial.
5.4 Factors related to the summative assessments that might have affected examination scores For Block I, examination results during both years following COVID were characterized by a drop in the mean score and a marked increase in the range of scores (Table ). For Blocks II, III, and IV, summative examination scores and ranges varied little between the four pre‐COVID years and the 2 years following COVID related curricular changes, suggesting that factors and challenges that influenced performance during Block I were, for most students, identified and effectively overcome for later Blocks. Our data suggest that despite offering a practice examination during the final week of the Block, and the availability of weekly self‐assessment questions, students nonetheless performed less well on the Block I End of Block anatomy examination during the two post‐COVID years than during pre‐COVID years. Examination data from Blocks II, III, and IV suggest that factors that contributed to the decline in Block I performance were identified and successfully overcome. We did not make any additional changes to the anatomy curriculum for the second post‐COVID year beyond those made for the first post‐COVID year that might have brought examination scores back to pre‐COVID levels. Our observation of a similar decline in Block I performance during the second post‐COVID year, but not for Blocks II, III, and IV, suggests that factors unique to Block I continued to influence examination performance. Of these factors, we are able to identify several within the anatomy curriculum that may have contributed to this result. Those include reduced time to interact with faculty during the dissection laboratory, reduced scheduled dissection time per student, the substitution of pre‐recorded lectures for live lectures, and dry laboratory sessions without direct faculty participation. 5.5 Factors not related to anatomy that might have affected examination scores In addition to changes within the anatomy curriculum, other factors may have affected student performance on the Block I End of Block anatomy examination. The first set of examinations in medical school, the Block I examinations, typically represents a novel experience for most students. Students bring a variety of learning styles and study habits to medical school. Some of these approaches, while effective in prior educational settings, may be less effective in rigorous medical curricula where content may be heavy and available time may be relatively limited. Some students may rely more heavily on faculty‐centered instruction or need more time with a particular learning approach (e.g., cadaver dissection). Others may be hesitant to seek help and guidance from faculty who are new to them or may not take advantage of the various materials identified and/or developed by the faculty for their use. Some may have been guided by advice from peers or upperclassmen that may have been incorrect or not helpful for a particular student. Time management is not uncommonly a challenge for many medical students early in their careers. We believe it likely that a combination of these factors could have affected student performance in Block I to a greater extent than for Blocks II, III, and IV. While we recognize that the ability to quantitatively determine the influence of some of these factors is difficult, we are well aware from student comments on End of Block student surveys that these factors do influence student performance on examinations.
At VTCSOM, End of Block summative examinations are administered during Week 9 (examination week) of each Block of instruction during the 2 years of the preclinical curriculum. During this week, students take four separate summative examinations plus a single integrated case‐based examination comprised of information from the basic science and clinical science (i.e., physical examination) content of the Block. The anatomy examination is one of two parts of the basic science examination and represents 20% of the calculated score, with the non‐anatomy content valued at 80%. Given this grading differential, some students prioritize their study time for the basic science examination based on this formula, their argument being that it is better to spend more time preparing for the higher‐valued component of the examination. During Block I in particular, this strategy for an unfamiliar examination can be risky and may result in scores that are unexpectedly lower than what the student had hoped. Students who have taken this approach and failed the basic science examination not uncommonly report that they are unlikely to carry it forward to subsequent Blocks. An additional factor which we find to be frequently overlooked in discussions regarding examination performance relates to the first‐year medical curriculum overall. That is, what other courses do the students participate in concurrently, what are the time and effort commitments of those courses, what other summative examinations might students be taking, and at what intervals? When considering factors that can affect examination performance, it is necessary to recognize that these factors, particularly when they change suddenly and may be new or unfamiliar to the student as most certainly occurred in response to COVID, may increase or reshape the workload in such a way that performance is affected. The effects of change can be cumulative and may create new and challenging problems regarding time management and the allocation of effort. It is incumbent on the faculty of all courses running concurrently during a particular Block, term or semester to be aware of the expectations each may be placing on the students and to create learning objectives and activities that are achievable by the students. Failure to do so may place a level of stress on students that can interfere with effective learning. Our results over consecutive academic years, based on summative examination scores, demonstrate that despite course changes developed to accommodate COVID related safety mandates, students nonetheless scored well below their predecessors on their first anatomy examination. We found that scores on the three subsequent anatomy examinations during the remainder of the course were comparable to those for several years prior to implementation of COVID related changes. We attribute the decline in performance, which occurred on the first examination only, to an interaction of multiple factors, both within the anatomy curriculum and within the overall first year curriculum, that were novel and challenging for first‐year medical students. That the scores on subsequent examinations were comparable to scores obtained for pre‐COVID years indicates that students were able to successfully adapt to a modified learning environment.
Importantly, our results call attention to the multifactorial influences that can affect student performance on examinations in a novel and challenging curriculum, and the ability of the students to identify specific challenges and adapt to them successfully. We emphasize the importance of addressing the full spectrum of curricular changes and their interactive effects when attempting to link particular outcomes, in this case anatomy examination scores, with those changes.
LIMITATIONS This paper describes the changes made in one anatomy curriculum to incorporate safety mandates associated with the COVID pandemic. These changes were designed for a particular course within a particular medical curriculum. We recognize that medical curricula vary greatly in structure and organization, as do specific courses within the curriculum, and that our modifications might not be appropriate for other schools. We believe, however, that student responses to curricula that may be unfamiliar or novel are likely to differ based on a variety of factors, and that it is important for faculty to identify, understand and effectively address these issues in order to maintain the high expectations and level of success that both our students and the public expect of our medical education programs.
CONCLUSION The drop in the average student performance on the Block I End of Block anatomy examination following COVID related instructional modifications suggests that some students were less successful than others in adapting to the changes in the curriculum, including those made to the anatomy curriculum. Changes and modifications across the curriculum, likely in combination, contributed to the performance declines observed in Block I. Our data indicate that despite the challenges faced during the first Block of instruction, students were able to adjust their behavior and approaches for Blocks II, III, and IV, such that performance during these Blocks was comparable to that observed during the 4 years prior to implementation of COVID mandated changes. Our experience suggests that adaptations made in a single course may not fully explain changes in examination performance for that course. A return to pre‐COVID levels of performance, despite continuance of COVID related curricular changes, highlights the ability of students to adapt to challenges associated with a changing learning environment.
The Experts’ Advice: Prevention and Responsibility in German Media and Scientific Discourses on Dementia | de857ae7-9a15-44c0-9d6f-bd498b7fecdb | 8552391 | Health Communication[mh] | Those who think walking is stupid should not read this text, because they are beyond help anyway. Everyone else could still change their lives before it’s too late (or before they don’t even realize anymore). Neurodegenerative diseases—i.e. dementia—are insidious epidemics of our aging society. And lack of physical activity is a definite risk factor for their development . ( , p. 59) This remark was published in a leading German newspaper. In blaming the reader, oversimplifying scientific findings, and individualizing the responsibility for dementia risk reduction, it is emblematic of a characteristic tendency of media coverage on dementia in Germany. Over the last decade, the notions of dementia and cognitive decline in old age have changed: In light of the insight that lifestyle factors and preexisting conditions do influence the risk of developing dementia, cognitive decline and dementia are no longer perceived as a necessary part of aging or an inevitable fate. Instead, they tend to be understood as effects of individual lifestyle choices, which implies they can be delayed or possibly even prevented. In the context of a significant reconceptualization of Alzheimer’s disease and given the lack of effective curative treatment options , the focus of dementia research has shifted toward very early detection, early disease stages, and prevention . Current studies on public health communication and media coverage on cognitive aging and dementia have identified a strong emphasis on individual responsibility and lifestyle factors. People are positioned as being at risk for developing dementia and strongly encouraged to adopt a healthy way of life to prevent or delay cognitive decline ( ; see also ; ; ; ; ; ; ). Focusing on the social and moral implications of these paradigmatic shifts in the medical understanding of dementia and associated public communication, we present an analysis of medical science, nursing science, and media discourses on dementia in Germany. First, we provide a short overview of recent paradigmatic shifts in dementia and Alzheimer’s research as well as an outline of the current debates in social sciences regarding the interaction of current cultures of aging and dementia research. Then, we describe our sampling strategy and the discourse analytical approach we used to reconstruct the patterns of knowledge production and public communication on dementia in the fields of medical science, nursing science, and media. In the results, we show which notions of cognitive decline, dementia, and prevention shape the German scientific and media discourses, and we illustrate how framing and responsibility ascriptions differ between the different dementia discourses. We finally discuss how dementia risk communication interacts with contemporary social and health policies and in what ways current dementia discourses are associated with a (self-)responsibilization of cognitive aging.
From Cure to Prevention: Paradigmatic Shifts in Dementia and Alzheimer’s Research Despite massive research efforts, progress in the curative and effective symptomatic treatment of dementia has remained very limited in recent years . In the absence of effective pharmacological therapy options, the focus has shifted from research on treatment to risk reduction and prediction. Current research and public attention are focused on primary and secondary prevention as well as detection and prediction in very early or even presymptomatic disease stages . In 2017, a Lancet report sparked a broad public discussion on dementia prevention . The report showed that one out of three dementia cases could be prevented if nine risk factors were better managed. Similarly, the World Health Organization guidelines on risk reduction of cognitive decline and dementia highlight the potentials of preventive measures and risk management. As most of the discussed risk factors, such as hypertension, obesity, hearing loss, or diabetes, are treatable or modifiable, cognitive health is more and more understood as the outcome of efficient medical risk management and individual lifestyle choices (e.g., a Mediterranean diet, physical and mental activities, and social engagement). The causal relationship between risk factors and cognitive decline is, however, not well established, and the actual effects of each preventive measure on dementia rates are continuously disputed in medical research (e.g., ). The focus on prevention and risk reduction is connected to a novel understanding of AD. In recent years, a new biological conception of AD based on the underlying pathological processes has replaced the older syndromal definition of AD . This novel AD continuum theory assumes three stages of a slowly progressing disease. The first stage is characterized by a long asymptomatic phase without any symptomatic change; biomarker research is currently developing blood tests that can detect Alzheimer-specific protein changes at this stage, years before the first clinical symptoms appear. The disease then, in a second phase, enters a symptomatic stage involving subjectively experienced and objectively measurable mild cognitive impairment (MCI), which does not (yet) significantly affect daily activities. Eventually, in a third stage, AD develops into a clinical syndromic disease with an advanced pathology . However, the clinical usefulness of disease labels such as SCI (subjective cognitive impairment) and MCI—which describe the transitional state between normal cognitive performance and dementia—as well as the social and ethical implications of predictive diagnostics are controversial (for the controversy regarding MCI, see ; for the ethical debates on predictive testing, see ; ). Active Aging and Cognitive Health The reconceptualization of AD, the establishment of new disease labels like SCI and MCI, and the increased focus on dementia prevention coincide with a change of broader cultural images of aging and a general trend toward individual risk management. Over the past two decades, models of a well-deserved retirement and a passive and disengaged old age have been increasingly replaced by models of “productive”, “healthy”, “active”, and “successful aging” (see, for example, ). 
For German media and socio-political discourses, it has been shown how the cultural image of the age of retirement (dispensation from work, retreat, physical decline) was supplemented in the late 1980s by the cultural image of restless aging (mobility, activity, brain plasticity) and later by the idea of productive aging (productivity potentials, obligation to serve the common good). In gerontological research, too, the deficit-oriented view of old age has been fundamentally questioned in recent decades; current gerontological approaches emphasize the plasticity of aging processes and highlight the activity and productivity potentials of aging persons.

Social science and critical gerontology authors have repeatedly drawn attention to the convergences and links between current neuroscientific (dementia) research, aging cultures, and contemporary social policies. It has been argued, for example, that the interaction of neuroscientific research and the culture of active aging produces new notions of cognitive health and successful aging, blurs the boundaries between health and disease, and creates new medical and social perceptions of risk and responsibility. In this context, dementia is emblematic of the transition from the active, productive, healthy “third age” to the dreaded “fourth” phase of the very old, which is imagined as a period of dependency, immobility, and frailty. Whereas cognitive health and memory could be seen as master metaphors for successful aging, cognitive decline and memory loss are associated with frailness and the “potential loss of successful selfhood” (p. 242). Because cognitive decline is, in addition, no longer viewed as an inevitable fate but as an avoidable disease, the maintenance of cognitive performance until old age might increasingly be seen as a matter of personal responsibility and individual effort.

Analyzing Responsibility Ascriptions in German Dementia Discourses

Because assigning responsibility to individuals cannot be understood as problematic per se, the analysis of notions of responsibility in the field of dementia prevention must specify who is actually addressed, in which social role, and on which normative and scientific basis as responsible for what. Although an increasing emphasis on individual risk management and personal dementia prevention might be observable in many countries, the analysis and evaluation of responsibility ascriptions in the field of dementia prevention and care must further reflect the specific institutional characteristics of different countries and the respective situatedness of different dementia discourses and health care systems.

In Germany (and many other European countries), the forms of state intervention and the logic of health and social policies affecting care have changed over the last decades. Instead of collectivizing life risks and directly ensuring social security, they now focus on individual risk management, privatizing risk provisioning and encouraging self-care. Professionalization, including the increased academic qualification level of nursing care, and economizing trends such as cost containment, efficiency-oriented measures, provider competition, and consumer choice have also taken hold in the German care sector since the introduction of long-term care insurance in the mid-1990s. At the same time, care of the elderly still depends strongly on informal care by family members.
Despite the neo-social reconfiguration of state intervention, German welfare policy is still characterized by relatively high public social spending and comprehensive social security. Following the conception of “situated prevention,” we seek to take these sociopolitical contexts into account to contribute to a profound understanding of the discourses on prevention, care, and treatment of dementia and the specific notions of responsibility ascriptions in the different fields.

Ascriptions of responsibility become ethically questionable if the subjects held liable do not have the freedom of and capacity for meaningful choice among different courses of action or if there is no causal relationship between the moral subject’s action and the outcome. In democratic societies, justifiable responsibility ascriptions should rely on norms that can be explicated, contested, and jointly agreed on in deliberative processes. In short, taking responsibility requires individual autonomy—not merely in the negative notions of private autonomy but also in the positive conceptions of social, ethical, moral, and political autonomy. With regard to an ethical evaluation of responsibility ascriptions in the context of prevention, it is, furthermore, relevant whether preventive measures have been proven effective, whether they have positive risk–benefit and cost–benefit ratios, how restrictive they are, and whether they have been agreed upon and implemented using legitimate decision-making procedures.

To expose and evaluate the specific foundations and characteristics of responsibility ascriptions in the field of dementia prevention, we use an adapted relational conception of responsibility that includes the following relata: Someone (subject) is—in a particular time frame and a certain temporal direction—responsible for something/someone (object) vis-à-vis someone (norm-proofing instance) on the basis of specific normative standards and understandings of causality and with specific consequences. Analyzing discourses in the fields of medical science, nursing science, and media, we seek to expose the normative and epistemic foundations as well as the moral and social implications of responsibility ascriptions in current German dementia discourses.
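To make this relational framework concrete, the following minimal Python sketch encodes the relata as a coding template. It is our own illustration rather than part of the original study; the class, all field names, and the example values are hypothetical.

from dataclasses import dataclass
from typing import Optional

@dataclass
class ResponsibilityAscription:
    """Illustrative coding template for the relational conception of
    responsibility: who is responsible, for what, toward whom, on what
    normative and causal basis, and with what consequences."""
    subject: str                  # e.g., "person at risk", "physician", "nurse"
    obj: str                      # what/whom the subject is responsible for
    norm_instance: str            # instance vis-a-vis which one is accountable
    normative_standard: str       # e.g., "active, health-conscious lifestyle"
    causal_assumption: str        # assumed link between action and outcome
    time_frame: str               # e.g., "middle age", "old age"
    temporal_direction: str       # "prospective" or "retrospective"
    consequences: Optional[str] = None
    source: Optional[str] = None  # document code in the corpus, e.g., "NA 8"

# Hypothetical example drawn from the kind of media discourse discussed below:
example = ResponsibilityAscription(
    subject="reader positioned as person at risk",
    obj="own future cognitive health",
    norm_instance="public health discourse",
    normative_standard="active, health-conscious lifestyle",
    causal_assumption="lifestyle choices determine dementia risk",
    time_frame="middle age",
    temporal_direction="prospective",
    consequences="moralization of dementia as individual failure",
    source="FO 9",
)
print(example.subject, "->", example.obj)

Recording each coded passage in such a structure makes it straightforward to compare, across discourses, who is addressed as responsible, for what, and on which normative and causal basis.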
Discourse Analytical Approach

In our empirical analysis, we use a discourse analytical approach in which discourses are understood as systems of knowledge that structure the field of the sayable. Discourses determine what can be said and what will be concealed, what is considered true and what is considered false. The aim of discourse analysis is not only to describe discourses in terms of content and topic but to reconstruct the deeper structure of the discursive formations. Employing a structural perspective, the discourse analysis hence aims to discover the rules of formation, which make statements, claims, and calls possible (and are at the same time always implied within them).

Sampling

The analysis seeks to reconstruct contemporary dementia discourses in Germany in the fields of medical science, nursing science, and media. Based on our research interest, the discourses to be analyzed were characterized in terms of their manifest content, and a provisional corpus of texts was determined. For the field of medical science, widely distributed journals were chosen. We included the Deutsches Ärzteblatt (official body of the German Medical Association and distributed to all physicians in Germany) as well as the S3-Guideline Dementia and one exemplary high-circulation journal per relevant discipline (neurology, psychiatry, geriatrics, and internal medicine). For nursing science, widely distributed, practice-oriented journals; recommendations for action; and online resources were included in the analysis. The media discourse was analyzed based on a sample of two leading weekly magazines and their online sites, two leading German daily newspapers, and the Apotheken Umschau, a popular science magazine with the highest circulation of all magazines in Germany. Using a keyword search (“Alzheimer’s disease,” “dementia diagnosis,” “dementia treatment,” “dementia prevention,” and “dementia care”), relevant articles about dementia were identified. We included texts from the last 6 years (2014–2019) in our corpus. Only articles relating directly to the respective topics were selected; about 130 texts were included in the analysis. Following the idea of theoretical sampling, the size of the sample was not finally determined at the beginning of the analysis; instead, texts were included incrementally until theoretical saturation (data saturation) was reached.

Coding and Analysis

In preparation for the analysis, heuristic questions and main thematic categories were developed that analytically frame the access to the texts and sharpen theoretical sensitivity. In accordance with our research interest, we looked for notions of aging, cognitive health and dementia, nursing approaches, and prevention strategies. Across these topics, we examined the underlying normative and scientific foundations of responsibility ascriptions and reconstructed who is addressed in what role (subject) as responsible for what (object). In a preliminary analysis, the material was reviewed for main topics and roughly thematically coded with MAXQDA. Based on these coded sequences, thematic subcategories were formed inductively. First, in vivo codes (short quotes) were assigned to the text passages; in a second step, more abstract categories were developed and similar rationalities were combined into patterns.
In the subsequent interpretative analysis, regularities, references, and coherent connections as well as oppositions and contradictions between categories and arguments were sought in order to reconstruct the structural logic of the different discourses. In this way, it was possible to analyze how different patterns are interlinked, that is, how particular notions of aging, concepts of illness and dementia, responsibility ascriptions, and prevention recommendations connect with one another. Analyzing the medical, nursing, and public dementia discourses separately, the analysis also aimed to make clear how expertise and knowledge are transferred between the different fields and to identify the intersections and special features of these special discourses.
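As a rough illustration of the keyword-based corpus construction with saturation-driven inclusion described above, consider the following Python sketch. It is our own simplified reconstruction; the article records, the numeric saturation check, and all names are hypothetical placeholders, since in the actual study saturation was a qualitative judgment made during coding rather than a numeric threshold.

# Minimal sketch of keyword-based corpus selection with incremental
# inclusion until theoretical saturation; all inputs are hypothetical.
KEYWORDS = [
    "alzheimer's disease", "dementia diagnosis", "dementia treatment",
    "dementia prevention", "dementia care",
]

def matches_keywords(text: str) -> bool:
    """Return True if the article mentions at least one search term."""
    lowered = text.lower()
    return any(keyword in lowered for keyword in KEYWORDS)

def build_corpus(candidates, is_saturated):
    """Add matching articles one by one; stop once the (externally
    judged) saturation criterion signals that no new categories emerge."""
    corpus = []
    for article in candidates:
        if not matches_keywords(article["text"]):
            continue
        corpus.append(article)
        if is_saturated(corpus):  # stands in for the analysts' judgment
            break
    return corpus

# Hypothetical usage with two dummy records:
articles = [
    {"id": "SZ 3", "text": "New findings on dementia prevention ..."},
    {"id": "SP 4", "text": "Experts discuss dementia care in nursing homes ..."},
]
corpus = build_corpus(articles, is_saturated=lambda c: len(c) >= 130)
print(len(corpus), "articles included")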
In this section, we present our main findings regarding each discourse, starting with medical science, followed by nursing science and media. In each subsection, we reconstruct how research on dementia is portrayed and describe what rationalities are implied in guidelines and dementia risk communication. Finally, we compare framing and responsibility ascriptions between the three examined discourses.

From Treatment to Risk Reduction: New Conceptions of Dementia in Medical Science

In medical science, dementia is no longer seen only as a disease of old age. As noted in the “Introduction” section, the focus has shifted to the early, long, symptom-free phase, to the treatment of risk factors, to health promotion, and to individual behavioral prevention. Understanding and diagnosing AD no longer focus on the clinical signs of cognitive decline but rather on neuropathological changes in the brain. Whereas pathological change was once only a “post-mortem criterion,” it now also applies to “living patients” (DA 3). This paradigm shift is explicitly addressed as such in the examined journals. It is referred to as a fundamentally “new conception” of AD (DA 7) or as a “dramatic shift” in Alzheimer’s research (DA 3).

With this new focal point on the early phase of the disease, predictive and early diagnosis of dementia is becoming increasingly important (e.g., DA 15; DNP 4; NA 7). So far, predictive, biomarker-based dementia diagnosis in symptom-free patients has been discouraged due to a lack of effective treatment options (DA 7, 15; DNP 4; NA 7). However, an early differential diagnosis at the first signs of cognitive changes is now emphasized in the German medical discourse as critical for appropriate care and treatment (DA 2, 6; DNP 7; NA 6). This rationale for the importance of early differential diagnosis is based on two approaches to possible interventions: the elimination of reversible causes of dementia and the hope of delaying the course of the disease through individual lifestyle changes (DA 2, 7). The possibility of earlier diagnoses, it is also hoped, will support the development of pharmacological therapies for the phase of neuropathological disease development prior to the first appearance of clinical symptoms (DA 9; NA 3, 7). The uncertainties and ethical challenges of current (early) dementia diagnosis and the limits of current treatment options are regularly debated in the medical discourse (DA 7, 15; DNP 4).

The limited number of available therapeutic options is used to emphasize the relevance of primary prevention and the treatment of risk factors (DA 12, 14; DNP 4; GG 3; INT 2; NA 4). The urgency of dementia prevention is further framed rhetorically by references to various other topics: the increasing number of illnesses as a result of demographic change (GG 3; INT 3; NA 2, 8), the burden on the health care system and economic follow-up costs (GG 3; INT 1, 2; NA 1, 4), and, less dominantly, the dementia-associated strains on patients and their relatives (INT 1; NA 1). Medical intervention guidelines for clinical practice primarily recommend early (pharmaceutical) treatment of diabetes, hypertension, and obesity but also of depression and hearing loss (DA 8, 10; INT 4; NA 1, 4). In addition to medical risk management, both environmental factors such as good education and individual lifestyle modifications such as a healthy diet and cognitive, physical, and social activities are stressed (DA 8; DNP 5, 8; GG 3; NA 2, 4).
The importance of preventive behavioral interventions that promote healthy aging is highlighted (GG 3). Primary prevention should start in young adulthood. The aim is to decelerate neurodegenerative processes as early as possible, “before the actual relevance to everyday life,” and to identify and implement “salutogenetic resources” sustainably in one’s lifestyle (NA 8). Multimodal prevention approaches are preferred, and the weak evidence for individual measures is discussed (e.g., DNP 2; NA 8).

With the focus on individual lifestyle factors and primary prevention, personal responsibility is stressed increasingly. In particular, contributions focusing on concepts of “successful aging” (GG 3; NA 8) tend to understand patients as directly responsible for a healthy, proactive lifestyle and the prevention of health risks: “A high degree of personal responsibility is required in primary prevention, which ultimately each person must take for himself or herself” (NA 8). Appropriate lifestyle measures should slow down cognitive decline, prevent severe stages of dementia, and maintain “independence until death” (GG 3). Consequently, patients are not seen as passive objects but as “managers and shapers of their risk,” who can “actively and preventively do something” against dementia (NA 8).

This paradigmatic shift in dementia research corresponds with a change of the professional self-image in geriatrics and neurology from symptomatic or curative dementia therapy to primary and secondary prevention as well as health promotion. The professional reorientation also requires a changed understanding of the role of physicians. Physicians are no longer merely responsible for treating manifest diseases and passing on the treatment plan to the passive patient. Instead—against the background of the new understanding of dementia—risk management and the support of lifestyle changes in the middle-age years are debated as additional responsibilities of medical professionals. Physicians should take on the role of advisors for “successful aging” for an informed, actively interested, and self-responsible patient (NA 8).

The changes in the medical understanding of dementia and the role of physicians also alter the conception of what it means to be a patient. It is no longer only those who turn to physicians with memory problems who are seen as patients. Rather, people are addressed as “persons at risk” or even “patients” long before symptoms like memory loss appear. As a result of this shift, the line between cognitive health and disease is also becoming increasingly blurred. Above all, patients are not (only) seen as passive symptom carriers but (also) positioned as persons at risk in middle age, who can—and should—reduce their dementia risk by pursuing an active and healthy lifestyle.

Dignity, Autonomy, and Activation: Dementia Care in Nursing Science

The nursing science discourse is characterized by a strong focus on the burdens associated with dementia for relatives, caregivers, affected persons, and society (e.g., MDS; PZ 4; SP 1). It is, for example, argued that in times of “mass aging,” the growing number of people with cognitive disabilities and impaired everyday skills will challenge the health care system (PZ 4). It is not only the cognitive limitations in the stricter sense that are seen as potential burdens for professional caregivers and caring relatives alike.
More specifically, the psychological and behavioral symptoms, such as physical aggression, anxiety, or irritability, and personality changes are also highlighted (MDS; PZ 4; SP 1, 6).

In German nursing science, two separate discourses can be discerned that revolve around the need to develop innovative nursing approaches and to embed prevention and rehabilitation more firmly in the practice of nursing care. The first underlying rationale is that, in light of demographic change and the expected increase in dementia rates, the burden on the health care system and society should be reduced. For example, recommendations for action are contextualized with reference to cost studies, which point to the enormous economic burdens caused by dementia (MDS; PZ 4). Second, individual quality of life with dementia represents a central reference point of debate in nursing science. One quality-of-life discourse is oriented around the guiding principles of self-determination and self-reliance (MDS; PZ 1, 5; ZQS 1); another is characterized by references to human dignity, personal needs, and relationships (MDS; SP 4, 8).

Good dementia care in this sense means, as stated in The Nurse [Die Schwester/Der Pfleger], perceiving the person in need of care in their unique personality and strengthening their self-esteem and emotional well-being (SP 4). Nursing approaches such as “person-centered care” therefore aim at relationship building and successful communication between caregivers and patients and are oriented toward the needs and feelings of the persons in need of care (MDS; SP 1, 4). The reference to the value of human dignity thus primarily justifies nursing concepts that emphasize relationships, communication, closeness, and safety. In this context, nursing care approaches do not primarily aim to restore (cognitive) abilities or prevent further decline but are oriented toward the immediate well-being of the persons in need of care.

In contrast—and oriented toward the ideal of personal autonomy—there are also nursing approaches that primarily intend to strengthen the daily living skills and personal responsibility of those in need of care. In this context, the entitlement of people with dementia to the greatest possible degree of self-determination is stressed, and patronizing practices in care are problematized. Activating care aims at maintaining or improving “mobility and independence in everyday life,” which are considered important indicators of quality of life and subjective well-being (PZ 1). It is emphasized that even severely impaired patients have potentials for health promotion and the preservation of health resources. Specifically, care services should preserve the “functional performance of patients” (PZ 5), strengthen the “active participation” of people in need of care (HB 2), or promote the “motivation and competence to carry out measures on their own terms” (HB 2).

In this framework, a range of specific nursing interventions, such as dancing, coordination exercises, and cognitive stimulation for the purpose of maintaining cognitive performance, are recommended (GG 1; HB 1, 3; MDS; PZ 2, 5; ZQP 2, 3), despite low-quality evidence (GG 2, 5). “Permanent advancement” and specifically the combination of cognitive and physical training are expected to effectively delay mental deterioration and to maintain or even restore everyday skills (HB 3). Conversely, a deterioration of cognitive abilities is associated with inactivity and the lack of “movement and environmental stimuli” (PZ 2).
Nurses should not do anything for persons with dementia that they can still do themselves. For nurses, this means that they must shift from an attitude of care toward an attitude of encouragement and support (PZ 2). In this context, nurses seem to be considered directly responsible for maintaining the cognitive health of persons in need of care. In some articles, the possible benefits of successful, activating care are seen as substantial: “So one thing is clear: how good our mental performance is depends on how much we perform” (PZ 2). Assuming a straightforward causal relationship between cognitive activity and cognitive health, it appears to be a question of good care whether the patient’s abilities deteriorate or recover.

Risk, Prevention, and Responsibility: Dementia in the Media

While the importance of environmental protective factors and the unclear evidence for the success of individual preventive measures are still regularly discussed in the professional discourses, the media discourse is characterized by a stronger and partly exclusive focus on individual behavioral prevention. One can reconstruct the following storylines that typically frame media coverage on dementia in Germany.

First, media coverage on dementia is usually contextualized with references to demographic change and prognoses of future dementia rates (e.g., AU 7; DS 1, 9; FAZ 4, 9; SZ 6). The estimations about rising case numbers and the prospect of rising costs are used to call attention to the issue. Second, the suffering associated with dementia and its psychological and behavioral symptoms is portrayed. As in the nursing science discourse, the burden for caregivers is also occasionally highlighted. Headlines like “Forgetful, Aggressive, Confused: Experts Warn About the ‘Dementia Republic of Germany’” (FO 5) convey degrading images of dementia and paint an alarming picture of the increase in cases in an aging population. However, other articles use more sophisticated and careful formulations; some voices also argue that dementia “doesn’t have to be a big deal” as most courses of the disease are mild (SZ 2). Third, the possibility of dementia risk reduction and the findings regarding the impact of individual lifestyle choices are presented as a glimmer of hope against the background of a lack of treatment options and the failure of recent drug trials (SZ 6; DS 2, 8; FO 9; FAZ 1, 5). Thus, the reference to demographic change and limited treatment options is generally used to emphasize the relevance of risk reduction and dementia prevention.

However, the actual presentation of scientific research and the understanding of individual responsibility differ greatly within the media discourse on dementia. On one hand, we found articles in the science sections of newspapers and weekly magazines that closely reflect current medical debates and knowledge. In these articles, recent paradigmatic shifts in dementia and Alzheimer’s research are portrayed, the importance of cardiovascular risk factors is highlighted, and multimodal preventive strategies are discussed (SZ 3; DS 9; FAZ 1, 3; AU 5). Headlines like “What Is Good for the Heart Is Also Good for the Brain” (SZ 7) reflect the current medical focus on cardiovascular risk factors. In addition, the lack of evidence for single preventive efforts is portrayed (DS 7, 9; FAZ 5, 6), and the relevance of environmental factors like the influence of education on dementia risk is mentioned (DS 3, 9; FAZ 5; SZ 3).
Above all, in light of limited treatment options and with reference to WHO recommendations or the Lancet report, it is stressed that there are many risk factors and that risk reduction and prevention in middle age as well as early detection are crucial (DS 9; FAZ 5, 10; SZ 3).

On the other hand, we found articles in newspapers, weekly magazines, and popular science magazines that focus strongly on individual lifestyle measures. In this context, the readers are often addressed as being directly responsible for successful dementia prevention. Typical headlines such as “How to Reduce Your Risk of Alzheimer’s” (DS 2) or “Preventing Dementia: How to Strengthen the Self-Healing Powers of the Brain” (FO 4) convey the idea that successful dementia prevention is mostly a question of sufficient individual effort and correct lifestyle choices. Insights from epidemiological and lifestyle studies are used as a basis for a responsibilization of the individual, for example, for the moral call to lead an active and healthy life. The advice and the medical studies cited focus on a wide range of very specific measures like dancing (DS 5), playing video games like “Super Mario” (FO 3), or eating nuts and avocados (FO 7). Headlines like “Study Shows How Many Cups of Coffee You Have to Drink to Protect Yourself From Dementia” (FO 8) or “Food Against Dementia: 20 Foods That Help and 9 That Hurt” (FO 7) create the impression of a conclusive and clearly measurable causal relationship between individual living habits and dementia. In addition to articles that focus on the impact of single interventions, readers are also provided with comprehensive lists of dementia prevention measures, such as “Seven Components that Protect Your Brain From Dementia” (FO 1; also FO 10, 11; AU 3, 4). In this context, uncertainties regarding the evidence for the effectiveness of individual prevention measures are rarely mentioned.

It is further noticeable that readers are addressed directly as a potential risk group and asked to take dementia prevention into their own hands. Headlines like “How We Should Live to Protect Ourselves From Dementia” (FO 9) convey a moral responsibility to live health-consciously to reduce the risk of dementia (also AU 4). Dementia prevention and an active, healthy lifestyle tend to be discussed as an individual duty. Young people are prompted to prevent future cognitive decline (e.g., SZ 1), and older people are encouraged to face cognitive decline with an active lifestyle: “The elderly can stay mentally fit even if they already have initial memory gaps. A healthy lifestyle is crucial for preventing dementia” (DS 10). A media report on activation measures offered in nursing homes exemplifies the strong focus on self-responsibility and self-help, concluding with the paradigmatic sentences: “The residents strain their brains, move around and meet like-minded people while playing. Instead of waiting for medication, they themselves take care of their brain health” (DS 9). In addition, readers are occasionally called on to contribute, wherever possible, to the well-being of the community, as illustrated by the following example: “Whoever takes on a social volunteer service or a voluntary position links the strain on the brain with a meaningful and thus fulfilling activity—a strong mental protection” (FO 6).

Above all, the media discourse on dementia is focused on individual risk management.
The burden the disease places on society and the lack of treatment options are used to emphasize the importance of dementia prevention through individual lifestyle changes.

Comparing the Framing of Dementia Prevention Between Medical Science, Nursing Science, and Media Discourses

Framing and responsibility ascriptions differ between the three examined discourses. Medical science debates are framed by the changing neuroscientific understanding of AD and the novel focus on the presymptomatic disease stage in middle age. The nursing science discourse is characterized by a strong emphasis on the burdens for caregivers and the health care system; similar results have been reported for the German nursing science discourse. In the German media, dementia is predominantly framed by demographic change and alarming future visions of dementia rates. Recommendations about dementia risk reduction are no longer directed exclusively to older people but also address persons in middle age at risk for developing dementia.

In the medical science discourse, both physicians and patients are addressed as being responsible for dementia prevention, albeit in (naturally) different ways. Physicians are seen as responsible not only for the correct diagnosis and treatment of advanced dementia but also for considering risk factors in middle age. The consequent implication is that patients and persons at risk are seen as responsible for behavioral prevention and dementia risk reduction. In the media, the main focus is on lifestyle changes and individual risk management, as has also been observed for U.K. newspapers. Readers are—as has also been shown for English online dementia health information and for online women’s brain health campaigns—positioned as being at risk and are directly addressed and called on to adopt a healthy and active lifestyle to strengthen their cognitive abilities and to reduce the future risk of dementia. Frequently used normative phrasings imply a moral obligation to engage in dementia prevention.

Nursing science focuses on the well-being and rehabilitation of persons with advanced dementia. Professional caregivers are called on to restore or maintain the cognitive abilities of persons in need of care using specific mental and physical interventions. Some nursing science articles portray nurses as directly responsible for maintaining the cognitive health of persons in need of care.

While the limited evidence for the actual success of prevention measures is regularly addressed in the medical and nursing discourses, media coverage tends to highlight single studies and to overestimate the effectiveness of behavioral dementia prevention. Here, most clearly, dementia tends to be portrayed as a direct outcome of individual lifestyle choices; the preservation of cognitive abilities through old age implicitly appears to be a matter of personal responsibility. These findings can be summarized along the different dimensions of responsibility to highlight the similarities and differences between the examined discourses.
In medical science, dementia is no longer seen only as a disease of old age. As noted in the “Introduction” section, the focus has shifted to the early, long, symptom-free phase, to the treatment of risk factors, to health promotion, and to individual behavioral prevention. Understanding and diagnosing AD no longer focus on the clinical signs of cognitive decline but rather on neuropathological changes in the brain. Whereas pathological change was once only a “post-mortem criterion,” it now also applies to “living patients” (DA 3). This paradigm shift is as such explicitly addressed in the examined journals. It is referred to as a fundamentally “new conception” of AD (DA 7) or as a “dramatic shift” in Alzheimer’s research (DA 3). With this new focal point on the early phase of the disease, predictive and early diagnosis of dementia is becoming increasingly important (e.g., DA 15; DNP 4; NA 7). So far, predictive, biomarker-based dementia diagnosis in symptom-free patients has been discouraged due to a lack of effective treatment options (DA 7, 15; DNP 4; NA 7). However, an early differential diagnosis at the first signs of cognitive changes is now emphasized in the German medical discourse as critical for appropriate care and treatment (DA 2, 6; DNP 7; NA 6). This rationale for the importance of early differential diagnosis is based on two approaches to possible interventions: the elimination of reversible causes of dementia and the hope of delaying the course of the disease through individual lifestyle changes (DA 2, 7). The possibility of earlier diagnoses, it is also hoped, will support the development of pharmacological therapies for the phase of the neuropathological disease development prior to the first appearance of clinical symptoms (DA 9; NA 3, 7). The uncertainties and ethical challenges of current (early) dementia diagnosis and the limits of current treatment options are regularly debated in the medical discourse (DA 7, 15; DNP 4). The limited number of available therapeutic options is used to emphasize the relevance of primary prevention and treatment of risk factors (DA 12, 14; DNP 4; GG 3; INT 2; NA 4). The urgency of dementia prevention is rhetorically further framed by references to various other topics: the increasing number of illnesses as a result of demographic change (GG 3; INT 3; NA 2, 8), the burden on the health care system and economic follow-up costs (GG 3; INT 1, 2; NA 1, 4), and, less dominantly, the dementia-associated strains on patients and their relatives (INT 1; NA 1). Medical intervention guidelines for clinical practice primarily recommend early (pharmaceutical) treatment of diabetes, hypertension, and obesity but also depression and hearing loss (DA 8, 10; INT 4; NA 1, 4). In addition to medical risk management, both environmental factors such as good education and individual lifestyle modifications such as healthy diet and cognitive, physical, and social activities are stressed (DA 8; DNP 5, 8; GG 3; NA 2, 4). The importance of preventive behavioral interventions which promote healthy aging is highlighted (GG 3). Primary prevention should start in young adulthood. The aim is to decelerate neurodegenerative processes as early as possible “before the actual relevance to everyday life” and to identify and implement “salutogenetic resources” sustainably in one’s lifestyle (NA 8). Multimodal prevention approaches are preferred, and the weak evidence for individual measures is discussed (e.g., DNP 2; NA 8). 
With the focus on individual lifestyle factors and primary prevention, personal responsibility is stressed increasingly. In particular, contributions focusing on concepts of “successful aging” (GG 3; NA 8) tend to understand patients as directly responsible for a healthy, proactive lifestyle and the prevention of health risks: “A high degree of personal responsibility is required in primary prevention, which ultimately each person must take for himself or herself” (NA 8). Appropriate lifestyle measures should slow down cognitive decline, prevent severe stages of dementia, and maintain “independence until death” (GG 3). Consequently, patients are not seen as passive objects but as “managers and shapers of their risk,” who can “actively and preventively do something” against dementia (NA 8). This paradigmatic shift in dementia research corresponds with a change of the professional self-image in geriatrics and neurology from symptomatic or curative dementia therapy to primary and secondary prevention as well as health promotion. The professional reorientation also requires a change in understanding of the role of physicians. Physicians are no longer merely responsible for treating manifest diseases and passing on the treatment plan to the passive patient. Instead—against the background of the new understanding of dementia—risk management and support of lifestyle changes in the middle-age years are debated as additional responsibilities of medical professionals. Physicians should represent the role of advisors for “successful aging” for an informed, actively interested, and self-responsible patient (NA 8). The changes in the medical understanding of dementia and the role of physicians also alter the conception of what it means to be a patient. It is no longer only those who turn to physicians with memory problems who are seen as patients. Rather, people are addressed as “persons at risk” or even “patients” long before symptoms like memory loss appear. As a result of the shift, also the line between cognitive health and disease is becoming increasingly blurred. Above all, patients are not (only) seen as passive symptom carriers but (also) positioned as persons at risk in middle age, who can—and should—reduce their dementia risk by pursuing an active and healthy lifestyle.
The nursing science discourse is characterized by a strong focus on the burdens associated with dementia for relatives, caregivers, affected persons, and society (e.g., MDS; PZ 4; SP 1). It is, for example, argued that in times of “mass aging,” the growing number of people with cognitive disabilities and impaired everyday skills will challenge the health care system (PZ 4). Not only the cognitive limitations in the stricter sense are seen as potential burdens for both professional caregivers and caring relatives. Also, and more specifically, the psychological and behavioral symptoms such as physical aggression, anxiety, or irritability, and personality changes are highlighted (MDS; PZ 4; SP 1, 6). In the German nursing science, two separate discourses can be discerned that revolve around the need to develop innovative nursing approaches and to embed prevention and rehabilitation more firmly in the practice of nursing care. The first underlying rationale is that, in light of demographic change and the expected increase in dementia rates, the burden on the health care system and society should be reduced. For example, recommendations for action are contextualized with reference to cost studies, which point to the enormous economic burdens caused by dementia (MDS; PZ 4). Second, individual quality of life with dementia represents a central reference point of debate in nursing science. One quality-of-life discourse is oriented around the guiding principles of self-determination and self-reliance (MDS; PZ 1, 5; ZQS 1), another is characterized by references to human dignity, personal needs, and relationships (MDS; SP 4, 8). Good dementia care in this sense means, as stated in The Nurse [ Die Schwester/Der Pfleger ], perceiving the person in need of care in their unique personality and strengthening their self-esteem and emotional well-being (SP 4). Nursing approaches such as “person-centered care” therefore aim at relationship building and successful communication between caregivers and patients and are oriented toward the needs and feelings of the persons in need of care (MDS; SP 1, 4). The reference to the value of human dignity thus primarily justifies nursing concepts that emphasize relationships, communication, closeness, and safety. In this context, nursing care approaches do not primarily aim to restore (cognitive) abilities or prevent further decline but are oriented toward the immediate well-being of the persons in need of care. On the contrary—and oriented toward the ideal of personal autonomy—there are also nursing approaches which first of all intend to strengthen the daily living skills and personal responsibility of those in need of care. In this context, the entitlement of people with dementia to the greatest possible degree of self-determination is stressed; patronizing practices in care are problematized. Activating care aims at maintaining or improving “mobility and independence in everyday life,” which are considered important indicators of quality of life and subjective well-being (PZ 1). It is emphasized that even severely impaired patients have potentials for health promotion and the preservation of health resources. Specifically, care services should preserve the “functional performance of patients” (PZ 5), strengthen the “active participation” of people in need care (HB 2), or promote the “motivation and competence to carry out measures on their own terms” (HB 2). 
In this framework, a range of specific nursing interventions such as dancing, coordination exercises, and cognitive stimulation for the purpose of maintaining cognitive performance are recommended (GG 1; HB 1, 3; MDS; PZ 2, 5; ZQP 2, 3), despite low-quality evidence (GG 2, 5). “Permanent advancement” and specifically the combination of cognitive and physical training are expected to effectively delay mental deterioration and maintain or even restore everyday skills (HB 3). Conversely, a deterioration of cognitive abilities is associated with inactivity and the lack of “movement and environmental stimuli” (PZ 2). Nurses should not do anything for persons with dementia that they can still do themselves. For nurses, this means that they must shift from an attitude of care toward an attitude of encouragement and support (PZ 2). In this context, nurses are seemingly considered to be directly responsible for maintaining the cognitive health of persons in need of care. In some articles, the possible benefits of successful, activating care are seen to be substantial: “So one thing is clear: how good our mental performance is depends on how much we perform” (PZ 2). Assuming a straightforward causal relationship between cognitive activity and cognitive health, it seems to be a question of good care whether the patient’s abilities deteriorate or recover.
While the importance of environmental protective factors and the unclear evidence for the success of individual measures of prevention are still regularly discussed in professional discourses, the media discourse is characterized by a stronger and partly exclusive focus on individual behavioral prevention. One can reconstruct the following storylines that typically frame media coverage on dementia in Germany. First, media coverage on dementia is usually contextualized with references to demographic change and a prognosis of future dementia rates (e.g., AU 7; DS 1, 9; FAZ 4, 9; SZ 6). The estimations about rising case numbers and the prospect of rising costs are used to call attention to the issue. Second, the suffering associated with dementia and its psychological and behavioral symptoms are portrayed. As in the nursing science discourse, the burden for caregivers is also occasionally highlighted. Headlines like “Forgetful, Aggressive, Confused: Experts Warn About the ‘Dementia Republic of Germany’” (FO 5) convey degrading images of dementia and paint an alarming picture of the increase of cases in an aging population. However, other articles use more sophisticated and careful formulations; some voices also argue that dementia “doesn’t have to be a big deal” as most courses of the disease are mild (SZ 2). Third, the possibility of dementia risk reduction and the findings regarding the impact of individual lifestyle choices are presented as a glimmer of hope against the background of a lack of treatment options and the failure of recent drug trials (SZ 6; DS 2, 8; FO 9; FAZ 1, 5). Thus, the reference to demographic change and limited treatment options is generally used to emphasize the relevance of risk reduction and dementia prevention. However, the actual presentation of scientific research and the understanding of individual responsibility differ greatly within the media discourse on dementia. On one hand, we found articles in the science sections of newspapers and weekly magazines, which closely reflect current medical debates and knowledge. In these articles, recent paradigmatic shifts in dementia and Alzheimer’s research are portrayed, the importance of cardiovascular risk factors is highlighted, and multimodal preventive strategies are discussed (SZ 3; DS 9; FAZ 1, 3; AU 5). Headlines like “What Is Good for the Heart Is Also Good for the Brain” (SZ 7) reflect the current medical focus on cardiovascular risk factors. In addition, the lack of evidence for single preventive efforts is portrayed (DS 7, 9; FAZ 5, 6), and the relevance of environmental factors like the influence of education on dementia risk is mentioned (DS 3, 9; FAZ 5; SZ 3). Above all, in light of limited treatment options and with reference to WHO recommendations or the Lancet Report, it is stressed that there are many risk factors and that risk reduction and prevention in middle age as well as early detection are crucial (DS 9; FAZ 5, 10; SZ 3). On the contrary, we found articles in newspapers, weekly magazines, and popular science magazines that strongly focus on individual lifestyle measures. In this context, the readers are often addressed as being directly responsible for successful dementia prevention. Typical headlines such as “How to Reduce Your Risk of Alzheimer’s” (DS 2) or “Preventing Dementia: How to Strengthen the Self-Healing Powers of the Brain” (FO 4) convey the idea that successful dementia prevention is mostly a question of sufficient individual efforts and correct lifestyle choices. 
Insights from epidemiological and lifestyle studies are used as a basis for the responsibilization of the individual, for example, for the moral call to lead an active and healthy life. The advice and the medical studies cited focus on a wide range of very specific measures like dancing (DS 5), playing video games like “Super Mario” (FO 3), or eating nuts and avocados (FO 7). Headlines like “Study Shows How Many Cups of Coffee You Have to Drink to Protect Yourself From Dementia” (FO 8) or “Food Against Dementia: 20 Foods That Help and 9 That Hurt” (FO 7) create the impression of a conclusive and clearly measurable causal relationship between individual living habits and dementia. In addition to articles that focus on the impact of single interventions, readers are also provided with comprehensive lists of dementia prevention measures, such as “Seven Components that Protect Your Brain From Dementia” (FO 1, also FO 10, 11; AU 3, 4). In this context, uncertainties regarding the evidence for the effectiveness of individual prevention measures are not commonly mentioned. It is further noticeable that readers are addressed directly as a potential risk group and asked to take dementia prevention into their own hands. Headlines like “How We Should Live to Protect Ourselves From Dementia” (FO 9) convey the moral responsibility to live health-consciously to reduce the risk of dementia (also AU 4). Dementia prevention and an active, healthy lifestyle tend to be discussed as an individual duty. Young people are prompted to prevent future cognitive decline (e.g., SZ 1), and older people are encouraged to face cognitive decline with an active lifestyle. “The elderly can stay mentally fit even if they already have initial memory gaps. A healthy lifestyle is crucial for preventing dementia” (DS 10). A media report on activation measures offered in nursing homes exemplifies the strong focus on self-responsibility and self-help, concluding with the paradigmatic sentences: “The residents strain their brains, move around and meet like-minded people while playing. Instead of waiting for medication, they themselves take care of their brain health” (DS 9). In addition, readers are occasionally called on to contribute, wherever possible, to the well-being of the community, as illustrated by the following example: “Whoever takes on a social volunteer service or a voluntary position links the strain on the brain with a meaningful and thus fulfilling activity—a strong mental protection” (FO 6). Above all, the media discourse on dementia is focused on individual risk management. The burden the disease places on society and the lack of treatment options are used to emphasize the importance of dementia prevention through individual lifestyle changes.
Framing and responsibility ascriptions differ between the three examined discourses. Medical science debates are framed by the changing neuroscientific understanding of AD and the novel focus on the presymptomatic disease stage in middle age. The nursing science discourse is characterized by a strong emphasis on the burdens for caregivers and the health care system (see the similar results in the analysis of the German nursing science discourse in ). In German media, dementia is predominantly framed by demographic change and alarming future visions of dementia rates. Recommendations about dementia risk reduction are no longer directed exclusively to older people but also address persons in middle age at risk of developing dementia. In the medical science discourse, both physicians and patients are addressed as being responsible for dementia prevention, albeit in (naturally) different ways. Physicians are seen as responsible not only for the correct diagnosis and treatment of advanced dementia but also for considering risk factors in middle age. The consequent implication is that patients and persons at risk are seen as responsible for behavioral prevention and dementia risk reduction. In the media, the main focus is on lifestyle changes and individual risk management (as also observed by , for U.K. newspapers). Readers are—as also shown by for English online dementia health information and for online women’s brain health campaigns—positioned as being at risk and are directly addressed and called on to adopt a healthy and active lifestyle to strengthen their cognitive abilities and to reduce the future risk of dementia. Frequently used normative phrasings imply a moral obligation to engage in dementia prevention (see also ). Nursing science focuses on the well-being and rehabilitation of persons with advanced dementia. Professional caregivers are called on to restore or maintain the cognitive abilities of persons in need of care using specific mental and physical interventions. Some nursing science articles portray nurses as directly responsible for maintaining the cognitive health of persons in need of care. While the limited evidence for the actual success of prevention measures is regularly addressed in medical and nursing discourses, media coverage tends to highlight single studies and to overestimate the effectiveness of behavioral dementia prevention. Here, most clearly, dementia tends to be portrayed as a direct outcome of individual lifestyle choices; the preservation of cognitive abilities through old age implicitly seems to be a matter of personal responsibility. These findings can be summarized by using the different dimensions of responsibility to highlight the similarities and differences between the examined discourses.
Contemporary discourses on successful aging and the trend toward behavioral prevention have been problematized as part of a general (self-)responsibilization of aging. Specifically, the focus on individual lifestyle choices has been criticized for masking the relevance of economic and social prerequisites for successful aging. The critique can be differentiated—as suggested by —along the following key points: overextension of the effectiveness of preventive measures, privatization of life risks and individualization as in reducing complex social and medical issues to individual behaviors and lifestyle choices, ideologization (as in justifying welfare cuts by glorifying self-care), and stigmatization of old-age frailty.

Overextension and Oversimplification

Our analysis of nursing science, medical science, and media discourses on dementia in Germany identifies some of these critical aspects in the different fields. Tendencies of overextension, and specifically of oversimplification and inadmissible generalization, can be found partly in nursing science and more distinctly in media discourses. The fact that numerous findings on the influence of individual prevention measures are contradictory or controversial and that results of single studies in many cases cannot be confirmed (see, for example, ) is often neglected within the examined discourses. Instead, a direct and strong causal relationship is often suggested between certain lifestyle choices or preventive measures and cognitive performance and personal dementia risk. The importance of cognitive training is, for instance, highlighted regularly, despite comparatively weak or even insufficient evidence . If the evidence for certain measures is partly missing or critically discussed, the underlying moral assumption seems to be that recommending a healthier lifestyle does no harm—even if it were ineffective for dementia prevention. Yet, oversimplification and misinterpretation of scientific findings on dementia prevention can create excessive responsibility demands and false expectations, and could, as argued, foster pessimism toward dementia prevention research.

Individualization and Privatization

In addition, the strong and partly exclusive focus on individual preventive measures might support an individualization and privatization of life risks. In German media, aging readers are called on to model their lifestyle along the normative guiding principles of “active aging” and to shape their lives in a healthy and socially responsible manner. The political, economic, and social prerequisites for active aging and dementia prevention tend to be neglected (see also ). The comparatively large protective influence of education is discussed only occasionally in the examined discourses. This focus on individual lifestyle choices and the call to optimize one’s health individually and to preserve it until old age reflect the logic of an increasing privatization of life risks. The management of life risks, which had been a task of direct political intervention in the “Keynesian welfare state,” is transferred to individual citizens . In nursing science, too, successful prevention and rehabilitation seem to be understood foremost as a question of good nursing by individuals; external factors and structural preconditions, such as the substantial influence of the economic rationalization and rationing of health care on the quality of dementia care, are rarely considered.
Ideologization

Against the background of the individual suffering and societal burden associated with dementia, individual risk reduction and behavioral prevention are further discussed not only as an individual opportunity but implicitly also as a duty. Prevention strategies and an active lifestyle not only aim to improve individual health and extend autonomy in old age. They seemingly also—and this could be discussed as ideological embeddedness in the sociopolitical regime of the activating welfare state —aim to relieve the (financial) burden on society. Aging people are addressed in their personal responsibility for preventing illness and proactively preserving their physical and mental abilities to sustain their living standards independently, to avoid the need for help, and to contribute to the public good . In the context of this framing, cognitive decline in old age and forgetting might even appear, as have argued, as a sign of personal failure. Those who remain healthy and cognitively fit demonstrate the willingness and ability to take care of themselves in a socially responsible manner. Conversely, cognitive decline threatens those capabilities necessary to age as an active citizen and tends to be associated in parts of German media with lacking preventive efforts. However, whereas the productivity-oriented active aging discourse is clearly linked to the idea of actively contributing to the common good even in old age , German dementia prevention discourses are still more centered on personal well-being and the individual quality of life in old age.

Stigmatization

The (self-)responsibilization of cognitive aging and the fact that cognitive decline no longer appears as an inevitable fate but rather as a preventable disease could further lead to an increased stigmatization of both risky lifestyles and dementia itself. Our discourse analysis shows that the moral obligation to take individual responsibility for dementia prevention is more or less implicitly implied in German dementia discourses. If healthy aging and dementia prevention appear to be a question of individual efforts, there is a danger of associating dementia with a negligent lifestyle and blaming the individual for cognitive decline. The responsibility for cognitive health and the blame for cognitive decline in old age might be shifted onto those who do not follow health and prevention recommendations and thus fail or refuse to live their lives in a healthy, active, and socially acceptable manner . In interaction with degrading and objectifying images of dementia , the strong emphasis on individual responsibility might support “victim-blaming of those living with dementia and result in increased stigmatization” ( , p. 1548; see also ; ; ). The responsibilization of cognitive aging could hence contribute to new forms of ageism, which replace the earlier general angst of aging with a specific fear of frailty, inability, and loss of cognitive abilities , and reinforce a devaluation of those who, due to physical and cognitive decline, can no longer comply with the ideal of self-reliant, successful aging.
Our analysis of current German dementia discourses showed that the understanding of dementia and AD has changed in recent years. In addition, medical measures and media coverage of dementia have expanded their focus to younger people who do not, or do not yet, have any cognitive impairments. With the new focus on risk reduction and the possibility of using biomarker-based diagnostics to detect pathological changes in the brain even before the first symptoms of cognitive impairment become apparent, people in middle age who feel healthy may suddenly be classified as persons at risk—or even as presymptomatic Alzheimer’s patients if they show some pathological biomarkers (see ). In the discourses examined, prevention strategies focus on individual lifestyle choices in middle age. Protective factors active at the societal level, such as education and the influence of the health care system, are addressed only occasionally. Successful dementia prevention as well as successful aging are first and foremost conceived as a question of sufficient personal activity . In all examined discourses, the importance of physical, cognitive, and/or social activities for cognitively healthy aging is emphasized. Effective dementia prevention and healthy aging are linked to the successful mobilization of activity potentials, whereas progressive cognitive decline and the loss of everyday skills are associated with passivity and lack of exercise. In line with the privatization of life risks and a general (self-)responsibilization of aging, the preservation of cognitive health tends to be discussed foremost as a question of individual lifestyle choices and personal responsibility. Further research is required to understand and reflect on the practical implications of the rapid innovations in medical science regarding dementia diagnosis, prognosis, and risk reduction, and the associated public communication in different institutional contexts. Although the results of our study can be situated within international studies on dementia risk communication (e.g., ; ; ; ), an in-depth comparison of dementia discourses would require a comparative analysis of cultures or national institutional contexts. Another important future line of research would be to analyze the underlying body concepts in more detail. It has been argued that the lifestyle turn in dementia research has changed the focus from the brain to the heart, or at least from the brain to brain–heart interactions. Others might see the current focus on biological and genetic issues as a form of (bio-)mechanization of the aging body. This line of thinking often conflicts with phenomenological and social views of the body . As our analysis is limited to the discursive level of social reality, we could only describe normative models of healthy cognitive aging as well as scientific and public conceptions of dementia risk management and prevention. The presented discourse-analytical examination of medical knowledge and public perceptions of dementia gives insight into contemporary scientific knowledge and political rationalities but does not provide an understanding of what the people who are subjected to these regimes actually think and do . The sociological discussion and ethical evaluation of responsibility ascriptions in the field of dementia prevention therefore cannot rely solely on the reconstruction of the normative and scientific foundations of dementia discourses.
Further empirical studies are required to provide an understanding of how physicians, nurses, persons-at-risk, and patients perceive dementia risk communication and adopt those responsibility ascriptions in everyday life.
|
Associations between health anxiety, eHealth literacy and self-reported health: A cross-sectional study | 57ee507c-b1e0-4911-bfae-65bfadd5e1e4 | 11611066 | Health Literacy[mh] | The World Health Organization’s World Mental Health Surveys showed that 20.3% of university students in 21 countries have mental disorders . University students face numerous health problems in their daily lives, especially mental health issues, such as health anxiety, depression, inferiority complex, interpersonal sensitivity, and other frequent occurrences that have generated global consensus . Normal attention to health can sometimes turn into a persistent and excessive fear of serious illness. Svestkova et al. defined health anxiety as distress or fear related to one’s body. Health anxiety represents excessive contemplation of illness, excessive concern for physical health, persistent fear of illness, misinterpreting bodily sensations as symptoms of severe illness, and reporting symptoms without sufficient physical pathology. Health anxiety can lead to both psychological and physical symptoms, which are often misunderstood as evidence of organic diseases. It ranges from mild anxiety to severe or persistent anxiety . Health anxiety is a serious and costly public health issue ; if left untreated, it may become chronic. Studies indicate that university students use the Internet more frequently than any other group and often seek online health information . The daily lives of college students are filled with health anxiety, and the boundary between “health” and “disease” is gradually blurred. Many diseases are merely self-created “suspected diseases,” rather than genuine psychological or physiological problems . University students are particularly prone to sleep disorders, eating disorders, anxiety and depression, or chronic diseases . Health anxiety is associated with an increased risk of developing various chronic diseases . Compared to male students, female university students typically exhibit lower adaptability to university life, as well as higher concerns and physiological sensitivity . In this study, health anxiety was defined as the anxiety or distress caused by an individual’s or their family members’ unreasonable lifestyle, physical weakness or illness, depression, anxiety, emotional instability, overpressure, as well as dissatisfaction with their appearance or body shapes, including lifestyle anxiety, psychological anxiety, physical anxiety, and appearance anxiety. Electronic health literacy, as a part of a rational cognitive belief system, can help individuals correctly and objectively understand and comprehend things. Individuals with a certain level of eHealth literacy are less likely to blindly believe that their health condition is threatened and not easily develop negative emotions such as health anxiety when facing massive online health information . EHealth literacy refers to the ability of individuals to seek, search, understand and evaluate health information from electronic resources, and apply this knowledge to solve or handle health problems . EHealth literacy has a significant and direct positive impact on the mental health of university students . Although university students have the skills to access online information, they often lack the corresponding medical and health knowledge to determine the authenticity of health information . Could we speculate that the lower eHealth literacy of university students, the more health anxiety, thus, the worse self-reported health? 
Few relevant studies were found concerning this matter. Therefore, this study used an online questionnaire survey and aimed to explore the associations and gender differences among health anxiety, eHealth literacy and self-reported health in Chinese university students. The findings can provide a reference for intervention training in eHealth literacy and health anxiety among university and college students based on gender differences, to improve overall health levels.
Study Design

Cross-sectional quantitative study.

Setting

The study was conducted at a university in Xinyang City, Henan Province, China: Xinyang Normal University. Xinyang Normal University is a full-time general higher education institution sponsored by the People’s Government of Henan Province and supervised by the Education Department of Henan Province.

Sample Selection Criteria

University students were recruited from Xinyang Normal University; they participated voluntarily and were aware that their participation would be anonymous when completing the online self-report questionnaire. They signed a free informed consent form; the questionnaire covered health anxiety, eHealth literacy, and self-reported health, as well as sociodemographic questions (residence, age, education, and parental education level). Respondents with incomplete or missing answers were to be excluded; however, all 1,205 university students completed the questionnaire satisfactorily (response rate: 100%).

Instrument and Study Period for Data Collection

The questionnaire used in this study included the eHealth literacy scale , questions on health anxiety, and self-reported health (one item: “How well do you currently feel with respect to your health status?”), which were created in Wenjuanxing software (https://www.wjx.cn/). Wenjuanxing is a professional online survey, examination, evaluation, and voting platform that provides users with services such as user-friendly online questionnaire design, data collection, custom reports, and survey result analysis. First, the questionnaire was created using Wenjuanxing. Then, the questionnaire link was sent to the WeChat groups of Xinyang Normal University students so that they could respond. The questionnaire was anonymous and voluntary; participating students could freely and independently fill it out on their computers or smartphones. WeChat was chosen because users are verified, with virtually no possibility of fake profiles. There was no reward for participants. The eHealth literacy questionnaire used an eHealth literacy scale designed by Chiang et al . The scale consists of three levels: functional (three items), interactive (four items), and critical (five items). Functional eHealth literacy evaluates individuals’ reading and writing skills, as well as their understanding of basic online health information. Interactive eHealth literacy evaluates individuals’ skills and abilities to access information from various forms of social online environments. Critical eHealth literacy evaluates individuals’ abilities to critically evaluate online health information and utilize it to make informed health decisions. The 12-item eHealth literacy scale uses a 5-point Likert scale, ranging from 1 for “strongly disagree” to 5 for “strongly agree.” Each item score varies between 1 and 5, and the total score varies between 12 and 60; a higher score indicates a higher level of eHealth literacy. The Kaiser-Meyer-Olkin measure was 0.907, and Cronbach’s alpha was 0.793, indicating good internal validity and reliability. The health anxiety questionnaire contains four dimensions (lifestyle anxiety, psychological anxiety, physical anxiety, and appearance anxiety), each comprising several yes/no items (0 = No, 1 = Yes). The sum of the item scores within a dimension was used to evaluate that dimension; higher scores indicate more health anxiety. Self-reported health status was measured by one item: “How well do you currently feel with respect to your health status?” (5 = excellent, 4 = good, 3 = neutral, 2 = poor, 1 = very poor). The questionnaire investigation was conducted from March to May 2023.

Data Analysis

Self-reported health was the dependent variable, coded as 5 = excellent, 4 = good, 3 = neutral, 2 = poor, and 1 = very poor. Accordingly, ordinal logistic regression was used to analyze the associations and gender differences among eHealth literacy, health anxiety, and self-reported health for Chinese university students. SPSS v20 (IBM, Armonk, NY, United States) was used for descriptive statistics (mean, standard deviation [SD], frequency, and percentage); chi-squared tests, t-tests, and ordinal logistic regression were used for analysis.

Ethical Aspects

This study was performed in compliance with the Helsinki Declaration guidelines. All procedures relevant to study participants were approved by the Xinyang Normal University ethics committee (XFEC-2023-025). Each voluntary participant was informed of the study objective and context and provided informed consent regarding privacy and information management policies. University students whose survey results indicated problems received additional health care from psychology professors and school nurses to reduce health anxiety.
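To make the scoring rules above concrete, the following is a minimal Python sketch (the original analysis used SPSS, not Python; all column names and example responses here are hypothetical):

```python
import pandas as pd

# Hypothetical responses from two students: 12 eHealth literacy items
# (1-5 Likert) plus a few binary (0/1) health anxiety items.
data = {f"ehl_{i}": [4, 2] for i in range(1, 13)}      # Likert scores 1-5
data.update({
    "lack_exercise": [1, 0],          # lifestyle anxiety items (0 = No, 1 = Yes)
    "insufficient_sleep": [1, 1],
})
df = pd.DataFrame(data)

# Total eHealth literacy score (possible range 12-60); higher = better literacy.
ehl_items = [f"ehl_{i}" for i in range(1, 13)]
df["ehl_total"] = df[ehl_items].sum(axis=1)

# A dimension score is the sum of its yes/no items; higher = more anxiety.
df["lifestyle_anxiety"] = df[["lack_exercise", "insufficient_sleep"]].sum(axis=1)

print(df[["ehl_total", "lifestyle_anxiety"]])
```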
Demographic Characteristics

Out of the 1,205 university student participants , 57.82% of males and 65.79% of females were from rural areas. Among male university students, 48.82% majored in science; among female university students, 43.96% and 43.36% majored in humanities and science, respectively. Comparing males and females, a higher proportion of females than males majored in humanities. Parents of both male and female students generally had a low level of education, e.g., middle school (males: 57.82%, females: 68.01%). University students generally self-reported good health status: among males, 40.28% reported good and 28.44% excellent health; among females, 47.99% reported good and 34.81% neutral health. In summary, there were significant gender differences in all five variables.

Gender Differences in Health Anxiety

In this study, health anxiety was divided into four dimensions, namely lifestyle anxiety, psychological anxiety, physical anxiety, and appearance anxiety. A total of 68% of university students had three or more health anxieties. From , there were significant gender differences in appearance anxiety (P < 0.001). For lifestyle anxiety, there were significant gender differences in lack of exercise (P < 0.001), smoking or second-hand smoke exposure (P < 0.001), and excessive drinking (P < 0.001), but not in insufficient sleep (P = 0.928) or unreasonable diet (P = 0.860). This dimension had the highest proportion of health anxiety among university students. Concerning psychological anxiety, except for overpressure, there were significant gender differences in emotional instability (P = 0.013), anxiousness (P = 0.005), and depression (P = 0.037). For the appearance anxiety dimension, all three items, skin problems (P < 0.001), alopecia (P < 0.001), and obesity (P = 0.042), had significant gender differences. Thus, lifestyle, psychological, and physical anxiety are major health problems faced by university students. Among them, lack of exercise (67.22%), insufficient sleep (61.41%), and emotional instability (41.33%) were the three most prevalent of the 18 health anxieties surveyed, each accounting for over 40%.

Gender Differences in eHealth Literacy

In this study, the eHealth literacy scale consisted of three levels: functional (3 items), interactive (4 items), and critical eHealth literacy (5 items). No significant gender differences were found across these three levels . Specifically, in functional and interactive eHealth literacy, only the second item showed a significant gender difference (P = 0.021). In critical eHealth literacy, significant gender differences were observed, excluding the tenth item (P = 0.481). Analysis of variance revealed significant differences among the means of functional, interactive, and critical eHealth literacy (P < 0.001), but no significant gender differences (P = 0.824). The mean scores indicated that critical eHealth literacy had the highest score, while functional eHealth literacy had the lowest, suggesting that university students’ reading and writing skills, as well as their understanding of basic online health information, still need improvement.

Correlations Between Self-Reported Health, eHealth Literacy and Health Anxiety

To analyze the correlations between self-reported health, eHealth literacy, and health anxiety, Pearson correlation analysis was conducted. As shown in , significant positive correlations were found between eHealth literacy and self-reported health. Additionally, significant negative correlations were observed between self-reported health and the four dimensions of health anxiety: lifestyle anxiety, psychological anxiety, appearance anxiety, and physical anxiety. However, there were no significant correlations between eHealth literacy and health anxiety.

Associations Among Self-Reported Health, eHealth Literacy and Health Anxiety

Ordinal logistic regression analysis showed the associations between eHealth literacy, health anxiety, and self-reported health. For males, eHealth literacy [OR = 0.935, 95% CI = 0.896–0.976, P = 0.002] and appearance anxiety [OR = 1.482, 95% CI = 1.049–2.095, P = 0.026] had significant impacts on self-reported health. For females, eHealth literacy [OR = 0.953, 95% CI = 0.931–0.976, P < 0.001], lifestyle anxiety [OR = 1.331, 95% CI = 1.165–1.521, P < 0.001], psychological anxiety [OR = 1.171, 95% CI = 1.043–1.314, P = 0.007], and physical anxiety [OR = 1.839, 95% CI = 1.442–2.346, P < 0.001] had significant impacts on self-reported health.
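As an illustration of the modeling step, the following is a minimal sketch of an ordinal logistic regression using the OrderedModel class from the statsmodels Python package (the study itself used SPSS); the data are simulated and the variable names hypothetical, so this reproduces only the form of the analysis, not the reported results:

```python
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(0)
n = 500
# Simulated predictors: total eHealth literacy and one anxiety dimension score.
X = pd.DataFrame({
    "ehl_total": rng.integers(12, 61, n),
    "lifestyle_anxiety": rng.integers(0, 6, n),
})
# Simulated ordinal outcome: self-reported health, 1 (very poor) to 5 (excellent).
latent = 0.05 * X["ehl_total"] - 0.4 * X["lifestyle_anxiety"] + rng.normal(0, 1, n)
y = pd.cut(latent, bins=5, labels=False) + 1

res = OrderedModel(y, X, distr="logit").fit(method="bfgs", disp=False)

# Odds ratios with 95% CIs for the two predictors; the remaining
# parameters in res.params are the threshold (cut-point) terms.
k = X.shape[1]
odds_ratios = np.exp(res.params[:k])
ci = np.exp(res.conf_int().iloc[:k])
print(pd.DataFrame({"OR": odds_ratios, "CI_low": ci[0], "CI_high": ci[1]}))
```

Exponentiating the fitted coefficients yields odds ratios comparable in form to those reported above; how an OR above or below 1 is interpreted depends on how the outcome categories are coded.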
Health is a complex and multidimensional concept, making it difficult to accurately measure all dimensions of an individual’s health. In many survey studies, self-reported health is often used to collect health information from respondents. This self-reported questionnaire for university students’ health comprehensively reflects their physical and mental health and is a reliable predictive indicator of health outcomes among young people . Health anxiety refers to excessive concern and anxiety about suffering from serious illnesses based on misunderstandings about bodily sensations or changes . Patients with high health anxiety often exhibit an unhealthier diet and cravings for food , making them prone to overweight and obesity , which can lead to a series of health problems. Compared to patients without health anxiety, those with health anxiety tend to undergo more examinations and utilize more health service resources. In this study, among the survey respondents (n = 1,205), 87.3% of university students reported experiencing multiple forms of health anxiety, with 70% facing three or more types. The severity of health anxiety among university students was ranked as follows: lifestyle anxiety, psychological anxiety, appearance anxiety, and physical anxiety. Unhealthy lifestyles have become the primary factor endangering the health of university students and contributing to mortality . A lack of physical exercise, insufficient sleep, and excessive daytime sleepiness caused by staying up late, along with unhealthy eating habits, ultimately lead to psychological health issues and appearance anxiety, such as insomnia, alopecia, obesity, poor skin, anxiety, and excessive stress . Thus, the unhealthy lifestyle behaviors of university students are the main causes of their health anxiety. If university students do not modify their behaviors through timely interventions, it may result in irreversible consequences in the future. Self-health diagnosis and treatment through online health-seeking behaviors are among the primary methods for addressing health problems for university students. Studies have shown that university students use the Internet more frequently than any other group, which encourages them to seek health information online and promote their well-being . However, the quality of online health information varies, requiring university students to possess adequate eHealth literacy. eHealth literacy has a significant positive impact on the quality of information and the credibility of information sources . A higher level of eHealth literacy is associated with a greater number of information-seeking channels. University students with higher eHealth literacy possess better abilities for information acquisition, evaluation, and utilization, while those with lower eHealth literacy encounter more difficulties in searching for information . Although studies have shown that individuals with a certain level of eHealth literacy are less likely to blindly believe that their health status is threatened when faced with an abundance of online health information, they may still experience negative emotions such as health anxiety. However, this study cannot conclusively prove that higher eHealth literacy correlates with lower health anxiety among university students. 
Those with high eHealth literacy may have strong abilities to query, evaluate, and apply online health information; however, excessive or improper use of this information may lead them to exaggerate or misinterpret physical symptoms, resulting in heightened health concerns and fear of illness, thereby exacerbating anxiety . There are several limitations to this study. First, the research employed a cross-sectional design, which cannot determine causality among the study variables. Second, only self-reported health was used to evaluate university students’ health status, which is a subjective assessment of an individual’s health. Third, this study cannot establish that lower eHealth literacy correlates with higher health anxiety among Chinese university students. The association between these two factors requires further in-depth research through various methods.
A total of 1,205 students voluntarily participated in the survey. The severity levels of health anxiety among university students were ranked as follows: lifestyle anxiety, psychological anxiety, appearance anxiety, and physical anxiety. Significant gender differences were observed in appearance anxiety, but no significant gender differences were found in eHealth literacy. The Pearson correlation analysis and ordinal logistic regression model revealed that eHealth literacy was significantly positively associated with self-reported health, while appearance anxiety was significantly negatively associated with males’ self-reported health. Additionally, lifestyle, psychological, and physical anxiety were significantly negatively associated with females’ self-reported health. The findings suggest that lower eHealth literacy and higher levels of health anxiety are correlated with worse self-reported health. Therefore, it is necessary to develop and implement gender-based interventions to reduce health anxiety among university students in the future. In particular, university students whose survey results indicate health anxiety will require additional health care.
|
Overview of modern genomic tools for diagnosis and precision therapy of childhood solid cancers | 99a22f6f-d64c-4f17-a9d7-69c719e55d79 | 10763706 | Internal Medicine[mh] | Large-scale characterization of pediatric solid cancers occurred following the decoding of the human genome sequence and utilized emergent next-generation sequencing (NGS) instrumentation and corresponding computational analytics. These efforts transformed our understanding of these diseases at the molecular level. While revealing that pediatric cancer genomes have relatively few somatic mutations, a dizzying array of driver alterations was uncovered including those that impact epigenetic and transcriptional programs, lead to copy number alterations, create gene fusion drivers, and confer germline susceptibility. This genomic complexity predicted that pathology-based evaluation of pediatric cancer tissues would require additional molecular assays to fully evaluate the tumor landscape and uncover variation informing disease risk, potential therapeutic response, and outcomes. As our ability to devise and utilize new methods to characterize disease complexity in the research setting has evolved, so has the understanding of how these new data types may contribute to increasingly precise diagnoses and correspondingly to personalized treatment planning. Studies demonstrating such contributions from several different platforms and analytics in the clinical trial setting have been published over the past 18 months, and this review will detail how several assay types are now being incorporated into clinical diagnosis and treatment planning for children with solid tissue malignancies.
Methylation profiling of tumor DNA. Methylation patterns of the human genome in different tissues are unique and may become altered in disease-specific ways within cancer cell genomes. These facts have led to large-scale efforts to catalog the genome-wide methylation patterns of different tumor types from known tissues of origin based on diagnoses from conventional pathology, using arrays of CpG loci genome-wide. The resulting data were used to develop machine learning-based diagnostic classification schemas. These schemas can then be used to evaluate methylation data from any newly assayed tumor DNA, yielding a diagnostic classification and assigning a confidence score that conveys the certainty of the derived classification. Further value from methylation array data analysis includes the evaluation of copy number alterations genome-wide, based on the CpG loci represented on the array and their corresponding chromosomal locations (Fig. a). Similarly, in central nervous system (CNS) malignancies, evaluating O6-methylguanine-DNA methyltransferase ( MGMT ) promoter methylation status from the array is critical to decision-making for the use of temozolomide chemotherapy in high-grade disease. Finally, clustering approaches such as principal components analysis (PCA), t-distributed stochastic neighbor embedding (tSNE), or Uniform Manifold Approximation and Projection (UMAP) can be used to provide a visual cluster-based evaluation of an individual patient sample relative to other prior diagnoses (Fig. b). Retrospective analyses of large CNS cancer cohorts have demonstrated the robustness of methylation-based classifier approaches and their ability to provide more precise diagnostic information in the setting of indeterminate diagnoses, to the extent that the 2021 WHO guidelines for diagnosis of CNS malignancies included methylation-based classification within the standard of diagnosis . Recently, a large prospective trial of methylation array-based classification was reported for pediatric patients with CNS malignancies, further demonstrating improved precision in sub-group classification (e.g., higher granularity among similarly subtyped classes of CNS cancers) and linking new sub-group classifications to outcomes [ ▪▪ ]. Similar approaches have been demonstrated to classify sarcomas of different types using retrospective cohorts , as also was reported for neuroblastomas . Although these classification schemas have not yet been included in WHO-guided diagnostic criteria, this is likely in the near future.
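The published classifiers are trained on large reference cohorts and are far more sophisticated than can be shown here, but the general pattern (a supervised classifier over CpG beta values that returns a class call with a score, plus a low-dimensional embedding for visual review) can be sketched as follows; the data, tumor classes, and probe counts below are simulated and purely illustrative:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
n_ref, n_cpg = 300, 2000                 # reference tumors x CpG probes (toy scale)
beta = rng.beta(0.5, 0.5, size=(n_ref, n_cpg))   # methylation beta values in [0, 1]
labels = rng.choice(
    ["medulloblastoma_G3", "ependymoma_PFA", "pilocytic_astrocytoma"], n_ref
)

# Supervised classifier trained on the reference cohort; class probabilities
# stand in (crudely) for a calibrated classifier confidence score.
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(beta, labels)

new_case = rng.beta(0.5, 0.5, size=(1, n_cpg))   # a newly assayed tumor
proba = clf.predict_proba(new_case)[0]
best = int(proba.argmax())
print(f"call: {clf.classes_[best]} (score {proba[best]:.2f})")

# 2-D embedding (PCA here; tSNE/UMAP are common in practice) to place the
# new case visually among the reference diagnoses.
coords = PCA(n_components=2).fit(beta)
print("new case coordinates:", coords.transform(new_case)[0])
```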
Tumor RNA characterization. RNA isolated from solid tumors and sequenced using NGS methods provides a rich source of information that can be evaluated by multiple analytic pipelines. Clinically, this has been limited to the identification of gene fusions, to verifying the predicted impacts of splice-site mutations on alternatively spliced transcripts, and to correlating over-expression with amplified copy number, or absence of expression due to deleted genes or nonsense-mediated decay. Databasing of over 12 000 RNAseq expression profiles from pediatric cancers has greatly aided diagnosis by virtue of online tools and data display such as those hosted at the Treehouse Childhood Cancer Initiative ( https://treehousegenomics.ucsc.edu ). An example of this type of comparison is shown in Fig. , wherein a single sample of indeterminate diagnosis is localized, by virtue of its RNA expression profile, in proximity to clustered profiles of samples with known diagnoses through PCA. Recently, Shlein et al. [ ▪▪ ] reported an intriguing method based on RNAseq data to molecularly define most childhood cancers and accurately predict subgroups and corresponding outcomes. Their methods measured transcriptional entropy and demonstrated significant diversity both between and within tumor types, in contrast to the relatively quiet genomic DNA landscape of most pediatric cancers. They then leveraged this transcriptional variability to improve diagnosis based on a clustering approach that performed unsupervised classification of RNAseq data to produce an atlas of 455 tumor and normal tissue classes based on gene expression similarity. An ensemble of convolutional neural networks was designed to robustly classify data from newly studied tumors, and applying this classifier refined the diagnosis in 7% of the cases examined. Further clinical validation of such an approach could establish a unique diagnostic approach to classification that could be applied more broadly to identify gene fusions, alternatively spliced transcripts, and outlier expression from a single dataset.
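A minimal sketch of the expression-based comparison described above (projecting a new profile into a reference space and inspecting its nearest neighbors) might look as follows; the reference compendium, diagnoses, and expression values are simulated, and real comparisons typically operate on batch-corrected, log-transformed expression across thousands of samples:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
n_ref, n_genes = 500, 5000
# Reference compendium: rows = tumors with known diagnoses,
# values standing in for log2(TPM + 1) expression.
ref_expr = rng.normal(5.0, 2.0, size=(n_ref, n_genes))
ref_dx = rng.choice(["Ewing_sarcoma", "osteosarcoma", "rhabdomyosarcoma"], n_ref)

pca = PCA(n_components=2).fit(ref_expr)
ref_coords = pca.transform(ref_expr)

# Project the undiagnosed sample into the same space and tally the
# diagnoses of its nearest reference neighbors.
new_expr = rng.normal(5.0, 2.0, size=(1, n_genes))
new_coord = pca.transform(new_expr)
dist = np.linalg.norm(ref_coords - new_coord, axis=1)
nearest_dx = ref_dx[np.argsort(dist)[:10]]
print(dict(zip(*np.unique(nearest_dx, return_counts=True))))
```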
In this project, cancer patients from birth to 25 years of age, diagnosed with CNS cancers, soft tissue sarcomas or a collection of rare cancer types at hospitals affiliated with the Children's Oncology Group (COG), receive comprehensive clinical molecular profiling (methylation array, fusion panel testing, tumor vs. normal exome testing) and return of results within 21 days of receipt of tumor and blood samples. Additional cancer types will be eligible for participation over the next 4 years of this 5-year project ( https://www.cancer.gov/research/areas/childhood/childhood-cancer-data-initiative/data-ecosystem/molecular-characterization ). Importantly, all de-identified data and results are actively being deposited into the Childhood Cancer Data Initiative. Germline susceptibility. One example of an NGS finding is the identification of germline-based cancer susceptibility, present in over 10% of all pediatric cancers, but ranging up to 15% in specific tissue site diagnoses. Knowledge of inherited or de novo cancer susceptibility has logical impacts on cancer survivorship care and reflex testing within family members, but more recently has been studied in the setting of cancer treatment with immune checkpoint blockade inhibitor (ICBI) therapies [ ▪▪ , ▪▪ ]. The results of clinical trials in pediatric and AYA patients with high or ultra-high tumor mutational burden (TMB) due to constitutional mismatch repair deficiency (CMMRD) or Lynch Syndrome-associated solid cancers being treated with anti-PD1 ICBI monotherapy or combined therapies (anti-CTLA4 plus anti-PD1 or MEK inhibition plus anti-PD1) have demonstrated durable responses in >50% of patients within an admittedly rare subpopulation of pediatric cancers typically having dire outcomes. Importantly, high TMB (>5 mutations/Mb) and/or a high microsatellite instability (MSI) index are strong predictors of response, as are blood-measurable immune parameters such as the level of 4-1BB positive CD8 T cells and elevated TCR clonal diversity [ ▪▪ ]. Somatic indicators of therapy response. Due to the types of variants identifiable from NGS-based testing, and the increasing numbers of FDA- and/or EMA-approved targeted therapies, genomic profiling information can inform cancer treatment decision-making. One such impact is the identification of tumor-specific alterations to known cancer driver genes for which there are existing targeted therapies or agents under investigation in clinical trials. However, despite large-scale characterization of pediatric cancer genomes, there are typically variants identified in cancer-related genes for which no known functional impact on cancer onset or progression can be discerned. This reality applies even to the most frequently mutated genes in cancer such as TP53, although exciting new approaches to contextualizing variants in this gene have been recently published. 1. Protein-targeted therapies. One paradigmatic shift brought about by genomic profiling of adult cancers has been the emergence of small molecule and antibody-based therapies that are highly specific for driver genes. This shift has become manifest in the clinical offering of gene panel testing using DNA derived from cancer samples (needle biopsy or resection) that can identify whether known pathogenic driver mutations in genes with one or more FDA-approved targeted therapies are present.
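To make the TMB criterion cited above concrete, the sketch below shows the arithmetic behind a mutations-per-megabase call against the >5 mutations/Mb threshold; the function names, the 30 Mb territory, and the mutation count are illustrative assumptions, not values from any of the trials discussed here.

```python
# Minimal sketch, assuming a somatic variant count and a callable territory
# size are already available from an NGS pipeline; names are illustrative.

def tmb_per_megabase(somatic_mutation_count: int, covered_bases: int) -> float:
    """Return tumor mutational burden as mutations per megabase."""
    return somatic_mutation_count / (covered_bases / 1_000_000)

def is_high_tmb(tmb: float, threshold: float = 5.0) -> bool:
    """Apply the >5 mutations/Mb cut-off discussed in the text."""
    return tmb > threshold

# Hypothetical case: 240 somatic mutations over a 30 Mb exome -> 8.0 mut/Mb.
tmb = tmb_per_megabase(240, 30_000_000)
print(f"TMB = {tmb:.1f} mutations/Mb; high TMB: {is_high_tmb(tmb)}")
```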
Importantly, the discovery of new driver genes and alterations from adult cancer genomics has resulted in a plethora of new therapies that directly target the resulting altered proteins with reduced adverse effect profiles and result in emerging standard-of-care treatments through successful clinical trials. Unfortunately, the overlap in drivers between adult and pediatric cancers is small, and hence the benefit of genomic testing to predict targeted therapy response in pediatric cancers suffers from this deficit. Furthermore, since driver alterations are of variable types across pediatric diagnoses, a simple gene panel test may not be capable of detecting all types of alterations, especially copy number variants or fusion genes. 2. Prognostic markers of risk. Recognizing the lifelong impact of aggressive chemo- and radiotherapy treatment in the pediatric setting, efforts to investigate dose reduction in lower-risk outcome subtypes have been pursued through clinical trials over the past decade. In trials where these risk predictors were identified as relevant to determining dose reduction and improved long-term sequelae and quality of life, they have been implemented into the diagnostic rubric to determine care. Hence, it is important that NGS-based DNA profiling assays produce clinically relevant information about prognostically relevant amplification and deletion status genome-wide. Multiassay testing regimens have been slow to develop; yet, in settings where these have been pursued, there is clear clinical benefit to pediatric patients from the aspects of (i) identifying targeted therapy options for known driver alterations, (ii) detecting germline susceptibility, which can be interpreted clinically in multiple ways, and (iii) identifying focal, arm-level or whole chromosome copy number alterations, which may provide established prognostic information from clinical trial-based results and indicate the modification of treatment aggressiveness accordingly. 3. Immune checkpoint blockade inhibitor therapies. The development of immune therapies that inhibit various immune checkpoints has transformed cancer care in the adult setting for high mutation-load tumor types such as smoking-associated lung cancers, POLE-mutated endometrial cancers, and MSI-high colorectal cancers, among others. Adult clinical trials of these agents have been the first to achieve FDA approval in a tissue-agnostic setting, by enrollment and efficacy demonstration in the setting of high or ultra-high TMB regardless of tissue of origin. In the pediatric setting, however, high and ultra-high mutation loads are certainly the exception but do occur in those with constitutional mismatch repair deficiency (CMMRD), Lynch syndrome and Li-Fraumeni syndrome. Recently reported clinical trials of these agents, as cited above, have resulted in durable complete and partial responses in these patients regardless of tissue site. "Functional genomics": therapy response evaluations. Despite the broader implementation of multiomic methods in characterizing pediatric cancers, identifying appropriate treatment is not always clear. For example, many fusion drivers including transcription factor fusions lack any targeted therapies, whereas somatically "quiet" DNA profiles may offer no clues as to likely response to any type of therapy.
In this regard, the use of rapid functional testing of therapeutic response, obtained by exposing tumor cells to a panel of chemotherapies and small molecule inhibitors, is emerging as a clinical approach that indicates likely response. One such approach using an ex-vivo drug sensitivity profiling (DSP) assay was described recently [ ▪▪ ]. Here, spheroid cultures from fresh tumor tissues were grown over a 3-week period in 3D culture conditions in 384-well plates prespotted with 75–78 chemotherapies and small molecule inhibitors. Patients on this study were simultaneously profiled by gene panel testing of tumor and normal DNA, by tumor RNAseq and phospho-proteomic mass spectrometry, as well as by methylation array classification. Study results demonstrated that ex-vivo DSP produced the same drug targets as molecular profiling. Importantly, drug vulnerabilities were identified in 80% of cases lacking actionable (very) high-evidence molecular events, adding value to the molecular data.
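As a rough illustration of how such a plate readout might be summarized, the sketch below ranks drugs by a simple sensitivity score derived from control-normalized viability; the scoring rule, names, and numbers are assumptions for illustration, not the published assay's analysis.

```python
# Minimal sketch, assuming each well yields a viability reading normalized to
# untreated controls (1.0 = no effect); score = 1 - mean viability, so higher
# scores indicate stronger ex-vivo kill. All names and values are illustrative.
from statistics import mean

def sensitivity_scores(plate: dict[str, list[float]]) -> dict[str, float]:
    """Map each drug to a sensitivity score in [0, 1] across replicate wells."""
    return {drug: 1.0 - mean(wells) for drug, wells in plate.items()}

plate = {
    "drug_A": [0.20, 0.25, 0.15],  # strong kill across replicates
    "drug_B": [0.95, 0.90, 1.00],  # essentially inactive
}
for drug, score in sorted(sensitivity_scores(plate).items(),
                          key=lambda kv: kv[1], reverse=True):
    print(f"{drug}: sensitivity {score:.2f}")
```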
Multiple prospective studies now support the hypothesis that combined clinical testing of DNA and RNA from pediatric solid cancers refines diagnosis, identifies therapeutic vulnerabilities, and uncovers germline susceptibility when present. These data combine with conventional pathology-based evidence to render a precision diagnosis and, when the data are shared, may benefit other patients and support future discoveries. Significant barriers to progress remain, however. These include (i) the inability to interpret novel variants or variants of uncertain significance and their contribution to cancer development, prognosis or therapy selection, (ii) the lack of definitive treatment guidance from genomic testing results, and (iii) the lack of driver-specific targeted therapies for unique pediatric cancer drivers. The first barrier is currently being approached by the Atlas of Variant Effects Alliance ( https://www.varianteffect.org/ ), a consortium that is systematically mutating sites across many genes and evaluating their impact via functional readouts, thereby creating comprehensive variant effect maps of human genes. A more focused approach might prioritize which of the most frequently mutated positions in cancer-relevant genes should be functionally characterized, using AI-based protein structure-function prediction (for example, AlphaFold2) to nominate candidates for further study. The second barrier could be addressed using real-time functional screening approaches such as the ex-vivo screening of disrupted tumor cells cited above, although this is limited to screening only available therapies. When these assays are performed in the context of broad multiomic testing, and the indicated therapy or therapies are used to treat the patient, the possibility of developing artificial intelligence-based predictive methods from large databases of treated patients with known outcomes holds significant promise for automating these predictions. Addressing the third barrier requires identifying suitable medicinal chemistry approaches to design therapies, especially for transcription factor fusions and epigenetic drivers as well as undruggable targets (MYC, for example). This important effort should be encouraged by making funding available to support the pursuit of novel concepts. Downstream of encouraging preliminary data, the engagement of pharmaceutical and biotechnology companies alongside pediatric-focused cooperative groups will be needed to support the clinical trials to test these novel therapies.
The author wishes to acknowledge the Nationwide Foundation Innovation Fund for its support of the Rasmussen Institute for Genomic Medicine. Financial support and sponsorship None. Conflicts of interest There are no conflicts of interest.
Early death after palliative radiation treatment: 30-, 35- and 40-day mortality data and statistically robust predictors

Topics such as value-based care, quality-of-care indicators, cost-effectiveness and overtreatment have received considerable attention in the oncological literature. Special consideration is necessary in the palliative and terminal phase of anti-cancer treatment, where the mismatch between side effects, cost and other disadvantages of interventions on the one hand and expected benefit on the other should be minimized. Among factors to consider are an intervention's aim, e.g., life-prolonging versus symptom-directed, and time-frame aspects such as remaining life time and duration of treatment. Palliative radiotherapy is among the most effective and cost-effective interventions and can be tailored to individual patients' needs and preferences. Extreme hypofractionation cuts treatment duration into a fraction of what is needed to complete traditional regimens, e.g. 3 Gy × 10. As suggested by a recent large meta-analysis, there is room for improvement of physicians' prescription habits or ability to decipher prognosis, because the authors found that 16% of patients with advanced cancer receiving palliative radiotherapy died within 30 days of treatment. In other words, the remaining life time with, e.g., reduced pain, if this was the goal of treatment, may have been too short to outweigh the burden or side effects of radiotherapy in a proportion of patients. Typically, decision regret analyses are not performed near the end of life, and it is thus difficult to estimate how many patients would have consented to radiotherapy in the final phase of cancer progression, had they been able to judge outcomes in advance. There are different ways of measuring radiotherapy utilization near the end of life, e.g. 30-day mortality calculated from start of treatment, 30-day mortality calculated from end of treatment, or treatment in the last 30 days of life. In addition, one might be tempted to ask why radiotherapy performed, e.g., in the last 28 days of life is fundamentally different from treatment one or two weeks later. Does the arbitrary 30-day cut-off represent a sound definition, because the early death rate is highest, e.g., at 20–30 days and patients living beyond that mark often survive for another 2–3 months? Or is death a continuous event necessitating a broader evaluation of alternative time frames? In principle, a peak might exist just outside the 30-day time period. These considerations and open questions led us to study death rates and predictors of 30-, 35- and 40-day mortality in an already established database with many baseline parameters that are lacking in large registries such as the National Cancer Database (NCDB) or the Surveillance, Epidemiology, and End Results (SEER) program.
Our single-institution database (2014–2019) includes 219 consecutive patients with bone metastases managed with standard palliative external beam radiotherapy regimens such as a single fraction of 8 Gy, 5 fractions of 4 Gy or 10 fractions of 3 Gy (3-D conformal or intensity-modulated; no stereotactic ablative body radiotherapy). Fractionation was at the discretion of the treating oncologist. Additional lesions were treated as indicated, e.g., soft tissue or lung metastases. In other words, a proportion of patients received radiotherapy to several target volumes at the same time. Interrupted or permanently discontinued radiotherapy series were included to comply with the intention-to-treat principle. Standard-of-care systemic anticancer treatment was given as indicated (tailored to organ function, frailty etc.). Patients who returned for a new treatment course (re-irradiation or new target volume) in the time period of the study were counted twice, resulting in a total number of 287 evaluable treatment courses. In returning patients, actual blood test results, imaging reports, Karnofsky performance status (KPS), weight and other baseline data, as well as survival, were registered for each individual treatment course. Imaging and blood tests were part of the standard oncological assessment and were typically obtained no more than 3 weeks before radiotherapy. Most patients had blood tests taken on the day of treatment planning. All blood test results were dichotomized (normal/abnormal) according to the institutional upper and lower limits of normal. The review board-approved database is regularly updated for survival and has been utilized for different quality-of-care projects before. Overall survival (time to death) from the first day of radiotherapy was calculated employing the Kaplan–Meier method for all 287 treatment courses (SPSS 28, IBM Corp., Armonk, NY, USA). In 27 cases, survival was censored after a median of 36 months of follow-up (minimum 28 months). Outcomes of interest (30-, 35-, 40-day mortality from start; death within 30 days of the last radiation treatment) were dichotomized (alive/dead), and the chi-square test (2-sided) was utilized for further analyses. A multinomial logistic regression analysis was also employed. P-values ≤ 0.05 were considered statistically significant. The methods employed by Rades et al. were utilized to calculate a point sum reflective of 30-day mortality. For example, a risk factor associated with 50% 30-day mortality was assigned 5 points, while 3 points were assigned for a factor associated with 30% 30-day mortality.
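A minimal sketch of this point-assignment rule, in which each factor contributes its observed 30-day mortality percentage divided by 10 (so 50% maps to 5 points and 30% to 3 points); the factor names and percentages in the example are placeholders, not the study's tabulated values.

```python
# Minimal sketch of the Rades-style point sum described above; illustrative only.

def points_for_factor(mortality_percent: float) -> int:
    """Convert a factor's observed 30-day mortality (%) into score points."""
    return round(mortality_percent / 10)  # 50% -> 5 points, 30% -> 3 points

def point_sum(factors_present: dict[str, float]) -> int:
    """Sum points over all risk factors present in an individual patient."""
    return sum(points_for_factor(pct) for pct in factors_present.values())

# Hypothetical patient with two risk factors present:
patient = {"poor performance status": 50.0, "pleural effusion": 30.0}
print(point_sum(patient))  # -> 8 points
```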
Regarding all 287 treatment courses, 42 (15%) took place in the last month of life. Mortality from the start of radiotherapy was 13% (30-day), 15% (35-day) and 18% (40-day), respectively. As indicated in Fig. , the 30-day landmark is not particularly representative of early death. Death rates were lower in the first 15 days and increased between days 16 and 45. None of the 5-day intervals can be characterized as an outlier. Median actuarial overall survival was 6 months (1-year rate 32%). Table describes the patient-, tumor- and treatment-related baseline characteristics. The impact of all these baseline characteristics on 30-, 35- and 40-day mortality was examined, and Table shows that a large number of significant correlations was identified in univariate analyses. All predictors of 30-day mortality were also associated with both 35- and 40-day mortality. Predictors with p ≤ 0.05 in univariate analyses were included in multinomial logistic regression analyses. The analysis for 30-day mortality confirmed KPS (≤50 with hazard ratio (HR) 3.7 and 60–70 with HR 1.8, p < 0.001), weight loss (HR 1.8, p = 0.01) and presence of pleural effusion (HR 7.5, p = 0.006) as independent predictors, whereas, e.g., cancer type, blood test results and treatment-related parameters lost their significance. All three significant predictors of 30-day mortality maintained their impact in an exploratory analysis of 40-day mortality with p = 0.001–0.003. Additional predictors emerged, albeit with different p-values. These included adrenal gland metastases (p = 0.02), progressive disease outside of the irradiated region(s) (p = 0.03), and serum creatinine (normal versus abnormal, p = 0.02). Interestingly, all three additional predictors were also identified in the earlier analyses displayed in Table (bold text), because of a disproportionate increase in % mortality over time. Finally, the three significant predictors of 30-day mortality were employed to construct a predictive model based on the methodology developed by Rades et al. Table shows how the point sum can be calculated, and Fig. displays the corresponding 30-day mortality rates of 0–75%.
This study compared death rates during different time intervals in the early phase after radiotherapy and identified variables that impact on, e.g., 30-day mortality. Early death was not limited to the first 30 days after the start of radiotherapy. Relatively similar death rates were seen between days 16 and 45. Focusing on 30-day mortality, a widely used endpoint in the literature (radiotherapy and other approaches), is thus an arbitrary decision (some sort of cut-off is needed) and not necessarily data-driven, as shown in the present example. Furthermore, palliative radiotherapy is not normally associated with procedure-related mortality, in contrast to, e.g., surgery. The present results also demonstrate that mortality rates depend on the method of evaluation. 30-day mortality from the start of treatment was 13%, while 15% of courses were administered in the last 30 days of life. A modest increase of the cut-off, from 30 to 40 days from the start of radiotherapy, increased the rate from 13 to 18%. In a recent large meta-analysis, 16% of patients with advanced cancer who had received palliative radiotherapy died within 30 days of treatment. In contrast to several previous studies, the present one included an unusually large number of baseline parameters, both traditional predictors of survival such as KPS, and less well-studied variables such as presence of pleural effusion and numerous blood test results. All predictors of 30-day mortality were also associated with both 35- and 40-day mortality, and thus robust. Nevertheless, with increasing time interval and number of events (higher death rate after 40 days), additional predictors of early death emerged, albeit with clearly different p-values. These dynamics suggest that an increasing number of co-variates impact on death rates in analyses that cover a longer time period. KPS, weight loss and pleural effusion maintained their highly significant role and were therefore employed to construct a predictive model, which performed well (Fig. ). Pleural effusion, which was observed, e.g., in patients with lung, breast and prostate cancer, was not necessarily symptomatic and did not always necessitate intervention. Our study did not include patient-reported dyspnea, which in previous studies was associated with poor prognosis. These two factors might be interrelated, an issue that can only be clarified in prospective studies. The present results are in line with numerous prognostic models that include KPS as a main and indisputable driver of poor prognosis. However, additional factors are important to fully elucidate the likelihood of survival at different time points. Their role requires further study in larger databases. Besides the number of patients, limitations of the present work include its retrospective single-institution design and selection bias, because a proportion of poor-prognosis patients referred to palliative radiotherapy might die before the planned start. On the other hand, the study cohort represents a real-world patient population of often elderly patients with highly variable disease burden and survival. Furthermore, we had access to a broad set of baseline parameters and were therefore able to extend the knowledge provided by previous, otherwise similar studies. Patients with brain metastases, a small subgroup in the present study, might represent a special population, if treated to the brain rather than skeletal metastases after previous brain-directed therapy.
Our group’s previous work resulted in different predictors of 30-day mortality after treatment for brain metastases (n = 100 patients) than those identified in the present bone metastases study, e.g., number of brain metastases and primary tumor control. Despite progress in prognostic stratification, survival predictions in oncology tend to be overly optimistic . Not all patients initially thought to represent suitable candidates for radiotherapy are able to complete their treatment. In a recent study by Vázquez et al., 30-day mortality after palliative radiotherapy was 17.5% . In the multivariate analysis, male gender, ECOG PS 2–3, gastrointestinal and lung cancer were found to be independent factors related to this endpoint. Weight loss and other parameters available in the present study were not included. The large meta-analysis by Kutzko et al. identified multiple treatment sites, hepatobiliary primary, inpatient status, and ECOG PS 3–4 as predictors of 30-day mortality . In contrast to these results, Wu et al. performed a multivariate analysis suggesting that breast or prostate primary tumor, ECOG PS, body mass index, liver metastases, more than 5 active metastases (dichotomized, radiographically identified), albumin level, and hospitalization within 3 months of radiotherapy consult were associated with 30-day survival . Harmonization efforts and cooperation are needed to arrive at generally accepted and widely implemented predictive models, or a single consensus model. So far, it seems that PS and primary tumor type are common and well-established predictors, while contradictory results were obtained for other variables. Ideally, prospective comparisons should be attempted to clarify the role of potentially redundant variables such as patient-reported dyspnea, radiological presence of lung metastases or pleural effusion, and blood test results such as anemia, which might impact on dyspnea. Patients at high risk of early death should preferably be managed with single-fraction radiotherapy for bone metastases , if they prefer radiotherapy over other palliative and supportive measures aiming at pain control. Even patients with longer survival can often achieve satisfactory pain control with such simple treatment, if uncomplicated bone metastases are present, and sometimes additional re-irradiation is able to “boost” and prolong the effect of initial treatment. Special scenarios such as impending fractures, post-operative radiotherapy, large extra-osseous infiltration or ablation of oligometastases require thorough assessment of advantages and disadvantages of prolonged courses of radiotherapy or stereotactic body radiotherapy.
Early death was not limited to the first 30 days after start of palliative radiotherapy for bone metastases. For different cut-off points (30-, 35-, 40-day mortality), similar predictive factors emerged. A model based on three robust predictors was developed, which is easily applicable in clinical practice. External validation by other institutions is warranted.
Implementations and strategies of telehealth during COVID-19 outbreak: a systematic review

During this pandemic, healthcare organizations developed appropriate traits of flexibility and innovation to deal with institutional pressures. The coronavirus disease 2019 (COVID-19) pandemic imposed the need for social distancing and also interrupted hospital services. In response to this, innovations using information technologies were widely adopted within healthcare organizations. Telehealth is a complex digital innovation that involves various stakeholders, across professional and organizational boundaries, with a multidisciplinary approach to deliver health care services to patients. Telehealth is the IT-enabled provision of medical services without in-person interactions between physicians and patients. Through remote monitoring of patients, telehealth works as a preventative measure to avoid emergency department and hospital admissions and reduce costs by enabling a fast and accurate response to patients' needs. Indeed, while doctors take care of patients, the monitoring can be delegated to nurses or even to the patients themselves. Telemedicine proved to be an effective strategy during the pandemic, allowing the patient to connect in real time with health care providers despite the need for social distancing. Thus, this review aims to systematically characterize the utilization of telehealth and its applications during the COVID-19 pandemic, focusing mainly on technology implementations.
This study was conducted in accordance with the Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA). A systematic search of the literature in the ScienceDirect, IEEE Xplore, Scopus and Web of Science databases was performed from January 2020 until July 2021. The following keywords were used: ('Telehealth' OR 'e-health' OR 'Telecare' OR 'Telehealth' OR 'remote monitoring' OR 'mHealth' OR 'Medical system' OR 'health care service' OR 'Telemedicine') AND (Disease OR Infection OR Virus OR Epidemic OR Outbreak OR Pandemic OR COVID-19 OR COVID-19 OR SARS-COV-2). Limited data existed on telehealth applications in COVID-19, given the recent onset of the pandemic. To collect all existing evidence on this topic, we planned to include primary studies such as RCTs, prospective cohort studies, retrospective studies and all kinds of reviews published in English on technology implementation for telehealth in COVID-19 and non-COVID-19 patients. Conference papers and articles not in English were excluded.
Data were independently extracted from each study by two authors (MV and SDS) using a data recording form developed for this purpose. Two pairs of independent reviewers performed the initial selection to screen titles and abstracts (MV, SDS). For detailed evaluation, a full-text copy of relevant studies was obtained. Using a pre-standardized data extraction form, paired reviewers (MV, SDS) extracted the data from each study. Title, year, type of study, setting, aim, strategy/type of telehealth, personnel involved, outcomes and main findings of included studies were considered data of interest for this systematic review. Two reviewers (MF, GS) checked the accuracy of the extracted data and further evaluated the quality of the included studies. The Critical Appraisal Skills Programme (CASP) checklist was used for quality assessment; it includes 11 criteria to ensure the quality of the included studies. Each assessed criterion could be assigned a quality score of 0 for 'does not meet', 0.5 for 'partially meets' and 1 for 'fully meets'. The total quality score of each article therefore ranges from 0 to 11. Accordingly, a high score signifies a high-quality article. Any disagreement on data extraction or quality assessment was resolved through consultation with an external reviewer, if needed. For the purpose of quantitative analysis, we planned to collect the number of visits and possible quantitative outcomes reported by the included studies.
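A minimal sketch of this aggregation, assuming the 11 per-criterion ratings are recorded as 0, 0.5 or 1; the example ratings are illustrative only.

```python
# Minimal sketch of the 0-11 CASP quality score described above.
ALLOWED = {0.0, 0.5, 1.0}

def casp_total(ratings: list[float]) -> float:
    """Sum 11 per-criterion ratings (0 / 0.5 / 1) into a 0-11 total."""
    if len(ratings) != 11 or any(r not in ALLOWED for r in ratings):
        raise ValueError("expected 11 ratings, each 0, 0.5 or 1")
    return sum(ratings)

# An article fully meeting 8 criteria and partially meeting 3 scores 9.5,
# matching the highest totals reported in the results below.
print(casp_total([1, 1, 1, 1, 1, 1, 1, 1, 0.5, 0.5, 0.5]))  # -> 9.5
```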
A total of 6567 records were identified across the different databases. After the screening process, 14 articles related to technology, telehealth, and COVID-19 were included (Fig. ). During the quality evaluation process, three studies reached a score of 9.5 points, three studies 9 points, three studies 8.5 points, and five studies reached a score ≤7 points (Table ). Figure summarizes the categories of telehealth evaluated in the included studies. Six studies focused on the implementation of technology for telehealth. Berg et al., Saleem et al., Goenka et al., Hron et al., and Strol et al. discussed the usefulness of telehealth during COVID-19 in different medical specialties, such as pediatric gastroenterology, ophthalmology, radiation oncology, inpatient clinics and laryngology. Berg et al. found that telehealth may improve clinical outcomes in children with inflammatory bowel disease. Saleem et al. reported the implementation of a workflow diagram that maps the ophthalmology telehealth visit process, with the aim of adapting it for daily evaluation. Goenka et al. found that 2-way audio telehealth visits were associated with lower billing codes compared with in-person visits. Hron et al. reported that hosting telehealth for 1820 inpatients, totaling 104 647 min, was sufficient to build rapport and to perform a reasonable clinical examination. Strol et al. discussed the key areas for implementing telehealth visits in a tertiary-care laryngology practice. They stated that the key areas were the set-up of the visit, patient examination and treatment, optimization of the tele-visit, limitations of the tele-visit and reimbursement considerations. Franciosi et al. reported that telehealth is an essential tool with the potential to improve access to health care, particularly in nonprocedural specialties. The authors showed the potential shortcomings of telemedicine services for non-English speaking patients and the increased number of telehealth visits for nonsurgical specialties. Cassar et al. reported the experience of using a team called the 'community COVID-19 initial assessment team' in managing COVID-19 patients. They found that the use of telehealth visits did not increase the morbidity and mortality of infected patients. Three studies focused on service satisfaction. Gentry et al. showed the high satisfaction, acceptability, feasibility and appropriateness reported by mental health clinicians while using video telehealth visits. Smith et al. highlighted the positive attitude of women who underwent a fetal ultrasound telemedicine service and the consequent reduction in family costs and journey times. Checcucci et al. reported the high appreciation of patients suffering from benign urological diseases, who referred to the phone-call visit (phone counselling) as a useful telemedicine tool. Two studies provided guidelines to healthcare workers. Harris et al. reported systematic protocols for telehealth intervention in post-acute and long-term care facility residents in order to reduce mortality and hospitalization rates. Basil et al. highlighted the effectiveness of telehealth visits by reporting conversion to an in-person visit for only 26 out of 2157 telehealth visits. The authors provided guidelines to perform and standardize telehealth for neurological examination.
One study focused on technology by discussing the strategic role of telehealth in managing the COVID-19 pandemic to relieve congested health-care facilities and avoid the risk of further infection. The author reported the effectiveness of a 3-T model, that is, tracking, testing and treating, to defeat the spread of COVID-19. One study highlighted medical training. In particular, Cerqueira-Silva et al. described a strategy combining telehealth and medical training to mitigate the adverse effects of the COVID-19 pandemic. Patients staying at home received guidance to avoid disease transmission and reduce the spread of the pandemic. Table summarizes the study design, setting, aim, type of telehealth strategy used, personnel involved and outcome/main finding of the included studies. For the quantitative purpose, we were able to identify the number of telehealth visits performed by each study (Table, supplementary materials).
Three categories of telehealth can be identified in the current literature: 1) telehealth visits, a medical visit using audio and visual telecommunications; 2) virtual check-ins, a brief communication using telephone, an audiovisual application, secure text messaging, e-mail, or a patient portal; and 3) e-visits through an online patient portal. Telehealth allows health care professionals to ask specific questions, collect required information, triage patients, and provide consultation while the patient is at home. An interesting element emerging from this review is the large estimated number of telehealth visits across different specialties. Ten articles reported the number of telehealth visits performed during the study periods, for a total of 176,414 medical consultations. The studies included in this systematic review demonstrated the expansion of telemedicine across all medical specialties in many countries in response to a unique and sudden need for virtual medical visits created by the COVID-19 pandemic. Our findings, in line with the literature, showed that nonsurgical specialties have the greatest number of telehealth visits. Telehealth may add potential benefit in non-emergency/routine areas and in services not requiring in-person patient-doctor interaction. In addition, during the COVID-19 pandemic, telehealth may have had the potential role of delivering health care services for underserved populations by eliminating barriers such as transportation needs, distance from specialty providers, and time off from work. Telemedicine may also improve health care delivery by substituting in-person care. Remote care reduces the use of different resources in health centers and improves access to care while minimizing the risk of direct transmission of the infectious agent from person to person. Most of the included studies showed the efficacy of telehealth systems in drastically reducing the amount of time spent in the room with the patient per day, since some portions of the physical exam were performed remotely. Patients and families appreciated minimizing contact with health care providers during a frightening time, and clinicians showed positive attitudes toward the implementation of telehealth visits, as well as a strong interest in continuing this modality as a significant portion of clinical practice. Telehealth is a promising tool that may modernize traditional in-person clinical practice and inspire alternative ways of organizing or governing the economic activity of health care. According to our findings, telehealth visits are suitable for follow-up visits after patients have already seen the doctor, examination of easy-to-see areas such as the eyes or skin, counseling and other mental health services, prescription refills, and monitoring chronic conditions like diabetes or asthma. On the other hand, in-person visits are better for the first visit, clinical evaluations that need a hands-on approach, blood tests, X-rays, and other imaging tests. While a clinical history may be taken in person or by telehealth, physical examination, instrumental evaluation, and laboratory findings are far from being included in a remote visit. With these premises, we tried to identify a model guiding the use of telemedicine, setting out which phases of the diagnostic process should be done in person and which could rely on telehealth (Table ). During the COVID-19 pandemic, telehealth had the aim of screening for infected people, overseeing affected subjects, and ensuring continuity of care for chronically ill patients.
However, as reported by this review, the use of telemedicine was not a homogeneous process. This was due to differences in the awareness of the importance of telemedicine, variability in the quality of the infrastructures, the level of informatics literacy of healthcare professionals and patients, and reimbursement schemes and plans. Nevertheless, the experience collected during the COVID-19 pandemic may help to develop a more coordinated general strategy to favor the implementation of telehealth at a large scale in healthcare systems. In our opinion, achieving this goal will help healthcare systems prepare for future pandemics and develop virtual hospitals, home-based but telehealth-assisted, that may reduce the burden on conventional hospitals.
This systematic review provides an illustrative insight into the implementation of telehealth for different purposes. Telehealth may be used in different medical areas with a clear strategy of intervention according to patients' and doctors' needs. As a future perspective, we suggest implementing telehealth systems to build virtual hospitals, home-based but telehealth-assisted, to reduce the burden on conventional hospitals.
Synergistic effect of nanosilver fluoride with L-arginine on remineralization of early carious lesions

Early caries lesions, or incipient caries lesions, are characterized by the demineralization of the enamel without cavitation. These lesions appear as white spots on the enamel surface and indicate subsurface mineral loss. Because early lesions do not exhibit surface breakage in the enamel, they can be reversibly restored by applying appropriate remineralization therapies to restore the mineral content of the enamel and halt the progression of carious lesions. There are various strategies and materials to remineralize early caries lesions, including nano-hydroxyapatite, peptides, and natural products. Nano-hydroxyapatite closely resembles the mineral composition of natural tooth enamel, acts as a calcium and phosphate reservoir to promote remineralization, and exhibits antimicrobial properties by disrupting bacterial adhesion and reducing biofilm formation. Some bioactive peptides, such as antimicrobial peptides, offer dual benefits by reducing bacterial activity while supporting remineralization. Specific examples include the P11-4 peptide and Histatin-5. Natural products, such as flavonoids derived from propolis or the milk-derived compound casein phosphopeptide-amorphous calcium phosphate, are also among the various strategies that can be used to remineralize early caries lesions. Over the past decades, fluoride has been the most widely used agent to treat early carious lesions. Fluoride promotes remineralization by enhancing the deposition of calcium and phosphate ions into the demineralized enamel. It forms a protective layer of fluorapatite, which is more resistant to acid attacks than hydroxyapatite. In clinical practice, fluoride is used as sodium fluoride (NaF), stannous fluoride (SnF2), and acidulated phosphate fluoride. These are available in various forms such as toothpastes, mouth rinses, gels, and varnishes. Some professionally applied fluoride agents are 1.23% acidulated phosphate fluoride gel, 5% sodium fluoride varnish, and 5% silver diamine fluoride (SDF). SDF halts the caries process and simultaneously prevents new caries formation. This is due to the antibacterial effect of the silver and the remineralization effect of the fluoride contained in SDF. However, SDF has a critical drawback because it causes black staining following the precipitation of silver phosphate on carious lesions. This is caused by the oxidative properties of the ionic silver present in the formulation. The addition of potassium iodide to SDF reduces staining by converting silver oxide to silver iodide, a less visible compound. However, the effect is also limited because silver iodide darkens when exposed to light owing to its photosensitivity. To address these issues, a nanosilver fluoride (NSF) formulation containing chitosan, fluoride, and silver nanoparticles (AgNPs) was developed. Chitosan is a biocompatible carrier with antimicrobial properties, fluoride aids in remineralization, and AgNPs exert antibacterial effects. Given their antimicrobial properties, AgNPs have been added to some products such as glass ionomer cement, resin-modified glass ionomer cement, and dentin bonding agents.
NSF has demonstrated remineralization effects on early carious lesions; however, other studies have reported its efficacy to be similar to or even lower than that of SDF or fluoride varnish, making its effectiveness controversial. Therefore, research on methods to enhance the efficacy of NSF is necessary. One approach to enhance the efficacy of NSF is to add arginine (Arg). Arginine ensures uniform nanoparticles, promotes particle size reduction, and acts as a stabilizer and a reducing agent. In addition to its potential role in nanoparticle synthesis, arginine has demonstrated caries-preventive potential in several in vitro and clinical studies, whether used alone or in combination with fluorides. The caries-preventive effect of fluoride has been shown to be superior when used synergistically with arginine compared to fluoride used alone. The ecological effect of arginine on oral microbiota has been shown to be effective against the initiation and progression of dental caries. In oral biofilms, arginine is metabolized by arginolytic bacteria (Streptococcus sanguinis and Streptococcus gordonii) through the arginine deiminase system. This results in the production of ammonia, which neutralizes glycolytic acids and inhibits the growth of cariogenic microflora, ultimately preventing tooth demineralization. A recent study by Bijle et al. found that arginine contributes to enhancing enamel remineralization of early enamel carious lesions, as it encourages fluoride uptake into the demineralized enamel lesion. The synergistic effect of the existing NSF formulation combined with arginine (NSF + Arg) has not yet been investigated. Accordingly, this study aimed to synthesize an NSF solution supplemented with L-arginine (NSF + Arg) and evaluate the remineralization effect of the synthesized NSF + Arg on demineralized enamel lesions. Characterization of NSF and NSF + Arg Transmission electron microscopy (TEM) was employed to characterize the NSF and NSF + Arg formulations. Figure reveals the successful formation of AgNPs in both the NSF and NSF + Arg groups. Nearly spherical nanoparticles with a diameter of 3–18 nm are shown in both groups. The size distribution histograms of the related nanoparticles are shown in Fig. d, h for NSF and NSF + Arg, respectively. In NSF, the mean particle size of the AgNPs was 9.65 nm, with a variance of 14.03, whereas in NSF + Arg, the mean particle size was 7.20 nm, with a variance of 7.80. When NaBH4 is added to an AgNO3 solution, the color changes from colorless to light red, forming an NSF solution. In the NSF + Arg solution, a more intense red color is observed (Fig. i). This indicates that Ag+ ions are reduced to Ag0, leading to the formation of silver nanoparticles. Scanning electron microscopy Figure shows representative SEM images of the enamel surface. Rough and porous enamel surfaces were observed in the demineralized enamel and control groups. In the NaF varnish and SDF groups, agglomerated precipitates that fill the irregularities of the enamel and reduce the porosities are observed on the enamel surface. In the NSF and NSF + Arg images, prisms and interprism gaps were covered with mineral depositions. The enamel surface is relatively smooth compared with the control group, particularly in the NSF + Arg, NSF, and NaF varnish groups. Surface microhardness After demineralization, surface microhardness (SMH) decreased in all groups (Table ).
After pH cycling, a significant increase in microhardness values was found in all experimental groups; however, no significant difference was found among the experimental groups, except for the control group (p > 0.05). Mineral density analysis with micro-CT The mean and standard deviation of the mineral density (MD) of the enamel are presented in Table , and representative images are shown in Fig. . The MD of all groups at T1 was significantly lower than that at T0. At T2, the MD of all groups, except the control group, was significantly higher than that at T1. The mineral gain (MG) of the NSF + Arg group was higher than those of the NaF and control groups (p < 0.05); however, no significant difference was found between the SDF and NSF groups (p > 0.05).
The NSF + Arg group showed a higher percent remineralization than the NaF, NSF, and control groups (p < 0.05), and a similar level to the SDF group (p > 0.05). Color change Table presents the mean values with standard deviation for ΔL, Δa, Δb, and ΔE of all groups, and representative images are shown in Fig. . The SDF group showed the greatest variation of the ΔE value and the only significant color difference among all groups (p < 0.05).
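The color difference reported here is the CIELAB metric defined in the Methods, ΔE = [(ΔL)² + (Δa)² + (Δb)²]^1/2; a minimal sketch of that computation follows, with purely illustrative coordinate values.

```python
# Minimal sketch of the CIELAB color-difference (ΔE) computation; the L*a*b*
# readings below are illustrative, not measurements from this study.
import math

def delta_e(lab_t1: tuple[float, float, float],
            lab_t2: tuple[float, float, float]) -> float:
    """Euclidean distance between two CIE L*, a*, b* measurements."""
    return math.sqrt(sum((c2 - c1) ** 2 for c1, c2 in zip(lab_t1, lab_t2)))

# A hypothetical specimen whose lightness (L*) drops sharply, the pattern
# expected when dark staining dominates the color change:
print(round(delta_e((78.0, 1.2, 18.5), (55.0, 2.0, 16.0)), 2))  # -> 23.15
```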
Arginine has an overall positive charge because of the positively charged guanidinium group attached to the alpha carbon of the amino acid. This guanidinium segment attracts electronegative components, including fluoride, inducing the formation of arginine–F complexes , . In addition, in oral biofilms, arginine is metabolized by arginolytic bacteria such as Streptococcus sanguinis and Streptococcus gordonii through the arginine deiminase system, preventing tooth demineralization , . Notably, NSF and NSF + Arg groups demonstrated a significant reduction in discoloration. In the measurements of color change (ΔE) after remineralization, the SDF group showed the largest color change with a noticeable black stain on the enamel surface, whereas the NSF and NSF + Arg group showed significantly lower color change than the SDF group ( p < 0.05) (Table ). NSF also contains silver, which is the major cause of discoloration in SDF; however, it only causes less discoloration because its shapes and properties are different from those of SDF. The silver particles in NSF have nanosizes (Fig. ), and these nanoparticles have a large surface area, providing a high antibacterial effect while having a milder discoloration through oxidation and precipitation . In the aqueous medium, chitosan tends to agglomerate and adhere to surfaces . Therefore, after applying NSF, a yellowish stain that can be easily removed may appear on the enamel surface. In this study, such a film-like stain was removed by the subsequent pH cycling after NSF application, resulting in the lack of significant difference in the measured values after pH cycling compared with the NaF varnish or the control group. In addition, arginine stabilized the AgNPs while maintaining a smaller particle size distribution, reducing oxidation reactions , and thereby lessening black staining. In this study, NSF + Arg did not cause enamel discoloration but did not show significant SMH or MG values compared with other groups and showed improved effects than NaF, NSF, and control groups only in percent normalization. Several factors can explain this limited effect. One of the main factors is the difference in the application time for each experimental group. Fluoride formulations tend to have a longer-lasting anti-caries effect with prolonged contact time with teeth. A longer contact allows fluoride ions to penetrate deeper into the enamel, aiding in remineralization and formation of a protective layer of calcium fluoride on the tooth surface . NaF has a higher viscosity than SDF, NSF, and NSF + Arg. To reflect this in the experimental design, the NaF group had a contact time of 24 h with the remineralizing agent, whereas the other groups had a contact time of only 3 min. As a result, its efficacy was sufficiently demonstrated because it exhibited comparable remineralization effects despite a much shorter contact time, which is consistent with previous studies , , . However, this study used a chemical pH-cycling model to mimic clinical conditions, so it cannot fully replicate those conducted in vivo conditions such as saliva and dental plaque. Thus, further experiments in microcosmic biofilm models are needed rather than in environments that use only a single bacterium, or in vivo studies should be conducted. Considering the aesthetic concerns associated with the dark staining effect of SDF, it is a very interesting issue to explore alternative treatment options that offer both safety and effectiveness. A study reported by Targino et al. 
, the authors found that NSF exhibited significantly lower cytotoxicity towards human erythrocytes than SDF. The authors also stated that NSF is more biocompatible, does not cause discoloration, and has antibacterial effects at much lower doses, and that it is more cost-effective and reliable, which makes it a possible alternative material to SDF. In our study, arginine was added to the NSF formulation; arginine has been reported to be non-toxic to human gingival fibroblasts (HGF-1) at low concentrations. Arginine-containing NSF was successfully synthesized, and spherical AgNPs with diameters of 3–18 nm were identified in this compound. The NSF + Arg formulation could be an alternative to SDF owing to its ability to remineralize early caries lesions without causing black staining. However, it is recommended that future studies investigate the cytotoxic effect of NSF + Arg before its clinical application. Preparation of NSF and NSF-Arginine NSF was synthesized according to the protocol by Targino et al. Chitosan (1.0 g; Tokyo Chemical Industry Co., Ltd., Portland, OR, USA) was dissolved in 200 mL of 2% (v/v) acetic acid solution [CH3COOH] (Daejung Co., Ltd., Busan, Korea). Then, 60 mL of the chitosan solution was transferred to an ice bath with continuous stirring, and 0.012 mol/L silver nitrate [AgNO3] (Alfa Aesar, LLC, Haverhill, MA, USA) was added. After 30 min of stirring, sodium borohydride [NaBH4] (Sigma-Aldrich, Darmstadt, Hesse, Germany) was added dropwise, maintaining an AgNO3 to NaBH4 mass ratio of 1:6. Subsequently, 11,310 ppm of 2.5% sodium fluoride [NaF] (Sigma-Aldrich, Darmstadt, Hesse, Germany) was added and stirred until completely dissolved. The colloidal solution was then stored at 4 °C. NSF + Arg was synthesized by adding 5 mg/mL L-arginine [C6H14N4O2] (Sigma-Aldrich, Darmstadt, Hesse, Germany) to the chitosan solution to enhance stability and control the nanoparticle size. The concentration of arginine in NSF was determined based on the results of our pilot study, which tested various arginine concentrations in NSF. Characterization of NSF and NSF + Arg The shape and size of the AgNPs of NSF and NSF + Arg were evaluated by TEM (JEM-F200, Akishima, Tokyo, Japan) at 200 kV. A drop of NSF and of NSF + Arg was placed separately on a carbon-coated copper grid and allowed to evaporate overnight at room temperature before TEM analysis. The diameter of each nanoparticle was measured using ImageJ2 (National Institutes of Health, Bethesda, MD, USA), and to obtain the average size of all particles in each sample, the data were processed using Origin 2023 (version 10.5.113.50894). Specimen preparation This study was approved by the Ethics Committee of the Gangnam Severance Hospital (IRB approval no. 3-2023-0115). In total, 25 sound human permanent molars, extracted for therapeutic purposes and without early carious lesions, developmental anomalies, or any other defects, were obtained from the Human Derivatives Bank at Gangnam Severance Hospital. Debris on the surface of all teeth was removed with a perio-curette. Under a microscope (Olympus® BX40, Shibuya, Tokyo, Japan) at ×20, the teeth were examined to confirm the absence of crack lines or any other defects. They were stored in 0.1% thymol solution to inhibit microbial growth. The root part was removed up to approximately 1 mm above the cementoenamel junction.
Specimen preparation

This study was approved by the Ethics Committee of the Gangnam Severance Hospital (IRB approval no. 3-2023-0115). In total, 25 sound human permanent molars extracted for therapeutic purposes, without early carious lesions, developmental anomalies, or any other defects, were obtained from the Human Derivatives Bank at Gangnam Severance Hospital. Debris on the surface of all teeth was removed with a perio-curette. The teeth were examined under a microscope (Olympus® BX40, Shibuya, Tokyo, Japan) at ×20 to confirm the absence of crack lines or any other defects. They were stored in 0.1% thymol solution to inhibit microbial growth. The root part was removed up to approximately 1 mm above the cementoenamel junction. The smooth surface of the crown was longitudinally sectioned using an IsoMet Low Speed saw (Buehler, Lake Bluff, IL, USA) and a Wafering Blade (Allied High-Tech Products, Inc., Compton, CA, USA) to obtain three enamel slabs, each approximately 2 mm thick, per tooth (Fig. ). A total of 75 enamel slabs were embedded in acrylic resin using a Teflon mold (Polycoat EC304; Aektung Chemical Co., Ltd., Seoul, Korea). The enamel surface was polished gradually with a water-cooled rotating polishing machine (Ecomet 30, Buehler Ltd., Lake Bluff, IL, USA) using a series of sanding papers (600–2000 grit; SiC Sandpaper & Foil, R@B Inc., Daejeon, Korea) until a flat surface was achieved. The enamel specimens were covered with two layers of acid-resistant nail varnish, except for a window of 2 × 3 mm, and then cleaned and stored at 100% relative humidity at 4 °C to avoid dehydration.

Demineralization of the samples

The demineralization solutions were prepared according to the protocols described by ten Cate and Duijsters using 2.2 mM Ca(NO₃)₂, 2.2 mM KH₂PO₄, 0.1 ppm NaF, 50 mM acetic acid, and 1 M KOH, with the pH adjusted to 4.5. Each specimen was immersed in 10 mL of the demineralizing solution for 96 h to create caries-like lesions and then cleaned with deionized water. Each enamel surface sample was scanned once; one-half of the surface was demineralized, and the other was remineralized.

Application of remineralization agent

Enamel specimens were randomly allocated to five groups (n = 15) according to the remineralization treatment agents (Table ). Before the application of the remineralizing agent, the enamel surface of each specimen was dried.
Group I (NaF): 2.5% sodium fluoride varnish (FluoriMax, Elevate Oral Care, West Palm Beach, FL, USA) was applied with a microbrush to the enamel surface. The samples were then stored in a moist environment for 24 h. Afterward, taking care not to touch the enamel surface, the fluoride varnish was slowly removed with a scalpel blade and a cotton swab soaked in acetone and washed with deionized water for 1 min.
Group II (SDF): 5 μL of 12% SDF (Dengen Dental, Bahadurgarh, Haryana, India) was applied with a microbrush to the dried surfaces of the enamel specimens and left in contact for approximately 3 min. To remove any excess SDF, the specimen was washed with deionized water and gently dried with absorbent paper. The specimens were kept at 25 °C for 30 min.
Group III (NSF): The same protocol as group II was followed, with NSF applied instead of SDF.
Group IV (NSF + Arg): The same protocol as group II was followed, with NSF + Arg applied instead of SDF.
Group V (control): The specimens were cleaned in deionized water, and no remineralizing agent was applied.

pH-cycling regime

After the application of the remineralizing agents, pH cycling was performed on the specimens under static conditions for 10 days . They were immersed for 20 h in a remineralization solution composed of 1.5 mM CaCl₂·2H₂O, 0.9 mM KH₂PO₄, 150 mM KCl, and 20 mM HEPES. The pH of this solution was adjusted to 7 using KOH, approximating the neutral pH generally found in oral conditions. This was followed by 4 h of immersion in a demineralization solution consisting of 2.25 mM CaCl₂·2H₂O, 1.35 mM KH₂PO₄, 130 mM KCl, and 50 mM acetic acid, with the pH adjusted to 4.5 using KOH to mimic the acidic conditions that can lead to tooth demineralization (Table ). The pH levels were verified using a digital pH meter (Orion Star™ A211 Benchtop, Thermo Fisher Scientific Inc., Jakarta, Indonesia). Throughout the experiment, each sample was kept individually in separate containers at room temperature without stirring. To avoid saturation or depletion, the demineralizing and remineralizing solutions were renewed daily. The composition of the solutions was designed to mimic the supersaturation of apatite minerals found in saliva.
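The daily regimen above can be summarized compactly; the sketch below (illustration only, with the solution compositions abbreviated in comments) tallies the cumulative exposure implied by the 10-day schedule.

```python
# Illustration of the 10-day static pH-cycling schedule described above.
REMIN = {"hours": 20, "pH": 7.0}  # CaCl2·2H2O 1.5 mM, KH2PO4 0.9 mM, KCl 150 mM, HEPES 20 mM
DEMIN = {"hours": 4, "pH": 4.5}   # CaCl2·2H2O 2.25 mM, KH2PO4 1.35 mM, KCl 130 mM, acetic acid 50 mM

total_remin = total_demin = 0
for day in range(10):  # solutions renewed daily
    total_remin += REMIN["hours"]
    total_demin += DEMIN["hours"]

print(f"Cumulative exposure: {total_remin} h at pH {REMIN['pH']}, "
      f"{total_demin} h at pH {DEMIN['pH']}")  # 200 h and 40 h
```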
Scanning electron microscopy

After pH cycling, three specimens from each group were chosen to observe the enamel surface morphology. All specimens were dried and coated with a 100 Å layer of platinum using an ion coater (E-1010, Hitachi, Tokyo, Japan). The enamel surfaces were examined by SEM (S-3000N, Hitachi, Tokyo, Japan) at an accelerating voltage of 15 kV (×10,000).

Surface microhardness

Specimens were left to dry for at least 30 min to standardize the measurements. SMH was assessed using a Vickers microhardness measurement device (MMT-X, Matsuzawa Co., Ltd., Akita-shi, Akita Pref., Japan) at a 200 g load for 15 s. Three indentations, spaced 100 μm apart, were made on the enamel surface at each test point. The mean SMH values of each sample were calculated at three time points:
SMH₀: SMH of the sound enamel.
SMH₁: SMH of the enamel after demineralization.
SMH₂: SMH of the enamel after pH cycling.
Except during the SMH measurements, the specimens were continuously stored at approximately 100% relative humidity at 4 °C. The measured SMH values were expressed as the rate of change relative to the SMH of sound enamel.

Micro-computed tomography

Specimens were scanned in a micro-CT (Skyscan 1076, Bruker/Skyscan, Kontich, Belgium) to assess the MD. Grayscale values obtained from the scans were converted to MD values using a hydroxyapatite phantom (1 g/cm³). Scan settings were as follows: 17 μm voxel size, 100 kV, 500 μA, 0.5° rotation angle, 360° scan, 0.5 mm aluminum filter, 1475 ms exposure/integration time, and a frame average of 1. The total scan time per sample was approximately 21 min. MD was evaluated at three different time points: before artificial caries-like lesion formation (T₀, baseline), after caries-like lesion formation (T₁, post-demineralization), and after treatment (T₂, post pH cycling). Three distinct sections (9 μm each) at a depth of approximately 77 μm were randomly selected for MD evaluation at T₀, T₁, and T₂. The MG and percent remineralization were computed using the following equations:

$$\mathrm{Mineral\ Gain} = \Delta Z_{d} - \Delta Z_{r}$$

$$\mathrm{Percent\ Remineralization} = \left( \frac{\Delta Z_{d} - \Delta Z_{r}}{\Delta Z_{d}} \right) \times 100$$

(ΔZ_d – MD difference between T₀ and T₁; ΔZ_r – MD difference between T₂ and T₀).

Color change

Spectrophotometric color measurements were taken with the VITA Easyshade® Advance V portable dental spectrophotometer (VITA Zahnfabrik GmbH, Bad Säckingen, Germany). Color measurements of the enamel surface were recorded at three time points (T₀, T₁, and T₂). The International Commission on Illumination (CIE) L*, a*, and b* color coordinates of each specimen were measured and used to calculate the following: ΔL = L(T₂) − L(T₁); Δa = a(T₂) − a(T₁); and Δb = b(T₂) − b(T₁). The degree of color difference was then determined as ΔE = [(ΔL)² + (Δa)² + (Δb)²]^{1/2}.
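To make the outcome formulas concrete, the sketch below evaluates mineral gain, percent remineralization, and ΔE for a single specimen. All numerical readings are invented for illustration and do not come from the study.

```python
import math

# Hypothetical mineral density (MD, g/cm^3) at the three time points:
# baseline (T0), post-demineralization (T1), post pH cycling (T2).
md_t0, md_t1, md_t2 = 2.80, 2.35, 2.62

delta_z_d = md_t0 - md_t1  # MD loss after demineralization (T0 vs T1)
delta_z_r = md_t0 - md_t2  # residual MD loss after pH cycling (T0 vs T2)

mineral_gain = delta_z_d - delta_z_r
percent_remin = (delta_z_d - delta_z_r) / delta_z_d * 100

# Hypothetical CIE L*a*b* readings at T1 and T2 for the same specimen
L1, a1, b1 = 78.0, 1.2, 14.5
L2, a2, b2 = 74.5, 1.9, 16.0
delta_e = math.sqrt((L2 - L1) ** 2 + (a2 - a1) ** 2 + (b2 - b1) ** 2)

print(f"Mineral gain: {mineral_gain:.2f} g/cm^3")         # 0.27
print(f"Percent remineralization: {percent_remin:.1f}%")  # 60.0%
print(f"Color change dE: {delta_e:.2f}")                  # ~3.87
```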
Statistical analysis

The Kolmogorov–Smirnov test was used to assess the normality of the SMH, MG, percent remineralization, and color change results. SMH data did not follow a normal distribution; thus, the Kruskal–Wallis test was applied, followed by Dunn's test to evaluate between-group differences. The Wilcoxon signed-rank test was applied for within-group comparisons. Data for MG, percent remineralization, and color change followed a normal distribution and were analyzed using one-way analysis of variance with Duncan's post-hoc test. Statistical analyses were performed using IBM SPSS Statistics version 29.0 (SPSS Inc., Chicago, IL, USA), with the level of significance set at α = 0.05 ( p < 0.05 considered significant).
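Outside SPSS, the same decision flow — a normality check followed by a non-parametric or parametric group comparison — might look as follows. This is a sketch on simulated data; Dunn's post-hoc test (available, e.g., in the scikit-posthocs package) and Duncan's test (no standard SciPy implementation) are omitted.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical outcome values for three treatment groups (n = 15 each)
groups = [rng.normal(loc=m, scale=5.0, size=15) for m in (40.0, 48.0, 55.0)]

pooled = np.concatenate(groups)
# Kolmogorov-Smirnov test against a normal fitted to the pooled data
_, ks_p = stats.kstest(pooled, "norm", args=(pooled.mean(), pooled.std(ddof=1)))

if ks_p < 0.05:  # non-normal -> Kruskal-Wallis
    stat, p = stats.kruskal(*groups)
    test = "Kruskal-Wallis"
else:            # normal -> one-way ANOVA
    stat, p = stats.f_oneway(*groups)
    test = "one-way ANOVA"

print(f"{test}: statistic = {stat:.2f}, p = {p:.4f}")
```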
Supplementary Information 1. Supplementary Information 2.
Supplementary Information 3.
Periodontitis and dental quality of life predict long-term survival in head and neck cancer

Head and neck cancers (HNC), which include cancers of the lip, oral cavity, pharynx, larynx, and salivary glands, comprise nearly one million new cases worldwide, constituting about 5% of the worldwide cancer incidence according to Global Cancer Statistics . Despite advancements, only around 50% of newly diagnosed HNC patients worldwide achieve a cure within five years following diagnosis . In Norway, HNC accounted for approximately 2.5% ( n ≈ 800) of total cancer incidence, with curative treatment achieving five-year survival in about two-thirds of diagnosed patients . Tobacco and alcohol, particularly in combination, are well established as significant risk factors for HN squamous cell carcinoma (HNSCC) . Specifically, in oropharyngeal SCC (OPSCC), human papillomavirus (HPV) is emerging as a crucial risk factor, with an increasing incidence of HPV-positive (HPV(+)) OPSCC in the Western world . Patients with HPV(+) tumors exhibit a biology distinct from those with HPV-negative (HPV(−)) tumors, including differences in carcinogenesis . HPV(+) OPSCC patients generally have a much better prognosis compared to their HPV(−) counterparts . However, the prognosis for HPV(+) OPSCC deteriorates with increasing tobacco use . Poor oral health is related to general mortality and is also recognized as a significant risk factor for HNSCC . Periodontal diseases and caries may serve as indicators of poor oral health and are known risk factors related to survival in oral cancer . Additionally, tobacco and alcohol consumption are established risk factors for both HNSCC and periodontal disease . Health-related quality of life (HRQoL) scores, as determined by questions regarding oral and dental-related symptoms at the diagnosis of HNSCC, have demonstrated predictive value for survival both generally and concerning HNSCC . However, the underlying mechanisms behind these associations remain unclear. Several potential mechanisms exist, ranging from the influence of comorbidities and health behaviors, such as tobacco consumption , to the impact of the oral microbiome . In clinical HNSCC practice, assessing dental status is routine for newly diagnosed patients . This evaluation primarily aims to prevent potential side effects of treatment, such as osteoradionecrosis following radiation therapy, and to plan for oral or dental reconstruction as necessary . Emphasizing the importance of these assessments could help optimize patient outcomes . In addition to HNSCC and other smoking-related carcinomas , periodontitis is associated with conditions such as diabetes, hypertension, lung disease, and Alzheimer's disease, all of which are linked to increased mortality . Consequently, patients with extensive periodontitis are expected to have higher mortality compared to those with limited disease. Previous studies from our group have shown that periodontitis predicts non-disease-specific survival in patients with OPSCC . In this study, we hypothesize that a similar relationship may exist in general HNSCC patients. Orthopantomogram (OPG) imaging allows for standardized assessment of osseous lesions associated with periodontitis . We aim to determine the prognostic value of present periodontitis in a cohort of HNSCC patients at the time of diagnosis.
Specifically, we are interested in exploring whether survival predictions based on patient-reported dental health and the extent of periodontitis diagnosed via OPG overlap. Understanding the origin and implications of such survival predictions is a primary objective of this investigation. Our study aims to assess both five-year and long-term survival predictions in a general HNSCC cohort, focusing on periodontal pathology measured from OPG at diagnosis and HRQoL scores obtained at the same time. Additionally, we will analyze results with and without including index HNSCC mortality to comprehensively evaluate these survival predictions.
Patients

Haukeland University Hospital, Bergen, Norway, treats HNC patients in the Western Health Care Region, which includes around 1.1 million inhabitants. Our hospital-based HNC register includes patients starting treatment since May 1, 1992. The present study is based on data from a consecutive cohort of 106 patients diagnosed from November 2002 to June 2005, all aimed at curative treatment. We required that the patients were able to answer HRQoL questionnaires intelligibly. The patient cut-off age at diagnosis was 78 years. The Regional Committee for Medical Research Ethics in Western Norway approved the study (2011/125). Informed consent to participate was obtained from all participants in the study. All patients underwent a standardized diagnostic work-up, which consisted of clinical examination; CT/MRI scans of the primary tumor site, neck, thorax, and liver; and ultrasonography examination of the neck, including fine-needle aspiration cytology if indicated. Diagnostic endoscopic examinations (microlaryngoscopy, hypopharyngoscopy, bronchoscopy, and esophagoscopy) were performed, preferably under general anesthesia if the patient was suitable. The TNM (Tumor, Nodes, Metastasis) stage was scored according to the International Union Against Cancer (UICC) 6th edition, which was the relevant standard at the time, although the 8th edition is in use today . The sites and TNM stages of patients are listed in Table . As part of the routine pretreatment workup at our clinic for patients planned for radiation therapy (RT) to the oral cavity, the HNC patients underwent a dental screening examination in the Department of Oral and Maxillofacial Surgery. This examination consisted of a clinical and radiographic examination, including an OPG supplemented with dental radiographs if indicated. In total, 106 patients were included. Of the original cohort, OPGs of 27 patients were not available. Of the patients without OPGs, 14 were not treated with RT and consequently did not undergo OPG. These patients included 10 with early-stage laryngeal cancer and four with early-stage oral cavity cancer. Additionally, five patients with laryngeal cancer were not subjected to OPG examination because the RT field did not reach the oral cavity. Thus, OPGs from eight patients were missing without explanation.

Treatment

An overview of the treatment performed is listed in Table . The patients' treatment details have been reported in previous studies from our group . Eighty patients underwent primary tumor surgery aimed at radically removing the tumor tissue when indicated. Intraoperative biopsies were taken from the margins for further characterization through frozen sections. Free flap surgery was performed on 21 patients. Neck dissection, following previously reported procedures , was conducted on 51 patients. The radiation therapy (RT) administered is detailed in Table . RT was primarily given according to the Danish Head and Neck Cancer Group (DAHANCA) guidelines, utilizing external beam RT with a linear accelerator. The RT doses ranged from 64 to 70 Gy for all macroscopic tumors with margins, and 50 Gy was given to the neck when pertinent risk but no clinical disease was present. Eighty-seven of 106 patients received radiation therapy, with 78 treated specifically with neck radiotherapy. Eleven patients received chemotherapy as part of their primary HNSCC treatment (Table ).
Smoking level and alcohol consumption history

Patient cigarette smoking was recorded by noting the total years of smoking and estimating the mean number of cigarettes smoked per week. Alcohol consumption was determined by having patients select one of the following statements: never (1), less than 1 time per week (2), 1–2 times per week (3), previously more than 2 times per week (4), and presently more than 2 times per week (5).

Health-related quality-of-life (HRQoL) inventories

The questionnaires were completed through a structured interview. HRQoL was determined by patients answering the validated Norwegian edition of the European Organization for Research and Treatment of Cancer (EORTC) Quality of Life Questionnaire (QLQ) Head and Neck (H&N)-35 . The QLQ H&N-35 comprises seven multi-item scales (pain, swallowing, senses, speech, social eating, social contact, and sexuality) and six symptom items (dental problems, opening mouth, dry mouth, sticky saliva, coughing, and feeling ill). The answers were given according to a 4-point Likert scale. These indices were transformed so that 100 points indicated maximum symptoms and 0 points indicated least symptoms; for 4-point items this corresponds to the standard EORTC linear transformation, score = 100 × (raw score − 1)/3. In this study, we have employed the questions about dental health.

Comorbidities

Comorbidities were obtained using the validated chart-based Adult Comorbidity Evaluation (ACE)-27 scale measured at baseline . The ACE-27 grades specific conditions into levels of severity: mild, moderate, or severe. Based on the highest-ranked single ailment, an overall comorbidity score (none, mild, moderate, or severe) was assigned. In cases where two or more moderate ailments were registered in different disease entities, the overall comorbidity score was designated as severe.

Periodontal status

Radiographic alveolar bone loss (ABL) was measured as the distance in millimeters (mm) from the cement-enamel junction or restoration margin to the alveolar bone crest at the mesial and distal surfaces of molars and premolars. An indicator of periodontal pathology was registered if there was at least 4 mm of bone loss from the cement-enamel junction on at least two molars or premolars . The measurements were adjusted according to the enlargement factor of the OPG (1.3). Additionally, distinctions between vertical and horizontal bone loss were noted. Other parameters recorded included the number of missing teeth, filled teeth, residual roots, dental care status, and the number of teeth with apical radiolucencies (Table ). The OPGs were uniformly acquired, and a single examiner scored the radiographic parameters using Sirona Sidexis software without knowledge of patient details. To assess methodological quality, 25 radiographs were randomly selected and scored by the same investigator on two different occasions at least four weeks apart. The examination showed less than 10% variability between the two assessments for the same patient.
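The scoring rule above lends itself to a simple computational check. The sketch below is a hypothetical helper (not the scoring software used in the study) that applies the ≥ 4 mm / ≥ 2 molars-or-premolars criterion after dividing raw OPG readings by the 1.3 enlargement factor; the function and variable names are ours.

```python
MAGNIFICATION = 1.3   # OPG enlargement factor
THRESHOLD_MM = 4.0    # minimum corrected bone loss per tooth
MIN_TEETH = 2         # minimum number of affected molars/premolars

def has_periodontal_pathology(raw_abl_mm_by_tooth: dict) -> bool:
    """raw_abl_mm_by_tooth: worst raw ABL (mm) per molar/premolar, as read
    from the OPG before magnification correction."""
    corrected = {t: v / MAGNIFICATION for t, v in raw_abl_mm_by_tooth.items()}
    affected = [t for t, v in corrected.items() if v >= THRESHOLD_MM]
    return len(affected) >= MIN_TEETH

# Example: corrected values are 4.5, 4.2, and 2.4 mm -> two teeth >= 4 mm
print(has_periodontal_pathology({"16": 5.9, "26": 5.4, "36": 3.1}))  # True
```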
DNA isolation and HPV DNA detection

Tumor samples were carefully reviewed by an expert pathologist to select representative tissue specimens. DNA was extracted from formalin-fixed, paraffin-embedded (FFPE) sections, which included primary tumor tissues or lymph node metastatic lesions obtained during diagnostic or surgical procedures. Three 10 μm thick FFPE sections were first deparaffinized in xylene and ethanol. These sections were then digested overnight in ATL buffer and Proteinase K (Qiagen GmbH, Hilden, Germany) at 56 °C. Following digestion, DNA was extracted using the EZNA tissue DNA kit (Omega Bio-tek, Norcross, GA). The DNA concentration was measured with a NanoDrop spectrophotometer (Nanodrop, Minneapolis, MN). Detailed methods for HPV DNA detection have been published in our earlier works . Briefly, for the detection of HPV DNA, standard GP5+/GP6+ primers were used. PCR was conducted with both positive and negative controls, and the PCR products were then separated on a 3% agarose gel. Only samples with distinct PCR bands were considered positive for HPV and were subsequently processed for HPV subtype identification through DNA sequencing. The purified PCR products were sequenced using the same primers as the initial PCR reaction. The HPV DNA sequences were identified using the NCBI BLAST database.

Statistics

Statistical analyses were performed using IBM SPSS Statistics for Windows, version 29 (IBM Corp, Armonk, NY, USA). A value of p < 0.05 was considered to indicate a statistically significant result. All p-values reported represent two-sided tests. Pearson correlation coefficients were used to assess correlations between variables. Analysis of variance (ANOVA) was performed to study differences between HPV-negative and HPV-positive patients. The associations between possible prognostic variables and survival were determined using the Kaplan–Meier estimator and Cox proportional hazards regression models. Survival rates are reported as pertinent percentage survival and/or relative risk (RR) with 95% confidence intervals (CI). Non-disease-specific survival is reported as overall survival with disease-specific survival subtracted.
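For readers who want to reproduce this type of analysis outside SPSS, the sketch below runs the same two steps — Kaplan–Meier estimates and a Cox proportional hazards model — on a small, entirely hypothetical data frame using the Python lifelines package. The column names and all values are ours, chosen only to make the code runnable.

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

# Toy data: follow-up in months, death indicator, and baseline covariates
df = pd.DataFrame({
    "months":  [12, 60, 34, 80, 150, 22, 96, 200, 45, 110],
    "died":    [1, 0, 1, 1, 0, 1, 0, 0, 1, 0],
    "abl":     [1, 0, 1, 0, 1, 1, 0, 0, 1, 0],   # alveolar bone loss present
    "age":     [62, 55, 70, 48, 58, 66, 72, 50, 64, 59],
    "hpv_pos": [0, 1, 0, 1, 1, 0, 0, 1, 0, 1],
})

kmf = KaplanMeierFitter()
for label, grp in df.groupby("abl"):
    kmf.fit(grp["months"], event_observed=grp["died"], label=f"ABL={label}")
    print(label, kmf.median_survival_time_)

cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="died")
print(cph.summary[["exp(coef)", "p"]])  # exp(coef): hazard ratio (reported as RR)
```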
Clinical parameters and survival predictions

Clinical parameters are summarized in Table . The factors considered included the age of the patients at diagnosis, gender, HPV status, years smoked, mean cigarettes smoked per week, level of comorbidity (ACE-27), and clinical stage. These parameters were analyzed for their potential to predict long-term survival using univariate Cox regression analysis (Table ). Patient survival data are updated as of July 31, 2023. The age of patients at diagnosis predicted subsequent survival with a relative risk (RR) of 1.04 per year (confidence interval (CI): 1.02–1.07, p = 0.002). Years smoked predicted survival, with an RR of 1.03 (CI: 1.01–1.05, p = 0.028). The same was observed for the number of cigarettes smoked per week, with an RR of 1.004 (CI: 1.00–1.007, p < 0.01). Comorbidity, as measured on the ACE-27 scale, was a significant predictor of survival with an RR of 1.4 (CI: 1.12–1.76, p = 0.004). Clinical stage also showed a predictive trend with an RR of 1.2 (CI: 1.00–1.45, p = 0.056). A trend was observed regarding HPV status and survival (RR = 0.58, CI: 0.32–1.05, p = 0.074) (Table ). Data from the orthopantomogram (OPG) studies, detailing findings on alveolar bone loss and other relevant variables by HPV status, are shown in Table .

Pearson correlations between variables

Age at diagnosis correlated negatively with HPV status ( r = −0.26, p < 0.01) and positively with the level of comorbidity measured by ACE-27 ( r = 0.23, p < 0.05) (Table ). HPV status correlated positively with clinical stage ( r = 0.35, p < 0.001), and negatively with years smoked ( r = −0.36, p < 0.001) and number of cigarettes smoked per week ( r = −0.25, p < 0.05). Years smoked correlated with both reported dental HRQoL ( r = 0.20, p < 0.05) and the level of alveolar bone loss ( r = 0.39, p < 0.001). The level of alveolar bone loss also correlated with the comorbidity level measured by ACE-27 ( r = 0.36, p < 0.001) (Table ).

Five-year survival

Five-year survival of the studied cohort by patient-reported dental HRQoL or alveolar bone loss is demonstrated in Fig. . Both entities, i.e., dental HRQoL (RR = 2.85, CI: 1.17–7.01, p = 0.021) and alveolar bone loss (RR = 2.80, CI: 1.04–7.53, p = 0.036), predicted survival (Fig. ). When stratified by tumor HPV status (Fig. ), the dental HRQoL prediction was mainly significant among HPV(−) patients ( p = 0.025) (Fig. A), whereas the alveolar bone loss prediction was more pronounced among HPV(+) patients ( p < 0.001) (Fig. D). In a Cox multivariate regression model including variables measured at diagnosis (age of patient, gender, HPV status, clinical stage, smoking and alcohol history, and comorbidity (ACE-27)), dental HRQoL predicted survival (RR = 2.53, CI: 1.02–6.24, p = 0.045) (Table ). When both dental HRQoL and alveolar bone loss were included in the same Cox multivariate analysis alongside the covariates mentioned above, dental HRQoL remained a significant predictor of survival ( p = 0.037), while the rate of alveolar bone loss showed a trend towards predicting survival but did not reach statistical significance ( p = 0.076) (Table ).

Long-term survival: 18–20 years

Perceived dental HRQoL also predicted long-term survival (RR = 3.58, CI: 1.99–6.45, p < 0.001). When adjusted for HPV status, this association was maintained for both HPV(−) ( p < 0.001) and HPV(+) ( p = 0.002) patients (Fig. A). Alveolar bone loss also predicted survival (RR = 2.28, CI: 1.22–4.28, p = 0.01).
When stratified by HPV status, survival was predicted by alveolar bone loss among HPV(+) patients ( p < 0.001) (Fig. D). Long-term survival was further studied using Cox regression multivariate analyses (Table ). Control covariates included age at diagnosis, gender, HPV tumor status, clinical stage, smoking, alcohol use, and comorbidity (ACE-27). In this analysis, reported dental HRQoL predicted survival (RR = 2.17, CI: 1.17–4.01, p = 0.014), while the rate of alveolar bone loss showed a trend toward predicting survival but did not reach statistical significance (RR = 1.95, CI: 0.98–3.87, p = 0.056). When both dental HRQoL and alveolar bone loss were included in a single regression analysis with the above-mentioned covariates, significant unique survival predictions were obtained for both dental HRQoL ( p = 0.007) and alveolar bone loss ( p = 0.034) (Table ). Long-term survival was also analyzed using Kaplan–Meier methods, focusing on patients who survived the HNSCC index disease. Reported dental HRQoL predicted survival (RR = 3.58, CI: 1.99–6.45, p < 0.001). Stratified by HPV tumor status, significant survival predictions were observed for the HPV(−) ( p < 0.001) and HPV(+) ( p = 0.004) groups (Fig. A and B). Alveolar bone loss, including all surviving patients, also predicted survival (RR = 2.28, CI: 1.22–7.78, p = 0.010). When stratified by HPV status, a significant survival prediction was observed only among HPV(+) patients ( p < 0.001) (Fig. D). Long-term survival among HNSCC disease-specific survivors was also assessed using Cox regression analyses with the aforementioned covariates (Table ). This analysis demonstrated that reported dental HRQoL (RR = 2.76, CI: 1.24–6.15, p = 0.013) and the extent of alveolar bone loss (RR = 2.66, CI: 1.18–5.96, p = 0.018) independently predicted survival when analyzed concurrently (Table ). Finally, we investigated whether the reported level of dental HRQoL, adjusted by alveolar bone loss, continued to predict survival, which indeed was the case ( p < 0.001) (Fig. A). Similarly, the reverse analysis also showed significance ( p = 0.017) (Fig. D).
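The reported effect sizes are internally consistent, which can be verified with a little arithmetic. The sketch below (illustrative, using the long-term alveolar bone loss estimate RR = 2.28, CI 1.22–4.28) back-calculates the standard error of the log hazard ratio from the confidence interval and reconstructs the Wald p-value, which lands at the reported p = 0.01.

```python
import math

rr, lo, hi = 2.28, 1.22, 4.28   # reported RR and 95% CI
beta = math.log(rr)                               # log hazard ratio
se = (math.log(hi) - math.log(lo)) / (2 * 1.96)   # SE implied by the CI
z = beta / se
p = math.erfc(z / math.sqrt(2))                   # two-sided Wald p-value
print(f"beta = {beta:.3f}, SE = {se:.3f}, z = {z:.2f}, p = {p:.3f}")  # p ~ 0.010
```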
From HNSCC patients, we determined the extent of alveolar pathology based on blind scoring of routine orthopantomograms (OPGs) and patient-reported dental HRQoL, both obtained during the primary diagnostic work-up (see Table ). The results showed that present alveolar bone loss (Figs. , and ), together with low patient-reported dental HRQoL (Figs. , and ), uniquely predicted decreased long-term survival (Fig. ; Tables and ). Furthermore, non-HNSCC disease-specific long-term survival was also predicted (Fig. ; Table ). The assessment of periodontal alveolar bone loss was based on OPGs. Several measures derived from an OPG are presented in Table . Previously, studying periodontal bone loss has proved a useful approach regarding survival, both generally and regarding HNSCC patients . Studying horizontal and alveolar vertical bone loss separately has also been employed. The best survival prediction was shown to be the sum of the horizontal and vertical bone loss , and this has therefore mainly been used in this study. Ideally, results from clinical investigations of the level of marginal periodontitis could also have been included. The accuracy of estimating the degree of marginal periodontitis by clinical examinations compared to OPG has been investigated by Bueno et al. . They showed that defining marginal periodontitis as ≥ 2 sites with interproximal clinical attachment loss ≥ 4 mm in alveoli from at least two different teeth, as seen on an OPG, was comparable to defining marginal periodontitis clinically . This is the measure used presently. Furthermore, an advantage has been that the OPGs were scored by a single investigator who had no separate clinical knowledge about the patients. Therefore, data acquisition can be considered blind and, to some extent, prospective. We have previously shown that a high degree of periodontal bone loss in newly diagnosed patients with oropharyngeal carcinoma may predict lowered subsequent survival . Currently, this prediction is being validated in a general cohort of newly diagnosed HNSCC patients (Figs. , , and ). This may help to establish personalized treatment by identifying patients with a serious prognosis. The questions regarding dental HRQoL were sampled from the EORTC QoL H&N-specific part . We, along with others, have previously shown that the response pattern to this questionnaire can predict subsequent survival, both generally and specifically for HNSCC . This survival prediction has now been validated for both short-term (Fig. ) and long-term survival (Fig. ), as well as among the HNSCC survivors (Fig. ). It suggests that this method can be used to identify patients who should be offered close follow-up. One aim of this investigation was to determine to what extent the survival predictions from alveolar bone loss and reported dental HRQoL overlap. The survival predictions from these entities were statistically independent of each other (Tables , and ; Fig. ). The findings furthermore suggest a general survival pattern, with periodontal disease interacting differently with HPV(+) versus HPV(−) HNSCC patients (Figs. and ). The HRQoL survival prediction is more important among HPV(−) patients, while alveolar bone loss is more important among HPV(+) patients. Presently, comorbidity and smoking were related to periodontal status, as reported in Table . The study design, however, does not allow establishing the cause of death except for the index HNSCC.
It is likely that many of the non-HNSCC-specific deaths observed were due to smoking-related cancer and smoking-related cardiovascular disease, both of which are also associated with periodontal disease . We have shown that information about smoking history and the presence of comorbidity may, at least to some extent, serve as covariates without loss of the periodontal survival predictions (Tables , and ). Patients with extensive alcohol consumption typically maintain their dental health poorly , and alcohol consumption is also a risk factor for HNSCC . As far as studied, we can conclude that the present basic findings are not secondary to the rate of alcohol consumption. This work is based on studying a minimum of 77 patients. Multivariate Cox regression analyses with many introduced covariates, as in this case, can only suggest relationships between these covariates and survival predictions (Tables , and ). Therefore, this study should primarily inspire other investigators to include dental/oral HRQoL and periodontal level scores in their research to further detail the interactions between the studied covariates. The suggested uniform criteria for defining and measuring the extent or severity of periodontitis primarily rely on clinical examinations . However, information from OPGs can also be utilized, as this type of examination is easy to standardize and realistic to perform within the short time frame between cancer diagnosis and start of treatment. Additionally, asking patients to complete an HRQoL questionnaire is straightforward. Therefore, this study highlights the use of readily available variables as a basis for personalized medicine. Consequently, treatment choice and, especially, individualized follow-up may be recommended on an individual basis. Periodontitis is, furthermore, a chronic inflammatory disease characterized by progressive loss of alveolar bone and periodontal attachment . Microbiota in the dental biofilm and their harmful products trigger a host immune response, which in susceptible individuals may lead to destruction of periodontal tissue . Periodontitis has been associated with various systemic diseases, and shared inflammatory pathways have been proposed as a possible explanation . Inflammatory cytokines, soluble cytokine inhibitors, and/or soluble cytokine receptors may also provide a communication channel through which periodontitis can increase the risk of conditions such as cancer . The opportunistic pathogen Fusobacterium nucleatum, which acts as a bridge between early and late colonizers in the dental biofilm , is among the periodontal bacteria frequently mentioned in connection with cancer progression and prognosis , possibly through inflammatory pathways. The present results may support this suggestion. It would be of interest to further study the association between periodontitis and inflammatory activation in relation to patient prognosis . Studies have shown systemic effects of periodontal disease treatment on conditions like diabetes . In line with this, treatment of periodontitis, which leads to less inflammation and fewer oral pathogens, could be systematically studied in HNSCC patients through formal phase II–III trials, with the main aim of preventing mortality. Such studies could also improve HRQoL for the patients, providing another mechanism for disease mitigation.
The present work has demonstrated that periodontitis at the time of HNC diagnosis predicts subsequent survival. Similarly, patients reporting low dental HRQoL experienced worse survival outcomes. This information may serve as a basis for treatment decisions. However, many questions about these findings remain unanswered, and further studies are needed to explore the relationship between HNC, dental HRQoL, and periodontitis, ideally in formal phase II/III study settings.
Diabetes management in cancer patients. An Italian Association of Medical Oncology, Italian Association of Medical Diabetologists, Italian Society of Diabetology, Italian Society of Endocrinology and Italian Society of Pharmacology multidisciplinary consensus position paper

Cancer and diabetes mellitus (DM) are two of the most prevalent and serious health concerns worldwide, and their incidence and prevalence have increased significantly in the last decade. A diagnosis of either cancer or DM can significantly impact an individual's life; even more so, their coexistence can affect quality of life (QoL), patient care, and survival. It is estimated that a significant proportion of oncology patients, ranging between 8% and 18%, also suffer from DM. Several studies have revealed a complex relationship between DM and cancer. Recently, in addition to the common pathogenetic mechanisms usually proposed to explain this relationship (e.g. hyperinsulinaemia, hyperglycaemia, chronic inflammation, pharmacological treatments, surgery outcomes), new biological mechanisms, such as the dysregulation of microRNAs intervening in pathways involved in the pathogenesis of both DM and cancer, have been proposed as possibly responsible for the close correlation between these two pathological conditions. However, more research is needed to better understand the biological links between these two diseases, aiming at developing more effective therapeutic strategies and better management. While the exact relationship between these two diseases is not fully understood, people with DM are at a higher risk of many types of cancer. Epidemiological evidence indicates an increased risk for cancer in individuals with DM, including pancreatic, liver, colorectal, breast, and bladder cancer. Therefore, it is essential to emphasize primary prevention and healthy lifestyle habits, especially regular exercise, healthy eating, and smoking cessation, to reduce the risk of developing DM and cancer. Some studies have reported increased cancer-related mortality in patients with DM. Several aspects of the interaction between DM and cancer may determine this trend. DM-related comorbidities may influence cancer treatment choice, and patients may receive less aggressive treatments, potentially resulting in a suboptimal approach with worse outcomes. A study has recently confirmed that patients with type 2 DM (T2DM) have a significantly higher risk of cancer mortality than the general population. The risk of death due to cancer was 18% higher for all types combined, 9% higher for breast cancer, and 2.4 times higher for colorectal cancer. These results could indicate the possible benefits of a breast cancer screening programme for young women with T2DM. Hyperglycaemia in oncological patients is a frequent issue during cancer treatment and palliation. DM management for cancer patients is crucial to reduce both short- and long-term complications and the incidence of cancer treatment toxicities: better DM control not only avoids delays in scheduling some diagnostic tests (e.g. [¹⁸F]2-fluoro-2-deoxy-D-glucose positron emission tomography/computed tomography scanning) but also improves adherence to the therapeutic programme, QoL, and prognosis. Metabolic control in cancer patients can be affected by anticancer treatments, such as corticosteroids, which are widely used in premedication and in supportive and palliative care.
Moreover, the management of patients with DM may also be overlooked because of the tendency of both patients and caregivers to focus mainly on cancer treatment. As a result, these patients are at a higher risk of experiencing poor outcomes. The role of health care providers is essential in supporting and educating cancer patients with DM on managing their glucose control throughout their entire care plan, from diagnosis to end of life. Three main scenarios could involve oncologists and diabetologists in the multidisciplinary management of patients with cancer and diabetes: patients with a history of DM, patients with previously unknown DM, and patients with iatrogenic DM. The specific characteristics and emerging relevant clinical aspects of each scenario are summarized in the table.

Cancer management has evolved significantly in recent years, focusing on a multidisciplinary team approach to provide the best possible patient care and to cope with the various comorbidities, toxicities, and complications arising during the patient's treatment journey. Specialized fields such as cardio-oncology and onco-nephrology have emerged to provide the best possible care for cancer patients based on a comprehensive approach to the management of treatment toxicities, comorbidities, and cancer-related complications. Emerging trends in 'diabeto-oncology' focus on developing personalized treatment plans for cancer patients with DM, identifying biomarkers that predict cancer risk and prognosis in diabetic patients, and implementing primary prevention strategies. Co-management of cancer and DM requires collaboration between various health care professionals, including endocrinologists and oncologists, and the training of specialists dedicated to this setting. 'Diabeto-oncology' offers a holistic approach to cancer patient management by considering glucose control and the presence of long-term diabetic complications: this coordinated approach not only allows a personalized treatment plan but also addresses the unique challenges and needs of these patients.
Diabetes management in cancer patients requires a comprehensive and collaborative approach. Collaboration and interaction between oncologists and diabetologists are critical to achieve appropriate levels of care and reduce the risk of complications. Each specialist brings a unique set of skills and knowledge, and their collaboration can help to ensure that patients receive the best possible care. Effective teamwork involves communication, coordination, and cooperation. It requires a shared understanding of the patient's needs, goals, and preferences, as well as a willingness to work together to develop a personalized treatment plan that addresses each patient's unique needs and goals. Collaboration can also help to improve patient outcomes, prevent errors, and reduce costs. In addition, interaction with patients and their families is crucial for providing high-quality cancer care, improving outcomes, and enhancing their experience. Patients with DM and cancer often have complex medical and psychosocial needs, and effective communication and support can help them to cope with treatment-related complications. The fragmentation of care is a major challenge in managing comorbid patients, as it leads to a lack of coordination between health care professionals.

The Italian Association of Medical Oncology (AIOM) and the Italian Association of Medical Diabetologists (AMD) have been working in strict cooperation for several years to improve the approach towards cancer patients with DM. This collaboration led to the creation and sharing of a common roadmap to address the challenges in providing effective care to such patients, particularly through a dedicated working group on 'Diabetes and Cancer'. A multidisciplinary panel of oncologists and diabetologists first met in January 2015 in Turin, Italy, to develop a shared understanding of the challenges posed by DM and cancer, evaluate the impact of care pathways on interprofessional teamwork, and create evidence-based shared clinical protocols to treat patients with DM and cancer in different settings, involving other professional figures (nurses, nutritionists, psychologists, etc.). The primary objective of this partnership was to offer patients optimal cancer and diabetes treatment, thus reducing the risk of complications and improving patients' overall QoL. The collaboration has also been relevant in promoting awareness and education on DM management in cancer patients through several scientific initiatives, such as surveys, consensus papers, reviews, and expert insights. To further improve the multidisciplinary approach, common working group activities were established with other societies such as the Italian Society of Endocrinology (SIE), the Italian Society of Pharmacology (SIF), the Italian Society of Diabetology (SID), and the Italian Association of Nuclear Medicine (AIMN).

In this manuscript, a panel of experts from AMD, AIOM, SIE, SIF, and SID provides an overview of the clinical interplay of cancer and DM and of new models of shared management of cancer patients with DM to improve their QoL and survival.
Diabetes screening before starting anticancer therapy and proactive strategies to manage iatrogenic hyperglycaemia

Patients with previously known DM should be scheduled for a visit at a DM care clinic before starting oncological treatments to evaluate the presence of diabetic complications that could influence the choice of anticancer therapy and to assess current nutritional status and requirements, overall metabolic control, and the need to proactively modify current glucose-lowering therapy. Conversely, many patients with normal glucose control can develop new-onset DM or metabolic disorders (dyslipidaemia, hyperuricaemia, hypertension) because of cancer therapies or supportive drugs. Therefore, paraphrasing the quote of a famous Canadian ice-hockey player, it is important to know not only where the patient with DM and cancer is but also where he/she will be. This includes careful consideration of how glucose control and clinical conditions are expected to change, so that antidiabetic therapy can be proactively modified accordingly [e.g. avoiding antidiabetic drugs (ADDs) with specific contraindications and potential adverse events (AEs)]. Early recognition and proactive management of anticancer drug-induced hyperglycaemia allow antidiabetic therapy to be started at an early stage, enhancing the care, nutritional status, and QoL of cancer patients.

Corticosteroids commonly induce (or exacerbate) hyperglycaemia, hyperlipidaemia, and other metabolic side-effects. Even if it is hard to reliably identify in advance subjects who will develop steroid-induced DM, older patients with specific conditions are at increased risk, especially if a high dose and long duration of steroid treatment are predictable. It is fundamental to remember the predominant effect of corticosteroids on postprandial glucose levels: fasting plasma glucose may be normal in these patients, with relevant glucose excursions after lunch.

Over the last two decades, many commonly used targeted therapies [e.g. kinase/multikinase inhibitors, monoclonal antibodies, along with poly (ADP-ribose) polymerase, phosphoinositide 3-kinase (PI3-K), and mammalian target of rapamycin (mTOR) inhibitors] have been shown to exert detrimental effects on glucose and lipid metabolism, as well as on blood pressure and the cardiovascular (CV) system. Therefore, every cancer patient starting a targeted therapy, as well as those who are going to be treated with high-dose steroids, should undergo appropriate screening at baseline to identify those people requiring close monitoring of glucose and lipid metabolism. In patients with increased DM risk, we recommend fasting plasma glucose monitoring every 2 weeks during the first month and monthly thereafter, together with glycated haemoglobin (HbA1c) at baseline, at 3 months, and annually. Self-monitoring of blood glucose (SMBG) should be proposed or reinforced in patients with already known DM, monitoring fasting and 2-hour postprandial glucose levels. Flash and continuous glucose monitoring can also provide valuable help in enabling patients to avoid severe hyper- and hypoglycaemia.

More recently, a new type of permanent insulin-dependent DM has been recognized in cancer patients treated with immune checkpoint inhibitors (ICIs). ICIs may trigger autoimmune diabetes even far beyond 6 months from their introduction. Since severe hyperglycaemia and ketoacidosis may occur abruptly, diabetologists and oncologists should be aware of this potential risk.
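The screening cadence recommended above (fasting plasma glucose every 2 weeks during the first month and monthly thereafter, with HbA1c at baseline, at 3 months, and annually) can be laid out as a simple visit plan. The following Python sketch is purely illustrative; the glucose_monitoring_schedule helper and its 30-day month approximation are assumptions of this example, not part of the consensus recommendations.

```python
from datetime import date, timedelta

def glucose_monitoring_schedule(start: date, months: int = 12):
    """Illustrative visit plan for a patient at increased DM risk starting
    targeted therapy or high-dose steroids, per the cadence above:
    fasting plasma glucose every 2 weeks in the first month, then monthly;
    HbA1c at baseline, at 3 months, and annually."""
    visits = [(start, "fasting glucose + HbA1c (baseline)")]
    # Fasting glucose every 2 weeks during the first month.
    visits.append((start + timedelta(weeks=2), "fasting glucose"))
    visits.append((start + timedelta(weeks=4), "fasting glucose"))
    # Monthly thereafter (a month is approximated here as 30 days).
    for m in range(2, months + 1):
        label = "fasting glucose"
        if m == 3:
            label += " + HbA1c (3-month check)"
        elif m == 12:
            label += " + HbA1c (annual check)"
        visits.append((start + timedelta(days=30 * m), label))
    return visits

for day, test in glucose_monitoring_schedule(date(2024, 1, 8)):
    print(day.isoformat(), "-", test)
```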
In the ICI setting, a proactive approach means that patients should be appropriately trained to recognize the signs and symptoms of severe hyperglycaemia. Since ICI-induced autoimmune DM may also affect patients with already known DM, glucose monitoring should be reinforced in these patients, too.

Anticancer drugs' effects on glucose metabolism

Glucocorticoids

It is widely recognized that glucocorticoid therapy can lead to hyperglycaemia or further worsen a pre-existing condition of DM. However, the development of de novo DM in patients with normal glucose tolerance is uncommon. The effect of glucocorticoids on glucose metabolism is dose-dependent: although they cause only a mild increase in fasting blood glucose levels, a large increase in postprandial blood glucose (predominantly occurring in the afternoon and evening) and impaired sensitivity to exogenous insulin are frequently observed, both in patients with and without pre-existing DM. Glucocorticoid-induced hyperglycaemia may be due to increased hepatic glucose production and inhibited glucose uptake in adipose tissue and skeletal muscle, as well as to decreased β-cell insulin production. For these reasons, glycaemia should be closely monitored before initiating glucocorticoid therapy and throughout treatment, and starting or adjusting antidiabetic therapy should be considered if necessary. Although risk factors for steroid-induced DM predominantly include older age and higher body mass index, glucose level monitoring should be considered for all patients taking glucocorticoids. Clinicians should aim for the same glycaemic targets in glucocorticoid-induced DM as in pre-existing DM. Importantly, hyperglycaemia improves with the reduction in the dose of glucocorticoids and usually reverses when the medication is stopped; therefore, patients who are taking ADDs that increase endogenous insulin availability (insulin or sulfonylureas) and are tapering their glucocorticoid dose should closely monitor their blood glucose level, because of the risk of life-threatening hypoglycaemia.

Chemotherapy

Patients with both DM and cancer undergoing chemotherapy are at a greater risk of glycaemic issues. Around 10%-30% of cancer patients may experience hyperglycaemia during chemotherapy. Although it is typically a temporary condition during treatment, it can develop into a long-term issue. Several chemotherapy drugs are known to cause hyperglycaemia, even in patients without DM; cisplatin, 5-fluorouracil, and chemoradiation have been linked to hyperglycaemia. Combining chemotherapy and steroids, frequently used as premedication, can increase the risk of hyperglycaemia and has the potential to either cause de novo DM or worsen existing DM, which can cause complications during treatment, such as dose reduction or interruption. Poor glycaemic control in cancer patients is associated with more severe cancer courses and AEs such as neutropenia, infections, and even increased mortality. Receiving chemotherapy can be challenging for patients with pre-existing DM and related health issues such as CV problems, renal disease, or neuropathy. Chemotherapy drugs can worsen renal function and neuropathic complications. Patients with DM should be well informed about the risks and benefits of chemotherapy drugs, and preventing dehydration to avoid acute kidney injury should be a top priority. Chemotherapy drugs like platinum derivatives and taxanes are also known to cause peripheral neuropathy.
These drugs are commonly used to treat various types of cancer. Depending on the type of symptoms and the drugs used, patients with DM are more likely to experience neuropathy as a side-effect of chemotherapy, and the severity of neuropathy symptoms may increase at higher doses. Diabetic patients may also experience longer-lasting neuropathy after chemotherapy, with symptoms persisting for up to 2 years after treatment. A meta-analysis analysing the effect of DM on the clinical outcome of patients with pancreatic cancer who received adjuvant chemotherapy showed that patients with DM who underwent chemotherapy for pancreatic cancer presented with reduced survival rates and larger tumours. Additionally, pancreatic cancer patients with DM had a higher risk of death after chemotherapy.

Targeted therapy

Targeted therapy with tyrosine kinase inhibitors (TKIs) and mTOR inhibitors has expanded the treatment options for various types of cancer. TKIs and mTOR inhibitors interfere with glucose metabolism, causing hypo- or hyperglycaemia, sometimes even with the same molecule. TKIs and mTOR inhibitors are associated with a high incidence of hyperglycaemia, with reported rates of 15%-50% depending on the molecules used as anticancer therapies. Hyperglycaemia generally occurs within the first 3-4 weeks of therapy with TKIs. TKIs may affect glucose metabolism through various mechanisms, but the precise molecular mechanisms remain unclear. Both first- and second-generation TKIs influence glucose metabolism, and the prevalence of DM, glucose intolerance, and metabolic syndrome did not differ across TKI molecules. However, the most diabetogenic drugs seem to be nilotinib and crizotinib (hyperglycaemia in up to 40% and 49% of patients, respectively), while imatinib and dasatinib have also been reported to cause hypoglycaemia. A possible mechanism could be an increase in insulin resistance and a reduction in β-cell function with impaired insulin secretion. Another proposed mechanism is the potential inhibition of glycogen synthesis and/or activation of glycogenolysis, with inhibition of peripheral glucose uptake. TKIs can also have a hypoglycaemic impact in type 1 DM (T1DM) and T2DM, with an improvement in glycaemia, and severe hypoglycaemia has been reported in non-diabetic patients treated with sunitinib or imatinib.

Everolimus is an oral mTOR inhibitor. mTOR exists in two distinct large protein complexes, mTORC1 and mTORC2, and a relationship has been found between hyperglycaemia and everolimus. The effects of mTOR on glucose homeostasis are complex, depending on the level of mTORC1 activity: mTORC1 promotes insulin resistance and improves insulin secretion, and hyperglycaemia induced by mTOR inhibition may also be due to a decrease in insulin secretion. The risk of hyperglycaemia with everolimus seems to vary by tumour type: the highest rate has been observed in renal cell carcinoma and the lowest in breast, hepatocellular, and neuroendocrine tumours (NETs).

Somatostatin analogues

Long-acting somatostatin analogues (SSAs) are used to treat NETs, acromegaly, and Cushing's disease. Two first-generation SSAs, octreotide and lanreotide, and one second-generation somatostatin receptor agonist, pasireotide, are available. SSAs have been shown to decrease growth hormone and insulin-like growth factor-I (IGF-I) levels in patients with acromegaly and to prolong progression-free survival in patients with NETs.
SSAs also inhibit the secretion of prolactin, thyrotropin, cholecystokinin, glucose-dependent insulinotropic polypeptide (GIP), gastrin, motilin, neurotensin, secretin, glucagon, insulin, and pancreatic polypeptide. They also inhibit the exocrine secretion of amylase by the salivary glands; of hydrochloric acid, pepsinogen, and intrinsic factor by the gastrointestinal mucosa; of enzymes and bicarbonate by the pancreas; and of bile by the liver. Furthermore, glucose, fat, and amino acid absorption is inhibited by SSAs. Among the most frequently reported AEs of SSAs is a negative impact on glucose homeostasis. Pasireotide has shown a good safety profile, as expected for SSAs, except for a higher degree of hyperglycaemia, whereas octreotide and lanreotide usually induce only minor glucose metabolism abnormalities: hyperglycaemia with a reduction in insulin secretion during an oral glucose tolerance test has been reported with both. Mechanistic studies in healthy volunteers suggest that pasireotide-associated hyperglycaemia is due to reduced secretion of glucagon-like peptide (GLP)-1, GIP, and insulin, with intact postprandial glucagon secretion. AEs such as hyperglycaemia and DM, classified as grade 3 and 4 toxicity (according to the National Cancer Institute Common Terminology Criteria for Adverse Events version 5.0), occurred in up to 20% of patients, and glucose and HbA1c levels increased soon after the initiation of pasireotide treatment.

Immunotherapy

ICIs have revolutionized the treatment of various cancers by enhancing the immune system's ability to target cancer cells. However, recent studies suggest that these drugs may also induce DM. Specifically, ICIs, such as cytotoxic T lymphocyte antigen 4 (CTLA-4) and programmed cell death protein 1 (PD-1) inhibitors, have been found to induce de novo DM in a low percentage of patients (1%-2% across ICI regimens). Among these drugs, PD-1 inhibitors, including pembrolizumab and nivolumab, and programmed death-ligand 1 (PD-L1) inhibitors, such as durvalumab, are more likely to precipitate DM than CTLA-4 inhibitors alone, such as ipilimumab. DM induced by ICIs can manifest as new-onset insulin-dependent DM or worsening of pre-existing T2DM. The mechanism behind this phenomenon is not yet fully understood but is considered immune-related, similar to T1DM. Combination therapy with anti-PD-L1 and anti-CTLA-4 agents has been shown to significantly affect the timing of DM onset in cancer patients: while the median onset of ICI-induced DM is after 4.5 cycles, with ICI combination therapy it has been found to occur earlier (median 2.7 cycles).

Glucose management during cancer treatments

Given the potential negative effects of hyperglycaemia and uncontrolled DM on cancer patient outcomes, achieving good glycaemic control throughout the care pathway (in both inpatient and outpatient settings, before, during, and after active antineoplastic therapy) is warranted for cancer patients with DM. The management of DM in cancer patients requires a 'paradigm change' as compared to DM patients without cancer. In the last few years, growing evidence has led clinicians to consider an early and proactive multimodal approach in subjects with DM to lower diabetes-associated CV risk.
Recent international guidelines have endorsed the early use of some classes of ADDs with proven CV benefits, such as sodium-glucose co-transporter 2 inhibitors (SGLT2is) and GLP-1 receptor agonists (GLP1-RAs), in the treatment pathway for DM patients with atherosclerotic CV disease and/or heart failure, in order to reduce CV events, CV-related mortality, and hospitalization for heart failure. These suggestions sit alongside the 'classic' recommendation of achieving tight glycaemic control in most non-frail patients with DM to minimize the risk of chronic diabetic complications. Although CV risk and complications should not be underestimated in cancer patients with DM, the choice of therapy and glycaemic targets should be carefully evaluated and individualized. In this setting, the goals of treatment switch from prevention of chronic complications and control of CV risk to maintaining acceptable glycaemic levels, minimizing drug interactions and AEs, and improving nutritional status, with the final aim of improving the patient's well-being and adherence to cancer therapy.

Various factors contribute to determining the glycaemic targets for cancer patients with DM. In particular, overall performance status, life expectancy, disease stage, hypoglycaemic risk, comorbidities, and the presence of caregiver(s) are pivotal for the evaluation of glycaemic targets and SMBG frequency. In the case of good life expectancy, limited and controlled comorbidities, and younger age, a stricter glycaemic target should be aimed for. Conversely, poor performance status, short life expectancy, significant hypoglycaemic risk, and older age call for a substantially less tight target to avoid symptomatic hyper- and hypoglycaemia. In the palliative care and 'end-of-life' settings, glycaemic targets should be further loosened, and SMBG frequency should be reduced to the minimum acceptable.

Another difference lies in the methods by which glycaemic status should be evaluated. Given the frequent occurrence of anaemia and the need for blood transfusion (especially in haematologic malignancies), HbA1c measurement may often provide an inaccurate picture of glucose control. Moreover, short-term glycaemic excursions (albeit significant, as in steroid-induced hyperglycaemia) do not usually affect HbA1c levels. Therefore, in this setting, SMBG represents a valuable option for cancer patients with DM/dysglycaemia. In selected cases, such as patients with high glycaemic variability/instability (e.g. pancreatectomized patients, immunotherapy-induced autoimmune DM), the use of glucose sensors should be considered, taking into account the patient's characteristics, local resources, and patient/caregiver suitability for this technology.

The above-mentioned clinical factors should also be evaluated before choosing the type of antidiabetic treatment. Furthermore, the safety profile of the various classes of ADDs, drug interactions, and the type of cancer therapy (and its possible contribution to hyperglycaemia/worsening of DM) should be considered, too. Cancer treatments are usually associated with frequent AEs, especially involving the gastrointestinal tract (e.g. nausea, vomiting, diarrhoea), significantly burdening patients' QoL. Attention should therefore be given when prescribing ADDs with the potential for gastrointestinal AEs, such as metformin, acarbose, and GLP1-RAs.
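The individualization criteria just described (performance status, life expectancy, hypoglycaemic risk, age, and care setting) can be summarized as a simple triage function. In the Python sketch below, the function name, the 10-year life-expectancy threshold, and the HbA1c bands in the comments are hypothetical additions for illustration; the consensus text gives only the qualitative directions (stricter, less tight, or further loosened targets).

```python
def glycaemic_target_category(life_expectancy_years: float,
                              good_performance_status: bool,
                              high_hypoglycaemia_risk: bool,
                              older_age: bool,
                              end_of_life: bool) -> str:
    """Qualitative triage of glycaemic targets in cancer patients with DM,
    following the criteria discussed above; outputs are categories,
    not prescriptions."""
    if end_of_life:
        # Palliative/'end-of-life' setting: loosen targets further and
        # reduce SMBG frequency to the minimum acceptable.
        return "relaxed, symptom-driven control"
    if high_hypoglycaemia_risk or older_age or not good_performance_status:
        # Substantially less tight target to avoid symptomatic
        # hyper- and hypoglycaemia (e.g. HbA1c 7.5%-8.5%; assumed band).
        return "less tight target"
    if life_expectancy_years >= 10:
        # Good life expectancy, controlled comorbidities, younger age:
        # a stricter target can be aimed for (e.g. HbA1c <7%; assumed band).
        return "stricter target"
    return "intermediate, individualized target"
```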
Moreover, although metformin usually represents the first-choice ADD for DM treatment, one should remember to thoroughly evaluate renal function and the risk of its worsening, to which cancer patients are more vulnerable through exposure to nephrotoxic antineoplastic drugs and intravenous contrast agents. Metformin should also be temporarily held before imaging procedures requiring the administration of iodinated contrast agents. SGLT2is, while effective in reducing CV risk and treating heart failure, carry the risk of dehydration and urogenital infections, which could become clinically significant in a setting of active cancer therapy and subsequent immunosuppression; their use should therefore be thoroughly evaluated.

Neoplastic disease is often accompanied by a catabolic state that facilitates anorexia, weight loss, and cachexia. The nutritional status of cancer patients with DM should be carefully evaluated, and the use of ADDs with known weight loss effects (e.g. metformin, SGLT2is, GLP1-RAs) should be cautiously balanced. In this setting, insulin, also favoured by its flexibility and efficacy, could represent the treatment of choice thanks to its anabolic effect. Nonetheless, the use of insulin, albeit useful in a great percentage of cancer patients with DM and with virtually no contraindications, carries a significant hypoglycaemic risk and the need to adequately educate patients and caregivers about its everyday management and SMBG/sensor use.

Some cancer treatments can cause significant metabolic and glycaemic derangement, favouring DM onset or worsening, through the induction of significant insulin resistance or reduced insulin production. Understanding the underlying mechanisms through which hyperglycaemia develops is pivotal for choosing the most appropriate antidiabetic treatment. For instance, ADDs with known insulin-sensitizing effects could be the drugs of choice in managing hyperglycaemia related to some kinase inhibitors (e.g. nilotinib, ponatinib, alpelisib), mTOR inhibitors (e.g. everolimus), or corticosteroid therapy. On the contrary, in situations of relative or absolute insulin deficiency, such as immunotherapy-induced autoimmune DM, pancreatic cancer-related DM, or post-pancreatitis DM, insulin therapy is mandatory. Given the higher risk of severe complications from various infectious diseases (including coronavirus disease 2019) in people with DM and the relative immunosuppression associated with neoplastic disease and its treatments, cancer patients with DM should also be offered the vaccinations recommended by the International Diabetes Federation (IDF) to reduce mortality and morbidity risk.

Supportive and palliative treatments in cancer patients with diabetes

Supportive and palliative care is essential to the overall care plan for patients with DM and cancer. These conditions can be challenging to manage, and patients often require various supportive services. Health care providers need to work together to develop a plan that addresses the patient's cancer- and DM-related needs. Supportive care may include training and support on managing blood glucose levels, ADD management, lifestyle changes, and access to a multidisciplinary team of health care professionals who can provide symptom management, nutrition, and emotional support. Palliative care services may also be necessary for patients experiencing neuropathy, chronic pain, symptoms related to treatment toxicities, or cancer progression, such as pain, nausea, or fatigue.
Overall, supportive and palliative care for individuals with DM and cancer aims to improve their QoL and provide them with the resources they need to manage their symptoms and maintain their functional status. Cancer patients with DM and suboptimal glycaemic control often experience increased pain and asthenia and have a higher incidence of treatment toxicities, such as nausea, vomiting, reduced appetite, diarrhoea, and weight loss, which can lead to malnutrition and sarcopenia, with skeletal muscle mass loss and a decline in functional status.

Nutritional intake is crucial for managing DM and cancer. A well-balanced diet that includes adequate protein and calories can help patients to improve glycaemic control, maintain weight, and improve their overall strength and energy levels, reducing the risk of complications associated with DM. Exercise, too, has been shown to have a protective effect against both DM and cancer, not only in the prevention setting but also in each phase of the patient journey; with an adaptive and personalized approach, it helps to maintain muscle mass and to reduce or delay the risk of neoplastic cachexia. However, the nutritional needs of cancer patients with DM may differ depending on several factors, such as clinical conditions, comorbidities, cancer site and stage, and age. These patients warrant specialized and personalized nutritional support within the multidisciplinary team, with specific interventions for oral and parenteral supplementation.
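As a closing illustration, the mechanism-based drug-selection logic from the glucose-management section above (insulin-sensitizing ADDs where hyperglycaemia is driven by insulin resistance, mandatory insulin in insulin-deficient states) can be captured as a small lookup. This is a hypothetical triage sketch built on the examples named in the text, not a prescribing algorithm from this paper.

```python
# Causes of treatment-related hyperglycaemia grouped by predominant
# mechanism, mirroring the examples given in the text above.
INSULIN_RESISTANCE_DRIVEN = {
    "nilotinib", "ponatinib", "alpelisib",  # kinase inhibitors
    "everolimus",                           # mTOR inhibitor
    "corticosteroids",
}
INSULIN_DEFICIENT_STATES = {
    "ici-induced autoimmune dm",
    "pancreatic cancer-related dm",
    "post-pancreatitis dm",
}

def suggested_first_line(cause: str) -> str:
    """Map a hyperglycaemia cause to the broad strategy named in the text.
    Real prescribing must also weigh contraindications, nutritional
    status, renal function, and drug interactions."""
    cause = cause.strip().lower()
    if cause in INSULIN_DEFICIENT_STATES:
        return "insulin therapy (mandatory in relative/absolute deficiency)"
    if cause in INSULIN_RESISTANCE_DRIVEN:
        return "consider an insulin-sensitizing ADD"
    return "individualized assessment by the diabetes team"

print(suggested_first_line("everolimus"))
print(suggested_first_line("ICI-induced autoimmune DM"))
```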
Patients with previously known DM should be scheduled for a visit at a DM care clinic before starting oncological treatments to evaluate the presence of diabetic complications that could influence the choice of anticancer therapy and to assess current nutritional status and requirements, the overall metabolic control, and the need to proactively modify current glucose-lowering therapy. , On the contrary, many patients with normal glucose control can develop new-onset DM or metabolic disorders (dyslipidaemia, hyperuricaemia, hypertension) because of cancer therapies or supportive drugs. Therefore, paraphrasing the quote of a famous Canadian ice-hockey player, it is not important to (only) know where the patient with DM and cancer is but also where he/she will be. This includes careful consideration of how we expect glucose control and clinical condition to change to proactively modify antidiabetic therapy accordingly [e.g. avoiding antidiabetic drugs (ADDs), with specific contraindications and potential adverse events (AEs)]. Early recognition and proactive management of anticancer drug-induced hyperglycaemia allow starting antidiabetic therapy at an early stage, enhancing care, nutritional status, and QoL of cancer patients. Corticosteroids commonly induce (or exacerbate) hyperglycaemia, hyperlipidaemia, and other metabolic side-effects. Even if it is hard to reliably identify in advance subjects who will develop steroid-induced DM, older patients with specific conditions are at increased risk, especially if a high dose and long duration of steroid treatment are predictable . , , It is fundamental to remember the predominant effect of corticosteroids on postprandial glucose levels. Indeed, fasting plasma glucose may be normal in these patients, with relevant glucose excursion after lunch. Over the last two decades, many commonly used targeted therapies [e.g. kinase/multikinase inhibitors, monoclonal antibodies, along with poly (ADP-ribose) polymerase, phosphoinositide 3-kinase (PI3-K), and mammalian target of rapamycin (mTOR) inhibitors] have shown to exert detrimental effects on glucose and lipid metabolism, as well as on blood pressure, and the cardiovascular (CV) system. , Therefore, every cancer patient starting a targeted therapy, as well as those who are going to be treated with high-dose steroids, should undergo an appropriate screening at baseline to identify those people requiring close monitoring of glucose and lipid metabolism . , In patients with increased DM risk, we recommend fasting plasma glucose monitoring every 2 weeks during the first month and then monthly after that, together with glycated haemoglobin (HbA1c) at baseline, at 3 months, and annually. Self-monitoring of blood glucose (SMBG) should be proposed or reinforced in patients with already known DM, monitoring fasting and 2-hour postprandial glucose levels. Flash and continuous glucose monitoring can also provide valuable help in enabling patients to avoid severe hyper- and hypoglycaemia. More recently, a new type of permanent insulin-dependent DM has been recognized in cancer patients treated with immune checkpoint inhibitors (ICIs). , ICIs may trigger autoimmune diabetes even far beyond 6 months from their introduction. Since severe hyperglycaemia and ketoacidosis may abruptly occur, diabetologists and oncologists should know about this potential risk. In this setting, a proactive approach means that patients should be appropriately trained to recognize signs and symptoms of severe hyperglycaemia. 
Since ICI-induced autoimmune DM may also affect patients with already known DM, monitoring glucose levels of these patients should be reinforced, too.
Glucocorticoids It is widely recognized that glucocorticoid therapy can lead to hyperglycaemia or further worsen a pre-existing condition of DM. , However, the development of de novo DM in patients with normal glucose tolerance is uncommon. , The effect of glucocorticoids on glucose metabolism is dose-dependent and, although it causes only a mild increase in fasting blood glucose levels, a large increase in postprandial blood glucose both in patients with and without pre-existing DM, predominantly occurring in the afternoon and evening, and impaired sensitivity to exogenous insulin are frequently observed. Glucocorticoid-induced hyperglycaemia may be due to increased hepatic glucose production and inhibited glucose uptake in adipose tissue and skeletal muscle, as well as due to decreased β-cell insulin production. , For these reasons, before initiating glucocorticoid therapy and throughout treatment, glycaemia should be closely monitored, and antidiabetic therapy started, or adjustment should be considered if necessary. , Although risk factors for steroid-induced DM predominantly include older age and higher body mass index, glucose level monitoring should be considered for all patients taking glucocorticoids. Clinicians should aim to the same glycaemic targets in glucocorticoid-induced DM as in those with pre-existing DM. Importantly, hyperglycaemia improves with the reduction in the dose of glucocorticoids and usually reverses when the medication is stopped , ; therefore, patients who are taking ADDs, which increase endogenous insulin availability (insulin or sulfonylureas), and are tapering their glucocorticoid dose should closely monitor their blood glucose level, because of the risk for life-threatening hypoglycaemia. , , Chemotherapy Patients with both DM and cancer undergoing chemotherapy are at a greater risk of glycaemic issues. Around 10%-30% of cancer patients during chemotherapy may experience hyperglycaemia. Although it is typically a temporary condition during treatment, it can develop into a long-term issue. Several chemotherapy drugs are known to cause hyperglycaemia, even in patients without DM. Cisplatin, 5-fluorouracil, and chemoradiation have been linked to hyperglycaemia. , Combining chemotherapy and steroids, frequently used as premedication, can increase the risk of hyperglycaemia and the potential to either cause de novo DM or worsen existing DM, which can cause complications during treatment, such as dose reduction or interruption. Poor glycaemic control in cancer patients is associated with more severe cancer courses and AEs such as neutropenia, infections, and even increased mortality. Receiving chemotherapy can be challenging for patients with pre-existing DM and related health issues, which can lead to CV problems, renal disease, or neuropathy. Chemotherapy drugs can worsen renal function and neuropathic complications. Patients with DM should be well-informed about the risks and benefits of chemotherapy drugs, and preventing dehydration to avoid acute kidney injury should be a top priority. Chemotherapy drugs like platinum derivatives and taxanes are also known to cause peripheral neuropathy. These drugs are commonly used to treat various types of cancer. Depending on the type of symptoms and the used drugs, patients with DM are more likely to experience neuropathy as a side-effect of chemotherapy. The severity of neuropathy symptoms may increase at higher doses of chemotherapy. 
Diabetic patients may experience longer-lasting neuropathy after chemotherapy, with symptoms persisting for up to 2 years after treatment. A meta-analysis was conducted to analyze the effect of DM on the clinical outcome of patients with pancreatic cancer who received adjuvant chemotherapy. The results showed that patients with DM who underwent chemotherapy for pancreatic cancer presented with reduced survival rates and larger tumours. Additionally, pancreatic cancer patients with DM had a higher risk of death after chemotherapy. Targeted therapy Targeted therapy with tyrosine kinase inhibitors (TKIs) and mTOR inhibitors has increased the possibility of treatment in various types of cancer. TKIs and mTOR inhibitors interfere with glucose metabolism , with hypo- or hyperglycaemia, even for the same molecule. TKIs and mTOR inhibitors are associated with a high incidence of hyperglycaemia with a reported rate of 15%-50%, , depending on the molecules used as anticancer therapies. Hyperglycaemia generally occurs within the first 3-4 weeks of therapy with TKIs. TKIs may impact glucose metabolism by various mechanisms, but the molecular mechanism remains unclear. First- and second-generation TKIs influence glucose metabolism. Prevalence of DM, glucose intolerance, and metabolic syndrome did not differ depending on TKI molecules. However, the most diabetogenic drugs seem to be nilotinib and crizotinib (up to 40% and 49%, respectively), while imatinib and dasatinib have been reported also to cause hypoglycaemia. A possible mechanism could be the increase in insulin resistance and the reduction in β-cell function with impaired insulin secretion. Another proposed mechanism is the potential inhibition of glycogen synthesis and/or activation of glycogenolysis, with inhibition of peripheral glucose uptake. TKIs can have a hypoglycaemic impact in type 1 DM (T1DM) and T2DM, with an improvement in glycaemia. Severe hypoglycaemia has been reported in non-diabetic patients treated with sunitinib or imatinib. , Everolimus is an oral mTOR inhibitor. mTOR exists in two distinct large protein complexes: mTORC1 and mTORC2. A relationship has been found between hyperglycaemia and everolimus. The effects of mTOR on glucose homeostasis are complex, depending on the level of mTORC1 activity. mTORC1 promotes insulin resistance and improves insulin secretion. Hyperglycaemia induced by mTOR inhibition may also be due to a decrease in insulin secretion. The risk of hyperglycaemia with everolimus seems to vary by tumour type. The highest has been observed in renal cell carcinoma and the lowest in breast, hepatocellular, and neuroendocrine tumours (NETs). Somatostatin analogues Long-acting somatostatin analogues (SSAs) are used to treat NETs, acromegaly, and Cushing’s disease. Two first-generation SSAs, octreotide and lanreotide, and one second-generation somatostatin receptor agonist, pasireotide, are available. SSAs have been shown to decrease growth hormone and insulin-like growth factor-I (IGF-I) levels in patients with acromegaly, and contribute to progression-free survival in patients with NETs. SSAs also inhibit the secretion of prolactin, thyrotropin, cholecystokinin, glucose-dependent insulinotropic polypeptide (GIP), gastrin, motilin, neurotensin, secretin, glucagon, insulin, and pancreatic polypeptide. They also inhibit the exocrine secretion of amylase by salivary glands; hydrochloric acid, pepsinogen, and intrinsic factor by gastrointestinal mucosa; enzymes and bicarbonate by pancreas; and bile in the liver. 
Furthermore, glucose, fat, and amino acid absorption is inhibited by SSAs. Among the most frequently reported AEs, SSAs have a negative impact on glucose homeostasis. Pasireotide has shown a good safety profile, as expected for SSAs, except for a higher degree of hyperglycaemia. Octreotide and lanreotide usually induce minor glucose metabolism abnormalities. Hyperglycaemia with a reduction in insulin secretion during an oral glucose tolerance test was reported with octreotide and lanreotide. , Mechanistic studies in healthy volunteers suggest that pasireotide-associated hyperglycaemia is due to reduced secretion of glucagon-like peptide (GLP)-1, GIP, and insulin; however, it is associated with intact postprandial glucagon secretion. AEs such as hyperglycaemia and DM, classified as grade 3 and 4 toxicity (according to the National Cancer Institute Common Terminology for Adverse Events version 5.0 ), occurred in up to 20% of patients. Glucose and HbA1c levels increased soon after the initiation of pasireotide treatment. Immunotherapy ICIs have revolutionized the treatment of various cancers by enhancing the immune system’s ability to target cancer cells. However, recent studies suggest that these drugs may also induce DM development. , Specifically, ICIs, such as cytotoxic T lymphocyte antigen 4 (CTLA-4) and programmed cell death protein 1 (PD-1) inhibitors, have been found to induce de novo DM in a low percentage of patients (1%-2% across ICI regimens). , Among these drugs, PD-1 inhibitors, including pembrolizumab and nivolumab, and programmed death-ligand 1 (PD-L1) inhibitors, such as durvalumab, are more likely to precipitate DM than CTLA-4 inhibitors alone, such as ipilimumab. The development of DM induced by ICIs can manifest as new-onset insulin-dependent DM or worsening of pre-existing T2DM. The mechanism behind this phenomenon is not yet fully understood but it is considered immune-related, similarly to T1DM. The combination therapy with anti-PD-L1 and anti-CTLA-4 has been shown to significantly impact the onset of DM in cancer patients. While the median onset of ICI-induced DM is after 4.5 cycles, in ICI combination therapy it has been found to occur earlier (median 2.7 cycles). Glucose management during cancer treatments Given the potential negative effects of hyperglycaemia and uncontrolled DM on cancer patient outcomes, achieving good glycaemic control throughout the care pathway (both in inpatient and outpatient settings, before, during, and after active antineoplastic therapy) is warranted for cancer patients with DM. The management of DM in cancer patients requires a ‘paradigm change’ as compared to DM patients without cancer. In the last few years, growing evidence has brought to consider an early and proactive multimodal approach towards subjects with DM, to lower diabetes-associated CV risk. Recent international guidelines have endorsed the early use of some classes of ADDs with proven CV benefits, such as sodium-glucose co-transporter inhibitors (SGLT2is) and GLP-1 receptor agonists (GLP1-RAs), in the treatment pathway for DM patients with atherosclerotic CV disease and/or heart failure, in order to reduce CV events and CV-related mortality and hospitalization for heart failure. , , Moreover, these suggestions are placed alongside the ‘classic’ recommendation of achieving tight glycaemic control in most non-frail patients with DM to minimize the risk of chronic diabetic complications. 
, , Although CV risk and complications should not be underestimated in cancer patients with DM, the choice of therapy and glycaemic targets should be carefully evaluated and individualized. In this setting, the goals of treatment switch from prevention of chronic complications and control of CV risk to maintaining acceptable glycaemic levels, minimizing drug interactions and AEs, and improving nutritional status with the final aim of improving patient’s well-being and adherence to cancer therapy. Various factors contribute to determining the glycaemic targets in the setting of cancer patients with DM. In particular, overall performance status, life expectancy, disease stage, hypoglycaemic risk, comorbidities, and presence of caregiver(s) are pivotal for the evaluation of glycaemic targets and frequency of self-monitoring (SMBG). , In the case of a good life expectancy, limited and controlled comorbidities, and younger age, a stricter glycaemic target should be aimed for. On the contrary, poor performance status, short life expectancy, significant hypoglycaemic risk, and older age need a substantially less tight target to avoid symptomatic hyper- and hypoglycaemia. In the palliative care and ‘end-of-life’ settings, glycaemic targets should be further loosened, and SMBG frequency should be reduced to the minimally acceptable. , Another difference lies in the methods by which glycaemic status should be evaluated. Given the frequent occurrence of anaemia and the need for blood transfusion (especially in haematologic malignancies), HbA1c measurement could frequently provide an inaccurate result in evaluating glucose control. , Moreover, short-term glycaemic excursions (albeit significant, like in steroid-induced hyperglycaemia) do not usually affect HbA1c levels. Therefore, in this setting, SMBG represents a valuable option for cancer patients with DM/dysglycaemia. In selected cases, such as in patients with high glycaemic variability/instability (e.g. pancreatectomized patients, immunotherapy-induced autoimmune DM), the use of glucose sensors should be considered, considering patient’s characteristics, local resources, and patient/caregiver suitability to this technology. The above-mentioned clinical factors should also be evaluated before choosing the type of antidiabetic treatment. Furthermore, the safety profile of the various classes of ADDs, drug interactions, and type of cancer therapy (and its possible contribution to hyperglycaemia/worsening of DM) should be considered, too. Cancer treatments are usually associated with frequent AEs, especially involving the gastrointestinal tract (e.g. nausea, vomiting, diarrhoea), significantly burdening patients’ QoL. Attention should be given when prescribing ADDs with the potential of gastrointestinal AEs, such as metformin, acarbose, and GLP1-RAs. Moreover, although metformin usually represents the first choice of ADD for DM treatment, one should remember to thoroughly evaluate renal function and the risk of its worsening for which cancer patients, through exposure to nephrotoxic antineoplastic drugs and intravenous contrast agents, are more vulnerable. Metformin should also be temporarily held before imaging procedures requiring the administration of iodinated contrast agents. SGLT2is, while effective in reducing CV risk and treating heart failure, carry the risk of dehydration and urogenital infections that could become clinically significant in a setting of active cancer therapy and subsequent immunosuppression. 
Their use should therefore be thoroughly evaluated. The neoplastic disease is often accompanied by a catabolic state that facilitates anorexia, weight loss, and cachexia. The nutritional status of cancer patients with DM should be carefully evaluated, and the use of ADDs with known weight loss effects (e.g. metformin, SGLT2is, GLP1-RA) should be cautiously balanced. , In this setting, also favoured by its flexibility and efficiency, insulin could represent the treatment of choice producing an anabolic effect. Nonetheless, the use of insulin, albeit useful in a great percentage of cancer patients with DM and virtually with no contraindications, carries with itself a significant hypoglycaemic risk and the need to adequately educate patients and caregivers about its everyday management and SMBG/sensor use. , Some cancer treatments can cause significant metabolic and glycaemic derangement, favouring DM onset or worsening, through the induction of significant insulin resistance or reduced insulin production. Understanding the underlying mechanisms through which hyperglycaemia develops is pivotal for choosing the most appropriate antidiabetic treatment. For instance, ADDs with known insulin-sensitizer effects could be the drugs of choice in managing hyperglycaemia related to some kinase inhibitors (e.g. nilotinib, ponatinib, alpelisib), mTOR inhibitors (e.g. everolimus), or corticosteroid therapy. , , On the contrary, in situations of relative or absolute insulin deficiency, such as immunotherapy-induced autoimmune DM, pancreatic cancer-related DM, or post-pancreatitis DM, insulin therapy is mandatory. , Given the higher risk of severe complications from various infectious diseases (including coronavirus disease-19) in people with DM and the relative immunosuppression associated with neoplastic disease and treatments, cancer patients with DM should also be offered vaccinations recommended by the International Diabetes Federation (IDF) to reduce mortality and morbidity risk. Supportive and palliative treatments in cancer patients with diabetes Supportive and palliative care is essential to the overall care plan for patients with DM and cancer. These conditions can be challenging to manage, and patients often require various supportive services. Health care providers need to work together to develop a plan that addresses the patient’s cancer- and DM-related needs. Supportive care may include training and support on managing blood sugar levels, ADD management, lifestyle changes, and access to a multidisciplinary team of health care professionals who can provide symptom management, nutrition, and emotional support. Palliative care services may also be necessary for patients experiencing neuropathy, chronic pain, symptoms related to treatment toxicities, or cancer progression, such as pain, nausea, or fatigue. Overall, supportive and palliative care for individuals with DM or cancer aims to improve their QoL and provide them with the resources they need to manage their symptoms and maintain their functional status. Cancer patients with DM and limited glycaemic control often experience increased pain and asthenia and have a higher incidence of treatment toxicities, such as nausea, vomiting, reduction of appetite, diarrhoea, and weight loss, which can lead to malnutrition and sarcopenia, with skeletal muscle mass loss and a decline in functional status. , Nutritional intake is crucial for managing DM and cancer. 
A well-balanced diet that includes adequate protein and calories can help patients to improve glycaemic control, maintain weight, and improve their overall strength and energy levels, reducing the risk of complications associated with DM. Also, exercise has been shown to have a protective effect against both DM and cancer, not only in the prevention setting but also in each phase of the patient journey, with an adaptive and personalized approach, maintaining muscle mass and reducing or delaying the risk of neoplastic cachexia. However, the nutritional needs of cancer patients with DM may differ depending on several factors, such as clinical conditions, comorbidities, cancer site and stage, and age. These patients need to warrant specialized and personalized nutritional support in the multidisciplinary team and provide specific interventions for oral and parenteral supplementation.
It is widely recognized that glucocorticoid therapy can lead to hyperglycaemia or further worsen a pre-existing condition of DM. , However, the development of de novo DM in patients with normal glucose tolerance is uncommon. , The effect of glucocorticoids on glucose metabolism is dose-dependent and, although it causes only a mild increase in fasting blood glucose levels, a large increase in postprandial blood glucose both in patients with and without pre-existing DM, predominantly occurring in the afternoon and evening, and impaired sensitivity to exogenous insulin are frequently observed. Glucocorticoid-induced hyperglycaemia may be due to increased hepatic glucose production and inhibited glucose uptake in adipose tissue and skeletal muscle, as well as due to decreased β-cell insulin production. , For these reasons, before initiating glucocorticoid therapy and throughout treatment, glycaemia should be closely monitored, and antidiabetic therapy started, or adjustment should be considered if necessary. , Although risk factors for steroid-induced DM predominantly include older age and higher body mass index, glucose level monitoring should be considered for all patients taking glucocorticoids. Clinicians should aim to the same glycaemic targets in glucocorticoid-induced DM as in those with pre-existing DM. Importantly, hyperglycaemia improves with the reduction in the dose of glucocorticoids and usually reverses when the medication is stopped , ; therefore, patients who are taking ADDs, which increase endogenous insulin availability (insulin or sulfonylureas), and are tapering their glucocorticoid dose should closely monitor their blood glucose level, because of the risk for life-threatening hypoglycaemia. , ,
Patients with both DM and cancer undergoing chemotherapy are at a greater risk of glycaemic issues. Around 10%-30% of cancer patients during chemotherapy may experience hyperglycaemia. Although it is typically a temporary condition during treatment, it can develop into a long-term issue. Several chemotherapy drugs are known to cause hyperglycaemia, even in patients without DM. Cisplatin, 5-fluorouracil, and chemoradiation have been linked to hyperglycaemia. , Combining chemotherapy and steroids, frequently used as premedication, can increase the risk of hyperglycaemia and the potential to either cause de novo DM or worsen existing DM, which can cause complications during treatment, such as dose reduction or interruption. Poor glycaemic control in cancer patients is associated with more severe cancer courses and AEs such as neutropenia, infections, and even increased mortality. Receiving chemotherapy can be challenging for patients with pre-existing DM and related health issues, which can lead to CV problems, renal disease, or neuropathy. Chemotherapy drugs can worsen renal function and neuropathic complications. Patients with DM should be well-informed about the risks and benefits of chemotherapy drugs, and preventing dehydration to avoid acute kidney injury should be a top priority. Chemotherapy drugs like platinum derivatives and taxanes are also known to cause peripheral neuropathy. These drugs are commonly used to treat various types of cancer. Depending on the type of symptoms and the used drugs, patients with DM are more likely to experience neuropathy as a side-effect of chemotherapy. The severity of neuropathy symptoms may increase at higher doses of chemotherapy. Diabetic patients may experience longer-lasting neuropathy after chemotherapy, with symptoms persisting for up to 2 years after treatment. A meta-analysis was conducted to analyze the effect of DM on the clinical outcome of patients with pancreatic cancer who received adjuvant chemotherapy. The results showed that patients with DM who underwent chemotherapy for pancreatic cancer presented with reduced survival rates and larger tumours. Additionally, pancreatic cancer patients with DM had a higher risk of death after chemotherapy.
Targeted therapy with tyrosine kinase inhibitors (TKIs) and mTOR inhibitors has increased the possibility of treatment in various types of cancer. TKIs and mTOR inhibitors interfere with glucose metabolism , with hypo- or hyperglycaemia, even for the same molecule. TKIs and mTOR inhibitors are associated with a high incidence of hyperglycaemia with a reported rate of 15%-50%, , depending on the molecules used as anticancer therapies. Hyperglycaemia generally occurs within the first 3-4 weeks of therapy with TKIs. TKIs may impact glucose metabolism by various mechanisms, but the molecular mechanism remains unclear. First- and second-generation TKIs influence glucose metabolism. Prevalence of DM, glucose intolerance, and metabolic syndrome did not differ depending on TKI molecules. However, the most diabetogenic drugs seem to be nilotinib and crizotinib (up to 40% and 49%, respectively), while imatinib and dasatinib have been reported also to cause hypoglycaemia. A possible mechanism could be the increase in insulin resistance and the reduction in β-cell function with impaired insulin secretion. Another proposed mechanism is the potential inhibition of glycogen synthesis and/or activation of glycogenolysis, with inhibition of peripheral glucose uptake. TKIs can have a hypoglycaemic impact in type 1 DM (T1DM) and T2DM, with an improvement in glycaemia. Severe hypoglycaemia has been reported in non-diabetic patients treated with sunitinib or imatinib. , Everolimus is an oral mTOR inhibitor. mTOR exists in two distinct large protein complexes: mTORC1 and mTORC2. A relationship has been found between hyperglycaemia and everolimus. The effects of mTOR on glucose homeostasis are complex, depending on the level of mTORC1 activity. mTORC1 promotes insulin resistance and improves insulin secretion. Hyperglycaemia induced by mTOR inhibition may also be due to a decrease in insulin secretion. The risk of hyperglycaemia with everolimus seems to vary by tumour type. The highest has been observed in renal cell carcinoma and the lowest in breast, hepatocellular, and neuroendocrine tumours (NETs).
Long-acting somatostatin analogues (SSAs) are used to treat NETs, acromegaly, and Cushing’s disease. Two first-generation SSAs, octreotide and lanreotide, and one second-generation somatostatin receptor agonist, pasireotide, are available. SSAs have been shown to decrease growth hormone and insulin-like growth factor-I (IGF-I) levels in patients with acromegaly, and contribute to progression-free survival in patients with NETs. SSAs also inhibit the secretion of prolactin, thyrotropin, cholecystokinin, glucose-dependent insulinotropic polypeptide (GIP), gastrin, motilin, neurotensin, secretin, glucagon, insulin, and pancreatic polypeptide. They also inhibit the exocrine secretion of amylase by salivary glands; hydrochloric acid, pepsinogen, and intrinsic factor by gastrointestinal mucosa; enzymes and bicarbonate by pancreas; and bile in the liver. Furthermore, glucose, fat, and amino acid absorption is inhibited by SSAs. Among the most frequently reported AEs, SSAs have a negative impact on glucose homeostasis. Pasireotide has shown a good safety profile, as expected for SSAs, except for a higher degree of hyperglycaemia. Octreotide and lanreotide usually induce minor glucose metabolism abnormalities. Hyperglycaemia with a reduction in insulin secretion during an oral glucose tolerance test was reported with octreotide and lanreotide. , Mechanistic studies in healthy volunteers suggest that pasireotide-associated hyperglycaemia is due to reduced secretion of glucagon-like peptide (GLP)-1, GIP, and insulin; however, it is associated with intact postprandial glucagon secretion. AEs such as hyperglycaemia and DM, classified as grade 3 and 4 toxicity (according to the National Cancer Institute Common Terminology for Adverse Events version 5.0 ), occurred in up to 20% of patients. Glucose and HbA1c levels increased soon after the initiation of pasireotide treatment.
ICIs have revolutionized the treatment of various cancers by enhancing the immune system’s ability to target cancer cells. However, recent studies suggest that these drugs may also induce DM. Specifically, ICIs such as cytotoxic T lymphocyte antigen 4 (CTLA-4) and programmed cell death protein 1 (PD-1) inhibitors have been found to induce de novo DM in a small percentage of patients (1%-2% across ICI regimens). Among these drugs, PD-1 inhibitors, including pembrolizumab and nivolumab, and programmed death-ligand 1 (PD-L1) inhibitors, such as durvalumab, are more likely to precipitate DM than CTLA-4 inhibitors alone, such as ipilimumab. ICI-induced DM can manifest as new-onset insulin-dependent DM or as worsening of pre-existing T2DM. The mechanism behind this phenomenon is not yet fully understood but is considered immune-related, similar to T1DM. Combination therapy with anti-PD-L1 and anti-CTLA-4 has been shown to significantly affect the onset of DM in cancer patients: while the median onset of ICI-induced DM is after 4.5 treatment cycles, with combination therapy it occurs earlier (median 2.7 cycles).
Given the potential negative effects of hyperglycaemia and uncontrolled DM on cancer patient outcomes, achieving good glycaemic control throughout the care pathway (in both inpatient and outpatient settings, before, during, and after active antineoplastic therapy) is warranted for cancer patients with DM. The management of DM in cancer patients requires a ‘paradigm change’ compared with DM patients without cancer. In recent years, growing evidence has prompted an early and proactive multimodal approach in subjects with DM to lower diabetes-associated CV risk. Recent international guidelines have endorsed the early use of some classes of ADDs with proven CV benefits, such as sodium-glucose co-transporter 2 inhibitors (SGLT2is) and GLP-1 receptor agonists (GLP1-RAs), in the treatment pathway for DM patients with atherosclerotic CV disease and/or heart failure, in order to reduce CV events, CV-related mortality, and hospitalization for heart failure. These suggestions sit alongside the ‘classic’ recommendation of achieving tight glycaemic control in most non-frail patients with DM to minimize the risk of chronic diabetic complications. Although CV risk and complications should not be underestimated in cancer patients with DM, the choice of therapy and glycaemic targets should be carefully evaluated and individualized. In this setting, the goals of treatment shift from prevention of chronic complications and control of CV risk to maintaining acceptable glycaemic levels, minimizing drug interactions and AEs, and improving nutritional status, with the final aim of improving the patient’s well-being and adherence to cancer therapy. Various factors contribute to determining glycaemic targets in cancer patients with DM. In particular, overall performance status, life expectancy, disease stage, hypoglycaemic risk, comorbidities, and the presence of caregiver(s) are pivotal for setting glycaemic targets and the frequency of self-monitoring of blood glucose (SMBG). With good life expectancy, limited and controlled comorbidities, and younger age, a stricter glycaemic target should be pursued. Conversely, poor performance status, short life expectancy, significant hypoglycaemic risk, and older age call for a substantially less tight target to avoid symptomatic hyper- and hypoglycaemia. In the palliative care and ‘end-of-life’ settings, glycaemic targets should be loosened further, and SMBG frequency should be reduced to the minimum acceptable. The methods by which glycaemic status is evaluated also differ. Given the frequent occurrence of anaemia and the need for blood transfusion (especially in haematologic malignancies), HbA1c measurement can often give an inaccurate picture of glucose control. Moreover, short-term glycaemic excursions (albeit significant, as in steroid-induced hyperglycaemia) do not usually affect HbA1c levels, which reflect average glycaemia over the preceding months (see the sketch after this paragraph). Therefore, in this setting, SMBG represents a valuable option for cancer patients with DM/dysglycaemia. In selected cases, such as patients with high glycaemic variability/instability (e.g. pancreatectomized patients, immunotherapy-induced autoimmune DM), the use of glucose sensors should be considered, taking into account the patient’s characteristics, local resources, and patient/caregiver suitability for this technology. The above-mentioned clinical factors should also be evaluated before choosing the type of antidiabetic treatment.
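To illustrate why HbA1c is blind to short-lived excursions, recall that it integrates glycaemia over roughly the preceding three months. A minimal sketch, using the published ADAG regression (a literature formula, not taken from this consensus document), converts HbA1c into an estimated average glucose (eAG):

```python
# Estimated average glucose (eAG) from HbA1c via the ADAG regression:
# eAG (mg/dL) = 28.7 * HbA1c(%) - 46.7. Because HbA1c reflects ~2-3 months
# of average glycaemia, a short steroid-induced excursion barely shifts it,
# which is why SMBG is preferred in this setting.

def eag_mg_dl(hba1c_percent: float) -> float:
    """Convert HbA1c (%) to estimated average glucose (mg/dL)."""
    return 28.7 * hba1c_percent - 46.7

for a1c in (6.0, 7.0, 8.0):
    print(f"HbA1c {a1c:.1f}% ~ eAG {eag_mg_dl(a1c):.0f} mg/dL")
# HbA1c 6.0% ~ 126 mg/dL; 7.0% ~ 154 mg/dL; 8.0% ~ 183 mg/dL
```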
Furthermore, the safety profile of the various classes of ADDs, drug interactions, and the type of cancer therapy (and its possible contribution to hyperglycaemia/worsening of DM) should also be considered. Cancer treatments are frequently associated with AEs, especially involving the gastrointestinal tract (e.g. nausea, vomiting, diarrhoea), that significantly burden patients’ QoL. Caution is needed when prescribing ADDs with potential gastrointestinal AEs, such as metformin, acarbose, and GLP1-RAs. Moreover, although metformin usually represents the first-choice ADD for DM treatment, renal function and the risk of its deterioration should be thoroughly evaluated, since cancer patients are more vulnerable through exposure to nephrotoxic antineoplastic drugs and intravenous contrast agents. Metformin should also be temporarily withheld before imaging procedures requiring the administration of iodinated contrast agents. SGLT2is, while effective in reducing CV risk and treating heart failure, carry a risk of dehydration and urogenital infections that can become clinically significant during active cancer therapy and the associated immunosuppression; their use should therefore be carefully weighed. Neoplastic disease is often accompanied by a catabolic state that promotes anorexia, weight loss, and cachexia. The nutritional status of cancer patients with DM should be carefully evaluated, and the use of ADDs with known weight-loss effects (e.g. metformin, SGLT2is, GLP1-RAs) should be cautiously balanced. In this setting, insulin, favoured also by its flexibility and efficiency, can represent the treatment of choice because of its anabolic effect. Nonetheless, insulin, albeit useful in a large proportion of cancer patients with DM and with virtually no contraindications, carries a significant hypoglycaemic risk and requires adequate education of patients and caregivers about its everyday management and SMBG/sensor use. Some cancer treatments can cause significant metabolic and glycaemic derangement, favouring DM onset or worsening through marked insulin resistance or reduced insulin production. Understanding the mechanism through which hyperglycaemia develops is pivotal for choosing the most appropriate antidiabetic treatment. For instance, ADDs with known insulin-sensitizing effects could be the drugs of choice for hyperglycaemia related to some kinase inhibitors (e.g. nilotinib, ponatinib, alpelisib), mTOR inhibitors (e.g. everolimus), or corticosteroid therapy. Conversely, in situations of relative or absolute insulin deficiency, such as immunotherapy-induced autoimmune DM, pancreatic cancer-related DM, or post-pancreatitis DM, insulin therapy is mandatory. Given the higher risk of severe complications from various infectious diseases (including coronavirus disease-19) in people with DM, and the relative immunosuppression associated with neoplastic disease and its treatments, cancer patients with DM should also be offered the vaccinations recommended by the International Diabetes Federation (IDF) to reduce mortality and morbidity risk. A minimal decision sketch distilling this mechanism-driven logic follows.
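The sketch below restates the mechanism-driven drug-selection logic of the preceding paragraph; the trigger lists are illustrative examples taken from the text, not an exhaustive or validated clinical rule:

```python
# Mechanism-driven choice of antidiabetic drug (ADD), as outlined above:
# insulin-resistance-driven hyperglycaemia -> insulin sensitizers;
# relative/absolute insulin deficiency -> insulin therapy.

INSULIN_RESISTANCE_TRIGGERS = {
    "nilotinib", "ponatinib", "alpelisib", "everolimus", "corticosteroids",
}
INSULIN_DEFICIENCY_STATES = {
    "immunotherapy-induced autoimmune DM",
    "pancreatic cancer-related DM",
    "post-pancreatitis DM",
}

def suggest_add(trigger: str) -> str:
    """Return an illustrative first-line ADD suggestion for a hyperglycaemia trigger."""
    if trigger in INSULIN_RESISTANCE_TRIGGERS:
        return "insulin sensitizer (if renal/GI profile allows)"
    if trigger in INSULIN_DEFICIENCY_STATES:
        return "insulin therapy (mandatory)"
    return "individualize: review mechanism, AEs, and drug interactions"

print(suggest_add("everolimus"))                           # insulin sensitizer ...
print(suggest_add("immunotherapy-induced autoimmune DM"))  # insulin therapy (mandatory)
```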
Supportive and palliative care is essential to the overall care plan for patients with DM and cancer. These conditions can be challenging to manage, and patients often require a range of supportive services. Health care providers need to work together to develop a plan that addresses the patient’s cancer- and DM-related needs. Supportive care may include training and support on managing blood glucose levels, ADD management, lifestyle changes, and access to a multidisciplinary team of health care professionals who can provide symptom management, nutrition, and emotional support. Palliative care services may also be necessary for patients experiencing neuropathy, chronic pain, or symptoms related to treatment toxicities or cancer progression, such as pain, nausea, or fatigue. Overall, supportive and palliative care for individuals with DM or cancer aims to improve their QoL and provide them with the resources they need to manage their symptoms and maintain their functional status. Cancer patients with DM and limited glycaemic control often experience increased pain and asthenia and have a higher incidence of treatment toxicities, such as nausea, vomiting, reduced appetite, diarrhoea, and weight loss, which can lead to malnutrition and sarcopenia, with loss of skeletal muscle mass and a decline in functional status. Nutritional intake is crucial for managing DM and cancer. A well-balanced diet that includes adequate protein and calories can help patients improve glycaemic control, maintain weight, and improve their overall strength and energy levels, reducing the risk of complications associated with DM. Exercise, too, has been shown to have a protective effect against both DM and cancer, not only in the prevention setting but also in each phase of the patient journey, with an adaptive and personalized approach that maintains muscle mass and reduces or delays the risk of neoplastic cachexia. However, the nutritional needs of cancer patients with DM may differ depending on several factors, such as clinical condition, comorbidities, cancer site and stage, and age. These patients warrant specialized, personalized nutritional support within the multidisciplinary team, including specific interventions for oral and parenteral supplementation.
Metabolic emergencies
Hyperglycaemia and hypoglycaemia are more likely to occur in patients with DM and cancer. Conditions such as fatigue, dehydration, vomiting and diarrhoea, cachexia, and infections can trigger acute DM complications, as can surgical and medical procedures. Diabetic ketoacidosis (DKA) and the hyperglycaemic hyperosmolar state (HHS) are life-threatening conditions carrying substantial mortality (∼0.4% for DKA, reaching 2% in patients >65 years, and up to 20% for HHS). The main features of DKA are hyperglycaemia, ketonaemia, and metabolic acidosis with a high anion gap, whereas HHS is characterized by hyperglycaemia, hyperosmolarity, and dehydration without significant acidosis (see ). While the former is related to an absolute shortage or lack of insulin, endogenous insulin production persists in HHS: albeit insufficient to provide glucose to the insulin-sensitive tissues, it is adequate to prevent lipolysis and consequent ketogenesis. Euglycaemic DKA (EDKA), whose diagnosis can be delayed by the absence of hyperglycaemia and ketonuria, may also develop in specific circumstances. The incidence of EDKA has recently increased with the introduction of SGLT2is. Since intravascular volume depletion induced by diarrhoea and emesis, or a ketosis-prone status consequent to reduced food intake, hospitalization, and surgery, are all precipitating factors for EDKA, oncologic patients treated with these ADDs should be closely monitored. The clinical presentation of DKA is relatively fast, while HHS can take days or weeks to develop. Common symptoms include polyuria, polydipsia, weight loss, and weakness, associated with signs of intravascular volume depletion. In both DKA and HHS, neurologic signs and symptoms may occur. Conversely, polyuria and polydipsia may be absent in EDKA, where patients experience fatigue and malaise instead. Severe ketoacidosis may also mimic an acute abdomen. In oncology, the sudden onset of DKA may also be the first manifestation of autoimmune DM (much like T1DM) induced by ICIs, which are responsible for several immune-related AEs. These patients often show normal HbA1c levels, with low C-peptide and sometimes positive islet-cell autoantibodies. More commonly, other drugs used in oncology (e.g. glucocorticoids, TKIs, and everolimus) may worsen pre-existing DM and trigger hyperglycaemic complications through insulin resistance, reduced insulin secretion, or both. Therefore, blood glucose should be closely monitored in patients with DM and cancer. Particular attention should be paid to patients treated with ICIs and to people with known DM treated with glucocorticoids, in order to act immediately in case of DKA and/or HHS. Differential diagnosis between the two types of acute complication is based on glucose levels, pH value, the presence/absence of ketones, osmolality, anion gap, and mental status (see ); the calculations behind the last two criteria are illustrated below. Medical treatment of DKA/HHS in cancer patients does not differ from that in the general population, requiring restoration of circulatory volume and the extracellular compartment, reduction of blood glucose levels and plasma osmolality, and correction of electrolyte alterations. Intravenous insulin infusion is the treatment of choice for these patients. Identifying and treating precipitating events such as dehydration and infections is mandatory.
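As a worked illustration of the anion gap and osmolality criteria mentioned above, the sketch below applies the standard bedside formulas; the example values and interpretive thresholds are the commonly cited ADA-style criteria, added here for illustration rather than quoted from this paper:

```python
# Bedside calculations behind the DKA/HHS differential:
#   anion gap (mEq/L)               = Na - (Cl + HCO3)
#   effective osmolality (mOsm/kg)  = 2*Na + glucose/18   (glucose in mg/dL)

def anion_gap(na: float, cl: float, hco3: float) -> float:
    return na - (cl + hco3)

def effective_osmolality(na: float, glucose_mg_dl: float) -> float:
    return 2 * na + glucose_mg_dl / 18

# Typical DKA picture: high-anion-gap acidosis with moderate hyperglycaemia
print(anion_gap(134, 95, 8))           # ~31 mEq/L -> markedly elevated gap
print(effective_osmolality(134, 450))  # ~293 mOsm/kg -> below the HHS range

# Typical HHS picture: marked hyperglycaemia and hyperosmolarity, near-normal gap
print(anion_gap(148, 110, 24))         # ~14 mEq/L -> near normal
print(effective_osmolality(148, 950))  # ~349 mOsm/kg -> >320, consistent with HHS
```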
Patient education is also fundamental, particularly concerning glucose monitoring and DM management on sick days, especially in case of fever and/or concomitant infection. Ongoing training and education must be provided to medical staff and caregivers on recognizing and treating symptoms before they escalate to acute, life-threatening conditions.
Chronic diabetes complications influencing cancer treatments
Many data indicate that glycaemic control, adherence to therapy, and self-management of DM worsen after a cancer diagnosis, partially explaining the increased risk of adverse outcomes in diabetic patients with cancer compared to non-diabetics. Moreover, anticancer therapies have a further detrimental effect on metabolic compensation, which affects the onset of long-term micro- and macrovascular complications and exacerbates pre-existing diabetes-induced organ damage. Consequently, the onset or progression of CV, renal, ocular, and neuropathic injuries should be prevented and monitored in patients with DM and cancer, provided that life expectancy is not too short. Macrovascular complications (ischaemic heart disease, stroke, and peripheral vascular disease) are a leading cause of mortality among people with T2DM, and the risk of developing heart failure is more than double that of patients without DM. Conventional anticancer therapies (e.g. anthracyclines, antimetabolites, cyclophosphamide) and novel therapies (e.g. monoclonal antibodies, TKIs, ICIs) are associated with many adverse CV events, including left ventricular dysfunction and heart failure, hypertension, vascular thrombosis and ischaemia, rhythm disturbances and QT prolongation, cardiomyopathy, myocardial fibrosis, and myocarditis, which can contribute to the worsening of CV complications related to DM. The occurrence of cancer treatment-induced CV impairment differs greatly depending on the patient’s age, the specific anticancer therapy used, the duration of therapy, and the patient’s comorbidities. Some anticancer treatments lead to irreversible and progressively worsening CV damage (classical cytolytic cancer therapies), while others induce only temporary dysfunction (some novel biological therapies) with no apparent long-term consequence. Furthermore, coronary artery disease, valvular disease, myocardial damage, conduction system defects, and diastolic dysfunction constitute the broad spectrum of CV AEs that can occur after radiotherapy. To prevent chronic cardiotoxicity of anticancer drugs, early detection using cardiac biomarkers [troponin-I, brain-type natriuretic peptide (BNP), and N-terminal proBNP] and/or imaging techniques (echocardiography, cardiac magnetic resonance) may be extremely useful, together with cardioprotective therapy (β-blockers, angiotensin-converting enzyme inhibitors, angiotensin receptor blockers, and mineralocorticoid receptor antagonists), even though the cardioprotective effects of most of these agents have not been clearly proven in the setting of cancer treatment-related CV damage. Diabetic nephropathy (DN) affects ∼25%-30% of patients with DM and has become the leading cause of end-stage renal disease. DN is characterized by a progressive increase in proteinuria and/or a gradual decline in estimated glomerular filtration rate (eGFR), both of which are worsened at a relevant incidence by various anticancer drugs.
In particular, mTOR inhibitors (everolimus and temsirolimus), probably as a consequence of their hyperglycaemic effect rather than direct damage to renal cells, have been associated with increases in creatinine and proteinuria; TKIs (e.g. pazopanib, sunitinib, axitinib, sorafenib, and lapatinib) and monoclonal antibodies (bevacizumab, aflibercept) have been associated with increased proteinuria. Finally, acute inflammatory infiltrates in the renal interstitium have been observed in the short and medium term in patients treated with ICIs (ipilimumab). Epidemiological data from developed countries, unconfirmed in low- and middle-income countries, suggest a downward trend in the prevalence of blindness related to diabetic retinopathy in people with DM. There is limited evidence on the effects of anticancer drugs on diabetic retinopathy. A small percentage of diabetic patients have been reported to develop worsening retinopathy soon after starting cancer therapy, including vascular injuries (e.g. tamoxifen) and retinal ischaemia with neovascularization (e.g. alkylating agents, ICIs). Regarding diabetes-related neuropathy, many new anti-myeloma agents can trigger or aggravate pre-existing sensory (thalidomide, bortezomib), sensorimotor (thalidomide), or autonomic (bortezomib) neuropathy. Chemotherapy combinations with the highest rates of peripheral neurotoxicity include those involving platinum salts (cisplatin, carboplatin, and oxaliplatin), vinca alkaloids (vincristine, vinblastine, vinorelbine), bortezomib (a proteasome inhibitor), and taxanes (paclitaxel, docetaxel, cabazitaxel).
Paraneoplastic hypoglycaemia and hyperglycaemia
NETs are secreting neoplasms frequently associated with hormonal hypersecretion. Up to 30% of pancreatic NETs are associated with functioning endocrine syndromes, which can impair glucose homeostasis. One of the most frequent syndromes is related to insulinoma, an insulin-secreting pancreatic NET with an incidence of 1-3 per million per year. Hypoglycaemia-related symptoms generally guide the diagnosis while the tumour is still localized within the pancreas, as in ∼90% of cases. Insulinomas are malignant but slowly progressing tumours, which explains the gradual development of the syndrome and a degree of adaptation of the patient to hypoglycaemia. However, in advanced unresectable disease, hypoglycaemia can be life-threatening and requires either insulin-lowering drugs (i.e. diazoxide, everolimus, pasireotide) or any potentially active antiproliferative treatment (chemotherapy, targeted therapy, radionuclide therapy, liver-directed therapy). Nutritional recommendations are a high-protein diet with a low glycaemic index and complex carbohydrates to minimize hypoglycaemic events, plus rapidly absorbable carbohydrates during hypoglycaemia. Very rarely, IGF-II-secreting pancreatic NETs can induce hypoglycaemia by activating insulin receptors; other IGF-II syndromes arise from mesenchymal, epithelial, or haematopoietic neoplasms. Conversely, hyperglycaemic syndromes can also develop in NET patients. Sporadic endocrine syndromes arising from NETs of the duodenum–pancreas include glucagonoma and somatostatinoma: glucagon and somatostatin exert proglycaemic and insulin-inhibitory effects, resulting in reduced glucose tolerance and DM. More frequent syndromes inducing glucose impairment are Cushing’s syndrome and paraganglioma syndromes, cortisol and catecholamines both being proglycaemic hormones that counteract insulin activity.
In particular, Cushing’s syndrome is frequently associated with hyperglycaemia or overt DM, which is worsened by the metabolic syndrome characterizing these subjects. Metformin, which improves insulin sensitivity, is an optimal first-line approach in these syndromes, while insulin should be adopted promptly in case of poor glycaemic control. From a nutritional point of view, a Mediterranean-style diet is optimal in these patients, since control of hyperglycaemia, body weight, and other metabolic impairments could help both to avoid DM complications and to obtain antitumour effects. Regardless of DM, malnourished patients, as well as those with Cushing’s syndrome, should receive nutritional assessment and support.
The growing need to care for patients with cancer and DM is a major clinical challenge for oncologists and endocrinologists, as well as for haematologists, radiotherapists, and palliative care clinicians. Today, the clinical management of cancer patients with DM still relies more on the clinician’s experience than on guidelines. The time has come for academic centres and scientific societies to train ad hoc endocrinologists who practice in the oncology field and oncologists who truly want to care for cancer-related metabolic issues. Just as cardio-oncology has recently emerged as a subspecialty for clinicians with a special interest in the detection, monitoring, and management of the CV side-effects of chemotherapy, targeted therapy, and radiotherapy, the time has probably come to train clinicians with special interest and knowledge in treating DM and metabolic complications in people with cancer. Several issues should form the core curriculum of specialists with a special interest in diabeto-oncology (see ). In addition, some other specific circumstances regarding the everyday clinical management of patients with both DM and cancer should be carefully discussed and shared among specialists and with patients and caregivers. The diversity of disease entities and subspecialty aspects covered by diabeto-oncology could make it an essential component of modern health care. The diabeto-oncology education programme should be designed to develop specific skills at the biological and clinical intersection of DM and cancer and to provide effective care to patients affected by both conditions. A summary of the main specific areas of oncology and diabetology training is reported in , focusing on common topics to be shared and elaborated according to the main expertise.
Overall, the coexistence of cancer and DM poses significant challenges for patients and health care providers. The complex relationship between these two diseases highlights the need for a multidisciplinary approach and collaboration between oncologists and diabetologists. The management of cancer patients with DM requires careful screening before starting anticancer therapy to assess diabetic complications, nutritional status, and metabolic control. During cancer treatments, health care providers must proactively manage iatrogenic hyperglycaemia and consider the impact of various therapies on glucose metabolism. Glycaemic control plays a crucial role in improving patient outcomes and QoL. Individualized treatment plans, close monitoring of glucose levels, and appropriate adjustment of antidiabetic therapy are necessary to minimize the risk of complications and optimize patient care. The emerging field of ‘diabeto-oncology’ focuses on developing personalized strategies, identifying biomarkers, and implementing primary prevention strategies to address the unique challenges of cancer patients with DM. By promoting collaboration, education, and awareness, health care providers can improve clinical management, survival rates, and QoL of patients.
Antimalarial Imidazopyridines Incorporating an Intramolecular Hydrogen Bonding Motif: Medicinal Chemistry and Mechanistic Studies
Reductive amination of N-boc glycinal was then carried out in the presence of sodium cyanoborohydride to deliver the penultimate intermediate m. Finally, removal of the boc group, followed by a nucleophilic substitution reaction with NBD-chloride, yielded the target fluorescent probe 14-NBD.
In Vitro Asexual Blood-Stage Antiplasmodium Activity and Cytotoxicity
All the compounds were evaluated for in vitro antiplasmodium activity against both the drug-sensitive NF54 and multidrug-resistant K1 strains of P. falciparum, and the SAR is discussed with respect to IC50 values on the NF54 strain. Aromatic groups bearing small non-polar meta- or para-electron-withdrawing substituents displayed better antiplasmodium potency, with compound 14 (IC50 = 0.08 μM) having the highest potency. Incorporation of heteroatoms into the saturated cyclic substituents, as exemplified in compounds 2 (IC50 = 1.67 μM), 3 (IC50 = 2.37 μM), and 13 (IC50 = 1.03 μM), was detrimental to antiplasmodium activity. Further, electron-withdrawing substituents such as fluoro (-F) or trifluoromethyl (-CF3) on the cyclohexane ring led to antiplasmodium activity comparable to that of the unsubstituted congeners, as shown in the matched pairs 9 (IC50 = 0.67 μM), 10 (IC50 = 0.86 μM) and 11 (IC50 = 0.69 μM), 12 (IC50 = 0.93 μM). However, the presence of the electron-withdrawing -CF3 group on the cyclopropane was detrimental to activity, while the -CH3 group was better tolerated compared with the unsubstituted analogue. Finally, changes in ring size did not have a significant effect on antiplasmodium activity, as exemplified by compounds 5 (IC50 = 0.39 μM), 9 (IC50 = 0.67 μM), 1 (IC50 = 0.34 μM), 11 (IC50 = 0.69 μM), and 15 (IC50 = 0.67 μM). The cytotoxicity of these analogues was determined against the Chinese hamster ovary (CHO) cell line using the 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) assay. All the compounds showed a favorable cytotoxicity profile on account of their selectivity indices (SI > 10), with compound 14 (SI = 466.25) exhibiting the highest SI.
Metabolic Stability Studies of Selected Analogues
Selected imidazopyridine analogues exhibiting sub-micromolar in vitro asexual blood-stage antiplasmodium activity (IC50 < 1 μM), suitable solubility (>50 μM), and an acceptable selectivity profile relative to the mammalian CHO cell line (SI > 10) were evaluated for metabolic stability in mouse, rat, and human liver microsomes. These compounds were generally unstable across all three species of microsomes, although 1 and 5 were more stable in human liver microsomes than in rodent microsomes. The metabolic stability of these analogues was assessed by the hepatic extraction ratio (EH); the calculations behind SI and EH are sketched below.
Metabolite Identification Studies for Compound 14
Considering the generally poor microsomal metabolic stability displayed by selected compounds, metabolite identification studies in mouse liver microsomes were undertaken on one of them, compound 14. Four metabolites were identified from the metabolism of 14 in mouse liver microsomes, with the primary metabolites (P-28 and P-140) arising from dealkylation of the side-chain N-alkyl groups, although their exact structures are yet to be confirmed. Notably, metabolite formation required both microsomes and NADPH, suggesting the involvement of CYP450 enzymes and indicating that these were products of metabolism and not chemical degradation.
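For context, the two screening metrics used above can be reconstructed as follows. The hepatic blood flow and scaled intrinsic clearance in the sketch are assumed illustrative values, and the simplified well-stirred model shown (ignoring protein binding) is a standard literature approach rather than a method statement from this study:

```python
# (1) Selectivity index: SI = IC50(CHO) / IC50(Pf NF54).
# (2) Hepatic extraction ratio via a simplified well-stirred liver model:
#     EH = CLint_scaled / (QH + CLint_scaled), plasma protein binding ignored.

def selectivity_index(ic50_cho_um: float, ic50_pf_um: float) -> float:
    return ic50_cho_um / ic50_pf_um

def hepatic_extraction(clint_scaled: float, qh: float) -> float:
    """EH from scaled intrinsic clearance and hepatic blood flow (both mL/min/kg)."""
    return clint_scaled / (qh + clint_scaled)

# Compound 14: Pf NF54 IC50 = 0.08 uM and SI = 466 together imply a
# CHO IC50 of roughly 0.08 * 466 ~ 37 uM.
print(round(selectivity_index(37.3, 0.08)))           # ~466

QH_HUMAN = 20.7   # mL/min/kg, commonly used literature value (assumption)
print(round(hepatic_extraction(60.0, QH_HUMAN), 2))   # 0.74 -> high-extraction compound
```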
β-Hematin Inhibition Assay and Docking
Previously synthesized antimalarial imidazopyridines were shown to inhibit the formation of β-hematin in a cell-free assay and were subsequently confirmed as bona fide inhibitors of hemozoin formation in a cell-fractionation assay. Based on this precedent, the potential of our imidazopyridine series to inhibit hemozoin formation was assessed using the β-hematin inhibition assay (BHIA).
Using the discriminatory cut-off of <100 μM, only nine compounds, 1 (18 μM), 5 (16 μM), 6 (33 μM), 10 (76 μM), 11 (80 μM), 12 (31 μM), 14 (9 μM), 18 (18 μM), and 19 (65 μM), exhibited β-hematin inhibition activity in the preferred range, with compound 14 showing the highest potency. The frontrunner compound 14, which exhibited sub-micromolar in vitro asexual blood-stage antiplasmodium activity, an acceptable cytotoxicity profile against the mammalian CHO cell line, good solubility, and potent β-hematin inhibition activity, also showed specific intermolecular interactions with the previously published crystal surface of β-hematin. The imidazopyridine core, the 3-trifluoromethylphenyl, and the 3-chlorophenol moieties of the compound interact through π–π stacking with the porphyrin ring of β-hematin, while the basic nitrogen of the tertiary amine on the side chain, when protonated at pH 4.5, forms a hydrogen bond with the propionate group of β-hematin, further supporting inhibition of β-hematin as a possible contributing mode of action.
Fluorescence Drug Localization Studies
Fluorescence drug-localization studies were employed as the starting point to probe the subcellular localization of the target compounds within the parasite. The representative compound 14 (Pf NF54 IC50 = 0.08 μM) was first assessed for inherent fluorescence suitable for imaging in P. falciparum using a fluorimeter. Excitation between 200 and 600 nm yielded no significant emission relative to the blank solvent. This underscored the need to attach an external fluorophore with suitable photophysical properties and in vitro antiplasmodium activity comparable to the parent compound. 7-Nitrobenz-2-oxa-1,3-diazole (NBD) was selected as an appropriate extrinsic fluorophore based on its small size, commercial availability, and stability over a biologically relevant pH range. The point of attachment of the fluorophore was guided by the earlier SAR studies on the scaffold. The NBD-labeled probe retained nanomolar in vitro activity against P. falciparum (14-NBD Pf NF54 IC50 = 0.049 μM; ) and possesses photophysical properties suitable for live-cell imaging. Subcellular accumulation in Pf-infected red blood cells was assessed by confocal microscopy. Commercially available organelle trackers, LysoTracker Red, MitoTracker Deep Red, ER-Tracker Red, DRAQ5, and Nile Red, aided the colocalization studies of 14-NBD; these dyes stain the parasite’s acidic digestive vacuole, mitochondrion, endoplasmic reticulum, nucleus, and neutral lipids, respectively. Live-cell confocal microscopy showed partial colocalization between 14-NBD and LysoTracker Red, with regions of intense localization observed around the parasite’s membrane structures. No significant accumulation was seen in the areas around the hemozoin crystals (Hz), suggesting that 14-NBD does not localize in the parasite’s digestive vacuole (A). It is noteworthy that while 14-NBD retained antiplasmodium activity comparable to the parent compound, the presence of the NBD fluorophore can influence the accumulation of the compound in the parasite. Similarly, no colocalization was observed between the nuclear marker DRAQ5 and 14-NBD, thereby eliminating the parasite’s nucleus as a site of action of the compound.
Although the biochemistry of hemozoin formation has not been fully elucidated, with many hypotheses in the literature regarding its formation, one hypothesis that has gained popularity is that it is lipid-catalyzed. Neutral lipids, in particular, have been associated with hemozoin formation. Consequently, Nile Red was co-incubated with 14-NBD to identify and assess the interaction of 14-NBD with neutral lipids. Punctate structures believed to be neutral lipid droplets were observed close to the hemozoin crystals; these are formed in the parasite’s cytosol and transported into its food vacuole, where they aid in the conversion of heme to hemozoin. 14-NBD colocalized with Nile Red, indicating the compound’s association with neutral lipid droplets (B). Furthermore, 14-NBD interacted significantly with the parasite’s mitochondrion, as shown by the colocalization between 14-NBD and MitoTracker Deep Red (A).
Heme Speciation Assay
To augment the findings from live-cell confocal microscopy, the β-hematin inhibition assay, and docking studies that suggest hemozoin inhibition as a possible mode of action of this class of compounds, the frontrunner of the series, compound 14, was tested in a cellular heme fractionation assay to evaluate the dose-dependent effect of the compound on the various iron species in the parasite and to confirm the compound’s ability to inhibit intracellular Hz formation in P. falciparum parasites, according to methods previously described by Combrinck and co-workers. However, at increasing concentrations of 14, no statistically significant change was observed in the levels of heme. Conversely, a significant decrease in the levels of hemozoin was observed between 0.5–2× IC50 of 14, suggesting that although compound 14 does not directly interfere with the conversion of heme to hemozoin, it could be targeting other processes in the parasite’s digestive vacuole. It is noteworthy that a true hemozoin inhibitor causes a dose-dependent increase in “free” heme and a corresponding decrease in hemozoin.
With the goal of incorporating an intramolecular hydrogen bonding motif into known antimalarial chemotypes, we identified a set of potent antimalarial imidazopyridine analogues. The medicinal chemistry of the series, with respect to antiplasmodium SAR profiles around an earlier-identified benzimidazole core, was explored, leading to the identification of the series’ frontrunner 14, which displayed the highest potency within the series. Furthermore, all compounds from this series showed a favorable cytotoxicity profile against the CHO cell line. Nonetheless, these compounds were metabolically labile and could not be progressed to in vivo efficacy studies. However, metabolite identification studies provided insight into the metabolic hotspots, which can be used to synthesize analogues that address this liability in future studies. Although 14 interacted favorably with the β-hematin surface in docking and showed potent β-hematin inhibition, no statistically significant effect on heme levels was observed after dose-dependent treatment of P. falciparum cells with 14; conversely, hemozoin levels decreased with increasing concentrations of 14. Hence, we hypothesize that while 14 does not directly affect the conversion of heme to hemozoin, it may target different digestive vacuole processes. The interaction of 14-NBD with organelles other than the parasite’s digestive vacuole may suggest the potential involvement of a novel target.
All commercially available chemicals were purchased from either Sigma-Aldrich (Germany) or Combi-Blocks (United States). 1H NMR (all intermediates and final compounds) and 13C NMR (target compounds only) spectra were recorded on a Bruker spectrometer at 300, 400, or 600 megahertz (MHz). Melting points for all target compounds were determined using a Reichert-Jung Thermovar hot-stage microscope coupled to a Reichert-Jung Thermovar digital thermometer (20–350 °C range). Reaction monitoring by analytical thin-layer chromatography (TLC) was performed on aluminum-backed silica-gel 60 F254 (70–230 mesh) plates, with detection and visualization done using (a) a UV lamp (254/366 nm), (b) iodine vapors, or (c) ninhydrin spray reagent. Column chromatography was performed with Merck silica-gel 60 (70–230 mesh). Chemical shifts (δ) are reported in ppm downfield from tetramethylsilane (TMS) as the internal standard.
Coupling constants ( J ) were recorded in Hertz (Hz). Purity of compounds was determined using an HPLC-MS system comprising an Agilent 1260 Infinity binary pump, Agilent 1260 Infinity diode array detector, Agilent 1290 Infinity column compartment, Agilent 1260 Infinity standard autosampler, and Agilent 6120 quadrupole (single) mass spectrometer equipped with an APCI/ESI multimode ionization source. All compounds tested for biological activity were confirmed to have ≥95% purity by HPLC. Solubility, biological assays, and any experimental data not shown below (e.g., NMR spectra of compound intermediates) are fully supplied and detailed in the Supporting Information . Preparation of 2-Chloro- N -(4-methoxybenzyl)-5-nitropyridin-4-amine ( a ) A mixture of p -methoxybenzyl amine (2.56 g, 18.65 mmol) and N , N -diisopropylethylamine (DIPEA) in tetrahydrofuran (THF) was added dropwise to a 0 °C solution of 2,4-dichloro-5-nitropyridine (2.00 g, 10.36 mmol) in THF. The solution was then warmed to 25 °C and stirred for an additional 30 min. Water was then added, and the resulting mixture was extracted with ethyl acetate. The combined organic layer was dried over anhydrous Na 2 SO 4 and concentrated under reduced pressure to produce the desired intermediate as a yellow solid in 98% yield. 1 H-NMR (600 MHz, DMSO- d 6 ): δ 8.96 (t, J = 6.1 Hz, 1H), 8.85 (s, 1H), 7.29 (d, J = 8.7 Hz, 2H), 6.94 (s, 1H), 6.89 (d, J = 8.7 Hz, 2H), 4.57 (d, J = 6.1 Hz, 2H), 3.71 (s, 3H). 13 C-NMR (151 MHz, DMSO): δ 159.91, 155.92, 150.57, 150.08, 130.29, 129.76 (2C), 115.42 (2C), 108.83, 56.41, 46.26. HPLC-MS (ESI): purity = 98%, t R = 2.457 min, m/z [M + H] + = 294.0. Preparation of tert -Butylethyl(2-((4-((4-methoxybenzyl)amino)-5-nitropyridin-2-yl)amino)ethyl)carbamate ( b ) A mixture of 2-chloro- N -(4-methoxybenzyl)-5-nitropyridin-4-amine ( a ) (5.00 g, 17.02 mmol), tert -butyl (2-aminoethyl)(ethyl)carbamate (4.81 g, 25.53 mmol), and triethylamine was made in N , N -dimethylformamide (DMF). The mixture was heated under microwave irradiation at 100 °C for 1 h. When the reaction was complete, water was added, and the mixture was extracted with ethyl acetate (4 × 30 mL). The combined organic layer was dried over anhydrous Na 2 SO 4 , concentrated under reduced pressure, and purified via column chromatography to give the product as a yellow solid. 1 H-NMR (600 MHz, chloroform- d ): δ 8.90 (s, 1H), 8.36 (s, 1H), 7.22 (d, J = 8.7 Hz, 2H), 6.85 (d, J = 8.7 Hz, 2H), 4.37 (s, 2H), 3.76 (s, 3H), 3.37–3.34 (m, 4H), 3.17 (q, J = 7.1 Hz, 2H), 1.41 (s, 9H), 1.05 (t, J = 7.1 Hz, 3H). 13 C-NMR (151 MHz, CDCl 3 ): δ 161.22, 159.20, 156.45, 150.82, 149.91, 128.71, 128.49 (2C), 124.63, 114.31 (2C), 83.65, 79.91, 55.26, 46.20, 45.70, 43.02, 41.73, 28.37 (3C), 13.87. HPLC-MS (ESI): purity = 99%, t R = 2.641 min, m/z [M + H] + = 446.2. Preparation of tert -Butylethyl(2-((5-amino-4-((4-methoxybenzyl)amino)pyridin-2-yl)amino)ethyl)carbamate ( c ) A mixture of tert -butyl ethyl(2-((4-((4-methoxybenzyl)amino)-5-nitropyridin-2-yl)amino)ethyl)carbamate ( b ) (6.00 g, 13.47 mmol) and 10% Pd/C in methanol was stirred for 16 h at 25 °C under hydrogen gas. After the reaction was complete, the mixture was filtered through a pad of Celite and concentrated in vacuo to obtain the product, which was used in the next reaction without any further purification.
1 H-NMR (600 MHz, chloroform- d ): δ 7.42 (s, 1H), 7.24 (d, J = 8.7 Hz, 2H), 6.84 (d, J = 8.7 Hz, 2H), 4.96 (s, 1H), 4.23 (s, 2H), 3.77 (s, 3H), 3.32–3.32 (m, 4H), 3.18 (q, J = 7.1 Hz, 2H), 1.41 (s, 9H), 1.04 (t, J = 7.0 Hz, 3H). 13 C-NMR (151 MHz, CDCl 3 ): δ 158.99, 155.78, 149.34, 130.21, 129.99, 128.75 (2C), 119.97, 114.10 (2C), 87.26, 79.42, 55.29, 55.23, 46.61, 46.21, 41.54, 29.65, 28.42 (3C), 13.86. HPLC-MS (ESI): purity = 97%, t R = 2.334 min, m/z [M + H] + = 416.2. General Procedure for the Synthesis of Intermediates d and e Amide Coupling (Intermediate d ) Intermediate c (1 eq) was dissolved in DCM with the appropriate carboxylic acid (1.3 eq) and 4-dimethylaminopyridine (DMAP, 0.1 eq). 1-Ethyl-3-(3-dimethylaminopropyl)carbodiimide hydrochloride (EDCI·HCl, 1.5 eq) was then added, and the reaction mixture was stirred at 25 °C for 16 h. Water was added, and the solution was extracted with ethyl acetate, dried over anhydrous Na 2 SO 4 , and concentrated under reduced pressure. The residue was used in the subsequent reaction without any further purification. Cyclization (Intermediates e.1 – e.19 ) The corresponding amide intermediate d was dissolved in ethanol (10 mL), and 2 M NaOH solution (10 mL) was added. The resulting mixture was heated at 80 °C for 24–72 h, depending on the amide intermediate. Once the reaction was complete, the solvent was removed in vacuo, and saturated citric acid solution was added to the residue. Extraction was done with DCM (2 × 20 mL), and the combined organic extract was dried over anhydrous Na 2 SO 4 , filtered, and concentrated in vacuo. The residue was purified via column chromatography (DCM/MeOH) to obtain the corresponding product. tert -Butyl(2-((2-cyclopentyl-1-(4-methoxybenzyl)-1 H -imidazo[4,5- c ]pyridin-6-yl)amino)ethyl)(ethyl)carbamate ( e.1 ) Obtained from intermediate c (500 mg, 1.20 mmol) and cyclopentane carboxylic acid (0.16 mL, 1.56 mmol) as a wine-colored sticky solid (46%, 270.4 mg); Rf (DCM: MeOH, 9:1) 0.64; 1 H-NMR (600 MHz, chloroform- d ): δ 8.37 (s, 1H), 6.97 (d, J = 8.7 Hz, 2H), 6.80 (d, J = 8.7 Hz, 2H), 6.47 (s, 1H), 5.18 (s, 2H), 3.74 (s, 3H), 3.36–3.10 (m, 6H), 2.71 (p, J = 8.1 Hz, 1H), 2.05–1.92 (m, 2H), 1.89–1.78 (m, 3H), 1.71–1.51 (m, 3H), 1.39 (s, 9H), 1.03 (t, J = 7.1 Hz, 3H). 13 C-NMR (151 MHz, CDCl 3 ): δ 181.48, 159.28, 154.15, 127.56, 127.42 (2C), 114.35 (2C), 85.09, 79.49, 55.24, 46.30, 45.90, 44.64, 43.14, 41.82, 37.16, 32.08 (2C), 30.17, 28.39 (3C), 25.86 (2C), 25.76 (2C), 13.93. HPLC-MS (ESI): purity = 98%, t R = 2.598 min, m/z [M + H] + = 494.3. tert -Butylethyl(2-((1-(4-methoxybenzyl)-2-(tetrahydrofuran-3-yl)-1 H -imidazo[4,5- c ]pyridin-6-yl)amino)ethyl)carbamate ( e.2 ) Obtained from intermediate c (500 mg, 1.20 mmol) and tetrahydrofuran-3-carboxylic acid (0.14 mL, 1.56 mmol) as a wine-colored sticky solid (68%, 401.7 mg); Rf (DCM: MeOH, 9:1) 0.57; 1 H-NMR (600 MHz, chloroform- d ): δ 8.45 (s, 1H), 6.96 (d, J = 8.7 Hz, 2H), 6.81 (d, J = 8.7 Hz, 2H), 6.45 (s, 1H), 5.17 (s, 2H), 4.04–3.98 (m, 2H), 3.93–3.86 (m, 2H), 3.74 (s, 3H), 3.50–3.44 (m, 1H), 3.39–3.33 (m, 4H), 3.20 (q, J = 7.2 Hz, 2H), 2.37–2.30 (m, 1H), 2.24–2.16 (m, 1H), 1.40 (s, 9H), 1.05 (t, J = 7.1 Hz, 3H). 13 C-NMR (151 MHz, CDCl 3 ): δ 159.43, 156.14, 154.68, 137.64, 133.55, 127.48, 127.27 (2C), 114.48 (2C), 84.86, 79.53, 71.93, 70.92, 68.33, 55.27, 46.34, 45.97, 43.07, 41.98, 37.14, 31.86, 29.65, 28.40 (3C), 13.89. HPLC-MS (ESI): purity = 98%, t R = 2.448 min, m/z [M + H] + = 496.3.
tert -Butylethyl(2-((1-(4-methoxybenzyl)-2-(tetrahydro-2 H -pyran-4-yl)-1 H -imidazo[4,5- c ]pyridin-6-yl)amino)ethyl)carbamate ( e.3 ) Obtained from intermediate c (500 mg, 1.20 mmol) and tetrahydro-2 H -pyran-4-carboxylic acid (202.96 mg, 1.56 mmol) as a wine-colored sticky solid (98%, 599.12 mg); Rf (DCM: MeOH, 9:1) 0.56; 1 H-NMR (600 MHz, chloroform- d ): δ 8.44 (s, 1H), 6.93 (d, J = 8.7 Hz, 2H), 6.78 (d, J = 8.7 Hz, 2H), 6.34 (s, 1H), 5.15 (s, 2H), 4.02–3.96 (m, 2H), 3.71 (s, 3H), 3.42–3.36 (m, 2H), 3.34–3.30 (m, 4H), 3.16 (q, J = 7.3 Hz, 2H), 2.93 (tt, J = 11.5, 3.7 Hz, 1H), 2.08–1.99 (m, 2H), 1.68–1.63 (m, 2H), 1.37 (s, 9H), 1.01 (t, J = 7.1 Hz, 3H). 13 C-NMR (151 MHz, CDCl 3 ): δ 159.34, 157.79, 154.73, 127.39 (2C), 114.43 (2C), 84.91, 79.47, 67.50, 67.45 (2C), 55.24, 46.23, 45.96, 42.99, 41.99, 33.78, 31.26, 31.21 (2C), 29.62, 29.26, 28.47, 28.38 (3C), 13.86. HPLC-MS (ESI): purity = 97%, t R = 2.457 min, m/z [M + H] + = 510.2. General Procedure for the Synthesis of Intermediates f.1 – f.19 Boc-Deprotection The appropriate intermediate e.1 – e.19 was dissolved in 4 M HCl/dioxane, and the mixture was stirred at 25 °C for 2 h. When the reaction was complete, the solvent was removed in vacuo, and the residue was neutralized with Amberlyst A21 in a mixture of DCM and methanol. The Amberlyst was filtered off, the solvent was removed in vacuo, and the residue was used in the next reaction without further purification. Reductive Amination A mixture of the crude product from step (a) above and 4-chloro-2-hydroxybenzaldehyde in methanol was stirred at 25 °C for 6 h. The mixture was cooled to 0 °C, and sodium borohydride (NaBH 4 ) was added portion-wise. After the addition, the reaction was allowed to warm to room temperature (25 °C) and stirred for 2 h. The solvent was removed in vacuo, and the residue was diluted with deionized water. The compound was extracted with DCM and dried over anhydrous sodium sulfate. The solvent was removed in vacuo, and the residue was purified via column chromatography to obtain the desired product. 5-Chloro-2-(((2-((2-cyclopentyl-1-(4-methoxybenzyl)-1 H -imidazo[4,5- c ]pyridin-6-yl)amino)ethyl)(ethyl)amino)methyl)phenol ( f.1 ) Obtained from intermediate e.1 (160 mg, 0.41 mmol) and 4-chloro-2-hydroxybenzaldehyde (76 mg, 0.49 mmol) as a pale yellow sticky solid (53%, 116 mg); Rf (DCM:MeOH, 9:1) 0.50; 1 H-NMR (600 MHz, chloroform- d ): δ 8.44 (d, J = 1.0 Hz, 1H), 6.93 (d, J = 8.7 Hz, 2H), 6.82–6.80 (m, 3H), 6.73 (d, J = 2.1 Hz, 1H), 6.67 (dd, J = 8.0, 2.1 Hz, 1H), 6.03 (d, J = 1.0 Hz, 1H), 5.13 (s, 2H), 3.74 (s, 3H), 3.72 (s, 2H), 3.38 (t, J = 6.5 Hz, 2H), 3.14–3.08 (m, 1H), 2.72 (t, J = 6.5 Hz, 2H), 2.61 (q, J = 7.2 Hz, 2H), 1.98–1.92 (m, 4H), 1.86–1.79 (m, 2H), 1.64–1.57 (m, 2H), 1.03 (t, J = 7.2 Hz, 3H). 13 C-NMR (151 MHz, CDCl 3 ): δ 159.80, 159.29, 158.80, 154.09, 143.91, 138.46, 134.01, 133.98, 129.25, 128.58, 127.48, 127.39 (2C), 119.13, 116.42, 114.40 (2C), 85.32, 57.12, 55.27, 52.13, 47.60, 46.25, 40.48, 37.14, 32.10 (2C), 25.75 (2C), 10.91. HPLC-MS (ESI): purity = 98%, t R = 2.450 min, m/z [M + H] + = 534.2.
5-Chloro-2-((ethyl(2-((1-(4-methoxybenzyl)-2-(tetrahydrofuran-3-yl)-1 H -imidazo[4,5- c ]pyridin-6-yl)amino)ethyl)amino)methyl)phenol ( f.2 ) Obtained from intermediate e.2 (140 mg, 0.35 mmol) and 4-chloro-2-hydroxybenzaldehyde (66 mg, 0.42 mmol) as an orange-colored sticky solid (60%, 113 mg); Rf (DCM:MeOH, 9:1) 0.53; 1 H-NMR (600 MHz, chloroform- d ): δ 8.48 (d, J = 1.0 Hz, 1H), 6.93 (d, J = 8.7 Hz, 2H), 6.82–6.80 (m, 3H), 6.72 (d, J = 2.1 Hz, 1H), 6.66 (dd, J = 8.0, 2.1 Hz, 1H), 6.08 (d, J = 1.0 Hz, 1H), 5.13 (s, 2H), 4.02–3.95 (m, 3H), 3.93–3.84 (m, 2H), 3.74 (s, 3H), 3.74 (s, 2H), 3.42 (t, J = 6.5 Hz, 2H), 2.75 (t, J = 6.5 Hz, 2H), 2.64 (q, J = 7.2 Hz, 2H), 2.35–2.26 (m, 1H), 2.23–2.13 (m, 1H), 1.05 (t, J = 7.1 Hz, 3H). 13 C-NMR (151 MHz, CDCl 3 ): δ 159.44, 158.78, 155.83, 154.36, 143.95, 139.07, 134.00, 129.31, 127.39 (2C), 127.28, 120.34, 119.14, 116.41, 114.52 (2C), 85.13, 71.92, 68.32, 57.08, 55.28, 52.16, 47.66, 46.28, 40.41, 37.12, 31.87, 29.65, 10.91. HPLC-MS (ESI): purity = 98%, t R = 2.542 min, m/z [M + H] + = 536.2. 5-Chloro-2-((ethyl(2-((1-(4-methoxybenzyl)-2-(tetrahydro-2 H -pyran-4-yl)-1 H -imidazo[4,5- c ]pyridin-6-yl)amino)ethyl)amino)methyl)phenol ( f.3 ) Obtained from intermediate e.3 (90 mg, 0.22 mmol) and 4-chloro-2-hydroxybenzaldehyde (41 mg, 0.26 mmol) as an orange-colored sticky solid (54%, 66 mg); Rf (DCM: MeOH, 9:1) 0.44; 1 H-NMR (600 MHz, chloroform- d ): δ 8.49 (d, J = 1.0 Hz, 1H), 6.92 (d, J = 8.7 Hz, 2H), 6.84–6.79 (m, 3H), 6.72 (d, J = 2.1 Hz, 1H), 6.66 (dd, J = 8.0, 2.1 Hz, 1H), 6.05 (d, J = 1.0 Hz, 1H), 5.13 (s, 2H), 4.04–3.97 (m, 2H), 3.74 (s, 3H), 3.73 (s, 2H), 3.43–3.38 (m, 4H), 2.93 (tt, J = 11.5, 3.7 Hz, 1H), 2.74 (t, J = 6.4 Hz, 2H), 2.63 (q, J = 7.1 Hz, 2H), 2.09–2.00 (m, 2H), 1.70–1.63 (m, 2H), 1.04 (t, J = 7.1 Hz, 3H). 13 C-NMR (151 MHz, CDCl 3 ): δ 159.39, 158.79, 157.69, 154.31, 143.61, 139.03, 133.98, 129.32, 127.38, 127.33 (2C), 125.25, 120.35, 119.12, 116.39, 114.49 (2C), 85.35, 67.48 (2C), 57.04, 55.27, 52.15, 47.64, 46.22, 40.40, 33.80, 31.24 (2C), 10.90. HPLC-MS (ESI): purity = 98%, t R = 2.531 min, m/z [M + H] + = 550.2. General Procedure for the Synthesis of Target Compounds 1 – 19 The appropriate intermediate f.1 – f.19 was stirred in neat TFA (10 mL) at 100 °C for 16 h. Once the reaction was complete, TFA was removed under reduced pressure. The residue was dissolved in DCM/MeOH (9:1) and stirred with Amberlyst A21 for 1 h. The resin was filtered off, and the filtrate was concentrated under reduced pressure. The residue was purified via column chromatography to obtain the final product. 5-Chloro-2-(((2-((2-cyclopentyl-1 H -imidazo[4,5- c ]pyridin-6-yl)amino)ethyl)(ethyl)amino)methyl)phenol ( 1 ) Obtained from intermediate f.1 (116 mg, 0.22 mmol) as an off-white sticky solid (68%, 61 mg); Rf (DCM: MeOH, 9:1) 0.38; 1 H-NMR (600 MHz, DMSO- d 6 ): δ 8.18 (d, J = 1.0 Hz, 1H), 7.25 (d, J = 8.1 Hz, 1H), 6.81 (d, J = 2.1 Hz, 1H), 6.79 (dd, J = 8.1, 2.1 Hz, 1H), 6.44 (d, J = 1.0 Hz, 1H), 4.03 (s, 2H), 3.40 (t, J = 5.9 Hz, 2H), 3.20–3.13 (m, 1H), 2.98 (t, J = 5.8 Hz, 2H), 2.93 (q, J = 7.2 Hz, 2H), 2.02–1.95 (m, 2H), 1.84–1.77 (m, 2H), 1.73–1.66 (m, 2H), 1.63–1.56 (m, 2H), 1.13 (t, J = 7.2 Hz, 3H). 13 C-NMR (151 MHz, DMSO): δ 158.76, 154.94, 137.73, 132.39, 130.71, 130.00, 129.65, 128.84, 122.94, 118.83, 115.55, 86.67, 54.92, 52.22, 49.03, 47.10, 39.29, 31.92 (2C), 25.54 (2C), 11.31. HPLC-MS (ESI): purity = 98%, t R = 2.203 min, m/z [M + H] + = 414.1. 
5-Chloro-2-((ethyl(2-((2-(tetrahydrofuran-3-yl)-1 H -imidazo[4,5- c ]pyridin-6-yl)amino)ethyl)amino)methyl)phenol ( 2 ) Obtained from intermediate f.2 (113 mg, 0.21 mmol) as an off-white sticky solid (74%, 65 mg); Rf (DCM: MeOH, 9:1) 0.34; 1 H-NMR (600 MHz, DMSO- d 6 ): δ 8.21 (d, J = 1.0 Hz, 1H), 7.08 (d, J = 8.0 Hz, 1H), 6.73 (d, J = 2.1 Hz, 1H), 6.71 (dd, J = 8.0, 2.1 Hz, 1H), 6.32 (d, J = 1.0 Hz, 1H), 4.02–3.99 (m, 1H), 3.83 (t, J = 6.7 Hz, 2H), 3.78–3.73 (m, 1H), 3.71 (s, 2H), 3.56–3.49 (m, 1H), 3.35–3.31 (m, 2H), 2.65 (t, J = 6.7 Hz, 2H), 2.56 (q, J = 7.1 Hz, 2H), 2.28–2.17 (m, 2H), 0.98 (t, J = 7.1 Hz, 3H). 13 C-NMR (151 MHz, DMSO): δ 158.75, 155.54, 155.12, 142.76, 138.05, 134.83, 132.38, 130.70, 130.00, 122.96, 118.83, 115.55, 86.69, 71.63, 67.89, 54.93, 52.19, 49.03, 47.10, 31.48, 11.32. HPLC-MS (ESI): purity = 98%, t R = 0.298 min, m/z [M + H] + = 416.2. 5-Chloro-2-((ethyl(2-((2-(tetrahydro-2 H -pyran-4-yl)-1 H -imidazo[4,5- c ]pyridin-6-yl)amino)ethyl)amino)methyl)phenol ( 3 ) Obtained from intermediate f.3 (66 mg, 0.12 mmol) as an off-white sticky solid (80%, 41 mg); Rf (DCM: MeOH, 9:1) 0.29; 1 H-NMR (600 MHz, DMSO- d 6 ): δ 8.21 (d, J = 1.0 Hz, 1H), 7.15 (d, J = 7.9 Hz, 1H), 6.76 (d, J = 2.1 Hz, 1H), 6.74 (dd, J = 7.9, 2.1 Hz, 1H), 6.38 (d, J = 1.0 Hz, 1H), 3.91–3.86 (m, 2H), 3.83 (s, 2H), 3.44–3.39 (m, 2H), 3.36 (t, J = 6.4 Hz, 2H), 3.03–2.96 (m, 1H), 2.78 (t, J = 6.4 Hz, 2H), 2.71 (q, J = 7.2 Hz, 2H), 1.90–1.85 (m, 2H), 1.79–1.72 (m, 2H), 1.04 (t, J = 7.1 Hz, 3H). 13 C-NMR (151 MHz, DMSO): δ 158.55, 154.84, 142.80, 137.40, 133.97, 133.06, 131.58, 130.00, 129.68, 128.96, 119.01, 115.57, 66.96 (2C), 54.16, 52.89, 49.03, 47.44, 35.17, 31.12 (2C), 10.84. HPLC-MS (ESI): purity = 97%, t R = 0.430 min, m/z [M + H] + = 430.2. 5-Chloro-2-((ethyl(2-((2-methyl-1 H -imidazo[4,5- c ]pyridin-6-yl)amino)ethyl)amino)methyl)phenol ( 4 ) Obtained from intermediate f.4 (66 mg, 0.14 mmol) as an off-white sticky solid (71%, 36 mg); Rf (DCM: MeOH, 9:1) 0.26; 1 H-NMR (600 MHz, DMSO- d 6 ): δ 8.15 (d, J = 1.0 Hz, 1H), 7.08 (d, J = 8.0 Hz, 1H), 6.73 (d, J = 2.1 Hz, 1H), 6.71 (dd, J = 8.0, 2.1 Hz, 1H), 6.29 (d, J = 1.0 Hz, 1H), 3.70 (s, 2H), 3.32 (t, J = 6.4 Hz, 2H), 2.64 (t, J = 6.4 Hz, 2H), 2.55 (q, J = 7.2 Hz, 2H), 2.35 (s, 3H), 0.98 (t, J = 7.1 Hz, 3H). 13 C-NMR (151 MHz, DMSO): δ 158.75, 154.96, 151.25, 137.45, 132.35, 130.69, 130.01, 123.02, 118.83, 115.55, 86.62, 63.28, 54.92, 52.20, 49.04, 47.10, 15.01, 11.35. HPLC-MS (ESI): purity = 98%, t R = 0.219 min, m/z [M + H] + = 360.1. 5-Chloro-2-(((2-((2-cyclopropyl-1 H -imidazo[4,5- c ]pyridin-6-yl)amino)ethyl)(ethyl)amino)methyl)phenol ( 5 ) Obtained from intermediate f.5 (351 mg, 0.69 mmol) as an off-white sticky solid (68%, 181 mg); Rf (DCM: MeOH, 9:1) 0.30; 1 H-NMR (600 MHz, DMSO- d 6 ): δ 8.11 (d, J = 1.0 Hz, 1H), 7.10 (d, J = 8.0 Hz, 1H), 6.74 (d, J = 2.2 Hz, 1H), 6.72 (dd, J = 8.0, 2.1 Hz, 1H), 6.31 (d, J = 1.0 Hz, 1H), 3.74 (s, 2H), 3.32 (t, J = 6.7 Hz, 2H), 2.68 (t, J = 6.7 Hz, 2H), 2.59 (q, J = 7.2 Hz, 2H), 2.01–1.96 (m, 1H), 0.99 (t, J = 7.1 Hz, 3H), 0.95–0.93 (m, 4H). 13 C-NMR (151 MHz, DMSO): δ 158.69, 157.15, 154.78, 142.64, 137.07, 132.58, 130.97, 130.00, 122.52, 118.88, 115.56, 86.76, 54.69, 52.44, 49.03, 47.20, 11.18, 9.83 (2C), 8.93. HPLC-MS (ESI): purity = 98%, t R = 2.211 min, m/z [M + H] + = 386.1. 
5-Chloro-2-((ethyl(2-((2-(1-methylcyclopropyl)-1 H -imidazo[4,5- c ]pyridin-6-yl)amino)ethyl)amino)methyl)phenol ( 6 ) Obtained from intermediate f.6 (381 mg, 0.73 mmol) as an off-white sticky solid (86%, 250 mg); Rf (DCM: MeOH, 9:1) 0.29; 1 H-NMR (400 MHz, DMSO- d 6 ): δ 8.37 (d, J = 1.0 Hz, 1H), 7.40 (d, J = 8.2 Hz, 1H), 6.92 (d, J = 2.1 Hz, 1H), 6.87 (dd, J = 8.2, 2.1 Hz, 1H), 6.73 (d, J = 1.0 Hz, 1H), 4.30 (s, 2H), 3.66 (t, J = 6.1 Hz, 2H), 3.28 (t, J = 6.0 Hz, 2H), 3.22 (q, J = 7.2 Hz, 1H), 1.53 (s, 3H), 1.31–1.26 (m, 5H), 1.04–1.01 (m, 2H). 13 C-NMR (101 MHz, DMSO): δ 158.06, 152.70, 149.17, 143.15, 140.46, 135.35, 134.45, 131.71, 127.66, 116.35, 115.81, 108.67, 90.14, 51.45, 48.64, 38.07, 20.88, 17.89, 15.68 (2C), 9.19. HPLC-MS (ESI): purity = 98%, t R = 0.755 min, m/z [M + H] + = 400.2. 5-Chloro-2-((ethyl(2-((2-(1-(trifluoromethyl)cyclopropyl)-1 H -imidazo[4,5- c ]pyridin-6-yl)amino)ethyl)amino)methyl)phenol ( 7 ) Obtained from intermediate f.7 (59 mg, 0.10 mmol) as an off-white sticky solid (77%, 35 mg); Rf (DCM: MeOH, 9:1) 0.30; 1 H-NMR (400 MHz, DMSO- d 6 ): δ 8.50 (d, J = 1.0 Hz, 1H), 7.41 (d, J = 8.2 Hz, 1H), 6.93 (d, J = 2.1 Hz, 1H), 6.88 (dd, J = 8.1, 2.1 Hz, 1H), 6.74 (d, J = 1.0 Hz, 1H), 4.32 (s, 2H), 3.65 (t, J = 5.9 Hz, 2H), 3.30 (t, J = 5.9 Hz, 2H), 3.24 (q, J = 7.1 Hz, 2H), 1.60–1.55 (m, 4H), 1.29 (t, J = 7.2 Hz, 3H). 13 C-NMR (101 MHz, DMSO): δ 158.05, 152.50, 149.26, 146.22, 141.57, 135.40, 134.48, 129.93, 124.43, 119.59, 116.31, 115.81, 111.78, 51.57, 48.70, 38.22, 31.43, 23.37, 12.03 (2C), 9.18. HPLC-MS (ESI): purity = 98%, t R = 2.194 min, m/z [M + H] + = 454.1. 5-Chloro-2-(((2-((2-(cyclopropylmethyl)-1 H -imidazo[4,5- c ]pyridin-6-yl)amino)ethyl)(ethyl)amino)methyl)phenol ( 8 ) Obtained from intermediate f.8 (239 mg, 0.46 mmol) as an off-white sticky solid (88%, 162 mg); Rf (DCM: MeOH, 9:1) 0.35; 1 H-NMR (400 MHz, DMSO- d 6 ): δ 8.46 (d, J = 1.0 Hz, 1H), 7.41 (d, J = 8.2 Hz, 1H), 6.94 (d, J = 2.1 Hz, 1H), 6.87 (dd, J = 8.1, 2.1 Hz, 1H), 6.79 (d, J = 1.0 Hz, 1H), 4.31 (s, 2H), 3.68 (t, J = 6.0 Hz, 2H), 3.30 (t, J = 6.0 Hz, 2H), 3.24 (q, J = 7.2 Hz, 2H), 2.80 (d, J = 7.0 Hz, 2H), 1.30 (t, J = 7.2 Hz, 3H), 1.20–1.15 (m, 1H), 0.57–0.52 (m, 2H), 0.34–0.29 (m, 2H). 13 C-NMR (101 MHz, DMSO): δ 159.52, 158.11, 152.35, 146.06, 135.35, 134.43, 131.03, 125.99, 119.54, 116.36, 115.84, 89.98, 56.56, 52.04, 51.47, 48.64, 38.03, 33.06, 9.19, 4.94 (2C). HPLC-MS (ESI): purity = 98%, t R = 0.588 min, m/z [M + H] + = 400.2. 5-Chloro-2-(((2-((2-cyclobutyl-1 H -imidazo[4,5- c ]pyridin-6-yl)amino)ethyl)(ethyl)amino)methyl)phenol ( 9 ) Obtained from intermediate f.9 (311 mg, 0.60 mmol) as an off-white sticky solid (79%, 191 mg); Rf (DCM: MeOH, 9:1) 0.31; 1 H-NMR (400 MHz, DMSO- d 6 ): δ 8.45 (d, J = 1.0 Hz, 1H), 7.41 (d, J = 8.2 Hz, 1H), 6.94 (d, J = 2.1 Hz, 1H), 6.85 (dd, J = 8.2, 2.1 Hz, 1H), 6.78 (d, J = 1.0 Hz, 1H), 4.31 (s, 2H), 3.83–3.72 (m, 1H), 3.69 (t, J = 6.1 Hz, 2H), 3.29 (t, J = 6.0 Hz, 2H), 3.23 (q, J = 7.2 Hz, 2H), 2.46–2.34 (m, 4H), 2.15–2.02 (m, 1H), 2.00–1.89 (m, 1H), 1.30 (t, J = 7.1 Hz, 3H). 13 C-NMR (101 MHz, DMSO) δ 163.14, 159.51, 158.11, 151.94, 135.38, 134.37, 119.50, 116.27, 115.86, 89.93, 79.09, 51.89, 51.45, 48.65, 40.46, 38.02, 33.40, 27.37 (2C), 18.60, 9.19. HPLC-MS (ESI): purity = 98%, t R = 0.921 min, m/z [M + H] + = 400.2. 
5-Chloro-2-(((2-((2-(3,3-difluorocyclobutyl)-1 H -imidazo[4,5- c ]pyridin-6-yl)amino)ethyl)(ethyl)amino)methyl)phenol ( 10 ) Obtained from intermediate f.10 (274 mg, 0.49 mmol) as an off-white sticky solid (84%, 179 mg); Rf (DCM: MeOH, 9:1) 0.29; 1 H-NMR (600 MHz, DMSO- d 6 ): δ 8.51 (d, J = 1.0 Hz, 1H), 7.37 (d, J = 8.2 Hz, 1H), 6.88 (d, J = 2.1 Hz, 1H), 6.84 (dd, J = 8.2, 2.1 Hz, 1H), 6.80 (d, J = 1.0 Hz, 1H), 4.28 (s, 2H), 3.66 (t, J = 6.3 Hz, 2H), 3.64–3.60 (m, 1H), 3.27 (t, J = 6.2 Hz, 2H), 3.20 (q, J = 7.2 Hz, 2H), 3.10–2.94 (m, 4H), 1.26 (t, J = 7.2 Hz, 3H). 13 C-NMR (151 MHz, DMSO): δ 159.36, 158.02, 150.81, 147.81, 135.40, 134.50, 133.25, 122.12, 120.25, 119.58, 118.45, 116.14, 115.75, 89.93, 63.24, 51.32, 48.61, 37.86, 21.96 (2C), 9.11. HPLC-MS (ESI): purity = 99%, t R = 1.808 min, m/z [M + H] + = 436.1. 5-Chloro-2-(((2-((2-cyclohexyl-1 H -imidazo[4,5- c ]pyridin-6-yl)amino)ethyl)(ethyl)amino)methyl)phenol ( 11 ) Obtained from intermediate f.11 (274 mg, 0.50 mmol) as an off-white sticky solid (71%, 152 mg); Rf (DCM: MeOH, 9:1) 0.32; 1 H-NMR (600 MHz, DMSO- d 6 ): δ 8.44 (d, J = 1.0 Hz, 1H), 7.36 (d, J = 8.2 Hz, 1H), 6.88 (d, J = 2.1 Hz, 1H), 6.82 (dd, J = 8.2, 2.1 Hz, 1H), 6.76 (d, J = 1.0 Hz, 1H), 4.26 (s, 2H), 3.65 (t, J = 6.2 Hz, 2H), 3.25 (t, J = 6.2 Hz, 2H), 3.19 (q, J = 7.2 Hz, 2H), 2.89 (tt, J = 11.7, 3.7 Hz, 1H), 2.01–1.96 (m, 2H), 1.77–1.73 (m, 2H), 1.67–1.63 (m, 1H), 1.59–1.52 (m, 2H), 1.39–1.30 (m, 2H), 1.25 (t, J = 7.2 Hz, 3H), 1.23–1.17 (m, 1H). 13 C-NMR (151 MHz, DMSO): δ 164.77, 159.42, 158.04, 151.60, 146.44, 135.37, 134.44, 130.11, 126.08, 119.51, 116.14, 115.75, 110.92, 89.86, 51.33, 48.63, 37.63, 30.78 (2C), 25.69, 25.57 (2C), 9.09. HPLC-MS (ESI): purity = 97%, t R = 2.238 min, m/z [M + H] + = 428.2. 5-Chloro-2-(((2-((2-(4,4-difluorocyclohexyl)-1 H -imidazo[4,5- c ]pyridin-6-yl)amino)ethyl)(ethyl)amino)methyl)phenol ( 12 ) Obtained from intermediate f.12 (381 mg, 0.65 mmol) as an off-white sticky solid (83%, 250 mg); Rf (DCM: MeOH, 9:1) 0.37; 1 H-NMR (600 MHz, DMSO- d 6 ): δ 8.46 (d, J = 1.0 Hz, 1H), 7.36 (d, J = 8.2 Hz, 1H), 6.89 (d, J = 2.1 Hz, 1H), 6.81 (dd, J = 8.2, 2.1 Hz, 1H), 6.80 (d, J = 1.0 Hz, 1H), 4.27 (s, 2H), 3.76–3.70 (m, 1H), 3.67 (t, J = 6.3 Hz, 2H), 3.27 (t, J = 6.2 Hz, 2H), 3.19 (q, J = 7.1 Hz, 2H), 3.12–3.06 (m, 1H), 2.11–2.07 (m, 4H), 2.01–1.89 (m, 1H), 1.88–1.80 (m, 2H), 1.25 (t, J = 7.2 Hz, 3H). 13 C-NMR (151 MHz, DMSO): δ 158.09, 150.96, 135.35, 134.42, 129.70, 125.56, 123.97, 122.38, 119.46, 116.12, 115.78, 89.88, 62.47, 51.26, 48.62, 37.80, 35.06, 32.56, 27.12 (2C), 25.84 (2C), 9.08. HPLC-MS (ESI): purity = 98%, t R = 2.150 min, m/z [M + H] + = 464.2. 5-Chloro-2-((ethyl(2-((2-(1-methylpyrrolidin-2-yl)-1 H -imidazo[4,5- c ]pyridin-6-yl)amino)ethyl)amino)methyl)phenol ( 13 ) Obtained from intermediate f.13 (51 mg, 0.10 mmol) as a yellow sticky solid (71%, 31 mg); Rf (DCM: MeOH, 9:1) 0.20; 1 H-NMR (600 MHz, DMSO- d 6 ): δ 8.44 (d, J = 1.0 Hz, 1H), 7.38 (d, J = 8.2 Hz, 1H), 6.92 (d, J = 2.1 Hz, 1H), 6.83 (dd, J = 8.2, 2.1 Hz, 1H), 6.66 (d, J = 1.0 Hz, 1H), 4.77 (dd, J = 8.1, 4.3 Hz, 1H), 4.27 (s, 2H), 3.58 (t, J = 5.9 Hz, 2H), 3.24 (t, J = 5.9 Hz, 2H), 3.20 (q, J = 7.2 Hz, 2H), 2.94–2.86 (m, 2H), 2.60–2.53 (m, 2H), 2.27–1.98 (m, 2H), 1.72 (s, 3H), 1.24 (t, J = 7.2 Hz, 3H). 13 C-NMR (151 MHz, DMSO): δ 172.03, 159.23, 158.81, 158.11, 153.85, 135.31, 134.45, 120.46, 119.47, 118.48, 116.50, 116.33, 115.80, 114.52, 52.89, 51.50, 48.58, 38.13, 29.75, 22.88, 22.29, 9.14. HPLC-MS (ESI): purity = 97%, t R = 0.141 min, m/z [M + H] + = 429.2. 
5-Chloro-2-((ethyl(2-((2-(3-(trifluoromethyl)phenyl)-1 H -imidazo[4,5- c ]pyridin-6-yl)amino)ethyl)amino)methyl)phenol ( 14 ) Obtained from intermediate f.14 (231 mg, 0.38 mmol) as an off-white sticky solid (81%, 151 mg); Rf (DCM: MeOH, 9:1) 0.30; 1 H-NMR (600 MHz, DMSO- d 6 ): δ 8.53 (d, J = 1.0 Hz, 1H), 8.47 (dd, J = 2.1, 1.5 Hz, 1H), 8.44 (ddd, J = 7.8, 1.6, 1.5 Hz, 1H), 7.88 (ddd, J = 7.8, 2.1, 1.6 Hz, 1H), 7.79 (t, J = 7.8 Hz, 1H), 7.41 (d, J = 8.2 Hz, 1H), 6.90 (d, J = 2.1 Hz, 1H), 6.85 (dd, J = 8.2, 2.1 Hz, 1H), 6.75 (d, J = 1.0 Hz, 1H), 4.28 (s, 2H), 3.64 (t, J = 6.0 Hz, 2H), 3.26 (t, J = 5.9 Hz, 2H), 3.21 (q, J = 7.2 Hz, 2H), 1.27 (t, J = 7.2 Hz, 3H). 13 C-NMR (151 MHz, DMSO): δ 158.96, 158.75, 158.54, 158.32, 158.04, 135.32, 134.55, 131.14, 130.85, 130.47, 127.62, 125.27, 123.74, 120.50, 120.36, 119.56, 118.39, 116.28, 115.75, 51.39, 48.58, 38.16, 22.91, 9.16. HPLC-MS (ESI): purity = 98%, t R = 2.422 min, m/z [M + H] + = 490.1. 5-Chloro-2-(((2-((2-cycloheptyl-1 H -imidazo[4,5- c ]pyridin-6-yl)amino)ethyl)(ethyl)amino)methyl)phenol ( 15 ) Obtained from intermediate f.15 (76 mg, 0.14 mmol) as an off-white sticky solid (92%, 57 mg); Rf (DCM: MeOH, 9:1) 0.38; 1 H-NMR (600 MHz, DMSO- d 6 ): δ 8.41 (d, J = 1.0 Hz, 1H), 7.37 (d, J = 8.2 Hz, 1H), 6.89 (d, J = 2.2 Hz, 1H), 6.82 (dd, J = 8.2, 2.1 Hz, 1H), 6.74 (d, J = 1.0 Hz, 1H), 4.26 (s, 2H), 3.64 (t, J = 6.2 Hz, 2H), 3.25 (t, J = 6.2 Hz, 2H), 3.19 (q, J = 7.2 Hz, 2H), 3.07 (tt, J = 9.4, 4.4 Hz, 1H), 2.03–1.96 (m, 2H), 1.84–1.76 (m, 2H), 1.62–1.56 (m, 4H), 1.55–1.48 (m, 4H), 1.25 (t, J = 7.1 Hz, 3H). 13 C-NMR (151 MHz, DMSO): δ 172.01, 159.28, 159.06, 158.85, 158.64, 158.06, 135.31, 134.44, 119.49, 118.40, 116.22, 115.76, 89.77, 51.31, 48.60, 37.84, 32.67 (2C), 28.14 (2C), 26.17 (2C), 22.88, 9.12. HPLC-MS (ESI): purity = 99%, t R = 2.413 min, m/z [M + H] + = 442.2. 5-Chloro-2-((ethyl(2-((2-((1r,4r)-4-(trifluoromethyl)cyclohexyl)-1 H -imidazo[4,5- c ]pyridin-6-yl)amino)ethyl)amino)methyl)phenol ( 16 ) Obtained from intermediate f.16 (81 mg, 0.13 mmol) as an off-white sticky solid (88%, 57 mg); Rf (DCM: MeOH, 9:1) 0.38; 1 H-NMR (600 MHz, DMSO- d 6 ): δ 8.42 (d, J = 1.0 Hz, 1H), 7.37 (d, J = 8.2 Hz, 1H), 6.89 (d, J = 2.1 Hz, 1H), 6.82 (dd, J = 8.2, 2.1 Hz, 1H), 6.76 (d, J = 1.0 Hz, 1H), 4.26 (s, 2H), 3.64 (t, J = 6.2 Hz, 2H), 3.25 (t, J = 6.2 Hz, 2H), 3.18 (q, J = 7.2 Hz, 2H), 2.89 (tt, J = 12.1, 3.6 Hz, 1H), 2.36–2.27 (m, 1H), 2.16–2.12 (m, 2H), 1.98–1.94 (m, 2H), 1.66–1.58 (m, 2H), 1.45–1.35 (m, 2H), 1.25 (t, J = 7.2 Hz, 3H). 13 C-NMR (151 MHz, CDCl 3 ): δ 159.44, 158.78, 154.20, 143.67, 138.51, 134.04, 134.02, 129.32, 127.19, 120.34, 119.14, 116.42, 85.28, 57.08, 55.29, 52.10, 47.69, 46.36, 46.24, 41.06, 35.44, 29.99 (2C), 10.89. HPLC-MS (ESI): purity = 98%, t R = 2.413 min, m/z [M + H] + = 496.2. 5-Chloro-2-((ethyl(2-((2-((1r,4r)-4-(trifluoromethyl)cyclohexyl)-1 H -imidazo[4,5- c ]pyridin-6-yl)amino)ethyl)amino)methyl)phenol ( 17 ) Obtained from intermediate f.17 (77 mg, 0.12 mmol) as an off-white sticky solid (78%, 47 mg); Rf (DCM: MeOH, 9:1) 0.36; 1 H-NMR (600 MHz, DMSO): δ 8.40 (d, J = 1.0 Hz, 1H), 7.36 (d, J = 8.2 Hz, 1H), 6.87 (d, J = 2.2 Hz, 1H), 6.83 (dd, J = 8.2, 2.1 Hz, 1H), 6.71 (d, J = 1.0 Hz, 1H), 4.26 (s, 2H), 3.25–3.22 (m, 4H), 3.18 (q, J = 7.2 Hz, 2H), 2.88 (tt, J = 12.3, 3.7 Hz, 1H), 2.37–2.28 (m, 1H), 2.17–2.12 (m, 2H), 1.99–1.94 (m, 2H), 1.66–1.58 (m, 2H), 1.45–1.36 (m, 2H), 1.24 (t, J = 7.2 Hz, 3H).
13 C-NMR (151 MHz, CDCl 3 ): δ 159.44, 158.78, 154.20, 143.67, 138.51, 134.04, 134.02, 129.32, 127.19, 120.34, 119.14, 116.42, 85.28, 57.08, 55.29, 52.10, 47.69, 46.36, 46.24, 41.06, 35.44, 29.99 (2C), 10.89. HPLC-MS (ESI): purity = 97%, t R = 2.343 min, m/z [M + H] + = 496.2. 5-Chloro-2-((ethyl(2-((2-(4-methoxycyclohexyl)-1 H -imidazo[4,5- c ]pyridin-6-yl)amino)ethyl)amino)methyl)phenol ( 18 ) Obtained from intermediate f.18 (73 mg, 0.13 mmol) as an off-white sticky solid (77%, 46 mg); Rf (DCM: MeOH, 9:1) 0.50; 1 H-NMR (600 MHz, DMSO): δ 8.37 (d, J = 1.0 Hz, 1H), 8.13 (d, J = 8.1 Hz, 1H), 6.89 (d, J = 2.1 Hz, 1H), 6.83 (dd, J = 8.2, 2.1 Hz, 1H), 6.62 (d, J = 1.0 Hz, 1H), 4.25 (s, 2H), 3.64–3.53 (m, 1H), 3.22 (s, 3H), 3.12 (tt, J = 10.88, 4.11 Hz, 1H), 2.92 (t, J = 6.2 Hz, 2H), 2.81 (t, J = 6.2 Hz, 2H), 2.58 (q, J = 7.2 Hz, 2H), 2.08–2.02 (m, 2H), 1.88–1.77 (m, 2H), 1.75–1.71 (m, 2H), 1.63–1.47 (m, 2H), 1.23 (t, J = 7.2 Hz, 3H). 13 C-NMR (151 MHz, CDCl 3 ): δ 159.47, 153.50, 144.28, 136.87, 134.14, 129.50, 127.40, 119.22, 116.44, 85.54, 78.32, 73.57, 56.94, 55.77, 55.30, 52.01, 47.89, 40.43, 35.67, 31.57 (2C), 29.79 (2C), 10.87. HPLC-MS (ESI): purity = 97%, t R = 2.124 min, m/z [M + H] + = 458.2. 5-Chloro-2-((ethyl(2-((2-(6-(trifluoromethyl)pyridin-3-yl)-1 H -imidazo[4,5- c ]pyridin-6-yl)amino)ethyl)amino)methyl)phenol ( 19 ) Obtained from intermediate f.19 (55 mg, 0.09 mmol) as an off-white solid (81%, 36 mg); Rf (DCM: MeOH, 9:1) 0.38; 1 H-NMR (600 MHz, DMSO- d 6 ): δ 9.44 (d, J = 2.1 Hz, 1H), 8.70 (dd, J = 8.2, 2.1 Hz, 1H), 8.52 (d, J = 1.0 Hz, 1H), 8.09 (d, J = 8.2 Hz, 1H), 7.38 (d, J = 8.2 Hz, 1H), 6.89 (d, J = 2.1 Hz, 1H), 6.85 (dd, J = 8.2, 2.1 Hz, 1H), 6.67 (d, J = 1.0 Hz, 1H), 4.28 (s, 2H), 3.59 (t, J = 5.8 Hz, 2H), 3.25 (t, J = 5.9 Hz, 2H), 3.21 (q, J = 7.2 Hz, 2H), 1.27 (t, J = 7.2 Hz, 3H). 13 C-NMR (151 MHz, CDCl 3 ): δ 159.54, 158.60, 155.00, 149.46, 144.49, 140.74, 137.88, 134.69, 134.25, 129.57, 128.82, 126.84, 120.39, 119.88, 119.26, 114.79, 85.39, 56.84, 55.31, 52.20, 47.84, 41.84, 10.77. HPLC-MS (ESI): purity = 98%, t R = 2.352 min, m/z [M + H] + = 491.1. 5-Chloro-2-(((2-((7-nitrobenzo[c][1,2,5]oxadiazol-4-yl)amino)ethyl)(2-((2-(3-(trifluoromethyl)phenyl)-1 H -imidazo[4,5- c ]pyridin-6-yl)amino)ethyl)amino)methyl)phenol ( 14-NBD ) Obtained from intermediate m (339 mg, 0.56 mmol) and NBD-chloride (254 mg, 1.27 mmol) as a brick-red solid (50%, 187 mg); R f (DCM: MeOH, 9:1) 0.56; 1 H NMR (600 MHz, DMSO- d 6 ): δ 8.39 (s, 1H), 8.37 (d, J = 8.8 Hz, 1H), 8.35 (dd, J = 2.8, 2.2 Hz, 1H), 8.31 (d, J = 8.8 Hz, 1H), 7.82 (dd, J = 7.8, 2.2 Hz, 1H), 7.74 (ddd, J = 7.8, 2.8, 2.3 Hz, 1H), 7.13 (dd, J = 8.4, 7.8 Hz, 1H), 6.85 (d, J = 7.8 Hz, 1H), 6.59 (dd, J = 8.4, 2.3 Hz, 1H), 6.19 (d, J = 2.2 Hz, 1H), 6.04 (s, 1H), 3.70 (s, 2H), 3.53 (t, J = 6.3 Hz, 2H), 3.39 (t, J = 6.0 Hz, 2H), 2.82–2.76 (m, 4H). 13 C NMR (151 MHz, DMSO): δ 159.13, 158.92, 157.57, 155.85, 154.95, 149.82, 147.41, 143.12, 139.53, 138.17, 137.90, 135.58, 132.31, 131.80, 131.35, 129.01, 128.78, 126.30, 123.09, 118.92, 115.35, 99.50, 92.26, 86.51, 55.54, 53.70, 51.38, 44.39, 41.56, 36.89. HPLC-MS (ESI): purity = 98%, t R = 0.905 min, m/z [M + H] + = 668.2. In Vitro P. falciparum Assay Compounds were screened against multi-drug-resistant (K1) and sensitive (NF54) strains of P. falciparum in vitro using the parasite lactate dehydrogenase (pLDH) assay (the method is described fully in the Supporting Information ).
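As an illustration of how IC 50 values such as those quoted above (e.g., 0.08 μM for 14 against Pf NF54) are typically extracted from pLDH dose-response data, the Python sketch below fits a four-parameter logistic (Hill) model to percent-viability readings. The concentration grid, viability values, and starting guesses are invented for demonstration only and are not data from this study.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_param_logistic(conc, bottom, top, ic50, hill):
    """Percent parasite viability as a function of compound concentration."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

# Illustrative pLDH readings (% of untreated control); concentrations in uM.
conc = np.array([0.003, 0.01, 0.03, 0.1, 0.3, 1.0, 3.0])
viability = np.array([99.0, 95.0, 78.0, 44.0, 14.0, 4.0, 2.0])

# Starting guesses: no survival at the top dose, full survival untreated,
# IC50 near the mid-curve concentration, Hill slope of 1.
params, _ = curve_fit(four_param_logistic, conc, viability,
                      p0=[0.0, 100.0, 0.1, 1.0], maxfev=10000)
print(f"fitted IC50 = {params[2]:.3f} uM")
```

The fitted IC 50 is simply the concentration at which the model crosses the midpoint between its upper and lower plateaus; replicate fits are averaged when assays are run in duplicate or triplicate.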
In Vitro Cytotoxicity Assay In vitro cytotoxicity was assessed in the Chinese hamster ovary (CHO) cell line by measuring cellular growth and survival colorimetrically using the MTT assay. The formation of formazan from the tetrazolium salt was used as a measure of chemosensitivity and growth. (Details of this assay are described in the Supporting Information .) In Vitro Microsomal Stability Assay The in vitro microsomal stability assay was performed in duplicate in a 96-well microtiter plate using a single-point experiment design. The test compounds (1 μM) were incubated individually in human (pool of 50, mixed-gender), rat (pool of 711, male Sprague Dawley), and mouse (pool of 1634, male CD1) liver microsomes (final protein concentration of 0.4 mg/mL; Xenotech, Kansas, USA), suspended in 0.1 M phosphate buffer (pH 7.4). (Details of this assay are described in the Supporting Information .)
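Because the microsomal stability assay uses a single-point design, the percent of parent compound remaining is usually converted to a half-life and an intrinsic clearance under a first-order depletion assumption. The Python sketch below shows that arithmetic; the 30 min time point and the 20% remaining figure are assumed example values, and only the 0.4 mg/mL protein concentration comes from the assay description above.

```python
import math

def single_point_clint(pct_remaining, t_min=30.0, protein_mg_per_ml=0.4):
    """Intrinsic clearance from a single-time-point depletion measurement.

    Assumes first-order loss of parent compound:
        k     = -ln(fraction remaining) / t    (1/min)
        t1/2  = ln(2) / k                      (min)
        CLint = k / [protein] * 1000           (uL/min/mg protein)
    The 30 min time point is an assumed value; the protein concentration
    matches the 0.4 mg/mL used in the assay.
    """
    frac = pct_remaining / 100.0
    k = -math.log(frac) / t_min
    t_half = math.log(2) / k
    clint = k / protein_mg_per_ml * 1000.0
    return t_half, clint

# Illustrative: 20% parent remaining after 30 min, typical of a
# metabolically labile compound -- not a measured value from this study.
t_half, clint = single_point_clint(20.0)
print(f"t1/2 = {t_half:.1f} min, CLint = {clint:.0f} uL/min/mg")
```

High intrinsic clearance values of this kind are what flagged the series as metabolically labile and precluded progression to in vivo efficacy studies.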
5-Chloro-2-((ethyl(2-((2-(1-methylpyrrolidin-2-yl)-1 H -imidazo[4,5- c ]pyridin-6-yl)amino)ethyl)amino)methyl)phenol ( 13 ) Obtained from intermediate f.13 (51 mg, 0.10 mmol) as a yellow sticky solid (71%, 31 mg); Rf (DCM: MeOH, 9:1) 0.20; 1 H-NMR (600 MHz, DMSO- d 6 ): δ 8.44 (d, J = 1.0 Hz, 1H), 7.38 (d, J = 8.2 Hz, 1H), 6.92 (d, J = 2.1 Hz, 1H), 6.83 (dd, J = 8.2, 2.1 Hz, 1H), 6.66 (d, J = 1.0 Hz, 1H), 4.77 (dd, J = 8.1, 4.3 Hz, 1H), 4.27 (s, 2H), 3.58 (t, J = 5.9 Hz, 2H), 3.24 (t, J = 5.9 Hz, 2H), 3.20 (q, J = 7.2 Hz, 2H), 2.94–2.86 (m, 2H), 2.60–2.53 (m, 2H), 2.27–1.98 (m, 2H), 1.72 (s, 3H), 1.24 (t, J = 7.2 Hz, 3H). 13 C-NMR (151 MHz, DMSO): δ 172.03, 159.23, 158.81, 158.11, 153.85, 135.31, 134.45, 120.46, 119.47, 118.48, 116.50, 116.33, 115.80, 114.52, 52.89, 51.50, 48.58, 38.13, 29.75, 22.88, 22.29, 9.14. HPLC-MS (ESI): purity = 97%, t R = 0.141 min, m/z [M + H] + = 429.2. 5-Chloro-2-((ethyl(2-((2-(3-(trifluoromethyl)phenyl)-1 H -imidazo[4,5- c ]pyridin-6-yl)amino)ethyl)amino)methyl)phenol ( 14 ) Obtained from intermediate f.14 (231 mg, 0.38 mmol) as an off-white sticky solid (81%, 151 mg); Rf (DCM: MeOH, 9:1) 0.30; 1 H-NMR (600 MHz, DMSO- d 6 ): δ 8.53 (d, J = 1.0 Hz, 1H), 8.47 (dd, J = 2.1, 1.5 Hz, 1H), 8.44 (ddd, J = 7.8, 1.6, 1.5 Hz, 1H), 7.88 (ddd, J = 7.8, 2.1, 1.6 Hz, 1H), 7.79 (t, J = 7.8 Hz, 1H), 7.41 (d, J = 8.2 Hz, 1H), 6.90 (d, J = 2.1 Hz, 1H), 6.85 (dd, J = 8.2, 2.1 Hz, 1H), 6.75 (d, J = 1.0 Hz, 1H), 4.28 (s, 2H), 3.64 (t, J = 6.0 Hz, 2H), 3.26 (t, J = 5.9 Hz, 2H), 3.21 (q, J = 7.2 Hz, 2H), 1.27 (t, J = 7.2 Hz, 3H). 13 C-NMR (151 MHz, DMSO): δ 158.96, 158.75, 158.54, 158.32, 158.04, 135.32, 134.55, 131.14, 130.85, 130.47, 127.62, 125.27, 123.74, 120.50, 120.36, 119.56, 118.39, 116.28, 115.75, 51.39, 48.58, 38.16, 22.91, 9.16. HPLC-MS (ESI): purity = 98%, t R = 2.422 min, m/z [M + H] + = 490.1. 5-Chloro-2-(((2-((2-cycloheptyl-1 H -imidazo[4,5- c ]pyridin-6-yl)amino)ethyl)(ethyl)amino)methyl)phenol ( 15 ) Obtained from intermediate f.15 (76 mg, 0.14 mmol) as an off-white sticky solid (92%, 57 mg); Rf (DCM: MeOH, 9:1) 0.38; 1 H-NMR (600 MHz, DMSO- d 6 ): δ 8.41 (d, J = 1.0 Hz, 1H), 7.37 (d, J = 8.2 Hz, 1H), 6.89 (d, J = 2.2 Hz, 1H), 6.82 (dd, J = 8.2, 2.1 Hz, 1H), 6.74 (d, J = 1.0 Hz, 1H), 4.26 (s, 2H), 3.64 (t, J = 6.2 Hz, 2H), 3.25 (t, J = 6.2 Hz, 2H), 3.19 (q, J = 7.2 Hz, 2H), 3.07 (tt, J = 9.4, 4.4 Hz, 1H), 2.03–1.96 (m, 2H), 1.84–1.76 (m, 2H), 1.62–1.56 (m, 4H), 1.55–1.48 (m, 4H), 1.25 (t, J = 7.1 Hz, 3H). 13 C-NMR (151 MHz, DMSO): δ 172.01, 159.28, 159.06, 158.85, 158.64, 158.06, 135.31, 134.44, 119.49, 118.40, 116.22, 115.76, 89.77, 51.31, 48.60, 37.84, 32.67 (2C), 28.14 (2C), 26.17 (2C), 22.88, 9.12. HPLC-MS (ESI): purity = 99%, t R = 2.413 min, m/z [M + H] + = 442.2. 5-Chloro-2-((ethyl(2-((2-((1r,4r)-4-(trifluoromethyl)cyclohexyl)-1 H -imidazo[4,5- c ]pyridin-6-yl)amino)ethyl)amino)methyl)phenol ( 16 ) Obtained from intermediate f.16 (81 mg, 0.13 mmol) as an off-white sticky solid (88%, 57 mg); Rf (DCM: MeOH, 9:1) 0.38; 1 H-NMR (600 MHz, DMSO- d 6 ): δ 8.42 (d, J = 1.0 Hz, 1H), 7.37 (d, J = 8.2 Hz, 1H), 6.89 (d, J = 2.1 Hz, 1H), 6.82 (dd, J = 8.2, 2.1 Hz, 1H), 6.76 (d, J = 1.0 Hz, 1H), 4.26 (s, 2H), 3.64 (t, J = 6.2 Hz, 2H), 3.25 (t, J = 6.2 Hz, 2H), 3.18 (q, J = 7.2 Hz, 2H), 2.89 (tt, J = 12.1, 3.6 Hz, 1H), 2.36–2.27 (m, 1H), 2.16–2.12 (m, 2H), 1.98–1.94 (m, 2H), 1.66–1.58 (m, 2H), 1.45–1.35 (m, 2H), 1.25 (t, J = 7.2 Hz, 3H). 
13 C-NMR (151 MHz, CDCl 3 ): δ 159.44, 158.78, 154.20, 143.67, 138.51, 134.04, 134.02, 129.32, 127.19, 120.34, 119.14, 116.42, 85.28, 57.08, 55.29, 52.10, 47.69, 46.36, 46.24, 41.06, 35.44, 29.99 (2C), 10.89. HPLC-MS (ESI): purity = 98%, t R = 2.413 min, m/z [M + H] + = 496.2. 5-Chloro-2-((ethyl(2-((2-((1r,4r)-4-(trifluoromethyl)cyclohexyl)-1 H -imidazo[4,5- c ]pyridin-6-yl)amino)ethyl)amino)methyl)phenol ( 17 ) Obtained from intermediate f.17 (77 mg, 0.12 mmol) as an off-white sticky solid (78%, 47 mg); Rf (DCM: MeOH, 9:1) 0.36; 1 H-NMR (600 MHz, DMSO): δ 8.40 (d, J = 1.0 Hz, 1H), 7.36 (d, J = 8.2 Hz, 1H), 6.87 (d, J = 2.2 Hz, 1H), 6.83 (dd, J = 8.2, 2.1 Hz, 1H), 6.71 (d, J = 1.0 Hz, 1H), 4.26 (s, 2H), 3.25–3.22 (m, 4H), 3.18 (q, J = 7.2 Hz, 2H), 2.88 (tt, J = 12.3, 3.7 Hz, 1H), 2.37–2.28 (m, 1H), 2.17–2.12 (m, 2H), 1.99–1.94 (m, 2H), 1.66–1.58 (m, 2H), 1.45–1.36 (m, 2H), 1.24 (t, J = 7.2 Hz, 3H). 13 C-NMR (151 MHz, CDCl 3 ): δ 159.44, 158.78, 154.20, 143.67, 138.51, 134.04, 134.02, 129.32, 127.19, 120.34, 119.14, 116.42, 85.28, 57.08, 55.29, 52.10, 47.69, 46.36, 46.24, 41.06, 35.44, 29.99 (2C), 10.89. HPLC-MS (ESI): purity = 97%, t R = 2.343 min, m/z [M + H] + = 496.2. 5-Chloro-2-((ethyl(2-((2-(4-methoxycyclohexyl)-1 H -imidazo[4,5- c ]pyridin-6-yl)amino)ethyl)amino)methyl)phenol ( 18 ) Obtained from intermediate f.18 (73 mg, 0.13 mmol) as an off-white sticky solid (77%, 46 mg); Rf (DCM: MeOH, 9:1) 0.50; 1 H-NMR (600 MHz, DMSO): δ 8.37 (d, J = 1.0 Hz, 1H), 8.13 (d, J = 8.1 Hz, 1H), 6.89 (d, J = 2.1 Hz, 1H), 6.83 (dd, J = 8.2, 2.1 Hz, 1H), 6.62 (d, J = 1.0 Hz, 1H), 4.25 (s, 2H), 3.64–3.53 (m, 1H), 3.22 (s, 3H), 3.12 (tt, J = 10.88, 4.11 Hz, 1H), 2.92 (t, J = 6.2 Hz, 2H), 2.81 (t, J = 6.2 Hz, 2H), 2.58 (q, J = 7.2 Hz, 2H), 2.08–2.02 (m, 2H), 1.88–1.77 (m, 2H), 1.75–1.71 (m, 2H), 1.63–1.47 (m, 2H), 1.23 (t, J = 7.2 Hz, 3H). 13 C-NMR (151 MHz, CDCl 3 ): δ 159.47, 153.50, 144.28, 136.87, 134.14, 129.50, 127.40, 119.22, 116.44, 85.54, 78.32, 73.57, 56.94, 55.77, 55.30, 52.01, 47.89, 40.43, 35.67, 31.57 (2C), 29.79 (2C), 10.87. HPLC-MS (ESI): purity = 97%, t R = 2.124 min, m/z [M + H] + = 458.2. 5-Chloro-2-((ethyl(2-((2-(6-(trifluoromethyl)pyridin-3-yl)-1 H -imidazo[4,5- c ]pyridin-6-yl)amino)ethyl)amino)methyl)phenol ( 19 ) Obtained from intermediate f.19 (55 mg, 0.09 mmol) as an off-white solid (81%, 36 mg); Rf (DCM: MeOH, 9:1) 0.38; 1 H-NMR (600 MHz, DMSO- d 6 ): δ 9.44 (d, J = 2.1 Hz, 1H), 8.70 (dd, J = 8.2, 2.1 Hz, 1H), 8.52 (d, J = 1.0 Hz, 1H), 8.09 (d, J = 8.2 Hz, 1H), 7.38 (d, J = 8.2 Hz, 1H), 6.89 (d, J = 2.1 Hz, 1H), 6.85 (dd, J = 8.2, 2.1 Hz, 1H), 6.67 (d, J = 1.0 Hz, 1H), 4.28 (s, 2H), 3.59 (t, J = 5.8 Hz, 2H), 3.25 (t, J = 5.9 Hz, 2H), 3.21 (q, J = 7.2 Hz, 2H), 1.27 (t, J = 7.2 Hz, 3H). 13 C-NMR (151 MHz, CDCl 3 ): δ 159.54, 158.60, 155.00, 149.46, 144.49, 140.74, 137.88, 134.69, 134.25, 129.57, 128.82, 126.84, 120.39, 119.88, 119.26, 114.79, 85.39, 56.84, 55.31, 52.20, 47.84, 41.84, 10.77. HPLC-MS (ESI): purity = 98%, t R = 2.352 min, m/z [M + H] + = 491.1.
5-Chloro-2-(((2-((7-nitrobenzo[c][1,2,5]oxadiazol-4-yl)amino)ethyl)(2-((2-(3-(trifluoromethyl)phenyl)-1 H -imidazo[4,5- c ]pyridin-6-yl)amino)ethyl)amino)methyl)phenol ( 14-NBD ) Obtained from intermediate m (339 mg, 0.56 mmol) and NBD-chloride (254 mg, 1.27 mmol) as a brick-red solid (50%, 187 mg); R f (DCM: MeOH, 9:1) 0.56; 1 H NMR (600 MHz, DMSO- d 6 ): δ 8.39 (s, 1H), 8.37 (d, J = 8.8 Hz, 1H), 8.35 (dd, J = 2.8, 2.2 Hz, 1H), 8.31 (d, J = 8.8 Hz, 1H), 7.82 (dd, J = 7.8, 2.2 Hz, 1H), 7.74 (ddd, J = 7.8, 2.8, 2.3 Hz, 1H), 7.13 (dd, J = 8.4, 7.8 Hz, 1H), 6.85 (d, J = 7.8 Hz, 1H), 6.59 (ddd, J = 8.4, 2.3 Hz, 1H), 6.19 (d, J = 2.2 Hz, 1H), 6.04 (s, 1H), 3.70 (s, 2H), 3.53 (t, J = 6.3 Hz, 2H), 3.39 (t, J = 6.0 Hz, 2H), 2.82–2.76 (m, 4H). 13 C NMR (151 MHz, DMSO): δ 159.13, 158.92, 157.57, 155.85, 154.95, 149.82, 147.41, 143.12, 139.53, 138.17, 137.90, 135.58, 132.31, 131.80, 131.35, 129.01, 128.78, 126.30, 123.09, 118.92, 115.35, 99.50, 92.26, 86.51, 55.54, 53.70, 51.38, 44.39, 41.56, 36.89. HPLC-MS (ESI): purity = 98%, t R = 0.905 min, m/z [M + H] + = 668.2.
P. falciparum Assay Compounds were screened against multi-drug-resistant (K1) and sensitive (NF54) strains of P. falciparum in vitro using a parasite lactate dehydrogenase (pLDH) assay (the method is described fully in the Supporting Information). In vitro cytotoxicity was assessed on the Chinese hamster ovary (CHO) cell line by measuring cellular growth and survival colorimetrically through the MTT assay. The formation of the formazan product was used as a measure of chemosensitivity and growth. (Details of this assay are described in the Supporting Information.) The in vitro microsomal stability assay was performed in duplicate in a 96-well microtiter plate using a single-point experimental design. The test compounds (1 μM) were incubated individually in human (pool of 50, mixed-gender), rat (pool of 711, male Sprague Dawley) and mouse (pool of 1634, male CD1) liver microsomes (final protein concentration of 0.4 mg/mL; Xenotech, Kansas, USA), suspended in 0.1 M phosphate buffer (pH 7.4). (Details of this assay are described in the Supporting Information.)
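Although only the incubation conditions are given here, single-point microsomal stability data of this kind are usually reduced to the percentage of parent compound remaining, from which an apparent first-order rate constant, half-life, and intrinsic clearance can be estimated. The sketch below (in Python) shows that arithmetic under the usual first-order assumption; the 30-minute time point and the 80% remaining value are illustrative placeholders, not results from this assay.

```python
import math

def microsomal_stability(pct_remaining, t_min=30.0, protein_mg_per_ml=0.4):
    """First-order estimates from a single-point microsomal incubation.

    pct_remaining: % parent compound left at time t_min.
    Returns (k in 1/min, half-life in min, CLint in uL/min/mg protein).
    """
    k = -math.log(pct_remaining / 100.0) / t_min   # elimination rate constant
    t_half = math.log(2) / k                        # half-life
    clint = k / protein_mg_per_ml * 1000.0          # scale to uL/min/mg
    return k, t_half, clint

# Illustrative only: 80% remaining at 30 min, 0.4 mg/mL protein (as in the assay).
k, t_half, clint = microsomal_stability(80.0)
print(f"k = {k:.4f} 1/min, t1/2 = {t_half:.1f} min, CLint = {clint:.1f} uL/min/mg")
```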
Dentoskeletal effects of aesthetic and conventional twin block appliances in the treatment of skeletal class II malocclusion: a randomized controlled trial
Class II malocclusion is considered one of the most prevalent orthodontic conditions. Cases of Class II malocclusion resulting from maxillary protrusion do not exceed 20% of the total Class II cases, while the majority are caused by mandibular retrognathism. This has led to the use of functional appliances, which stimulate mandibular growth. Functional appliance therapy aims to improve the relationships of dentofacial structures by addressing developmental factors and muscle function. Robin's monoblock is considered a precursor to modern functional appliances, while Andresen's Activator is often recognized as the first functional appliance. Since then, numerous modifications and new appliance systems have been developed. The use of removable functional appliances is more common than the use of fixed functional appliances in the treatment of skeletal Class II malocclusion. The twin block appliance (TB) is one of the most popular removable functional appliances because of its high patient compliance and its ability to increase mandibular length. Despite its satisfactory outcomes, some undesirable effects have been observed, especially mandibular incisor flaring, which leads to more dental correction than skeletal correction, as well as negative effects on the supporting periodontal tissues. Several modifications have been made to TBs to control the mandibular incisors and enhance the skeletal effects, including capping the lower incisors, increasing the number of anterior ball clasps, and providing relief behind the lower incisors with an acrylic labial bow. However, these methods have shown limited efficacy in controlling the mandibular incisors or have an invasive nature, such as the use of mini implants. With increasing patient demands, especially for esthetics, reducing the size and cost of the appliance may improve patient compliance. One of the latest modifications to the TB is the use of vacuum-formed hard plates (VFPs), which offer better esthetics, instead of acrylic resin plates and wires as the main part of the appliance, to make the aesthetic twin block appliance (ATB), which improves patient compliance and overcomes the drawbacks of the conventional twin block appliance (CTB). Some studies have reported greater mandibular advancement and better control of lower incisor flaring with the ATB than with the CTB, whereas other studies reported no significant differences in mandibular advancement or lower incisor flaring compared with the CTB, suggesting that further clinical trials are needed to study the effects of the ATB. This randomized clinical trial (RCT) aims to evaluate the dentoalveolar and skeletal changes resulting from the ATB appliance and compare them with those resulting from the CTB appliance via cephalometric and dental cast measurements, together with a questionnaire assessing the levels of esthetics and discomfort at four assessment times.
Study design
This study was conducted as a two-arm, parallel-group randomized controlled trial at the Department of Orthodontics, Faculty of Dentistry, University of Damascus, between June 6, 2022, and April 4, 2023.
Ethical consideration
The University of Damascus Local Research Ethics Committee approved this study (no. 1205-06-12-2021). All methods were performed in accordance with the relevant guidelines and regulations.
Patients received information sheets, and written informed consent forms were collected after permission was obtained. This study is registered at ClinicalTrials.gov under the number NCT05418413 (14/06/2022).
Sample size calculation and participants
G*Power 3.1.9.7 software (Universität Düsseldorf, Düsseldorf, Germany) was used to calculate the sample size on the basis of the change in the ANB angle reported in a prior related study, with the following assumptions: a paired t test with a power of 90%, a significance level of 0.05, and an effect size of 0.83. Consequently, the required sample size was 24 patients for each group. However, with an assumed withdrawal rate of 10%, the required sample size was increased to 26 patients for each group.
Participants and eligibility criteria
The same orthodontist (M.N.A.) performed the treatments. Clinical examinations, intraoral and extraoral photographs, dental casts, and radiographic records were taken before orthodontic treatment was started for 93 patients referred to the Department of Orthodontics, University of Damascus, between June 2022 and August 2022, and 65 patients who fulfilled the inclusion criteria were identified. When the research project was presented to the patients, 57 agreed to participate. Consequently, 52 patients (19 males, 33 females) were randomly selected. The inclusion criteria were as follows: a skeletal class II division 1 relationship (ANB > 4°) with a retrognathic mandible (SNB < 78°); an overjet of 5–10 mm; a normal growth pattern (Björk sum < 402°); and being at the peak of the pubertal growth spurt (S or MP3cap epiphyseal stages on hand-wrist radiographs). The exclusion criteria were as follows: previous orthodontic treatment; systemic diseases that may affect the treatment results; severe facial asymmetry; posterior crossbite or severe maxillary transverse deficiency; flared lower incisors (L1: MP > 97°); poor oral hygiene; and inability to close the lips and breathe through the nose due to respiratory disorders.
Randomization, allocation concealment, and blinding
A computer-generated randomization list, created via Minitab® Version 19.1 (Minitab Inc., Pennsylvania, USA) by one of the academic staff (not involved in this research) at the Department of Orthodontics, was used to randomly divide the patients into two equal groups. The allocation sequence was concealed via sequentially numbered, opaque, sealed envelopes. Patient and practitioner blinding was not feasible. Therefore, blinding was applied only to the outcome assessor: the distributed plaster casts and cephalometric radiographs were recorded with serial numbers to ensure blinding and avoid bias in the investigation. Additionally, patients were asked not to tell the outcome assessors which treatment they had received.
Treatment method
Fifty-two patients (33 females and 19 males) aged 12.23 ± 0.77 years were included in the trial and were randomly divided into 2 equal groups. The first group was the ATB group (experimental group), and the 2nd group was the CTB group (control group). The CTB was designed according to Clark and consists of 2 plates with no midline screw in the maxillary plate (Fig. ). The ATB consisted of two 1.5 mm vacuum-formed hard plates (VFPs) (3A MEDES®, Easy-Vac Gasket, Gyeonggi-do, Republic of Korea) that were placed individually on a vacuum machine to form the base of the appliance.
The models with the VFPs and the reconstruction bite were mounted in a hinge-type articulator, and acrylic bite blocks with inclined planes at 70° to the occlusal plane were then fabricated on the VFPs, similar to the CTB (Fig. ). The appliances used in both groups were fabricated in the same laboratory, and clear acrylic was used (Fig. ). Both groups had a reconstruction bite with a single-step mandibular advancement and an edge-to-edge incisal relationship with a 2–3 mm bite opening between the central incisors. In the ATB group, bite registration was performed while the VFPs were in the patient's mouth, accounting for the thickness of the plates.
Follow-up during treatment
All participants and parents received both oral and written information on the treatment, oral hygiene and maintenance of the appliance and were instructed to wear the appliance full time except for eating and brushing (nighttime included) and to breathe through the nose with closed lips while the appliance was in place. The degree of compliance with appliance wear was measured via 'compliance charts', which were completed by the parents. Patients were recalled 1 week after the first fitting of the appliance and then every 3 weeks to check the appliance, monitor patient compliance, fill out the questionnaire and measure the overjet clinically with digital calipers after applying mild pressure on the chin while closing to ensure that there was no fake bite (Sunday bite). The active phase of functional therapy ended when the overjet was reduced to 1–2.5 mm and the occlusion settled into Class I.
Outcome measures
Skeletal and dental changes
Lateral cephalometric radiographs and trimmed dental casts were obtained at T0 (before treatment) and T1 (at the end of the active phase). All cephalometric radiographs were taken with the same device, a PAX 400 (VATECH Co., Ltd., Hwaseong, Korea), with the same settings. Sixteen angular variables and eleven linear variables (measured in millimeters) were evaluated on the lateral cephalometric radiographs, and six linear variables were used to study the arch dimensions on the dental casts: intercanine width, intermolar width and anterior arch length (Figs. and ).
The questionnaire
To assess the esthetic and discomfort levels during the treatment, a special questionnaire was used, which was derived and further modified from the questionnaire used by Sergl et al. (1998, 2000). It consists of four questions covering the following elements: pain, speech impairment, oral constraint and lack of confidence in public. All questionnaires were completed by the patients with the aid of their parents while the principal researcher (M.N.A.) was observing the procedure. Each subject completed the same questionnaire at the following times: 7 days (T1), 14 days (T2), three months (T3) and six months (T4) following initial appliance insertion. The questions were answered on a four-point Likert scale: 1, not at all; 2, little; 3, much; and 4, very much.
Error of the method
Six weeks after the first measurements, fifteen random cephalograms and fifteen random dental casts were measured and analyzed again to determine the method error. Reliability was evaluated via the intraclass correlation coefficient (ICC), which reflected strong intraexaminer reliability (ICC = 0.996). The overall errors were calculated via the formula of Dahlberg; they did not exceed 0.42 mm for the linear variables and 0.37° for the angular variables.
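Dahlberg's method error is defined as ME = sqrt(Σd²/(2n)), where d is the difference between the duplicate measurements and n is the number of double determinations. As a minimal illustration of how the Dahlberg error and a two-way random-effects, single-measure ICC(2,1) can be computed, consider the following Python sketch; the five paired values are illustrative, not the study's measurements.

```python
import numpy as np

def dahlberg(first, second):
    """Dahlberg's method error: sqrt(sum(d**2) / (2 * n))."""
    d = np.asarray(first, float) - np.asarray(second, float)
    return np.sqrt((d ** 2).sum() / (2 * d.size))

def icc_2_1(x):
    """ICC(2,1): two-way random effects, absolute agreement, single measure.
    x: (n records) x (k measurement sessions) array."""
    x = np.asarray(x, float)
    n, k = x.shape
    grand = x.mean()
    ssr = k * ((x.mean(axis=1) - grand) ** 2).sum()   # between records
    ssc = n * ((x.mean(axis=0) - grand) ** 2).sum()   # between sessions
    sse = ((x - grand) ** 2).sum() - ssr - ssc        # residual
    msr = ssr / (n - 1)
    msc = ssc / (k - 1)
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Illustrative repeated measurements of one cephalometric variable (degrees).
first = [79.2, 81.0, 77.5, 80.3, 78.8]
second = [79.0, 81.3, 77.8, 80.1, 78.9]
print(f"Dahlberg error: {dahlberg(first, second):.2f}")
print(f"ICC(2,1): {icc_2_1(np.column_stack([first, second])):.3f}")
```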
Statistical analysis
The statistical analysis was performed via SPSS for Windows (version 26.0; SPSS, Chicago, USA). The Shapiro–Wilk test was used to confirm the normal distribution of the data. Paired-sample t tests were used to assess the significance of the within-group changes between the pre- and posttreatment values, and independent-sample t tests were used to compare the treatment outcomes between the two groups. Differences in the questionnaire results between the two groups were detected via the Mann–Whitney U test. The level of significance was set at p < 0.05.
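For illustration, the same sequence of tests can be reproduced outside SPSS; the following is a minimal sketch in Python with SciPy, in which all arrays are placeholder values rather than the study's data.

```python
import numpy as np
from scipy import stats

# Placeholder data for one variable (e.g., SNB in degrees); not study data.
atb_pre = np.array([75.1, 74.8, 76.0, 75.5, 74.9, 75.7])
atb_post = np.array([77.0, 76.5, 77.8, 77.1, 76.6, 77.4])
ctb_pre = np.array([75.3, 74.6, 75.9, 75.2, 75.0, 75.6])
ctb_post = np.array([76.2, 75.5, 76.8, 76.0, 75.9, 76.5])

# Normality of the treatment changes (Shapiro-Wilk).
for label, change in [("ATB", atb_post - atb_pre), ("CTB", ctb_post - ctb_pre)]:
    w, p = stats.shapiro(change)
    print(f"{label} Shapiro-Wilk: W = {w:.3f}, p = {p:.3f}")

# Within-group pre/post comparison (paired t test).
t, p = stats.ttest_rel(atb_post, atb_pre)
print(f"ATB paired t: t = {t:.2f}, p = {p:.4f}")

# Between-group comparison of the treatment changes (independent t test).
t, p = stats.ttest_ind(atb_post - atb_pre, ctb_post - ctb_pre)
print(f"Between-group t: t = {t:.2f}, p = {p:.4f}")

# Ordinal questionnaire scores (four-point Likert): Mann-Whitney U.
atb_scores = [1, 2, 1, 2, 1, 1]
ctb_scores = [2, 3, 2, 3, 2, 2]
u, p = stats.mannwhitneyu(atb_scores, ctb_scores, alternative="two-sided")
print(f"Mann-Whitney U = {u:.1f}, p = {p:.4f}")
```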
Sample distribution
Fifty-two patients (33 females and 19 males) were included in the current trial. The ATB group comprised 26 patients (15 females and 11 males; mean age, 12.41 ± 0.75 years), whereas the CTB group included 26 patients (18 females and 8 males; mean age, 12.05 ± 0.76 years). The CONSORT flow diagram of patient recruitment, follow-up, and entry into the data analysis is given in (Fig. ).
Baseline data
The basic sample characteristics are provided in (Table ). The patients' initial ages were well matched between the two groups. Independent-sample t tests revealed no significant differences between the two study groups before treatment: the P values were far greater than 0.05 for all studied variables, indicating that the groups were equivalent at baseline with respect to both the angular and linear variables (Tables and ).
Dental and skeletal evaluation
The changes in the angular variables are shown in (Table ). Table shows a significant decrease in the ANB angle, which was caused by significant increases in the SNB and SNPog angles. These changes were significantly greater in the ATB group than in the CTB group (P = 0.002, P = 0.02 and P = 0.009, respectively). These desired effects were accompanied by proclination of the lower incisors of 1.34 ± 2.08° (P = 0.004) in the ATB group and 3.88 ± 2.47° (P = 0.000) in the CTB group; the proclination was significantly larger in the CTB group than in the ATB group (P = 0.000). The U1:NP and U1:SN angles decreased significantly in the CTB group, by −4.54 ± 3.25° (P = 0.000) and −4.18 ± 3.34° (P = 0.000), respectively, whereas these values decreased insignificantly in the ATB group; the differences between the CTB and ATB groups were significant (P ≤ 0.001 and P ≤ 0.008, respectively). Thus, there was retraction of the upper incisors in the CTB group but only insignificant changes in the ATB group. For the linear variables, Table shows that similar changes occurred in the two groups, including a significant decrease in the overjet, overbite, and Wits values and a significant increase in the Go-Me and N-Me values. Conversely, many differences were observed between the two groups. S-Go increased significantly, by 2.57 ± 2.17 mm (P = 0.000) in the ATB group and 0.85 ± 1.99 mm (P = 0.04) in the CTB group; the increase was significantly greater in the ATB group than in the CTB group (P = 0.005). The changes in N-Me and S-Go caused the Jarabak ratio to increase significantly in the ATB group, by 0.84 ± 1.44% (P = 0.007), and to decrease significantly in the CTB group, by −0.65 ± 1.37% (P = 0.02), with significant differences between the two groups (P ≤ 0.000); i.e., growth was vertical in the CTB group but horizontal in the ATB group. S-Pns and N-Ans increased significantly in the CTB group (by 0.78 ± 1.52 mm (P = 0.01) and 1.27 ± 1.72 mm (P = 0.001), respectively), whereas these values increased insignificantly in the ATB group, with no significant differences between the two groups (P = 0.102 and P = 0.425, respectively). Pns-Go and Ans-Me increased significantly in the ATB group (by 1.52 ± 2.62 mm (P = 0.008) and 1.98 ± 1.52 mm (P = 0.000), respectively), whereas these values increased insignificantly in the CTB group, with no significant differences between the two groups (P = 0.107 and P = 0.237, respectively).
There were no significant changes in the intercanine or intermolar width of the arches in either group. However, the upper anterior arch length decreased significantly, by −0.94 ± 0.20 mm (P = 0.000) in the CTB group and −0.42 ± 0.50 mm (P = 0.000) in the ATB group, with a significant difference between the two groups (P = 0.000), and the lower anterior arch length increased significantly, by 0.89 ± 0.40 mm (P = 0.000) in the CTB group and 0.37 ± 0.38 mm (P = 0.001) in the ATB group, with a significant difference between the two groups (P = 0.000).
Esthetic and discomfort evaluation
When the children were asked whether the appliance had been painful, the answers revealed that both the ATB and the CTB caused mild levels of pain, and this sensation decreased across the assessment times, with no significant differences between the two groups. The most disturbing complaint with the CTB was speech impairment, which was significantly greater in the CTB group than in the ATB group at T2, T3 and T4. The two appliances caused a mild degree of oral constraint and little restriction of lower jaw movements, which decreased in both groups over the assessment times, with no significant differences between the two groups. The CTB caused a high degree of 'lack of confidence in public', whereas the ATB caused only a small degree, and the differences between the two groups were significant at T1 and T2 (Table ; Fig. ).
Harms
No serious harm was observed.
Discussion

The ATB appliance is a modification of the twin block that uses VFPs as the base of the appliance (as in our study). Previous reports have described different results in terms of the skeletal and dentoalveolar changes produced by the ATB , prompting the present RCT on its effects. In this study, 1.5 mm VFPs and clear acrylic were used. All patients were at the peak of the pubertal growth spurt to ensure the best effects of the treatment. The cephalometric changes were evaluated at the end of the active phase of functional treatment.
Compliance with appliance wear was good (at least 16 h a day), as confirmed by 'compliance charts' completed by the parents.

Skeletal changes

In both groups, SNA decreased minimally, but this decrease was not significant. This might be due to the distal force exerted on the maxilla (headgear effect); therefore, it could be assumed that some restriction of maxillary growth occurred. The studies by Tripathi et al. and Singh et al. revealed restriction of the maxilla , , whereas the study by Golfeshan et al. did not reveal such restriction with the ATB . The differences between their results and the current study could be attributed to differences in working methods. A significant increase in mandibular length (Go-Pog) was observed in both groups, with no significant difference between the two groups (P = 0.13). This result agrees with the studies of Tripathi et al. and Golfeshan et al. regarding mandibular changes , . The increase in both groups was greater than that reported for other removable functional appliances, which may be due to the difference in the variable measured (Co-Pog instead of Go-Me) . The forward movement of the mandible, demonstrated by significant increases in the SNB and SNPog angles, has been reported in several studies , ; the ATB showed significantly greater increases in the SNB and SNPog angles, leading to a significantly greater decrease in the ANB angle, in agreement with previous studies , . In addition, this change was greater than that of other removable functional appliances (i.e., activator, bionator and Frankel) . The results of the current study indicate that both appliances effectively corrected skeletal Class II malocclusion, as evidenced by significant decreases in the ANB angle, overjet and Wits value during the treatment period, with the ATB being superior. Burhan et al., Mills and McCulloch reported that the CTB might be able to prevent any increase in the vertical dimension , . In our study, the two groups showed no significant changes in most vertical measurements except the Jarabak ratio, which increased significantly in the ATB group because of a significant increase in posterior face height. The reason for this might be the complete coverage of the dental arch by the VFPs, whose thickness, combined with the bite block height, leads to a greater opening of the leeway space in addition to lip closure, thus further promoting molar intrusion and inhibiting vertical growth (Fig. ). Singh et al. and Golfeshan et al. reported similar results , . This suggests that the ATB could be more beneficial in Class II patients with vertical growth patterns.

Dental changes

In the CTB group, lower incisor angulation increased by 3.88 ± 2.47°. Ehsani et al. reported significant proclination of the lower incisors during functional treatment with the CTB . The degree of lower incisor proclination in the present study was lower than that reported in most studies included in the abovementioned systematic review of the CTB. In the present study, a maximum pretreatment lower incisor angulation of 97° was required, which provides more bony anchorage in the lower labial segment, potentially explaining the differences in results. In the ATB group, lower incisor angulation increased by 1.34 ± 2.08°, significantly less than in the CTB group, in agreement with the findings of Golfeshan et al. and Tripathi et al. , .
This may be due to the complete coverage of the buccal surfaces of the lower incisors, down to the cervical margin, by the rigid VFP, which limits the effect of the mesial forces generated by the appliance and reinforces anchorage, providing greater stability in the sagittal dimension. In contrast, Singh et al. used a 1 mm VFP thickness (thinner than ours), and their small number of patients could also explain this difference . In addition, the increase in lower anterior arch length was smaller in the ATB group, which reinforces this result (Fig. ). This flaring in the ATB group was greater than that reported for the Frankel appliance and less than that for the activator and bionator . The results of the present study support the idea that the ATB provides more control over the lower incisors and potentially enhances skeletal correction. Upper incisor angulation decreased significantly in the CTB group, possibly because the distal force produced by the appliance is concentrated in the labial bow area and leads to uncontrolled tipping; in the ATB group, by contrast, the buccal and palatal surfaces were completely covered, which limits this tipping and resulted in insignificant retraction. The significant decrease in upper anterior arch length in the CTB group compared with the ATB group supports this interpretation (Fig. ). Tripathi et al. reported a significant decrease in upper incisor angulation in their ATB group, which may be due to a reduction in rigidity caused by splitting of the appliance by the expansion screw and the use of thinner VFPs . This retraction in the ATB group was less than that reported for other removable functional appliances (i.e., activator, bionator and Frankel) . There were no significant changes in the transverse dimensions at the canines or molars, possibly due to the absence of an expansion screw in either appliance.

Esthetics and discomfort

The two appliances caused a small amount of pain in the short term (T1 and T2), which then decreased gradually as the patients adapted to the pain and discomfort as treatment progressed, in agreement with Alhayek et al. . The CTB caused a greater level of speech impairment, which may be due to its design, in which the acrylic base extends over more of the palatal rugae and includes wire elements such as the labial bow and clasps. These results agree with Idris et al.'s study, which noted that speech impairment was greater with the Trainer T4k™, which had a greater extension than the activator . 'Oral constraint' was not a problem; only a small amount was reported by participants for both appliances at all assessment times, possibly because each appliance consisted of two plates, providing more freedom during jaw movement. The ATB had an esthetic appearance with no wire elements and a clear color, whereas the CTB had wire elements and caused greater speech impairment, which is likely one of the reasons for the greater acceptance of the ATB. These factors may lead to greater compliance with the ATB.
Limitations

A limitation of this research is the lack, for ethical reasons, of an untreated control group with which to assess natural growth changes. However, the differences observed between the two groups can be attributed to differences between the appliances, which fulfills the aim of the current research. Blinding was applied only for the outcome assessor when the casts and cephalometric radiographs were recorded; the lack of blinding of patients and clinicians might be considered a limitation of this study, but it was not possible owing to the visible differences between the appliances.

Conclusions

Both the CTB and the ATB can correct skeletal Class II malocclusion resulting from retrusion of the mandible, with some advantages of the ATB in mandibular advancement, control of lower and upper incisor angulation, and vertical growth control. Compared with the CTB, the ATB was superior in terms of esthetics and discomfort, which may lead to better compliance. The ATB is therefore preferred for mandibular advancement in growing Class II patients.
Btk Inhibitors: A Medicinal Chemistry and Drug Delivery Perspective

Bruton's tyrosine kinase (Btk), also known as agammaglobulinemia tyrosine kinase (TK), is a member of the Tec kinase family, initially identified in 1993 by Vetrie and coworkers as the defective protein in human X-linked agammaglobulinemia (XLA) . Btk is a cytoplasmic non-receptor TK expressed in most cells of the hematopoietic lineage, particularly B cells, mast cells and macrophages; by contrast, it is absent from T cells, NK cells and plasma cells . Btk plays an essential role in B cell lymphopoiesis, being important for the development, maturation and differentiation of immature B cells as well as for the proliferation and survival of B cells themselves . Btk is a fundamental component of B cell receptor (BCR) signaling and modulates intracellular signals triggered by a variety of cell surface molecules, engaging the PI3K, MAPK and NF-κB pathways ; in this way, it regulates activation, proliferation and differentiation into antibody-producing plasma cells . Consequently, Btk inhibition interrupts many downstream cell signaling pathways related to the development of B cell malignancies (e.g., different types of leukemias and lymphomas) and autoimmune diseases (e.g., rheumatoid arthritis (RA) and multiple sclerosis (MS)) . Thus, Btk represents an important target in drug development, with 24 Btk inhibitors (BtkIs) currently under clinical evaluation as anti-tumor agents against chronic lymphocytic leukemia (CLL), small lymphocytic lymphoma (SLL), B-cell malignancies and mantle cell lymphoma (MCL) in different countries (e.g., the USA, China and Poland) .
As reported in , Btk consists of five regions: the pleckstrin homology (PH) domain, the Tec homology (TH) domain, the Src homology 3 (SH3) domain, the Src homology 2 (SH2) domain and the C-terminal region carrying the kinase activity . The PH domain mediates protein–phospholipid and protein–protein interactions. The TH domain contains two proline-rich regions (PRRs) and is involved in autoregulation, whereas the SH2 and SH3 domains bind phosphorylated tyrosine residues and PRRs, respectively. The SH3 domain contains a key autophosphorylation site (residue Y223). Finally, the C-terminal part contains the catalytic kinase domain, embedding the Y551 residue responsible for initial Btk activation . Within the catalytic domain, the Cys481 residue represents the site of covalent binding of the most studied BtkIs. In detail, Btk is activated by spleen tyrosine kinase (Syk), which is in turn activated by the BCR . Upon activation, Btk phosphorylates the Y753 and Y759 residues of PLCγ2, leading to the production of IP3 and DAG and the activation of PKCβ. Intracellular calcium levels rise and the MAPK/ERK pathway is triggered, affecting the transcriptional expression of genes involved in proliferation, survival and cytokine secretion. Simultaneously, Btk can activate the Akt/NF-κB signaling pathway . Moreover, activated Btk is a mediator of pro-inflammatory signals, such as the inflammatory cytokines TNF-α and IL-1β, which are closely associated with the inflammatory response .
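As a rough orientation to the domain architecture described above, the following minimal Python sketch encodes approximate Btk domain boundaries (rounded values in the spirit of the UniProt Q06187 annotation; they are illustrative assumptions, not authoritative coordinates) and maps the key functional residues discussed in the text to their domains.

# Minimal sketch: approximate Btk domain boundaries (illustrative, rounded).
DOMAINS = {
    "PH":     (3, 133),
    "TH":     (134, 215),
    "SH3":    (217, 277),
    "SH2":    (281, 377),
    "kinase": (402, 655),
}

def domain_of(residue: int) -> str:
    """Return the Btk domain containing a residue number, or 'linker'."""
    for name, (start, end) in DOMAINS.items():
        if start <= residue <= end:
            return name
    return "linker"

# Key functional residues discussed in the text:
# Y223 (autophosphorylation), C481 (covalent binding site), Y551 (activation).
for res in (223, 481, 551):
    print(res, "->", domain_of(res))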
Structural information about the kinase is fundamental for optimal inhibitor design. Btk shows considerable structural plasticity, and distinct ligands can induce different states of the kinase, as observed in multiple crystal structures solved in various conformations. The X-ray crystal structures of active and inactive Btk bound to several inhibitors have been determined (PDB codes: 5P9J, 5P9H, 5P9M, 5P9L, 4OTF, 5P9F and 5P9G) . The different ligands proved to induce different states of the kinase, providing interesting insights into Btk structure and function. According to their mechanism of action and binding mode, BtkIs can be classified into two types: (i) irreversible inhibitors, characterized by a Michael acceptor moiety able to form a covalent bond with the conserved Cys481 residue in the ATP binding site; and (ii) reversible inhibitors, which bind to a specific pocket in the SH3 domain through weak, reversible interactions (e.g., hydrogen bonds or hydrophobic interactions) and thereby induce an inactive conformation of the kinase. The majority of currently approved BtkIs are irreversible inhibitors . However, the emergence of resistant mutants (especially to ibrutinib, the first drug launched on the market) has reduced their use . In particular, the isosteric replacement of Cys481 with a serine residue decreases the reactivity of the Btk variant towards ibrutinib and the other covalent inhibitors, with a reduction in compound potency. As an example, ibrutinib showed a sixfold reduction in potency against the C481S mutant (IC50 = 4.6 nM) . Furthermore, other site mutations involving both Cys481 (e.g., C481R, C481F, C481Y) and the gatekeeper residue Thr474 (T474I, T474S and T474M) have recently been evidenced. Although ibrutinib can still bind noncovalently to the C481S mutant, its now-reversible mechanism of action does not guarantee therapeutic efficacy in patients with this mutation . In this regard, non-covalent inhibitors that do not interact with Cys481 could inhibit the C481R, T474I and T474M mutants and represent an interesting therapeutic option . Moreover, to date, the use of reversible inhibitors seems to be more effective in treating autoimmune diseases such as RA, different types of MS, chronic graft versus host disease (cGVHD) and systemic lupus erythematosus (SLE) . More recently, some proteolysis-targeting chimera (PROTAC) molecules have been reported as a new therapeutic approach to reduce Btk activity .

3.1. Approved Btk Inhibitors

The three BtkIs currently approved for clinical use are ibrutinib ((R)-1-(3-(4-amino-3-(4-phenoxyphenyl)-1H-pyrazolo[3,4-d]pyrimidin-1-yl)piperidin-1-yl)prop-2-en-1-one; Imbruvica®, Pharmacyclics LLC, Sunnyvale, CA, USA), acalabrutinib (4-((3S)-8-amino-3-((R)-1-(but-2-ynoyl)pyrrolidin-2-yl)-3,8a-dihydroimidazo[1,5-a]pyrazin-1-yl)-N-(pyridin-2-yl)benzamide; Calquence®, AstraZeneca Pharmaceuticals LP, Gaithersburg, DE, USA) and zanubrutinib (7-(1-acryloylpiperidin-4-yl)-2-(4-phenoxyphenyl)-4,5,6,7-tetrahydropyrazolo[1,5-a]pyrimidine-3-carboxamide; Brukinsa®, BeiGene USA, Inc., San Mateo, CA, USA) ( A) . As highlighted in , these compounds share some structural similarity, although they are built on different pyrazolo[3,4-d]pyrimidine, dihydroimidazo[1,5-a]pyrazine and tetrahydropyrazolo[1,5-a]pyrimidine scaffolds.
In addition, ibrutinib and zanubrutinib present a common 4-phenoxyphenyl substituent at position 3 of the pyrazole nucleus and a piperidin-1-yl-prop-2-en-1-one chain very similar to that of acalabrutinib. Furthermore, ibrutinib and acalabrutinib display a free amino group on the heteroaromatic core. On the basis of the available crystallographic data, ibrutinib and zanubrutinib share similar bioactive conformations within the wild-type Btk binding site ( B) . Besides the covalent bond with Cys481, the two complexes are mainly stabilized by similar interactions, including the cation-π contact between the phenoxyphenyl ring and the Lys430 side chain and the hydrogen bonds with the Met477 and Glu475 backbones ( C,D). Covalent interaction is thus not strictly required to generate a potent Btk inhibitor; however, by trapping the enzyme in a covalent dead-end complex, irreversible BtkIs achieve very high potency . Ibrutinib (also named PCI-32765) is a first-in-class Btk inhibitor. After the failure of LFM-A13 in 1999 , ibrutinib was initially chosen in 2007 for preclinical development in in vivo models of RA . In 2010, Honigberg and coworkers reported the efficacy of this compound in B-cell lymphoma , and in 2013 it was approved by the FDA for the treatment of CLL, SLL, Waldenström's macroglobulinemia (WM), marginal zone lymphoma (MZL) and relapsed/refractory MCL. In 2017, the compound also received approval for cGVHD patients after failure of one or more lines of systemic therapy . Selectivity is an important factor influencing the long-term safety of a drug, but it is impossible to predict every off-target protein that a covalent inhibitor may bind. In addition to Btk, ibrutinib also inhibits other kinases that possess Cys481-like residues, including Blk, Bmx, EGFR, ErbB2, ErbB4, Itk, Tec, Txk and Jak . Interestingly, ibrutinib also potently inhibits kinases that lack a reactive cysteine, such as Csk, Fgr, Lck, Brk, Hck, Yes1, Frk, Ret, Flt3, Abl, Fyn, Lyn and Src . A recent proteomic study showed that ibrutinib can also covalently react with non-kinase proteins in cells . To overcome ibrutinib's off-target side effects (i.e., dermatological problems , bleeding, infection , headache and atrial fibrillation) and the emerging resistances , selective second-generation BtkIs were developed. Acalabrutinib (ACP-196, A) is a second-generation Btk inhibitor designed by Acerta Pharma . It was approved in 2017 and is currently indicated for patients with relapsed/refractory MCL as well as CLL/SLL . Zanubrutinib (BGB-3111, A) was developed by BeiGene in 2012 as a potential candidate on the basis of its high potency, selectivity, in vitro pharmacokinetics and pharmacodynamics in an OCI-LY10 DLBCL xenograft model . It was approved in 2019 for patients with MCL who have received at least one prior therapy, becoming the first China-developed drug to receive FDA approval. Although ibrutinib, acalabrutinib and zanubrutinib are all irreversible inhibitors that covalently bind Cys481 in the ATP binding pocket of Btk, their activity against the enzyme differs considerably (IC50 values of 1.5, 5.1 and 0.5 nM for ibrutinib, acalabrutinib and zanubrutinib, respectively). Moreover, as previously reported, none of the three compounds is selective for Btk, showing low nanomolar inhibition of other intracellular (e.g., Tec, Itk, Blk, Jak) and receptor (e.g., epidermal growth factor receptor, EGFR) tyrosine kinases.
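To put the IC50 values quoted above in perspective, the short Python sketch below converts them into expected fractional target inhibition at a hypothetical exposure using the simple reversible-binding relation f = [I]/([I] + IC50). This deliberately ignores the time-dependent covalent kinetics that actually govern irreversible BtkIs, so it is an illustrative approximation only; the 10 nM exposure is an arbitrary assumption.

# Minimal sketch: comparing reported IC50 values (nM) under a simple
# reversible-competition model (illustrative; covalent kinetics are ignored).
IC50_NM = {"ibrutinib": 1.5, "acalabrutinib": 5.1, "zanubrutinib": 0.5}

def fraction_inhibited(conc_nm: float, ic50_nm: float) -> float:
    """Fraction of enzyme inhibited at a given inhibitor concentration."""
    return conc_nm / (conc_nm + ic50_nm)

for name, ic50 in IC50_NM.items():
    f = fraction_inhibited(10.0, ic50)        # hypothetical 10 nM exposure
    fold = ic50 / IC50_NM["zanubrutinib"]     # potency relative to zanubrutinib
    print(f"{name}: {100*f:.0f}% inhibition at 10 nM; {fold:.1f}x the zanubrutinib IC50")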
The lack of specificity is probably associated with adverse events such as rash and severe diarrhea . In particular, acalabrutinib represents the most selective of the three compounds , being inactive against Itk, EGFR, ErbB2, Blk and Jak3. Zanubrutinib and acalabrutinib are less active than ibrutinib against Tec and Itk and therefore cause less platelet dysfunction and fewer bleeding problems, with less interference with antibody-dependent cell-mediated cytotoxicity . In addition, thrombus formation was significantly inhibited in platelets treated with ibrutinib, whereas no impact on thrombus formation was identified upon treatment with acalabrutinib, which therefore displays an improved safety profile with minimal adverse effects compared with ibrutinib . Very recently, Fancher and coworkers reported an interesting study on the drug–drug and drug–food interactions associated with ibrutinib, acalabrutinib and zanubrutinib, providing recommendations for their clinical use, particularly regarding dosage . The therapeutic indications, dosages and total number of clinical trials of ibrutinib, acalabrutinib and zanubrutinib are reported in . The majority of clinical trials focus on the diseases for which these compounds have been approved, but in the past few years these molecules have also been evaluated for their immunomodulatory activity (as previously reported), with cGVHD and, more recently, COVID-19 being the principal targets of BtkI application. For cGVHD, a condition that might occur after an allogeneic transplant, the most studied compound is ibrutinib, with nine clinical trials carried out (one completed, five recruiting, two active but not recruiting, one enrolling by invitation; ). In detail, ibrutinib has been evaluated in association with rituximab (NCT03689894 and NCT04235036) and corticosteroids (NCT02959944) and in a comparative study with ruxolitinib (NCT03112603). Additionally, two clinical trials (NCT04198922 and NCT04716075; ) are currently recruiting participants for the evaluation of acalabrutinib for the treatment of cGVHD. Despite the evidence of a beneficial effect of anti-inflammatory agents in MS , no clinical trials regarding the use of irreversible BtkIs in this pathology are currently reported. Considering the systemic hyperinflammation and cytokine storm induced by COVID-19 infection, and the active role of Btk in macrophage function via NF-κB pathways, several clinical studies of BtkIs in hospitalized COVID-19 patients were initiated in 2020 . To date, two clinical trials are reported for ibrutinib, three for acalabrutinib (two of them completed) and one for zanubrutinib . Although these data support the potential use of BtkIs in COVID-19 treatment, the potential increased risk of secondary infections or impaired humoral immunity should be considered; indeed, opportunistic infections (particularly pneumonia) are commonly reported in BtkI-treated patients . In detail, data from the CALAVI phase II trials (NCT04497948) of acalabrutinib in hospitalized COVID-19 patients did not meet the primary efficacy endpoints and, on the basis of these results, the study was prematurely terminated .

Drug Delivery of Ibrutinib

Nanotechnologies represent an effective approach to overcoming the pharmacokinetic issues associated with small molecules (e.g., poor water solubility, limited oral bioavailability, large distribution volume), and the administration of a single nanoparticle (NP) containing several drugs has proved more effective than the administration of several NPs each containing one compound .
The poor water solubility of ibrutinib limits its absorption and bioavailability, negatively affecting its therapeutic effect. Recently, to improve its efficacy in cancer therapy, different nanoformulations of ibrutinib have been developed, including gold and polymeric NPs and an aqueous nanosuspension for oral administration. One of the first studies on ibrutinib delivery focused on the innovative approach of targeting cancer cells with an increased cholesterol demand . Gold nanoparticles functionalized with apolipoprotein A-I and a phospholipid bilayer (HDL NPs) were able to reduce cellular cholesterol uptake in B-cell lymphoma and to synergize with inhibitors of downstream B-cell receptor signaling, including ibrutinib. The study demonstrated that activated B-cell-like (ABC) lymphoma cell lines are more resistant to cholesterol reduction by HDL NPs, but the combination of these nanoparticles with ibrutinib (5 nM) significantly reduced total cellular cholesterol. The data confirmed that cellular cholesterol depletion induces apoptosis in lymphoma cells and provided a rational approach for targeting cholesterol metabolism in other cholesterol-dependent cancer types. Sanchez-Coronilla and coworkers presented a theoretical study of ibrutinib conjugated with cysteine/methyl-cysteine and a gold surface. In particular, the interaction of the drug with a gold surface was studied to explore the possibility of using gold NPs as an ibrutinib delivery system. Based on the results, the authors concluded that gold NPs could represent a valuable delivery system for ibrutinib, which can interact with gold through the nitrogen atoms of the pyrimidine ring and the amino group. Interestingly, the ibrutinib acrylamide group would not be involved in the interaction with the gold surface and could therefore still react with the Cys481 side chain, thus inhibiting the Btk enzyme. Additionally, several polymeric NPs have emerged as very promising for enhancing ibrutinib action. Peng and coworkers reported the preparation of cellulose derivative NPs obtained by conjugating 2,3-dialdehyde cellulose (DAC) with oleylamine and aminoethyl rhodamine (AERhB) via Schiff base bonds. AERhB was used as a model compound representative of amine-containing anticancer drugs such as ibrutinib. Two kinds of NPs (namely, DAC-50% oleylamine/50% AERhB and DAC-75% oleylamine/25% AERhB) were used for drug release studies under both physiological (pH = 7.4) and acidic (pH = 5.0 and pH = 4.0) conditions to mimic the environment existing in cancerous tissues. After 48 h at 37 °C, the determined release percentages were 23.3%, 64.9% and 84.8% at pH 7.4, 5.0 and 4.0, respectively. These results indicated that the release of the model drug was predominantly driven by acid-induced degradation of the Schiff base linkages. Qui and coworkers described the preparation of self-assembled nanocomplexes constituted by sialic acid conjugated with stearic acid, ibrutinib and egg phosphatidylglycerol. The efficiency of this system in targeting macrophages and its efficacy in inhibiting tumor progression were investigated in vitro and in vivo. The results indicated that the nanocomplex exhibited high efficiency in targeting tumor-associated macrophages, inhibiting Btk activation and Th2 tumorigenic cytokine release, reducing angiogenesis and suppressing tumor growth. The authors concluded that the developed nanocomplexes could be a promising strategy for ibrutinib delivery with minimal systemic side effects.
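As a rough, hedged illustration of how the pH-dependent release data above can be summarized, the sketch below back-calculates apparent first-order release constants from the single 48 h time point, assuming f(t) = 1 − exp(−kt). This single-point estimate is for orientation only; a real analysis would fit the full release profile.

# Minimal sketch: apparent first-order release constants from one time point.
# Assumes f(t) = 1 - exp(-k*t), a simplification for illustration only.
import math

T_H = 48.0
release = {7.4: 0.233, 5.0: 0.649, 4.0: 0.848}  # pH -> fraction released at 48 h

for ph, f in release.items():
    k = -math.log(1.0 - f) / T_H   # apparent rate constant (1/h)
    t_half = math.log(2.0) / k     # time to release half of the payload
    print(f"pH {ph}: k ~ {k:.4f} 1/h, release half-time ~ {t_half:.0f} h")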
Notably, two papers reported ibrutinib formulations for oral administration. In the first, an aqueous nanosuspension containing ibrutinib and Pluronic F-127 as stabilizing agent was optimized through a three-level, three-factor Box–Behnken design . The technological properties of the obtained nanosuspension (i.e., particle size between 278.6 and 453.2 nm, stability of the freeze-dried formulation up to 6 months and controlled drug release) as well as its in vivo pharmacokinetics were properly characterized. A second investigation considered the oral bioavailability and pharmacokinetics of poly(lactic-co-glycolic acid) NPs (PLGA-NPs) loaded with ibrutinib . In detail, PLGA is a polymer used for its biocompatibility, biodegradability and tunable physicochemical and formulation characteristics. After administration, it is transformed into lactic acid and glycolic acid, endogenous materials that do not cause immunogenic reactions. In this study, PLGA-NPs composed of 75% lactic acid and 25% glycolic acid (75:25) were used to ensure slower degradation of the nanoparticles and a consequently sustained drug release. The authors observed a 4.2-fold enhancement in the oral bioavailability of ibrutinib-loaded PLGA-NPs in comparison with an ibrutinib suspension, which could be attributed to the better absorption and higher exposure of the nanoformulation.

3.2. BtkIs under Clinical Investigation

In recent years, many molecules able to block Btk, both irreversibly and reversibly, have been patented and reported in the literature. In the past ten years , different chemical scaffolds (e.g., pyrimidines, 2,4-diaminopyrimidines, 1,3,5-triazines and condensed structures such as pyrazolo-pyrimidines, pyrazolo-pyridines, pyrrolo-pyrimidines, pyrrolo-triazines, imidazo-pyrazines, imidazo-pyrimidines, imidazo-quinoxalines and purines) have been deeply investigated. In this review, we summarize the recently developed (ir)reversible BtkIs studied in clinical trials, covering recent advances in the field of medicinal chemistry.

3.2.1. Irreversible BtkIs

Spebrutinib , evobrutinib , olmutinib , tirabrutinib , elsubrutinib (ABBV-105) and tolebrutinib (SAR442168) are irreversible BtkIs currently under clinical investigation . These compounds share an α,β-unsaturated carbonyl moiety (essential for covalent bond formation) and an aromatic ring (preferentially a pyrimidine nucleus), free or fused with other five-membered rings . Furthermore, with the sole exception of elsubrutinib, these derivatives are basic compounds that inhibit Btk at nanomolar concentrations (IC50 values of 0.5 nM, 37.9 nM, 1.0 nM and 2.2 nM for spebrutinib, evobrutinib, olmutinib and tirabrutinib, respectively) . In March 2020, tirabrutinib was approved in Japan (at an oral dosage of 480 mg) for the treatment of recurrent or refractory primary central nervous system lymphoma and is now also under study for the treatment of WM, lymphoplasmacytic lymphoma, other B cell malignancies (chronic lymphocytic leukemia, B cell lymphoma) and a number of autoimmune disorders (Sjögren's syndrome, pemphigus and RA) . Olmutinib also irreversibly inhibits EGFR which, like other kinases bearing an analogous reactive cysteine (e.g., Jak3, Her2, Her4 and Blk), shares with Btk a targetable cysteine residue (namely, Cys797 in EGFR) . TG-1701 (TG Therapeutics), TAS5315 (Taiho Pharmaceutical) and M7583 (TL-895, EMD Serono) are irreversible BtkIs (molecular structures not disclosed) currently in clinical trials for B-cell malignancies (NCT03671590), RA (NCT03605251) and MCL (NCT02825836), respectively.
In particular, TAS5315 is a pyrazolo[3,4-d]pyrimidine derivative currently in a phase II clinical trial for RA treatment . In addition, DTRMWXHS-12 (also named DTRM-12, formula not disclosed) is a pyrazolo-pyrimidine derivative acting as an irreversible Btk inhibitor, currently in three phase I clinical trials for different types of leukemia (such as CLL) and lymphoma (such as MCL) . The presence of reactive Michael acceptor groups in the above-mentioned irreversible BtkIs can lead to undesired side effects such as allergic reactions, fever, lymphadenopathy, edema and albuminuria due to off-target inhibition .

3.2.2. Reversible BtkIs

Noncovalent BtkIs offer several advantages over existing covalent inhibitors. Whereas covalent inhibitors lose potency against Cys481 mutants, some noncovalent inhibitors retain potent inhibition of the C481S and C481R Btk variants, providing a potentially effective treatment option for ibrutinib-resistant or naïve patients . Furthermore, reversible inhibitors carry a lower risk of toxicity than irreversible compounds, and for these reasons some of them (i.e., fenebrutinib, vacabrutinib, BMS-986142, BIIB068, CT-1530, AC0058 and SHR1459) are under clinical investigation for long-term drug administration in the treatment of autoimmune diseases, especially RA . In the past ten years, a plethora of reversible BtkIs have been patented ; for example, Genentech generated over 1000 noncovalent BtkIs covering a broad range of chemical substructures and physicochemical properties . Substantial efforts have been made over the years to further develop reversible inhibitors, but unfortunately none of the studied compounds has yet yielded a significant breakthrough . The most recent and relevant reversible BtkIs are reported in . In particular, fenebrutinib is a potent Btk inhibitor (IC50 = 0.5 nM) endowed with good selectivity, a favorable pharmacokinetic profile and efficacy against the Btk C481S mutant , , whereas vacabrutinib potently inhibits both Btk and Itk . Fenebrutinib is currently under clinical evaluation in eight trials (i.e., NCT03693625, NCT03596632, NCT04586023, NCT04586010, NCT04544449, NCT01991184, NCT03137069, NCT02908100) focused in particular on autoimmune conditions, whereas the NCT03037645 trial assessed vacabrutinib activity in different hematological tumors . The pyrimidinone derivative BMS-986142 is currently under clinical study for its activity in RA (NCT02880670, NCT02456844, NCT02762123, NCT02832180, NCT02638948) and Sjögren's syndrome (NCT02843659); moreover, a clinical study (NCT02257151) in healthy adults has been completed. RN-486 displayed IC50 values of 4 nM, 43 nM and 64 nM against Btk, Slk and Tec, respectively . Currently, this compound is in preclinical investigation for RA. Unfortunately, RN-486, as well as GDC-0834 (developed from compound CGI-1746 and under study for arthritis treatment), showed poor stability and pharmacokinetic profiles . BIIB068 demonstrated good kinome selectivity (IC50 = 1 nM for Btk) and good overall drug-like properties for oral dosing; it was well tolerated across preclinical species at pharmacologically relevant doses, displayed good ADME properties and achieved >90% inhibition of Btk phosphorylation in humans . BIIB068 appears promising in SLE (clinical study NCT02829541). GNE-431 is a new and interesting molecule, able to inhibit the C481R, T474I and T474M mutants and representing the first example of a "pan-Btk" inhibitor.
In detail, GNE-431 showed an IC50 of 3.2 nM against wild-type Btk and similar potency against the C481S mutant (IC50 = 2.5 nM) . To date, no clinical studies are reported for this compound. The broad activity of GNE-431 against Btk mutants (namely, C481S, C481R, T474I and T474M) has been structurally rationalized by docking simulations . The ligand would assume an extended conformation, oriented orthogonally with respect to ibrutinib. The hexahydropyrazino[1,2-a]indol-1(2H)-one moiety of GNE-431 would be inserted into the H3 subpocket (a unique site in Btk), and its imidazopyridazine core would be able to interact with the hydrophobic gatekeeper residue at position 474. Furthermore, GNE-431 would interact only weakly with the residue at position 481, and its binding mode would be marginally affected by the different steric requirements of residues at position 474. Many other molecules are under clinical and preclinical study (APQ531, SHR1459, CT-1530, AC0058), but their chemical structures are not disclosed.

3.2.3. Emerging Reversible Covalent BtkIs

One approach to the discovery of next-generation BtkIs with high potency, enhanced selectivity profiles, reduced off-target effects and tunable residence times is the design of compounds able to form reversible covalent bonds with the Cys481 residue and temporarily inactivate the enzyme. PRN1008 (rilzabrutinib; , ) is in early clinical studies for RA treatment . PRN-1008 is a potent, selective and reversible covalent inhibitor of Btk (IC50 = 3.1 nM), inhibiting the kinase by forming a covalent bond with the Cys481 residue . In vivo, PRN-1008 demonstrated enduring pharmacodynamic effects and suppressed collagen-induced arthritis in rats in a dose-dependent manner. These data support the continued development of PRN-1008 as a therapeutic agent for RA. In addition, in 2017, Principia Biopharma announced that PRN-1008 had been designated an orphan drug in the USA for the treatment of pemphigus vulgaris; by significantly reducing prednisone use and its related risks, it might become an important treatment option for this devastating disease . Bradshaw and coworkers , using structure-based design, developed a series of reversible covalent Btk inhibitors related to PRN1008. These compounds embed a reversible cyanoacrylamide-based electrophile attached to a pyrazolo-pyrimidine scaffold via an amine-containing heterocyclic linker (piperidine or pyrrolidine). The cyanoacrylamide functionality is capped with various branched alkyl groups with different steric and electronic properties, and the nature of the capping group affects the residence time of the reversible inhibitors. Compound 1 was identified as a promising lead with sustained Btk occupancy. In the crystal structure of the Btk-1 complex (PDB code 4YHF) , the ligand is covalently bonded to Cys481 and the amino-pyrrolopyrimidine portion forms hydrogen bonds with Thr474, Glu475 and Met477 ( B). Interestingly, the tert-butyl group shields the proton attached to the Cα, thus preventing breakage of the thioether bond with Cys481. To enhance the solubility and oral bioavailability of compound 1, a series of methylpyrrolidine-containing compounds was further developed as reversible covalent BtkIs. Compound 2 exhibited an increased residence time but dissociated rapidly and quantitatively upon Btk turnover, proteolysis, resynthesis and interaction with cellular binding partners.
Additionally, derivative 2 proved to be as effective as the irreversible covalent inhibitor ibrutinib in decreasing tumor cell invasiveness and blocking Btk activity . This interesting new class of molecules combines the advantages of both reversible and irreversible binding mechanisms; its members share a pyrrolopyrimidine scaffold that blocks the Btk active site through hydrogen bonds and hydrophobic interactions, together with a reactive, modified cyanoacrylamide electrophile able to form a tunable covalent bond with the exposed Cys481 residue.
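To make the notion of a "tunable residence time" concrete, the minimal sketch below contrasts a fast-reversible and a reversible covalent binder under a standard single-step dissociation model, in which the residence time is tau = 1/k_off and target occupancy after inhibitor washout decays as exp(−k_off·t). The rate constants are hypothetical placeholders, not measured values for compound 1, compound 2 or PRN-1008.

# Minimal sketch of residence time under a single-step dissociation model.
# k_off values below are hypothetical, chosen only to contrast the two regimes.
import math

K_OFF_PER_H = {"fast-reversible": 10.0, "reversible-covalent": 0.05}

for label, k_off in K_OFF_PER_H.items():
    tau = 1.0 / k_off                  # residence time (h)
    occ_24h = math.exp(-k_off * 24.0)  # fraction still bound 24 h after washout
    print(f"{label}: tau = {tau:.2f} h, occupancy after 24 h washout = {occ_24h:.2%}")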
The three BtkIs currently approved for clinical use are ibrutinib ((S)-1-(3-(4-amino-3-(4-phenoxyphenyl)-1H-pyrazolo [3,4- d ]pyrimidin-1-yl)piperidin-1-yl)prop-2-en-1-one (Imbruvica ® , Pharmacyclics LLC, Sunnyvale, CA, USA); acalabrutinib (4-((3S)-8-amino-3-((R)-1-(but-2-ynoyl)pyrrolidin-2-yl)-3,8a-dihydroimidazo[1,5- a ]pyrazin-1-yl)- N -(pyridin-2-yl)benzamide (Calquence ® , AstraZeneca Pharmaceuticals LP, Gaithersburg, DE, USA); and zanubrutinib (7-(1-acryloylpiperidin-4-yl)-2-(4-phenoxyphenyl)-4,5,6,7-tetrahydropyrazolo[1,5- a ]pyrimidine-3-carboxamide (Brukinsa ® , BeiGene USA, Inc., San Mateo, CA, USA) ( A) . As highlighted in , these compounds share some structural similarity, although they are characterized by different pyrazolo[3,4- d ]pyrimidine, dihydroimidazo[1,5- a ]pyrazine and tetrahydropyrazolo[1,5- a ]pyrimidine scaffolds. In addition, ibrutinib and zanubrutinib present a common 4-phenoxyphenyl substituent at position 3 of the pyrazole nucleus and a piperidin-1-yl-prop-2-en-1-one chain, very similar to that of acalabrutinib. Furthermore, ibrutinib and acalabrutinib display a free amino group on the heteroaromatic core nucleus. On the basis of the crystallographic data available, ibrutinib and zanubrutinib share similar bioactive conformations within the wild-type Btk binding site ( B) . Therefore, beside the covalent bond with Cys481, the two complexes are mainly stabilized by similar interactions which include the cation-π contact between the phenoxyphenyl ring and Lys430 side chain and the hydrogen bonds with Met477 and Glu475 backbones ( C,D). It is clearly demonstrated that covalent interaction is not required to generate a potent Btk inhibitor, but with the ability to trap the enzyme in a covalent dead end complex, covalent irreversible BtkIs have a great potency . Ibrutinib (also named PCI-32765) is a first-in-class Btk inhibitor. After the failure of LFM-A13 in 1999 , ibrutinib was initially chosen for preclinical development of in vivo models of RA in 2007 . In 2010, Honigberg and coworkers reported the efficacy of this compound in B-cell lymphoma and subsequently, in 2013, it was approved by the FDA for the treatment of CLL, SLL, Waldenström’s macroglobulinemia (WM), marginal zone lymphoma (MZL) and relapsed/refractory MCL. In 2017, the compound received approval also for cGVHD patients after failure of one or more lines of systemic therapy . Selectivity is an important factor influencing the long-term safety of a drug, but it is impossible to predict every off-target protein that a covalent inhibitor may bind. In addition to Btk, ibrutinib also inhibits other kinases that possess Cys481-like residues including Blk, Bmx, Egfr, ErbB2, ErbB4, Itk, Tec, Txk and Jak . Interestingly, ibrutinib also potently inhibits kinases that lack a reactive cysteine, such as Csk, Fgr, Lck, Brk, Hck, Yes1, Frk, Ret, Flt3, Abl, Fyn, Lyn and Src . A recent proteomic study showed that ibrutinib could also covalently react with non-kinase proteins in cells . To overcome ibrutinib off-target side effects (i.e., skin and dermatological problems , bleeding, infection , headache and atrial fibrillation) and the emerging resistances , some selective second-generation BtkIs were developed. Acalabrutinib (ACP-196, A) is a novel second-generation Btk inhibitor, designed by Acerta Pharma . It was approved in 2017 and is currently indicated for patients with relapsed/refractory MCL as well as CLL/SLL . 
Zanubrutinib (BGB-3111, A) was developed by BeiGene in 2012 as a potential candidate due to its high potency, selectivity, in vitro pharmacokinetics and pharmacodynamics in an OCI-LY10 DLBCL xenograft model . It was approved in 2019 for patients with MCL who have received at least one prior therapy, becoming the first Chinese-origin drug that won a grand slam tournament in FDA history. Although ibrutinib, acalabrutinib and zanubrutinib are irreversible inhibitors able to covalently bind cysteine 481 in the ATP binding pocket of Btk, their activity on the enzyme is quite different (IC 50 values = 1.5, 5.1 and 0.5 nM for ibrutinib, acalabrutinib and zanubrutinib, respectively). Moreover, as previously reported, all three compounds are not selective for Btk, showing inhibition in the low nanomolar range of other type of intracellular (e.g., Tec, Itk Blk, Jak) and receptor (e.g., epidermal growth factor receptor, EGFR) tyrosine kinases. The lack of specificity is probably associated with rash and severe diarrhea . In particular, acalabrutinib represents the most selective compound , being inactive on Itk, EGFR, ERBB2, Blk and Jak3. Zanubrutinib and acalabrutinib are less active than ibrutinib against Tec and Itk and therefore show less platelet disfunction and bleeding problems and antibody-dependent cell-mediated cytotoxicity . In addition, thrombus formation was significantly inhibited in platelets treated with ibrutinib, whereas no impact on thrombus formation was identified upon treatment with acalabrutinib that therefore displays an improved safety profile with minimal adverse effects compared with ibrutinib . Very recently, Fancher and coworkers reported an interesting study on drug–drug and drug–food interactions associated with ibrutinib, acalabrutinib and zanubrutinib, providing recommendations for their usage, particularly on dosage, in clinical use . The therapeutic indications, dosage and the total number of clinical trials of ibrutinib, acalabrutinib and zanubrutinib are reported in . The majority of clinical trials are focused on diseases for which these compounds have been approved, but in the past few years, these molecules have been evaluated for their immunomodulation activity (as previously reported), with cGVHD and more recently COVID-19 being the principal objects of BtkI application. For cGVHD, a condition that might occur after an allogeneic transplant, the most studied compound is ibrutinib, with nine clinical trials carried out (one completed, five in recruitment, two active studies not recruiting, one enrolling by invitation, ). In detail, ibrutinib has been evaluated in association with rituximab (NCT03689894 and NCT04235036) and corticosteroids (NCT02959944) and in a comparative study with ruxolitinib (NCT03112603). Additionally, two clinical trials (NCT04198922 and NCT04716075; ) are currently recruiting participants for the evaluation of acalabrutinib for the treatment of cGVHD. Despite the evidence of the beneficial effect in MS of anti-inflammatory agents , currently no clinical trials regarding the use of irreversible BtkIs in this pathology are reported. Considering systemic hyper-inflammation, cytokine storm induced by COVID-19 infection and the active role in macrophage function at NF-kB pathways of Btk, in 2020, some clinical studies on hospitalized COVID-19 patients have been carried out on BtkIs . To date, two clinical trials are reported for ibrutinib, three for acalabrutinib (two of them completed) and one for zanubrutinib . 
Although all these data support the potential use of BtkIs in the COVID-19 treatment, the potential increased risk of secondary infections or impaired humoral immunity in patients should be considered; indeed, opportunistic infections (particularly pneumonia) are commonly reported in treated BtkI patients . In detail, data from the CALAVI phase II trials (NCT04497948) for acalabrutinib in hospitalized COVID-19 patients did not meet the primary efficacy endpoints and, on the base of these results, this study has been prematurely terminated . Drug Delivery of Ibrutinib Nanotechnologies represent an effective approach to overcome the pharmacokinetic issues associated with small molecules (e.g., poor water solubility, limited oral bioavailability, large distribution volume) and the administration of a single nanoparticle (NP) containing several drugs proved to be more effective than the administration of several NPs each containing one compound . The poor water solubility of ibrutinib limits its absorption and bioavailability, negatively effecting the drug’s therapeutic effect. Recently, to improve its efficacy in cancer therapy, different nanoformulations of ibrutinib have been developed, including gold and polymeric NPs or aqueous nanosuspension for oral administration. One of the first studies regarding ibrutinib delivery was focused on the innovative approach of targeting cancer cells with an increased cholesterol demand . Gold nanoparticles functionalized with apolipoprotein A-I and a phospholipid bilayer (HDL NPs) were able to reduce cellular cholesterol uptake in B-cell lymphoma and synergize with inhibitors of downstream B-cell receptor signaling, including ibrutinib. The study demonstrated that the ABC lymphoma cell lines are more resistant to the reduction in cholesterol by HDL NPs, but the combination of these nanoparticles with ibrutinib (5 nM) significantly reduced total cellular cholesterol. The obtained data confirmed that cellular cholesterol depletion induces apoptosis in lymphoma cells and provided a rational approach to target cholesterol metabolism in other cancer types that are cholesterol-dependent. Sanchez-Coronilla and coworkers presented a theoretical study with ibrutinib conjugated with cysteine/methyl-cysteine and gold surface. In particular, the interaction of the drug with a gold surface was studied to explore the possibility to use gold NPs as an ibrutinib delivery system. Based on the obtained results, the authors concluded that gold NPs could represent a valuable delivery system for ibrutinib that can interact with gold through the nitrogen atoms of the pyrimidine ring and the amino group. Interestingly, the ibrutinib acrylamide group would not be involved in the interaction with the gold surface and therefore can react with the Cys481 side chain, thus inhibiting the Btk enzyme. Additionally, several polymeric NPs emerged to be very promising in enhancing ibrutinib action. Peng and coworkers reported the preparation of cellulose derivative NPs obtained by conjugating 2,3-dialdehyde cellulose (DAC) with oleylamine and aminoethyl rhodamine (AERhB) via Schiff base bonds. AERhB was used as a model compound representative of amine-containing anticancer drugs, such as ibrutinib. Two kinds of NPs (namely, DAC-50% oleylamine/50% AERhB and DAC-75% oleylamine/25% AERhB), were used for the drug release studies under both physiological (pH = 7.4) and acid (pH = 5.0 and pH = 4.0) conditions to mimic the existing environment in cancerous tissues. 
After 48 h at 37 °C, the determined release percentages were 23.3%, 64.9% and 84.8% at pH 7.4, 5.0 and 4.0, respectively. These results indicated that the release of the model drug was predominantly driven by acid-induced degradation of the Schiff base linkages. Qiu and coworkers described the preparation of self-assembled nanocomplexes constituted by sialic acid conjugated with stearic acid, ibrutinib and egg phosphatidylglycerol. The efficiency of this system in targeting macrophages and its efficacy in inhibiting tumor progression were investigated in vitro and in vivo. The results indicated that the nanocomplex exhibited high efficiency in targeting tumor-associated macrophages, inhibiting Btk activation and Th2 tumorigenic cytokine release, reducing angiogenesis and suppressing tumor growth. The authors claimed that the developed nanocomplexes could be a promising strategy for ibrutinib delivery with minimal systemic side effects. Notably, two papers reported ibrutinib formulations for oral administration. In particular, an aqueous nanosuspension containing ibrutinib and pluronic F-127 as a stabilizing agent was optimized through a three-level, three-factor Box–Behnken design . The technological properties of the obtained nanosuspension (i.e., particle size between 278.6 and 453.2 nm, stability of the freeze-dried formulation for up to 6 months and controlled drug release) as well as its in vivo pharmacokinetics were properly characterized. A second investigation considered the oral bioavailability and pharmacokinetics of poly(lactic-co-glycolic acid) NPs (PLGA-NPs) loaded with ibrutinib . In detail, PLGA is a polymer used for its biocompatibility, biodegradability and tunable physicochemical and formulation characteristics. After administration, it is transformed into lactic acid and glycolic acid, endogenous materials that do not cause immunogenic reactions. In this study, PLGA-NPs composed of 75% lactic acid and 25% glycolic acid (75:25) were used to ensure slower degradation of the nanoparticles and consequently sustained drug release. The authors observed a 4.2-fold enhancement in the oral bioavailability of ibrutinib-loaded PLGA-NPs in comparison with an ibrutinib suspension, which could be attributed to the better absorption and higher exposure of the nanoformulation.
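To make the pH dependence of the Schiff-base system easier to compare, the 48-hour release fractions quoted above can be converted into apparent first-order release constants. The Python sketch below does this under the assumption of a simple first-order release model, F(t) = 1 − exp(−kt); the cited study did not report such constants, so this back-calculation is purely illustrative.

```python
import math

# 48-h cumulative release fractions of the AERhB model drug (from the text).
release_48h = {7.4: 0.233, 5.0: 0.649, 4.0: 0.848}
t_hours = 48.0

# Assuming first-order release, F(t) = 1 - exp(-k*t)  =>  k = -ln(1 - F)/t.
for pH, F in sorted(release_48h.items(), reverse=True):
    k = -math.log(1.0 - F) / t_hours   # apparent rate constant (1/h)
    t_half = math.log(2.0) / k         # time to 50% release (h)
    print(f"pH {pH}: k ≈ {k:.4f} h^-1, release half-time ≈ {t_half:.1f} h")

# Roughly: pH 7.4 -> k ≈ 0.0055 h^-1 (t1/2 ≈ 125 h)
#          pH 5.0 -> k ≈ 0.0218 h^-1 (t1/2 ≈ 32 h)
#          pH 4.0 -> k ≈ 0.0392 h^-1 (t1/2 ≈ 18 h)
```

The roughly sevenfold acceleration from pH 7.4 to pH 4.0 is consistent with acid-catalyzed hydrolysis of the Schiff base linkages being the rate-limiting step, as the authors concluded.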
In recent years, many molecules able to block Btk, both irreversibly and reversibly, have been patented and reported in the literature. In the past ten years , different chemical scaffolds (e.g., pyrimidines, 2,4-diaminopyrimidines, 1,3,5-triazines and condensed structures such as pyrazolo-pyrimidines, pyrazolo-pyridines, pyrrolo-pyrimidines, pyrrolo-triazines, imidazo-pyrazines, imidazo-pyrimidines, imidazo-quinoxalines and purines) have been investigated in depth. In this review, we summarize the recently developed (ir)reversible BtkIs studied in clinical trials, covering recent advances in the field of medicinal chemistry. 3.2.1. Irreversible BtkIs Spebrutinib , evobrutinib , olmutinib , tirabrutinib , elsubrutinib (ABBV-105) and tolebrutinib (SAR 442168) are irreversible BtkIs currently under clinical investigation . These compounds share an α,β-unsaturated carbonyl moiety (essential for covalent bond formation) and an aromatic ring (preferentially a pyrimidine nucleus), free or fused with other five-membered rings . Furthermore, with the sole exception of elsubrutinib, these derivatives are basic compounds that inhibit Btk at nanomolar concentrations (IC50 values of 0.5 nM, 37.9 nM, 1.0 nM and 2.2 nM for spebrutinib, evobrutinib, olmutinib and tirabrutinib, respectively) . In March 2020, tirabrutinib was approved in Japan (at a dosage of 480 mg orally) for the treatment of recurrent or refractory primary central nervous system lymphoma, and it is now also under study for the treatment of WM, lymphoplasmacytic lymphoma, other B-cell malignancies (chronic lymphocytic leukemia, B-cell lymphoma) and a number of autoimmune disorders (Sjögren’s syndrome, pemphigus and RA) . Olmutinib also irreversibly inhibits EGFR, which, like the Tec family kinases and several other kinases (Jak3, Her2, Her4 and Blk), shares with Btk an accessible reactive cysteine residue (namely, Cys797 in EGFR) . TG-1701 (TG Therapeutics), TAS5315 (Taiho Pharmaceutical) and M7583 (TL-895, EMD Serono) are irreversible BtkIs (molecular structures not disclosed) currently in clinical trials for B-cell malignancies (NCT03671590), RA (NCT03605251) and MCL (NCT02825836), respectively. In particular, TAS5315 is a pyrazolo[3,4-d]pyrimidine derivative currently in a phase II clinical trial for RA treatment . In addition, DTRMWXHS-12 (also named DTRM-12, formula not disclosed) is a pyrazolo-pyrimidine derivative acting as an irreversible Btk inhibitor and currently under three phase I clinical trials for different types of leukemia (such as CLL) and lymphoma (such as MCL) . The presence of reactive Michael acceptor groups in the above-mentioned irreversible BtkIs has led to undesired side effects such as allergic reactions, fever, lymphadenopathy, edema and albuminuria due to off-target inhibition . 3.2.2. Reversible BtkIs Noncovalent BtkIs offer several advantages over existing covalent inhibitors. Whereas covalent inhibitors lose potency against Cys481 mutants, some noncovalent inhibitors retain potent inhibition against the C481S and C481R Btk variants, providing a potentially effective treatment option for ibrutinib-resistant or treatment-naïve patients . Furthermore, reversible inhibitors carry a lower risk of toxicity compared with irreversible compounds, and for these reasons some of them (i.e., fenebrutinib, vacabrutinib, BMS-986142, BIIB068, CT-1530, AC0058 and SHR1459) are under clinical investigation for long-term administration in the treatment of autoimmune diseases, especially RA .
In the past ten years, a plethora of reversible BtkIs have been patented ; for example, Genentech generated over 1000 noncovalent BtkIs covering a broad range of chemical substructures and physicochemical properties . Substantial efforts have been made over the years to further develop reversible inhibitors, but unfortunately, none of the studied compounds has yielded a significant breakthrough . The most recent and relevant reversible BtkIs are reported in . In particular, fenebrutinib is a potent Btk inhibitor (IC50 = 0.5 nM) endowed with good selectivity, a favorable pharmacokinetic profile and efficacy against the Btk C481S mutant , whereas vacabrutinib potently inhibits Btk and Itk . Fenebrutinib is currently under clinical evaluation in eight trials (i.e., NCT03693625, NCT03596632, NCT04586023, NCT04586010, NCT04544449, NCT01991184, NCT03137069, NCT02908100) focused in particular on autoimmune conditions, whereas the NCT03037645 trial assessed vacabrutinib activity in different hematological tumors . The pyrimidinone derivative BMS-986142 is currently under clinical study for its activity in RA (NCT02880670, NCT02456844, NCT02762123, NCT02832180, NCT02638948) and Sjögren’s syndrome (NCT02843659). Moreover, a clinical study (NCT02257151) on healthy adults has been completed. RN-486 displayed IC50 values of 4 nM, 43 nM and 64 nM for Btk, Slk and Tec, respectively . Currently, this compound is in preclinical investigation for RA. Unfortunately, RN-486, as well as GDC-0834 (developed from compound CGI-1746 and under study for arthritis treatment), showed poor stability and unfavorable pharmacokinetic profiles . BIIB068 demonstrated good kinome selectivity (IC50 = 1 nM for Btk) and good overall drug-like properties for oral dosing; it was well tolerated across preclinical species at pharmacologically relevant doses, had good ADME properties and achieved >90% inhibition of Btk phosphorylation in humans . BIIB068 appears to be effective in SLE (clinical study NCT02829541). GNE-431 is a new and interesting molecule, able to inhibit the C481R, T474I and T474M mutants , and represents the first example of a “pan-Btk” inhibitor. In detail, GNE-431 showed an IC50 of 3.2 nM against wild-type Btk and similar potency against the C481S mutant (IC50 = 2.5 nM) . To date, no clinical studies have been reported for this compound. The widespread activity of GNE-431 against Btk mutants (namely, C481S, C481R, T474I and T474M) has been structurally rationalized by docking simulations . The ligand would assume an extended conformation, oriented orthogonally with respect to ibrutinib. The hexahydropyrazino[1,2-a]indol-1(2H)-one moiety of GNE-431 would be inserted in the H3 subpocket (a unique site in Btk), and its imidazopyridazine core would be able to interact with the hydrophobic gatekeeper residue at position 474. Furthermore, GNE-431 would interact only weakly with the residue at position 481, and its binding mode would be marginally affected by the different steric requirements of residues at position 474. Many other molecules are under clinical and preclinical study (APQ531, SHR1459, CT-1530, AC0058), but their chemical structures are not disclosed. 3.2.3. Emerging Reversible Covalent BtkIs One approach to the discovery of next-generation BtkIs with high potency, enhanced selectivity profiles, reduced off-target effects and tunable residence times is the design of compounds able to form reversible covalent bonds with the Cys481 residue and temporarily inactivate the enzyme.
PRN1008 (rilzabrutinib) is in early clinical studies for RA treatment . PRN1008 is a potent, selective and reversible covalent inhibitor of Btk (IC50 = 3.1 nM), inhibiting the kinase by forming a covalent bond with the Cys481 residue . In vivo, PRN1008 demonstrated enduring pharmacodynamic effects and suppressed collagen-induced arthritis in rats in a dose-dependent manner. These data support the continued development of PRN1008 as a therapeutic agent for RA. In addition, in 2017, Principia Biopharma announced that PRN1008 had been designated as an orphan drug in the USA for the treatment of pemphigus vulgaris; by significantly reducing prednisone use and its related risks, it might become an important treatment option for this devastating disease . Bradshaw and coworkers , using structure-based design, developed a series of reversible covalent Btk inhibitors related to PRN1008. These compounds embed a reversible cyanoacrylamide-based electrophile attached to a pyrazolo-pyrimidine scaffold via an amine-containing heterocyclic linker (piperidine or pyrrolidine). The cyanoacrylamide functionality is capped with various branched-alkyl groups with different steric and electronic properties, and the nature of the capping group affects the residence time of the reversible inhibitor. Compound 1 was identified as a promising lead compound with sustained Btk occupancy. In the crystal structure of the Btk-1 complex (PDB code 4YHF) , the ligand is covalently bonded to Cys481, and the amino-pyrrolopyrimidine portion forms hydrogen bonds with Thr474, Glu475 and Met477 ( B). Interestingly, the tert-butyl group shields the proton attached to the Cα, thus preventing breakage of the thioether bond with Cys481. To enhance the solubility and oral bioavailability of compound 1, a series of methylpyrrolidine-containing compounds was further developed as reversible covalent BtkIs. Compound 2 exhibited an increased residence time but dissociated rapidly and quantitatively upon Btk turnover, proteolysis, resynthesis and interaction with cellular binding partners. Additionally, derivative 2 proved to be as effective as the irreversible covalent inhibitor ibrutinib in decreasing tumor cell invasiveness and blocking Btk activity . This interesting new class of molecules combines the advantages of both reversible and irreversible binding mechanisms: it shares a pyrrolopyrimidine scaffold that blocks the Btk active site through hydrogen bonds and hydrophobic interactions, together with a reactive, modified cyanoacrylamide electrophile able to form a tunable covalent bond with the exposed Cys481 residue.
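Since the defining feature of these reversible covalent inhibitors is a tunable residence time, it is worth seeing how the dissociation rate constant translates into target occupancy once free drug has been cleared. The sketch below simulates post-washout occupancy decay for three hypothetical dissociation rates; the koff values and the single-exponential model are illustrative assumptions, not measured parameters of compounds 1 or 2.

```python
import math

# After washout of free drug, occupancy of a reversible (covalent) inhibitor
# decays as occ(t) = occ0 * exp(-koff * t); residence time tau = 1/koff.
# The koff values below are hypothetical, chosen only to span the range from
# a fast-reversible to a slowly-reversible (long residence time) inhibitor.
hypothetical_koff_per_h = {
    "fast-reversible": 1.0,       # tau = 1 h
    "tunable covalent": 0.05,     # tau = 20 h
    "nearly irreversible": 0.005, # tau = 200 h
}

occ0 = 0.95  # assume ~95% occupancy at the time of washout
for label, koff in hypothetical_koff_per_h.items():
    tau = 1.0 / koff
    occ_24h = occ0 * math.exp(-koff * 24.0)
    print(f"{label:>20}: tau = {tau:5.0f} h, occupancy after 24 h = {occ_24h:.0%}")
```

This is precisely the behavior the cyanoacrylamide capping groups are designed to tune: durable Btk occupancy between doses, while still allowing the adduct to dissociate upon protein turnover or proteolysis.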
In recent years, Btk has emerged as a new target in medicinal chemistry, and many BtkIs have been patented and reported in the literature. To date, only three irreversible BtkIs have been launched on the market to treat different types of leukemias and lymphomas, whereas reversible BtkIs are under clinical investigation for long-term administration in the treatment of autoimmune diseases, especially RA and MS. Furthermore, these compounds also find application in the treatment of cGVHD, an autoimmune inflammatory condition that may occur after an allogeneic transplant and for which no effective, resolutive therapies are currently available. Most notable is the application of approved BtkIs in the systemic hyper-inflammation and cytokine storm induced by COVID-19 infection, whose effectiveness is still under evaluation. Unfortunately, the onset of resistance to irreversible inhibitors (in particular ibrutinib) and of off-target side effects, in particular dermatological problems, has prompted the search for selective second-generation BtkIs with a lower risk of toxicity compared with irreversible compounds. Nanoformulations of ibrutinib (including gold and polymeric NPs and an aqueous nanosuspension) showed improved efficacy in cancer therapy, reduced toxicity and improved absorption and bioavailability, thus representing valid help in overcoming the resistance and side effects associated with the irreversible BtkIs in clinical use. In addition, the design of compounds able to form reversible covalent bonds with the Cys481 residue and temporarily inactivate the enzyme appears to be an innovative and interesting approach. In particular, tunable covalent BtkIs represent a promising class of compounds with reduced side effects. Additionally, tunable BtkIs would allow the use of these kinase inhibitors in non-oncologic therapeutic areas that require chronic treatment, such as autoimmune disorders (i.e., RA, SLE and cGVHD). For all these reasons, despite the many reviews and articles already in the literature, the study of Btk functions and therapeutic applications, as well as the discovery of new Btk inhibitors and innovative formulations of approved compounds, remains very fruitful and of great interest to both the academic community and the pharmaceutical industry.
Social Media Use, eHealth Literacy, Disease Knowledge, and Preventive Behaviors in the COVID-19 Pandemic: Cross-Sectional Study on Chinese Netizens | 2b5d3fd7-6d37-4ce8-9347-5ebdd614c3d2 | 7581310 | Health Communication[mh] | Background COVID-19, an acute infectious disease, quickly spread worldwide after it emerged in December 2019 and has evolved from an epidemic to a pandemic. As of the end of May 2020, over 200 countries and territories had reported laboratory-confirmed cases of COVID-19, and the global number of confirmed cases had exceeded 6,000,000 . As a global pandemic, SARS-CoV-2, the novel coronavirus that causes COVID-19, has infected more people than either of its two predecessors, severe acute respiratory syndrome coronavirus (SARS-CoV) in 2003 and Middle East respiratory syndrome coronavirus (MERS-CoV) in 2012 ; thus, COVID-19 poses a serious threat to global development. There has been an obvious rise in the number of emerging and reemerging infectious diseases over the past two decades, such as severe acute respiratory syndrome (SARS, 2003), H1N1 (2009), Middle East respiratory syndrome (MERS, 2012), Ebola virus (2014), and Zika virus (2016). All these infections were difficult to control due to a lack of effective vaccines and medicines, which led to great concern and anxiety among the public and to challenges for public health systems . Preventive behaviors are essential to control infectious diseases from both public and individual perspectives. Authorities and public health agencies should implement a variety of pharmaceutical and nonpharmaceutical interventions to prevent pandemic expansion, including vaccination and medical prophylaxis, hygienic precautions, patient isolation, and other social distancing measures . Individuals should also take preventive measures to protect themselves, such as washing hands frequently with soap or hand sanitizer, avoiding crowded gatherings, and wearing face masks when going outside . Because many infectious diseases erupt in a short time and have high morbidity and mortality rates, it is difficult for executive agencies to impose sufficient interventions to control these diseases in a timely fashion. Thus, effective disease-management activities benefit greatly from preventive measures taken by individuals . Therefore, educating the public to enhance health awareness and increase disease knowledge is crucial in a pandemic. Information communication and media use are well suited to achieve this goal by providing the public with professional information, decreasing public panic, disseminating health knowledge, and expressing appreciation to the public for their cooperation . Regarding the COVID-19 pandemic, information communication remains crucial for disease prevention, and China has potential advantages in the area of social media. With the rapid development of the internet and emerging mobile media technologies, China has made remarkable achievements in mobile digital communication. Chinese internet users are also called “netizens,” defined by the China Internet Network Information Center (CNNIC) as Chinese citizens who use the internet for at least 1 hour per week; the growth of this population marks the rise of a highly connected and digitally empowered general public . As of June 2019, the number of Chinese netizens had reached 847 million according to the CNNIC . Social media applications are becoming increasingly diversified; WeChat, Weibo, QQ, and TikTok are the most frequently used platforms by Chinese netizens.
Also, social media is widely used by Chinese authorities to inform the public about the latest news, disseminate public health knowledge, refute rumors, and facilitate effective coordination of medical, public, and pharmaceutical resources. Although social media has been broadly used in China, the effects of social media on disease prevention have not yet been thoroughly investigated. In this study, we hope to explore the predictive role of social media use in public preventive behaviors and how health literacy moderates the relationship between individuals’ social media use and preventive behaviors during the COVID-19 pandemic in Chinese contexts. Literature Review and Hypotheses The mechanism underlying the effects of social media use on health behavioral change is that coverage of a pandemic on social media can magnify the public’s fear and urge the public to take preventive actions . Prior studies indicated that mass media use can produce positive changes or prevent negative changes in health-related behaviors across large populations ; for example, the frequency of listening to the radio and reading the newspaper was associated with increased odds of being vaccinated , while time spent watching television was positively correlated with water, sanitation, and hygiene behaviors . Comparatively, social media (eg, Facebook, Twitter, WeChat, Weibo) has provided the public and health institutes with new avenues for disease prevention during an epidemic or pandemic, as it allows two-way communication between health authorities and the public. Social media has also been found to be useful in health-promotion interventions, such as preventing increases in risky sexual behavior , contributing to improved knowledge and attitudes toward skin cancer , positively influencing maternal influenza vaccine uptake , and targeting lifestyle changes among users with chronic diseases . Additionally, studies on the effects of social media have shed light on its utility in public health domains. For example, Facebook was used for strategic crisis communication by health authorities in Singapore during the Zika virus pandemic ; moreover, WeChat and Weibo use were found to significantly increase preventive behaviors against the health effects of haze . Scholars are paying increasing attention to the role of social media during pandemics; however, the question of whether social media use can affect the public’s affective responses or preventive behaviors still deserves exploration. Thus, we propose the first research question: RQ1: Does social media use predict preventive behaviors among Chinese netizens during the COVID-19 pandemic? Social cognitive theory is used to explain how people learn behaviors by observing others. It emphasizes the reciprocal causation of individual behaviors among personal factors (eg, values, self-efficacy, outcome expectations), behavioral factors (eg, prior behavior), and social environmental factors (eg, others’ behaviors, feedback). This theory provides a conceptual framework for how media use influences human beings’ thoughts, affect, and actions. Media use leads to behavioral changes by communicating information through two pathways. On one hand, media use promotes changes by informing, enabling, motivating, and guiding users to take direct action to effect change . On the other hand, people adopt, support, spread, and share innovative ideas or behaviors through the socially mediated pathways of social media .
As a socially mediated factor, social media frames and reinforces social norms and enriches the ability of the public to receive health information, such as news, knowledge, and health behavior patterns. This knowledge can be rapidly and widely diffused, exerting social influences on people’s health behaviors through observational learning . Therefore, the degree to which people use social media to access health information for disease management may influence their health behavioral outcomes. As media use is a composite concept that comprises a cluster of measurements, research questions about media use and health behaviors are usually presented as “how many hours did you spend on [social media platform, such as Facebook, Twitter, or YouTube] per day?” or “how many times did you use a particular social media platform?” , which can be respectively summarized as “time of media use” (ie, how long) and “frequency of media use” (ie, how often). Time and frequency are also known to be the key variables of social media use. Thus, we propose two hypotheses: H1: Social media use time is positively associated with preventive behaviors during the COVID-19 pandemic. H2: Social media use frequency is positively associated with preventive behaviors during the COVID-19 pandemic. In addition to time and frequency, type is a crucial dimension of social media use. As the media landscape has changed dramatically, media types have rapidly become diversified in the new media environment . In China, users usually obtain news or information via mobile news channels. The number of web-based news users has been reported to be 686 million, which accounts for 80.3% of Chinese netizens . Web-based mobile news channels mostly consist of various applications that are characterized by social interactive functions such as reading, commenting, retweeting, and timely interaction. These platforms can be divided into different types by their functions. Official social media outlets, such as China Central Television (CCTV) and People's Daily, often serve as the voice of government or administrative institutions. Professional social media is an emerging form of social media that focuses on news in a professional domain; for example, Caixin News focuses on finance. Aggregated social media is a new type of media that collects and distributes news or information from different agencies; the scope of news on aggregated social media is wide, including politics, the economy, culture, sports, and entertainment. Public social media (eg, WeChat, Weibo, TikTok), also called interpersonal social media, carries content produced and disseminated by individuals, and netizens can use it to share news with their friends or strangers. Together, these categories cover almost all the social media platforms in China, and each media type is aimed at particular users. For instance, traditional official media represents the official voice of the government, while public or aggregated social media gives voice to grassroots organizations or individuals . At the same time, various types of social media appear to have different effects. Web-based content has been reported to facilitate safer sex literacy and information-sharing intentions on social networking sites .
Traditional media (eg, television and radio) can be a more effective tool for managing crises than social media and websites; meanwhile, social media should also be considered effective during public health interventions, as younger people heavily rely on social media to seek information . Additionally, when messages are transmitted through reliable web-based personal broadcasting channels, they can induce new attitudes or behavioral intentions in users . In particular, previous studies have examined the associations of particular types of media access with information-seeking behaviors. For example, Alhuwail and Abdulsalam indicated that people searched YouTube the most for health information but did not place a high value on other social media platforms such as Twitter, Snapchat, and Facebook. Stawarz et al found in their investigation that people used mobile technologies to support their mental health for specific purposes. Hence, inspired by previous results, it is essential to examine the relationship between different social media types and the public’s preventive behaviors for COVID-19. Here, we propose another research question: RQ2: Do social media types (official social media, professional social media, public social media, aggregated social media) differ in terms of predicting users’ preventive behaviors during the COVID-19 pandemic? Health Literacy and Preventive Behaviors eHealth Literacy The predictors of preventive measures are not merely based on the external impact of social media but also involve internal “assets,” including the set of health knowledge, skills, and capabilities that is called health literacy . As a discrete form of literacy, health literacy is becoming increasingly important in predicting health promotion and prevention . In 2004, the US Institute of Medicine defined health literacy as “the degree to which individuals have the capacity to obtain, process, and understand basic health information and services needed to make appropriate health decisions.” The concept has since been interpreted more broadly and has evolved to encompass a wide range of skills that people develop to seek out, comprehend, evaluate, and use health information. The internet is now widely used and has drastically changed how health information is disseminated . eHealth literacy combines information and media literacies and applies them to eHealth promotion. It has been defined as “the ability to seek, find, understand, and appraise health information from electronic sources and to apply the knowledge gained to addressing or solving health problems .” eHealth literacy is becoming increasingly important as individuals continue to seek medical advice from various web-based sources, especially social media. Empirical studies have also found that eHealth literacy positively influences health outcomes, such as health-promoting behaviors among people with diabetes and people’s health-related quality of life . College students with higher eHealth literacy were found to be less likely to consume unhealthy food . Disease Knowledge In addition to eHealth literacy, disease knowledge is a vital component of health literacy; it enables people to recognize the symptoms, understand the causes, and perceive the risks of chronic or infectious diseases . Disease knowledge is also effective in improving health management, and it can even act as a predictor of change in an individual’s health behaviors.
Authorities are generally implementing additional measures to improve the level of disease knowledge among the public, with the aim of changing citizens’ attitudes toward public health prevention . For example, disease knowledge can change attitudes and practices toward rabies prevention , levels of oncological knowledge had an impact on individuals’ decisions to consent to particular medical procedures , and higher public health knowledge was positively associated with more frequent handwashing . Additionally, disease knowledge and eHealth literacy can act together as intermediate factors linked to health status . eHealth literacy has been independently related to disease knowledge, and it can also influence health outcomes through disease knowledge via an indirect pathway . For example, diabetes knowledge was the most important factor associated with glycemic control, and health literacy exerted an indirect influence on self-care and medication adherence through diabetes knowledge . Therefore, we propose four hypotheses here: H3: eHealth literacy is positively associated with preventive behaviors during the COVID-19 pandemic. H4: Disease knowledge is positively associated with preventive behaviors during the COVID-19 pandemic. H5: eHealth literacy moderates the relationship between social media use and preventive behaviors during the COVID-19 pandemic. H6: Disease knowledge moderates the relationship between social media use and preventive behaviors during the COVID-19 pandemic. presents all the core variables and research hypotheses examined in this study.
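H5 and H6 are moderation hypotheses, so the natural analytic template is a regression that includes an interaction term between social media use and the literacy variable. The Python sketch below illustrates that template on simulated data; the variable names, simulated effect sizes, and the choice of ordinary least squares are our illustrative assumptions, not the study's actual analysis plan.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500

# Simulated standardized predictors: social media use and eHealth literacy.
df = pd.DataFrame({
    "media_use": rng.normal(size=n),
    "ehealth": rng.normal(size=n),
})
# Simulated outcome with a positive interaction: the media-use effect on
# preventive behaviors grows with eHealth literacy (the H5 pattern).
df["preventive"] = (
    0.30 * df["media_use"]
    + 0.25 * df["ehealth"]
    + 0.15 * df["media_use"] * df["ehealth"]
    + rng.normal(scale=1.0, size=n)
)

# "media_use * ehealth" expands to both main effects plus their interaction.
model = smf.ols("preventive ~ media_use * ehealth", data=df).fit()
print(model.summary().tables[1])

# A significant, positive "media_use:ehealth" coefficient would support H5.
```

In practice, the predictors would typically be mean-centered before forming the product term, and the same template applies to H6 by replacing the eHealth literacy score with a disease knowledge score.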
COVID-19, an acute infectious disease, quickly spread worldwide after it emerged in December 2019 and has evolved from an epidemic to a pandemic. As of the end of May 2020, over 200 countries and territories had reported laboratory-confirmed cases of COVID-19, and the global number of confirmed cases of COVID-19 had exceeded 6,000,000 . As a global pandemic, SARS-CoV-2, the novel coronavirus that causes COVID-19, has infected more people than either of its two predecessors, severe acute respiratory syndrome coronavirus (SARS-CoV) in 2003 and Middle East respiratory syndrome coronavirus (MERS-CoV) in 2012 ; thus, COVID-19 poses a serious threat to global development. There has been an obvious rise in the number of emerging and reemerging infectious diseases over the past two decades, such as severe acute respiratory syndrome (SARS, 2003), H1N1 (2009), Middle East respiratory syndrome (MERS, 2012), Ebola virus (2014), and Zika virus (2016). All these infections were difficult to control due to a lack of effective vaccines and medicines, which led to great concern and anxiety among the public and to challenges for public health systems . Preventive behaviors are essential to control infectious diseases from both public and individual perspectives. Authorities and public health agencies should implement a variety of pharmaceutical and nonpharmaceutical interventions to prevent pandemic expansion, including vaccination and medical prophylaxis, hygienic precautions, patient isolation, and other social distancing measures . Individuals should also take preventive measures to protect themselves, such as washing hands frequently with soap or hand sanitizer, avoiding crowded gatherings, and wearing face masks when going outside . Because many infectious diseases erupt in a short time and have high morbidity and mortality rates, it is difficult for executive agencies to impose sufficient interventions to control these diseases in a timely fashion. Thus, effective disease-management activities benefit greatly from preventive measures by individuals . Therefore, educating the public to enhance health awareness and increase disease knowledge is crucial in a pandemic. Information communication and media use are well suited to achieve this goal by providing the public with professional information, decreasing public panic, disseminating health knowledge, and expressing appreciation to the public for their cooperation . Regarding the COVID-19 pandemic, information communication is still crucial for disease prevention. China has potential advantages in the area of social media. Since the rapid development of the internet and emerging mobile media technologies, China has made remarkable achievements in mobile digital communication. Chinese internet users are also called “netizens,” defined as Chinese citizens who use the internet for at least 1 hour per week by the China Internet Network Information Center (CNNIC); these netizens have been marked by the rise of a highly connected and digitally empowered general public . As of June 2019, the number of Chinese netizens had reached 847 million according to the CNNIC . Social media applications are becoming increasingly diversified; WeChat, Weibo, QQ, and TikTok are the most frequently used platforms by Chinese netizens. Also, social media is widely used by Chinese authorities to inform the public about the latest news, disseminate public health knowledge, refute rumors, and facilitate effective coordination of medical, public, and pharmaceutical resources. 
Although social media has been broadly used in China, the effects of social media on disease prevention have still not been greatly investigated. In this study, we hope to explore the predictive role of social media use in public preventive behaviors and how health literacy moderates the causality between individuals’ social media use and preventive behaviors during the COVID-19 pandemic in Chinese contexts.
The mechanisms underlying the effects of social media use on health behavioral changes is that coverage of a pandemic on social media can magnify the public’s fear and urge the public to take preventive actions . Prior studies indicated that mass media use can produce positive changes or prevent negative changes in health-related behaviors across large populations ; for example, frequency of listening to the radio and reading the newspaper were associated with increased odds of being vaccinated , while time spent watching television was positively correlated with water, sanitation, and hygiene behaviors . Comparatively, social media (eg, Facebook, Twitter, WeChat, Weibo) has provided the public and health institutes with new avenues for disease prevention during an epidemic or pandemic, as it allows two-way communication between health authorities and the public. Social media has also been found to be useful in terms of health-promotion interventions, such as preventing increases in risky sexual behavior , contributing to improved knowledge and attitudes toward skin cancer , positively influencing maternal influenza vaccine uptake , and targeting lifestyle changes among users with chronic diseases . Additionally, studies on the effects of social media have shed light on its utility in public health domains. For example, Facebook was used for strategic crisis communication by health authorities in Singapore during the Zika virus pandemic ; moreover, WeChat and Weibo use were found to significantly increase preventive behaviors for haze health . Scholars are paying increasing attention to the role of social media during pandemics; however, the question of whether social media use can affect the public’s affective responses or preventive behaviors still deserves exploration. Thus, we propose the first research question: RQ1: Does social media use predict preventive behaviors among Chinese netizens during the COVID-19 pandemic? Social cognitive theory is used to explain how people learn behaviors by observing others. It emphasizes the reciprocal causation of individual behaviors between personal factors (eg, values, self-efficacy, outcome expectations), behavioral factors (eg, prior behavior) and social environmental factors (eg, others’ behaviors, feedback). This theory provides a conceptual framework of how media use influences human beings’ thoughts, affect, and actions. Media use leads to behavioral changes by communicating information through two pathways. On one hand, media use promotes changes by informing, enabling, motivating, and guiding users to take direct action to effect change . On the other hand, people adopt, support, spread, and share innovative ideas or behaviors in the socially mediated pathways of social media . As a socially mediated factor, social media frames and reinforces social norms and enriches the ability of the public to receive health information, such as news, knowledge, and health behavior patterns. This knowledge can be rapidly and widely diffused by exerting social influences on people’s health behaviors through observational learning . Therefore, the degree to which people’s use of social media to access health information for disease management may influence an individual’s health behavioral outcomes. 
As media use is a composite concept that comprises a cluster of measurements, research questions about media use and health behaviors are usually presented as “how many hours did you spend on [social media platform, such as Facebook, Twitter, or YouTube] per day?” or “how many times did you use a particular social media platform?” , which can be respectively summarized as “time of media use” (ie, how long) and “frequency of media use” (ie, how often). Time and frequency are also known to be the key variables of social media use. Thus, we proposed two hypotheses: H1: Social media use time is positively associated with preventive behaviors during the COVID-19 pandemic. H2: Social media use frequency is positively associated with preventive behaviors during the COVID-19 pandemic. In addition to time and frequency, type is a crucial dimension of social media use. As the media landscape has changed dramatically, media types have rapidly become diversified in the new media environment . In China, users usually obtain news or information via mobile news channels. The number of web-based news users has been reported to be 686 million, which accounts for 80.3% of Chinese netizens . Web-based mobile news channels mostly consist of various applications that are characterized by social interactive functions such as reading, commenting, retweeting, and timely interaction. These platforms can be divided into different types by their functions. Official social media outlets, such as China Central Television (CCTV) and People's Daily, often serve as the voice of government or administrative institutions. Professional social media is an emerging form of social media that focuses on news in the professional domain. For example, Caixin News focuses on finance. Aggregated social media is a new type of media that collects and distributes news or information from different agencies. The scope of news on aggregated social media is widespread, including politics, the economy, culture, sports, and entertainment. Public social media (eg, WeChat, Weibo, TikTok), also called interpersonal social media, is produced and disseminated by individuals. Netizens can use public social media to share news with their friends or strangers. All the above types of social media include almost all the social media platforms in China, and each media type is aimed at particular users. For instance, traditional official media represents the official voice of the government, while public or aggregated social media provides voices to grassroots organizations or individuals . At the same time, various types of social media appear to have different effects. Web-based content has been reported to facilitate safer sex literacy and information-sharing intentions on social networking sites . Traditional media (eg, television and radio) can be a more effective tool for managing crises than social media and websites; meanwhile, social media should also be considered to be effective during public health interventions, as younger people heavily rely on social media to seek information . Additionally, when messages are transmitted through reliable web-based personal broadcasting channels, they can induce new attitudes or intentions to change in users . In particular, previous studies have examined the associations of particular types of media access with information-seeking behaviors. 
For example, Alhuwail and Abdulsalam indicated that people searched YouTube most for health information, but they did not place a high value on other social media platforms such as Twitter, Snapchat, and Facebook. Stawarz et al found in their investigation that people used mobile technologies to support their mental health for specific purposes. Hence, inspired by previous results, it is essential to examine the relationship between different social media types and the public’s preventive behaviors for COVID-19. Here, we propose another research question: RQ2: Do social media types (official social media, professional social media, public social media, aggregated social media) differ in terms of predicting users’ preventive behaviors during the COVID-19 pandemic?
eHealth Literacy
The predictors of preventive measures are not merely based on the external impact of social media but also involve internal “assets,” including the set of health knowledge, skills, and capabilities called health literacy . As a discrete form of literacy, health literacy is becoming increasingly important in predicting health promotion and prevention . In 2004, the US Institute of Medicine defined health literacy as “the degree to which individuals have the capacity to obtain, process, and understand basic health information and services needed to make appropriate health decisions.” This concept has also been interpreted, and has evolved, as a wide range of skills that people develop to seek out, comprehend, evaluate, and use health information. The internet is now widely used and has drastically changed how health information is disseminated . eHealth literacy combines information and media literacies and applies them to eHealth promotion. It has been defined as “the ability to seek, find, understand, and appraise health information from electronic sources and to apply the knowledge gained to addressing or solving health problems .” eHealth literacy is becoming increasingly important as individuals continue to seek medical advice from various web-based sources, especially social media. Empirical studies have also found that eHealth literacy positively influences health outcomes, such as health-promoting behaviors among people with diabetes and people’s health-related quality of life . College students with higher eHealth literacy were found to be less likely to consume unhealthy food .
Disease Knowledge
In addition to eHealth literacy, disease knowledge is a vital component of health literacy; it enables people to recognize the symptoms, understand the causes, and perceive the risks of chronic or infectious diseases . Disease knowledge is also effective in improving health management, and it even acts as a predictor of change in an individual’s health behaviors. Authorities are generally implementing additional measures to improve the level of disease knowledge among the public, with the aim of changing citizens’ attitudes toward public health prevention . For example, disease knowledge can change attitudes and practices toward rabies prevention , levels of oncological knowledge had an impact on individuals’ decisions to consent to particular medical procedures , and higher public health knowledge was positively associated with more frequent handwashing . Additionally, disease knowledge and eHealth literacy can combine as intermediate factors linking to health status . eHealth literacy has been independently related to disease knowledge; it also further influences disease knowledge by an indirect pathway . For example, diabetes knowledge was the most important factor associated with glycemic control, and health literacy exerted an indirect influence on self-care and medication adherence through diabetes knowledge . Therefore, we propose four hypotheses here:
H3: eHealth literacy is positively associated with preventive behaviors during the COVID-19 pandemic.
H4: Disease knowledge is positively associated with preventive behaviors during the COVID-19 pandemic.
H5: eHealth literacy moderates the relationship between social media use and preventive behaviors during the COVID-19 pandemic.
H6: Disease knowledge moderates the relationship between social media use and preventive behaviors during the COVID-19 pandemic.
presents all the core variables and research hypotheses examined in this study.
Design and Recruitment
A national web-based cross-sectional survey was conducted using proportionate probability sampling to examine whether social media use predicted Chinese netizens’ preventive behaviors during the COVID-19 pandemic and to explore the moderating roles of disease knowledge and eHealth literacy. The proportionate probability sampling method was employed according to the gender and age distributions of Chinese netizens reported in the 44th Statistical Report on Internet Development of China (SRIDC) . The SRIDC is an authoritative report that is released annually by the CNNIC and is based on a representative national survey with a sample size of 60,000. As the report showed, people 20 to 60 years of age were the main body of Chinese netizens, representing 72.3% of the entire sample. In our survey, the web-based sample pool had an age limitation in that participants >60 years of age were rare. Thus, we selected 20 to 60 years of age as the target sample age range. We set the age intervals and proportions as 20 to 29 years of age (34.02%), 30 to 39 years of age (32.78%), 40 to 49 years of age (23.93%), and 50 to 59 years of age (9.27%); the proportions of men and women within each age range were 52.4% and 47.6%, respectively, according to the population distribution of Chinese netizens, in line with the SRIDC. Participants were recruited using a web-based platform from the Questionnaire Star survey company , which contains over 2.6 million registered panelists in its sample pool. A structured questionnaire was developed and pretested for this study . The web-based survey was then partially adjusted and formally executed. The survey was conducted from February 13 to 21, 2020. After excluding ineligible responses (eg, incomplete questionnaires or those completed in a very short time), we collected 802 valid questionnaires from 952 respondents, for a valid response rate of 84.24%.
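As a concrete illustration of how these proportions translate into per-stratum recruitment targets, the sketch below allocates a target sample across the reported age bands and gender split. This is a minimal sketch of proportionate quota arithmetic under our own assumptions (simple rounding, hypothetical function name), not the survey company’s actual procedure.

```python
# Illustrative quota allocation for proportionate sampling: split a
# target sample size across the age-band and gender proportions
# reported in the text (not the authors' actual recruitment code).

AGE_PROPORTIONS = {
    "20-29": 0.3402,
    "30-39": 0.3278,
    "40-49": 0.2393,
    "50-59": 0.0927,
}
GENDER_PROPORTIONS = {"male": 0.524, "female": 0.476}

def quota_targets(n_total: int) -> dict:
    """Return the target number of respondents per age x gender stratum."""
    targets = {}
    for age, p_age in AGE_PROPORTIONS.items():
        for gender, p_gender in GENDER_PROPORTIONS.items():
            # Simple rounding; a real design would reconcile rounding drift
            # so that stratum counts sum exactly to n_total.
            targets[(age, gender)] = round(n_total * p_age * p_gender)
    return targets

if __name__ == "__main__":
    for stratum, n in quota_targets(802).items():
        print(stratum, n)
```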
Ethics Statement
Authorization to conduct the research and recruit participants was obtained from the Institutional Review Board of the authors’ university (ID: 20200203). In addition, the purpose of this study was elucidated by the “Notification of Sample Service” (Survey ID: 57071374). Consent was obtained from all the participants before the web-based survey was conducted by the survey agency . Participation was completely voluntary, and the participants could choose to quit at any time for any reason during the process of answering the web-based questionnaire.
Instruments
Demographic Information
The six most frequently used sociodemographic variables were collected: gender (0=female and 1=male), age (respondents reported their birth year, from which we computed their age; eg, if a respondent entered “1980,” we computed 2020 – 1980 to obtain an age of 40 years), education (from 1=middle school or less to 5=master’s degree and above), monthly income (1, <¥1500; 2, ¥1500 to 3000; 3, ¥3001 to 5000; 4, ¥5001 to 8000; 5, ¥8001 to 12,000; 6, ¥12,001 to 20,000; 7, >¥20,000; ¥1=US $0.14), marital status (1, single; 2, divorced or widowed; 3, separated; 4, cohabiting; 5, married), and health status (from 1=severe disease to 5=good).
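A minimal sketch of this coding step is shown below, assuming a pandas data frame with hypothetical column names; it computes age from the reported birth year and dummy-codes a categorical variable against a reference group, as described later in the Statistical Analysis subsection.

```python
import pandas as pd

# Hypothetical respondent records; column names are illustrative only.
df = pd.DataFrame({
    "birth_year": [1980, 1995, 1972],
    "education":  ["bachelor", "master_plus", "middle_school_or_less"],
})

# Age was computed from the reported birth year (survey year 2020).
df["age"] = 2020 - df["birth_year"]

# Dummy-code a categorical variable, dropping one level
# (here "middle_school_or_less") as the reference group.
dummies = pd.get_dummies(df["education"], prefix="edu")
dummies = dummies.drop(columns=["edu_middle_school_or_less"])
df = pd.concat([df, dummies], axis=1)
print(df)
```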
Social Media Use
Media use was measured by the following questions: social media use time (“In the past week, how much time did you spend using social media every day to learn about news of the COVID-19 pandemic?” with answers ranging from “less than one hour” to “5 hours and more”) and type of social media use (“Which channel do you use often to obtain COVID-19 information every day?” with four types of social media channels: “Official social media, such as People’s Daily,” “Professional social media, such as Ding Xiang Doctor,” “Public social media, such as WeChat,” and “Aggregated social media, such as Tencent News”; possible answers for each channel were 1, never used; 2, 1 to 2 times per week; 3, 3 to 4 times per week; 4, 5 to 6 times per week; and 5, one or more times per day). Additionally, social media use frequency was measured as the sum of the frequency scores across all four types of social media channels (maximum score: 20); a higher score indicates more frequent use of social media.
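The sketch below illustrates how such a sum score could be computed; the data frame and column names are hypothetical.

```python
import pandas as pd

# Hypothetical responses for the four channel-frequency items,
# each scored on the 1-5 scale described above.
channels = ["official", "professional", "public", "aggregated"]
df = pd.DataFrame({
    "official":     [5, 3],
    "professional": [2, 4],
    "public":       [5, 5],
    "aggregated":   [4, 5],
})

# Social media use frequency = sum of the four items.
# With four items scored 1-5, the possible range is 4-20.
df["smu_frequency"] = df[channels].sum(axis=1)
print(df["smu_frequency"].tolist())  # [16, 17]
```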
Preventive Behaviors
Preventive behaviors were measured by 10 items consisting of basic protective recommendations during the COVID-19 pandemic (eg, “Washing your hands after going home” and “Covering your mouth and nose with a tissue or sleeves when you cough or sneeze”). The 10 items were assessed by a self-reported measurement scale. First, the measures of preventive behaviors were drawn from the COVID-19 Protection Manual (Hong Kong version, February 2020) and COVID-19 Protection Manual (China Mainland version, January 2020) , and 20 items were generated as candidate metrics. Second, we consulted with medical experts on all the metrics and, according to their suggestions, selected 10 items as the final measurement metrics. Before the formal survey was conducted, we invited 10 adults to complete a pilot study and modified the instrument correspondingly until its validity and reliability were acceptable; we then adopted the adapted measures. Respondents were asked to indicate the extent to which they agreed with the statements on a 5-point Likert scale ranging from 1=never executed to 5=do it every time (Cronbach α=.75).
Disease Knowledge
Disease knowledge was assessed by a self-reported measurement scale consisting of 10 items (eg, “The incubation period of COVID-19 infections is generally 3-7 days, with a maximum of 14 days” and “The coronavirus volume is about 3 microns”). As with the measurement of preventive behaviors, the instrument was drawn from the COVID-19 Protection Manual (Hong Kong version, February 2020) and COVID-19 Protection Manual (China Mainland version, January 2020) . We generated 20 items, also in consultation with medical experts; 10 items were retained as the final measurement metrics via a pilot study. The answer options were “yes” or “no” for each item. Participants were given 1 point for a correct answer and 0 points for an incorrect response, so the disease knowledge score could range from 0 to 10 (Cronbach α=.70).
eHealth Literacy
eHealth literacy was assessed by the 8-item eHealth Literacy Scale (eHEALS) . The eHEALS is a reliable computer-based measure of patients’ knowledge and self-efficacy for obtaining and evaluating web-based health resources. This brief scale assesses an individual’s perceived ability to find, understand, and appraise health information from web-based sources and apply that knowledge to address health concerns (eg, “I know what health resources are available on the internet” and “I know where to find helpful health resources on the internet”). The eHEALS was developed in English; it was translated into Chinese for our questionnaire, and we invited 5 adults to complete a pilot study. The results indicated that the reliability of the Chinese version was high, so we adopted it. Responses were given on a 5-point Likert scale ranging from 1=totally disagree to 5=totally agree (Cronbach α=.82).
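For the reliability coefficients reported above (Cronbach α), the sketch below shows the standard computation, α = k/(k−1) × (1 − Σ item variances / variance of the total score), applied to simulated Likert responses. It illustrates the statistic itself and is not the authors' code.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x k_items) matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance of total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)   # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the sum score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Simulated 5-point Likert responses (rows = respondents, cols = 8 items):
# a shared "true level" per respondent plus item-level noise, so the
# items are positively correlated and alpha comes out reasonably high.
rng = np.random.default_rng(0)
base = rng.integers(1, 6, size=(100, 1))
noise = rng.integers(-1, 2, size=(100, 8))
demo = np.clip(base + noise, 1, 5)
print(round(cronbach_alpha(demo), 2))
```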
Statistical Analysis
Descriptive statistics were used to summarize the sociodemographic characteristics of the respondents, including gender, age, education, monthly income, marital status, and health status. Categorical variables were described as counts (n), and continuous variables were expressed as means (SD). Categorical variables (education, monthly income, marital status, and health status) were also dummy-coded, with one group set as the reference group in each category. Pearson correlation analysis and hierarchical multiple regression were employed. Two-tailed Pearson correlations were used to examine the correlations of the control variables with the independent and dependent variables, respectively. Two hierarchical regressions were used to test the research questions and hypotheses. The first hierarchical multiple regression investigated RQ1 and H1 through H6: the demographics were entered as control variables in Model 1; social media use time and social media use frequency were introduced in Model 2; disease knowledge and eHealth literacy were introduced in Model 3; the two interaction terms of social media use frequency × disease knowledge and social media use frequency × eHealth literacy were entered in Model 4; and two additional interaction terms, social media use time × eHealth literacy and social media use time × disease knowledge, were entered in Model 5. The second hierarchical regression explored the predictors among the four social media types (RQ2): the demographics were entered as control variables in Model 1, and the four types of social media channels (official, professional, public, and aggregated social media) were introduced in Model 2. All statistical analyses were performed with SPSS for Windows version 22.0 (IBM Corporation).
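Although the analyses were run in SPSS, the nested-model logic can be illustrated with a short sketch. The one below, in Python with statsmodels, fits simplified versions of Models 1 through 4 on simulated data and reports R² and ΔR² at each step. The variable names, the reduced set of controls, the simulated outcome, and the mean-centering of predictors before forming interaction terms are our assumptions, not details reported by the authors.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated analysis dataset; variable names are illustrative only.
rng = np.random.default_rng(1)
n = 802
df = pd.DataFrame({
    "gender":    rng.integers(0, 2, n),
    "age":       rng.integers(20, 61, n),
    "smu_time":  rng.integers(1, 6, n),    # 1-5 daily-use-time category
    "smu_freq":  rng.integers(4, 21, n),   # 4-20 frequency sum score
    "knowledge": rng.integers(0, 11, n),   # 0-10 disease knowledge
    "ehealth":   rng.normal(3.8, 0.6, n),  # 1-5 eHEALS mean score
})
df["prevention"] = (0.02 * df["smu_freq"] + 0.2 * df["ehealth"]
                    + rng.normal(0, 0.4, n) + 3)

# Mean-center predictors before forming interaction terms (a common
# convention; the paper does not state its centering approach).
for v in ["smu_freq", "knowledge", "ehealth"]:
    df[v + "_c"] = df[v] - df[v].mean()

models = {
    "m1": "prevention ~ gender + age",  # controls only (subset)
    "m2": "prevention ~ gender + age + smu_time + smu_freq_c",
    "m3": ("prevention ~ gender + age + smu_time + smu_freq_c"
           " + knowledge_c + ehealth_c"),
    "m4": ("prevention ~ gender + age + smu_time + smu_freq_c"
           " + knowledge_c + ehealth_c"
           " + smu_freq_c:knowledge_c + smu_freq_c:ehealth_c"),
}
prev_r2 = 0.0
for name, formula in models.items():
    fit = smf.ols(formula, data=df).fit()
    print(name, "R2=%.3f" % fit.rsquared,
          "dR2=%.3f" % (fit.rsquared - prev_r2))
    prev_r2 = fit.rsquared
```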
Descriptive Statistics
Sociodemographic Profiles
Among the 802 participants, 416 (51.9%) were male and 386 (48.1%) were female. The ages of the respondents ranged from 20 to 60 years, which is representative of Chinese netizens according to 2019 CNNIC statistics . The sample overrepresented highly educated netizens (bachelor’s degree or above, 624/802, 77.7%) and those with a high monthly income of >¥5000 (US $736.29; 525/802, 65.3%) compared with the respective values of 9.7% and 27.1% reported in the SRIDC. Most of the respondents had a bachelor’s (undergraduate) degree or higher, and nearly half of the respondents’ monthly income was >¥8000 (US $1178). Additionally, the majority of the respondents in our sample were married (496/802, 61.8%) and in good health (486/802, 60.6%). A detailed comparison of our sample profile and the CNNIC sample is presented in .
Characteristics of Social Media Use, Health Literacy, and Preventive Behaviors
presents the basic characteristics of social media users in terms of social media use, disease knowledge, eHealth literacy, and preventive behaviors. Respondents did not spend a great deal of time on social media each day to learn about the COVID-19 pandemic; the average social media use time was approximately 2 to 3 hours per day (mean 2.34, SD 1.12). By contrast, the respondents used social media relatively often (mean score 13.59/20, SD 2.42) compared with the scale midpoint of 12. Regarding the types of social media channels, respondents used public social media and aggregated social media more than official social media and professional social media. Respondents had a high level of disease knowledge (mean score 8.15/10, SD 1.43) and eHealth literacy (mean score 3.79/5, SD 0.59). Moreover, respondents adopted many preventive behaviors (mean score 4.30/5, SD 0.44) for health management during the COVID-19 pandemic.
Predictors and Moderators of Preventive Behaviors
Before the two hierarchical multiple regressions were conducted, Pearson correlations were employed to assess the correlations between the independent and dependent variables. As displayed in the correlation table in , significant correlations exist between demographics, social media use, disease knowledge, eHealth literacy, and preventive behaviors; however, social media use time (β=.07, P>.05) did not predict preventive behaviors. Thus, H1 was not supported. To examine the predictors and moderators of preventive behaviors, the first hierarchical multiple regression was carried out; the full results are shown in (the change in R² upon adding the interactions in the last step, Model 5, was nonsignificant; therefore, we selected Model 4 as our final model). Social media use frequency (β=.20, P<.001), disease knowledge (β=.11, P=.001), and eHealth literacy (β=.27, P<.001) each significantly and positively predicted preventive behaviors when controlling for the sociodemographic variables (gender, age, education, monthly income, marital status, and health status). eHealth literacy (β=.27) also emerged as the strongest predictor. These results supported H2, H3, and H4; they also partly answered RQ1, in that social media use frequency rather than social media use time predicted preventive behaviors during the COVID-19 pandemic. The results showed significant associations of the social media use frequency × disease knowledge and social media use frequency × eHealth literacy interactions with preventive behaviors (β=–.07, P=.03, and β=.07, P=.04, respectively). These results indicate that disease knowledge and eHealth literacy significantly moderate the relationship between social media use frequency and preventive behaviors. Moreover, eHealth literacy positively moderated this relationship, while disease knowledge negatively moderated it. We also checked the moderating effects of social media use time × eHealth literacy (β=.02, P=.51) and social media use time × disease knowledge (β=.05, P=.15); however, both interactions were nonsignificant. Thus, H5 and H6 were partly supported. The slope test is often applied to probe the magnitude of a moderated effect at conditional values of the moderator. Given that the interaction terms were significant, we performed slope tests and plotted the predicted preventive behaviors separately for high and low eHealth literacy or disease knowledge (1 SD above and 1 SD below the mean, respectively; see and ). The simple slope analyses indicated that for social media users with lower levels of eHealth literacy (mean –1 SD), more frequent social media use was associated with higher levels of preventive behaviors (β simple=.02, P<.001). For people with higher levels of eHealth literacy (mean +1 SD), the positive association between social media use frequency and preventive behaviors was also significant (β simple=.044, P<.001), and its magnitude was greater than that for lower levels of eHealth literacy. For disease knowledge, simple slope analyses indicated that for social media users with lower levels of disease knowledge (mean –1 SD), more frequent social media use was associated with higher levels of preventive behaviors (β simple=.060, P<.001).
For people with higher levels of disease knowledge (mean +1 SD), the positive association between social media use frequency and preventive behaviors was also significant, but its magnitude was smaller (β=.035, P<.001). Concerning demographics, gender, age, monthly income, and health status all significantly predicted preventive behaviors. Age, monthly income, and health status positively predicted preventive behaviors, whereas gender negatively predicted them. In detail, participants with a monthly income of more than ¥5000 engaged in more preventive behaviors than the reference group with a monthly income of less than ¥1500. Compared with participants who reported their health status as “good,” those who reported poor health took fewer preventive measures. This suggests that social media users who were older, had higher monthly incomes, and had better health status were more likely to take preventive measures during the COVID-19 pandemic. Women generally engaged in more preventive behaviors than men. However, marital status and education had no significant effects on preventive behaviors.
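As a brief illustration of the slope test: in a model of the form prevention = b0 + b1·frequency + b2·moderator + b3·frequency×moderator, the simple slope of frequency at a moderator value m is b1 + b3·m, evaluated at ±1 SD of the moderator. The coefficients in the sketch below are illustrative values chosen so the two slopes land near the values reported above; they are not the paper’s estimates.

```python
import numpy as np

# Simple-slope sketch for a moderated regression:
#   prevention = b0 + b1*freq + b2*mod + b3*freq*mod + ...
# The slope of frequency at moderator value m is b1 + b3*m.
b1, b3 = 0.032, 0.010        # illustrative freq effect and interaction term
mod_mean, mod_sd = 0.0, 1.2  # moderator mean-centered at 0

for label, m in [("-1 SD", mod_mean - mod_sd), ("+1 SD", mod_mean + mod_sd)]:
    # Prints slopes of 0.020 and 0.044 with these illustrative values.
    print(label, "simple slope =", round(b1 + b3 * m, 3))
```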
Types of Social Media Use and Preventive Behaviors
RQ2 focused on comparisons among the four media genres, namely official social media, professional social media, public social media, and aggregated social media. As shown in , the multiple regression results indicated that professional social media (β=.11, P=.002), public social media (β=.14, P<.001), and aggregated social media (β=.22, P<.001) positively predicted preventive behaviors, while official social media (β=.02, P=.60) did not. Furthermore, aggregated social media was the strongest predictor of preventive behaviors, closely followed by public social media and professional social media. However, use of official social media in China did not predict netizens’ preventive behaviors. Additionally, eHealth literacy positively moderated the relationship between social media use and preventive behaviors.
This study had three goals: first, to explore the predictors of preventive behaviors during the COVID-19 pandemic; second, to examine the roles of disease knowledge and eHealth literacy in moderating public preventive behaviors; and third, to explain the relationship between demographics and people’s preventive behaviors. The findings revealed that social media use frequency, disease knowledge, and eHealth literacy all positively predicted an individual’s preventive behaviors during the COVID-19 pandemic. Aggregated social media, public social media, and professional social media were the significant predictors of preventive behaviors among the four social media channels. Moreover, eHealth literacy positively moderated the relationship between social media use frequency and preventive behaviors, while disease knowledge negatively moderated this relationship. Concerning demographics, respondents who were female, were older, had higher monthly incomes, and reported good health were more likely to take preventive measures during the COVID-19 pandemic in China.
Social Media Use and Preventive Behaviors
For a long time, mass media (eg, television, radio, and newspapers) was recognized as an important strategy for health-promoting practice . For example, a mass media campaign increased physical activity, produced positive changes, and prevented negative changes in health-related behaviors . Government and executive agencies have generally used mass media and social media as convenient tools for supervising and preventing epidemics. According to the main results of this study, social media use (frequency) played a positive role in public preventive behaviors during the COVID-19 pandemic in China. This may be an important indicator for health promotion, encouraging the public to take more health measures during emergencies. Compared to mass media, social media provides the public with convenient channels to obtain news or disease knowledge and delivers information effectively. Thus, social media should be an effective strategy for public health promotion, especially during an epidemic or a pandemic. In contrast with social media use time (which was nonsignificant), social media use frequency was a significant predictor of preventive behaviors. In other words, “how often” rather than “how long” social media was used was a good predictor of an individual’s preventive behaviors; this was an unexpected but interesting finding of this work. Time and frequency are often used to measure the regularity of social media use . We attempted to draw an explanation from previous studies that investigated the relationship between social media use frequency and behavioral outcomes; we found that “frequency” may be a direct indicator of the motivations of social media use, such as self-expression, social learning, social comparison, or filtering . Therefore, we cautiously conclude that frequency of social media use indicates the degree of engagement or investment in social media; “frequency” may thus be a more significant predictor of social media effects.
Types of Social Media Use and Preventive Behaviors
The positive correlation between social media use and preventive behaviors extended the study of the relationships between different types of social media use and preventive behaviors. Aggregated social media use was found to be the strongest predictor of preventive behaviors among the four types of social media channels, followed by public social media and professional social media use.
In contrast, official social media use was not significant. These results indicate that newer media channels (aggregated, public, and professional social media) deserve more attention for their influence on public preventive measures than traditional media (official social media), particularly in Chinese contexts. Aggregated social media, a novel type of news aggregator, ensures that readers can access high-quality news stories from many outlets; this simplifies the search process and allows users to save time and effort in finding news . News aggregators such as Tencent News, Sina News, and Toutiao have emerged as important components of digital content ecosystems in China, alongside overseas services such as Google News, Reddit, and Bing News. These aggregated social media sites have drastically changed the ways in which users access information and interact with each other. They can also generate a substitution effect when users switch from news outlets (official media) to news aggregators . Consequently, aggregated social media is competing with official social media for users’ attention and has intensified the propaganda crisis of official social media. This may partially explain our finding that aggregated social media was the strongest predictor of preventive behaviors among the four social media types, while official social media was not significant. Furthermore, official media outlets, such as CCTV, People's Daily, and Xinhua Net, are state-driven media platforms in the Chinese context. The content of official social media platforms mainly focuses on party ideology or party image , and the content spectrum for broader social imperatives is limited . Therefore, the readability and human interest of public health content on official social media are lower than on aggregated social media, which may be another reason for the nonsignificant effect of official social media on public preventive behaviors. Additionally, we found that public social media (eg, WeChat, Weibo, and TikTok) played a vital role in affecting users’ adoption of preventive behaviors. Because public social media is the most popular media type in China, it accelerates news diffusion among people and across regions and enables users to learn from each other . Public social media also disseminates information largely via interpersonal communication, which strengthens its perceived credibility . Thus, public social media can act as a significant predictor of preventive behaviors. Finally, as an emerging web-based platform, professional medical social media sites such as Ding Xiang Doctor provide professional health knowledge backed by extensive medical resources and are a promising information channel for future public health emergencies. All these results suggest that information communication during a pandemic should be built on perceived credibility or trust. Aggregated social media usually provides various sources; users can compare different sources for a news theme and select the most trustworthy news. In contrast, media with a single source delivers only one voice and has lower perceived credibility; such media will be abandoned in a competitive context. Additionally, public social media platforms are the most popular channels of interpersonal communication in China and are usually used among acquaintances with higher levels of trust.
This shows that the credibility of the information source is important for news dissemination during a pandemic. Governments should deliver more credible news and dispel rumors, which may be helpful in controlling the pandemic.
eHealth Literacy and Disease Knowledge as Predictors and Moderators of Preventive Behaviors
Health literacy is being increasingly emphasized in public health-related studies. The relationship between health literacy and health behaviors or health status has also been widely recognized and is understood on the basis of empirical evidence. For example, it was found that poor health literacy created barriers to fully understanding individual health, illness, and treatment for people with HIV/AIDS . Unimproved public mental health literacy predicted denial of self-help , and limited health literacy was correlated with worse health outcomes in terms of a patient’s motivation, problem-solving ability, self-efficacy, and disease knowledge, among other factors . However, prior studies mainly focused on chronic disease or unhealthy lifestyles; less attention has been paid to public health emergencies such as pandemics. In this study, we investigated whether and how health literacy influenced public preventive behaviors during the COVID-19 pandemic in China. Disease knowledge and eHealth literacy were selected as the core indicators of health literacy, as concluded from previous studies . In line with most previous findings, we verified that both disease knowledge and eHealth literacy significantly predicted Chinese respondents’ preventive behaviors during the COVID-19 pandemic. Additionally, eHealth literacy carried more weight in predicting preventive behaviors than disease knowledge. Moreover, eHealth literacy positively moderated the relationship between social media use and preventive behaviors, while disease knowledge had a significant but negative moderating effect. These findings highlight the importance of health literacy for pandemic prevention. Improving the public’s level of health literacy is essential for health promotion, not only during a pandemic but in all contexts of public health in the future. However, it should be mentioned that health literacy is not always positively correlated with preventive behaviors. Health literacy has shown inverse effects on individuals’ healthy behaviors; for example, misinformation about vaccination may lead to refusal of the influenza vaccine , and a higher level of health literacy is not always associated with health-promoting behaviors . This evidence underscores a compelling need to increase public awareness of health literacy across different disease conditions.
Demographics and Preventive Behaviors
Many studies have indicated that sociodemographic indicators are vital in predicting health promotion behaviors, and our study showed similar outcomes. We found that women engaged in more preventive behaviors than men during the COVID-19 pandemic in China. This finding may be explained by a study indicating that women are more sensitive to and interested in health information on social media than men . Moreover, women usually have higher levels of disease knowledge and health literacy than men , and they search more frequently for health information on the internet related to changes in diet . Furthermore, age, monthly income, and health status were positive predictors of preventive behaviors.
These results indicate that people who are older and have higher incomes or good health status are more likely to take measures to prevent COVID-19, which is consistent with previous findings . Additionally, education and marital status were significant predictors in the existing literature; for example, in one study , the odds of having accurate knowledge of malaria increased as individuals’ educational levels increased, and unmarried people were found to be more likely than married people to have positive attitudes toward rabies prevention . However, these variables were not significant in this study, perhaps due to the different social contexts.
Limitations
The results of our study should be considered in light of several limitations, and the following improvements can be implemented in future studies. First, the sample consisted of netizens between 20 and 60 years of age; younger people (<20 years) and older people (>60 years) had very low response rates in the survey database, so we selected 20 to 60 years of age as the target age range. People younger than 20 years or older than 60 years could be included in future studies. Furthermore, the sample overrepresented high-income and highly educated netizens because our sampling was proportioned according to gender and age without consideration of income and education. Future studies should include netizens with lower incomes and less education to improve the generalizability of our findings. Second, a single measurement of disease knowledge was used in this study, which may have led to a ceiling effect among the respondents and impaired the validity of our test; a more suitable, reasonable, and valid instrument of disease knowledge should be constructed in future studies. Finally, this article mainly focused on the frequency and types of social media use, while other aspects of media use, such as motivations and content, were not included. With the rapid development of various social media platforms, such as WeChat, Weibo, Facebook, Twitter, and WhatsApp, social media will continue to play a vital role in public health promotion, as we found in this study. Future research should explore how social media access affects health behaviors, including the information sources and content accessed, as well as the experiences, needs, and motivations underlying social media use.
Conclusions
Using a national web-based cross-sectional survey of a representative sample of Chinese netizens, we fully investigated our hypotheses and answered the proposed questions. We present our conclusions as follows: social media use frequency, disease knowledge, and eHealth literacy were significant predictors of preventive behaviors, and eHealth literacy and disease knowledge moderated the relationship between social media use and preventive behaviors. Aggregated social media use and public social media use were significant predictors of preventive behaviors, while official social media use was not. These results not only enrich the theoretical paradigm of public health management and health communication but also have practical implications for pandemic control, both for China and for other countries.
On the one hand, the confirmed predictive ability of social media use suggests that social media is helpful for disseminating pandemic news and disease knowledge, which can help the public collectively adopt necessary preventive measures for disease control. On the other hand, the predictive ability of disease knowledge and eHealth literacy endorses the view that improving health literacy is essential during a pandemic and over the long term. Additionally, sociodemographic factors such as gender, age, monthly income, and health status should be taken into account in public health interventions; more attention should perhaps be paid to people who are male, younger, have lower incomes, or have poor health status during a pandemic.
For a long time, mass media (eg, television, radio, and newspapers) has been recognized as an important channel for health promotion. For example, a mass media campaign increased physical activity, produced positive changes, and prevented negative changes in health-related behaviors. Government and executive agencies have generally used mass media and social media as convenient tools for monitoring and preventing epidemics. According to the main results of this study, social media use (frequency) played a positive role in public preventive behaviors during the COVID-19 pandemic in China. This is an encouraging indicator for health promotion, suggesting that social media can encourage the public to take more health measures during emergencies. Compared with mass media, social media provides the public with convenient channels for obtaining news and disease knowledge and delivers information effectively. Thus, social media can be an effective vehicle for public health promotion, especially during an epidemic or a pandemic.

In contrast with social media use time (which was nonsignificant), social media use frequency was a significant predictor of preventive behaviors. In other words, "how often" rather than "how long" social media was used predicted an individual's preventive behaviors; this was an unexpected but interesting finding of this work. Time and frequency are often used to measure the regularity of social media use. Drawing on previous studies of the relationship between social media use frequency and behavioral outcomes, we found that "frequency" may be a direct indicator of the motivations for social media use, such as self-expression, social learning, social comparison, or filtering. We therefore cautiously conclude that frequency of social media use reflects the degree of engagement or investment in social media, which may make it a more meaningful predictor of social media effects.
The positive correlation between social media use and preventive behaviors extends prior research on the relationships between different types of social media use and preventive behaviors. Aggregated social media use was the most significant predictor of preventive behaviors among the four types of social media channels, followed by public social media and professional social media use. In contrast, official social media use was not significant. These results indicate that, particularly in the Chinese context, new media channels (aggregated, public, and professional social media) deserve more attention than traditional channels (official social media) in shaping public preventive measures.

Aggregated social media, a novel type of news aggregator, ensures that readers can read high-quality news stories from many outlets; this simplifies the search process and allows users to save time and effort in finding news. News aggregators such as Tencent News, Sina News, and Toutiao have emerged as important components of digital content ecosystems in China, alongside overseas platforms such as Google News, Reddit, and Bing News. These aggregated social media sites have drastically changed the ways in which users access information and interact with each other. They can also generate a substitution effect when users switch from news outlets (official media) to news aggregators. Consequently, aggregated social media competes with official social media for users' attention and has intensified the propaganda crisis of official social media. This may partially explain our finding that aggregated social media was the most significant predictor of preventive behaviors among the four social media types, while official social media was not significant. Furthermore, official media outlets, such as CCTV, People's Daily, and Xinhua Net, are state-driven media platforms in the Chinese context. The content of official social media platforms mainly focuses on party ideology or party image, while the content spectrum of broader social imperatives is limited. Therefore, the readability and human interest of public health content on official social media are lower than on aggregated social media, which may be another reason for the nonsignificant effect of official social media on public preventive behaviors.

Additionally, we found that public social media (eg, WeChat, Weibo, and TikTok) played a vital role in users' adoption of preventive behaviors. Because public social media is the most popular media type in China, it accelerates news diffusion among people and across regions and enables users to learn from each other. Public social media also disseminates information largely via interpersonal communication, which strengthens the perceived credibility of this type of social media. Thus, public social media can act as a significant predictor of preventive behaviors. Finally, as an emerging web-based platform, professional medical social media sites such as Ding Xiang Doctor provide professional health knowledge backed by substantial medical resources and are a promising information channel for future public health emergencies. All these results suggest that information communication during a pandemic should be built on perceived credibility or trust. Aggregated social media usually provides various sources; users can compare different sources for a news theme and select the most trustworthy item.
In contrast, media with a single source delivers only one voice and has lower perceived credibility; such media risk being abandoned in a competitive context. Additionally, public social media platforms are the most popular channels of interpersonal communication in China, and they are usually used among acquaintances with higher levels of trust. This shows that the credibility of the information source is important for news dissemination during a pandemic. Governments should deliver more credible news and dispel rumors, which may be helpful in controlling the pandemic.
Health literacy is being increasingly emphasized in public health-related studies, and the relationship between health literacy and health behaviors or health status is well recognized and supported by empirical evidence. For example, poor health literacy created barriers to fully understanding individual health, illness, and treatment for people with HIV/AIDS; poor public mental health literacy predicted the rejection of self-help; and limited health literacy was correlated with worse health outcomes in terms of a patient's motivation, problem-solving ability, self-efficacy, and disease knowledge, among other factors. However, prior studies mainly focused on chronic disease or unhealthy lifestyles, and less attention has been paid to public health emergencies such as pandemics. In this study, we investigated whether and how health literacy influenced public preventive behaviors during the COVID-19 pandemic in China. Disease knowledge and eHealth literacy were selected as the core indicators of health literacy, following previous studies. In line with most previous findings, we verified that both disease knowledge and eHealth literacy significantly predicted Chinese respondents' preventive behaviors during the COVID-19 pandemic, with eHealth literacy carrying more weight than disease knowledge. Moreover, eHealth literacy positively moderated the relationship between social media use and preventive behaviors, while disease knowledge had a significant but negative moderating effect. These findings highlight the importance of health literacy for pandemic prevention: improving the public's level of health literacy is essential for health promotion, not only during a pandemic but in all contexts of public health in the future. It should be noted, however, that health literacy is not always positively correlated with preventive behaviors. Health literacy has shown inverse effects on individuals' healthy behaviors; for example, misinformation about vaccination may lead to refusal of the influenza vaccine, and a higher level of health literacy is not always associated with health-promoting behaviors. This evidence underscores a compelling need to increase public awareness of health literacy across different disease conditions.
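To make the moderation analyses above concrete, the sketch below shows how such an interaction effect is typically specified in a linear model. This is an illustration only: the simulated data and the variable names (smu_freq, ehealth_lit, knowledge, preventive) are invented for this sketch and do not reproduce the study's instruments or results.

```r
# Illustrative moderation (interaction) model on simulated data.
set.seed(1)
n <- 500
d <- data.frame(
  smu_freq    = sample(1:5, n, replace = TRUE),  # social media use frequency
  ehealth_lit = rnorm(n, 3.5, 0.8),              # eHealth literacy score
  knowledge   = rnorm(n, 4.0, 0.6)               # disease knowledge score
)
d$preventive <- 1 + 0.30 * d$smu_freq + 0.40 * d$ehealth_lit +
  0.15 * d$smu_freq * d$ehealth_lit -            # positive moderation
  0.10 * d$smu_freq * d$knowledge +              # negative moderation
  rnorm(n, sd = 0.5)

# The product terms carry the moderation: a positive smu_freq:ehealth_lit
# coefficient means the social media effect grows with eHealth literacy,
# while a negative smu_freq:knowledge coefficient means it shrinks as
# disease knowledge rises.
fit <- lm(preventive ~ smu_freq * ehealth_lit + smu_freq * knowledge, data = d)
summary(fit)
```

In the sketch, the sign of each interaction coefficient corresponds to the positive moderation reported for eHealth literacy and the negative moderating effect of disease knowledge.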
Many studies have indicated that sociodemographic indicators are vital in predicting health promotion behaviors, and our study showed similar outcomes. We found that women engaged in more preventive behaviors than men during the COVID-19 pandemic in China. This finding may be explained by evidence that women are more sensitive to and interested in health information on social media than men. Moreover, women usually have higher levels of disease knowledge and health literacy than men, and they search the internet more frequently for health information related to changes in diet. Furthermore, age, monthly income, and health status were positive predictors of preventive behaviors. These results indicate that people who are older and have higher income or good health status are more likely to take measures to prevent COVID-19, which is consistent with previous findings. Additionally, education and marital status were significant predictors in the existing literature; for example, in one study, the odds of having accurate knowledge of malaria increased with individuals' educational levels, and unmarried people were found to be more likely than married people to have positive attitudes toward rabies prevention. However, these variables were not significant in this study, perhaps due to the different social contexts.
Limitations

The results of our study should be considered in light of several limitations, which suggest improvements for future studies. Firstly, the sample consisted of netizens between 20 and 60 years of age; younger people (age <20 years) and older people (age >60 years) had very low response rates in the survey database, so we selected 20 to 60 years of age as the target age range of our sample. People younger than 20 years or older than 60 years could be included in future studies. Furthermore, the sample overrepresented high-income and highly educated netizens because our sampling was proportioned according to gender and age without consideration of income and education. Future studies should include netizens with lower incomes and less education to improve the generalizability of our findings. Secondly, a single measurement of disease knowledge was used in this study, which may have led to a ceiling effect among respondents and impaired the validity of our test; a more suitable, reasonable, and valid instrument of disease knowledge should be constructed in future studies. Finally, this article mainly focused on the frequency and types of social media use, while other aspects of media use, such as motivations and content, were not included. With the rapid development of social media platforms such as WeChat, Weibo, Facebook, Twitter, and WhatsApp, social media will continue to play a vital role in public health promotion, as we found in this study. Future research should explore how social media access affects health behaviors, including the information sources and content accessed, as well as the experience, needs, and motivations underlying social media use.
Conclusions

Using a national web-based cross-sectional survey of a representative sample of Chinese netizens, we investigated our hypotheses and answered the proposed research questions. Our conclusions are as follows: social media use frequency, disease knowledge, and eHealth literacy were significant predictors of preventive behaviors, and eHealth literacy and disease knowledge moderated the relationship between social media use and preventive behaviors. Aggregated social media use and public social media use were significant predictors of preventive behaviors, while official social media use was not. These results not only enrich the theoretical paradigm of public health management and health communication but also have practical implications for pandemic control, both for China and for other countries. On one hand, the confirmed predictive ability of social media use suggests that social media can help disseminate pandemic news and disease knowledge, supporting the public in collectively adopting necessary preventive measures for disease control. On the other hand, the predictive ability of disease knowledge and eHealth literacy endorses the view that improving the public's level of health literacy is essential during a pandemic in the long term. Additionally, sociodemographic factors such as gender, age, monthly income, and health status should be taken into account in public health interventions; more attention should perhaps be paid to people who are male, younger, lower-income, or in poor health during a pandemic.
Digital Health Readiness: Making Digital Health Care More Inclusive

The use of digital tools for health care—including video visits, patient portals, mobile apps, and remote monitors—has risen exponentially over the last decade and become more essential for care access during and after the COVID-19 pandemic. Patients using digital health tools have been shown to have better outcomes in managing many outpatient health conditions, including diabetes, anxiety and mood disorders, hypertension, and chronic pain. Still, despite their growing incorporation into health care and potential to improve health outcomes, many who could benefit from these tools are not using them. If health systems can develop approaches to close this gap with innovative and tailored pathways to digital health care, they could improve access, inclusivity, and outcomes.

Prior approaches to increase digital health engagement focused on several domains, including such logistical factors as broadband internet access, access to smartphones, and the ability of individuals to use technology to participate in health care and understand their health (ie, digital health literacy). Initial assessments of digital health literacy in the mid-2000s focused on the ability to use the internet, but they have since expanded to encompass smartphones, mobile apps, and social media. As a construct, digital health literacy has also grown to reflect multiple domains of health technology use, including personal aspects like prior experiences, digital self-efficacy, motivation to use digital health, and access to technology. The evolution of these assessments reflects changes in the technological environment but also demonstrates the multifaceted nature of digital health literacy overall.

Future approaches to facilitating further equitable growth of digital health could consider the ecosystem of factors that drive engagement with these tools. General health literacy is increasingly understood as a relational concept in which patients and health care providers (HCPs) balance their skills and abilities against the demands of health care systems. Digital health readiness for individual patients exists within similar contexts and is impacted by the technological tools themselves (particularly the demands that they place on patients), the HCPs prescribing and monitoring their use, the clinics and digital health navigation services where technological instruction occurs, the health systems and their approach toward digital health implementation, and the insurers that control coverage of these services and tools.

In this paper, we review current digital health literacy measures to assess and predict a person's ability to engage with digital health, discuss their relative strengths and weaknesses, and describe our holistic vision for health care systems to assess digital health readiness efficiently with health record data. Multidomain digital health readiness assessments could create a phenotype for each patient representing how prepared, experienced, and equipped they are to use a particular digital health tool at a certain point in time. Prior studies have established approaches to understand readiness within health systems (ie, how prepared and experienced a system is for digital care implementation), within individual health care facilities, and among health professionals themselves.
Approaches for comprehensively defining and assessing individual-level digital health readiness could become central to health system and payor operations, as signaled by the Centers for Medicare and Medicaid Services (CMS) mandate that Medicare Advantage organizations offer "digital health education" for telehealth to their members. Creating effective and holistic digital health readiness assessments could contribute to increased use of and access to these tools among patients and their families. In this paper, we focus only on assessing individual, patient-level digital health readiness, but we acknowledge that this construct can be applied to any node within the digital health readiness ecosystem, as noted above.
Current methods to assess digital health readiness have several strengths and weaknesses. One strength of these measures is that they assess relevant aspects of digital health participation and are often short enough to be incorporated into clinical practice; however, these measures assess personal attitudes alone without considering technological aptitude. For example, the eHealth Literacy Scale (eHEALS) is the most cited digital health literacy measure and focuses on assessing a person's attitudes, confidence, and subjective skill level in using internet search engines and evaluating online information, yet it does not assess the experience needed for smartphones and wearable monitors or address such structural factors as device access (either through personal ownership or sharing). Newer measures such as the Digital Health Care Literacy Scale do capture skills for using and troubleshooting mobile apps and videoconferencing apps in a brief manner that is primed for clinical settings, but they also do not assess technical aptitude or device access. For digital health readiness assessments to be useful in the clinical operations of health systems, they should have an aptitude assessment to stratify individuals into levels with matched support interventions. Additionally, research will be needed on what demonstrated skills are most important for a particular care modality (like a video visit versus wearing a remote monitor).

More thorough digital health readiness assessments cover many relevant aspects of the digital health care experience; however, they may be logistically challenging to administer in clinical settings. For instance, the recent Digital Health Readiness Questionnaire (from 2023) gathers a more detailed assessment of a person's experiences with digital health by asking about their skills, digital literacy, digital health literacy, device use, and learnability, but its 20 items might be cumbersome to administer in a busy primary care setting, do not assess actual aptitude, and do not include questions about device or internet access. Even more robust assessments, including the eHealth Assessment Toolkit and eHealth Literacy Questionnaire, are validated and available, though their comprehensiveness also likely makes them unwieldy for application in clinical settings. For example, the eHealth Assessment Toolkit has 44 questions encompassing 7 different tools for digital health care.

One strength of contemporary digital health readiness measures is that they are grounded in updated theoretical constructs of digital health equity that aim to improve engagement with populations facing health disparities and reflect our current technological environment. The framework for digital health equity augmented the National Institute on Minority Health and Health Disparities research framework by adding individual, interpersonal, community, and societal aspects of the digital environment and patient experience. Previously elaborated digital health readiness research strategies like those from Lyles et al and Jaworski et al were built on components such as "access, motivation and trust, and digital health literacy" that are also fundamental for boosting digital health engagement. Despite being published relatively recently, these frameworks are widely cited and are being incorporated into wide-ranging fields, including behavioral health research, addiction medicine, and cardiovascular medicine, among others.
While these updated constructs reflect the current experiences of being a digital health care user, they will also likely need to be updated over time to match the dynamic nature of digital health innovation and remain relevant in the frantic pace of clinical care. Moreover, as seen in the following scenario, approaches to digital health readiness will need to be agile and adaptable to meet the unique needs of each individual.
Hypothetical patient scenario 1. Ms T is a woman aged 63 years with a laptop computer and a smartphone who regularly searches for health information on the internet. Ms T qualifies for a continuous glucose monitor (CGM) to track her blood sugars; however, the device typically downloads data to a smartphone for users to view their trends. She has nerve damage from diabetes that limits her ability to navigate smartphone screens, but she is able to use computer keyboards without issue. Once the CGM is ordered, the diabetes education team asks her to bring whichever devices she most commonly uses to her CGM training session. During her visit, the diabetes nurse educator evaluates her for digital health literacy using the 3-item Digital Health Care Literacy Scale and feels that she is prepared to use the CGM interface. After the educator downloads the CGM app on her smartphone, Ms T is prompted to sign in and create an account. Immediately, the staff notices that she has issues navigating the smartphone interface. Pivoting to make the technology more usable for her, they set up the CGM application on her laptop so that she can view her blood sugar trends more easily.

This hypothetical patient scenario reflects the challenges of applying individual digital health readiness assessments and how clinical teams could be responsive to each person's unique needs. Ms T's case demonstrates the importance of aptitude testing (eg, prompting a user to show an instructor how they might use a phone app) and how a care team might adjust a digital health care modality to best meet the needs of a patient.

Another weakness of current digital health literacy and readiness measures is that they do not integrate passively collected data from the electronic health record to improve efficiency and efficacy. Using available metrics—such as a visualized breakdown of previous in-person care, completed video visits, completed phone visits, and patient portal use—can increase the efficiency of digital health readiness assessments and portray a person's actual care use compared with their stated goals. Examples include the Telemedicine ImPACT Score and EpicCare Video Visit Technical Risk Score, which use data on the number of prior completed video visits and portal messages sent to forecast future digital engagement without the need to administer a questionnaire. These data seamlessly contribute information about an individual's digital determinants of health—that is, the larger social, personal, and structural barriers that impact digital health engagement—and could focus on particular factors that are most predictive of certain tasks (like completing a video visit). Looking at a person's health record data in a digital health readiness profile, in-clinic technology navigators may find that a person has no broadband access or internet experience and recommend in-person care over virtual care until these factors are addressed. Passive health record data could refine in-person and digital care delivery so that patients are accessing resources in a way that matches their personal situations.

The essential elements needed for comprehensive and practical digital health readiness assessments will include aptitude testing, in addition to evaluating attitudes toward technology, customizing skill assessment to address emerging technologies, and incorporating passively collected health system data. Existing digital health literacy screening metrics and digital health prediction tools each have strengths that could create a more comprehensive profile of a person's prior technological experience and could be adapted to the use of new technologies over time.
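The prediction scores cited above are institution-specific, so the sketch below is only a toy illustration of the idea: combining passively collected EHR metrics into a probability of future digital engagement and stratifying it into support pathways. The feature names, weights, and cutoffs are invented for this example and are not a published algorithm.

```r
# Toy engagement score from passively collected EHR metrics; feature names,
# weights and cutoffs are hypothetical.
ehr <- data.frame(
  completed_video_visits = c(0, 3, 1),
  portal_msgs_sent       = c(0, 12, 2),
  portal_account_active  = c(0, 1, 1)
)

readiness_prob <- function(x) {
  z <- -1.5 + 0.8 * x$completed_video_visits +
        0.1 * x$portal_msgs_sent + 1.0 * x$portal_account_active
  1 / (1 + exp(-z))   # logistic link: probability of completing a video visit
}

p <- readiness_prob(ehr)
# Stratify into matched support pathways rather than a yes/no gate.
cut(p, breaks = c(0, 0.3, 0.7, 1),
    labels = c("offer navigation support", "brief tech check", "self-service"))
```

In practice, the weights would be estimated from historical visit data rather than fixed by hand, and the stratified output would route patients to the matched support interventions described above.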
We envision a holistic digital health readiness assessment that will enable health systems to deliver targeted support to those who need it most and close gaps in use. Similar to the Conversational Health Literacy Assessment Tool (CHAT), which is designed to assess multiple dimensions of a person's health literacy in health care settings, digital health readiness assessments could be designed to provide a more comprehensive and pragmatic picture of a patient's digital health strengths and obstacles. In particular, the Health Promotion Barriers and Support, Health Information Access and Comprehension, and Current Health Behaviors domains from the CHAT could be adapted to a digital health context. Digital health readiness assessments could begin with questions about personal goals for health technology use and prior digital health experience, followed by focused aptitude testing for a particular digital health tool or goal, a brief digital health literacy assessment, and visualization of that person's health systems data to probe into their digital determinants of health. Such an individual digital health readiness profile would allow HCPs and care navigators to understand a person's digital phenotype and act to meet their unique needs. These proposed elements reflect our thoughts on ways to address the strengths and weaknesses outlined above and were informed by the framework for digital health equity.

This multi-domain approach would incorporate patient-reported data with passive data from health systems and payors to make responses more relevant and easier to add to busy clinical workflows. The key difference from existing digital health literacy assessments is the incorporation of a focused aptitude assessment (such as having a patient show how they use a mobile app for 1-2 minutes) and the integration of passively collected clinical data. These aspects would make digital health readiness phenotyping more efficient, systematic, and, hopefully, effective for clinical settings. As technology evolves and alters the skills required to participate in modern health care, digital health readiness assessments will need to grow in kind to reflect these skills. Ideally, the collection of inputs will differ for specific tasks. For example, completing a video visit may involve downloading a mobile app, registering an account, checking in online, and signing in to the appointment. In contrast, registering for a patient portal may involve only some of these steps. Domain-specific digital health readiness assessments could make the assessment most relevant to patients and their goals. The following fictional vignette shows how digital health readiness assessments could be tailored to help patients complete a specific task—such as how to log on to and complete a video visit.
Hypothetical patient scenario 2. Mr P is a man aged 75 years who has been hospitalized 5 times in the past year for decompensated heart failure. He has a smartphone that enables him to message his primary care provider and heart failure specialist via his health system's patient portal. As he transitioned between hospitals, skilled nursing facilities, and home, he missed multiple follow-ups. His primary care office proactively contacts him at home and sets up a video visit to reestablish care. When the time for the appointment arrives, his primary care provider begins the visit but Mr P cannot log in. After he spends 10 minutes of the 30-minute appointment trying to use the videoconferencing platform, his doctor switches to a phone visit. At the end of the visit, his doctor receives an automated alert from the electronic health record noting that prior scheduled video visits have been converted to phone visits. Looking deeper into the situation, the doctor notices that recurrent telehealth platform issues have taken time away from health care providers to discuss all aspects of his health issues in prior visits—especially dietary counseling (a key reason for his hospitalizations). After the visit, Mr P is referred for an in-person digital health navigation session where he is instructed on ways to troubleshoot the telehealth platform and demonstrate that he can use the videoconferencing service independently.

The hypothetical patient scenario above reflects how passively collected data could link patients with digital health navigation services to improve digital health care outcomes. Looking at Mr P's case, he is a person who has ostensibly high digital health readiness through demonstrated skills, access to a network, and use of a health system app; however, he has also consistently had issues logging in for video visits, which adversely impacted his digital health care use and increased his risk of hospital readmissions. In this scenario, an automated alert based on previous patterns of digital health care use from electronic health record data triggered help with navigating video visits from a digital health navigator, which many health systems offer. That alert could have triggered office staff to arrange an in-person appointment or home visit to assess his ability to use telehealth and provide help if he could not. Having systematic processes in place to assess who is most appropriate for in-person versus remote or asynchronous care could guide efficient service delivery and use of resources. With their abundance of claims data and the opportunity to trial different variations of digital support pathways, integrated delivery and finance systems represent a unique setting where digital health readiness measures could be deployed, tested, and refined.

Digital health readiness assessments could be a key step toward making digital health implementation more systematic for all people, leading to greater equity and effectiveness. In many clinics, the process of selecting in-person care versus telemedicine could be tied to the nature of the medical issue, the judgment of scheduling and treating team members, and personal preferences (ie, a subset of patients who always want in-person care). Adding more specificity to digital health implementation through the creation of care delivery phenotypes—that is, providing navigation support for patients who are motivated to use digital health but are inexperienced—would optimize this care. It is likely that many opportunities for digital engagement and adoption of new tools are missed simply because health systems do not have robust ways to screen for who is best equipped and motivated for digital health but has not used it. Rather than limiting digital health to those patients who are already confident with technology, streamlined and methodical digital onboarding guided by a digital health readiness assessment could expand the reach of these tools to more patients. In turn, this could provide greater efficiency and, in some cases, reduced costs for patients in scenarios where similar treatment outcomes have been achieved with video versus in-person visits. Differentiating those who can complete a telemedicine appointment on their own from those who might need additional support would further expand digital health as a standard of care and improve the service experience for all patients.

To fully assess digital health readiness, we should also consider how a person's situation may change over time as well as how personal and community resources could help them succeed. With an aging and increasingly medically complex population, digital health readiness phenotypes will likely be dynamic and may need to be repeated in certain circumstances, such as a major health event, functional decline, cognitive impairment, financial insecurity, or loss of family support. In the event that a person can no longer use a particular tool, a support person may be best suited to provide digital health support in a convenient environment like a health center–affiliated or community-embedded internet clinic. Furthermore, studies have shown that patients with limited technology experience are often able to complete a telehealth visit with the help of a family member, friend, or caregiver—thereby providing an opportunity to engage those with lower digital health readiness from the onset. Partnering with patients, families, and communities could help to personalize digital care delivery pathways even further and improve engagement.
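A minimal sketch of the kind of automated alert in Mr P's scenario, assuming hypothetical field names and a hypothetical threshold rather than any specific EHR vendor's schema, could flag patients whose scheduled video visits are repeatedly completed as phone visits:

```r
# Flag patients whose scheduled video visits keep converting to phone visits;
# the field names and threshold are hypothetical.
visits <- data.frame(
  patient_id = c("P1", "P1", "P1", "P2"),
  scheduled  = c("video", "video", "video", "video"),
  completed  = c("phone", "phone", "video", "video")
)

flag_for_navigation <- function(visits, threshold = 2) {
  converted <- visits$scheduled == "video" & visits$completed == "phone"
  n_conv <- tapply(converted, visits$patient_id, sum)
  names(n_conv)[n_conv >= threshold]  # refer these patients to a navigator
}

flag_for_navigation(visits)  # returns "P1"
```

A rule like this could route flagged patients to a digital health navigator or prompt staff to arrange an in-person or home visit, as described above.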
While digital health readiness assessments apply to individual patients, health systems will also need to build infrastructure to respond to the results of these assessments in a meaningful way to realize their full potential. There are established standards to promote organizational health literacy within health systems that could be applied to digital health implementation—including fostering a culture among employees that promotes communication and engagement with patients and families using technology. Moreover, HCPs and team members also have varying levels of digital health readiness that affect the implementation of digital health readiness assessments. Similar to medication prescribing, HCPs often serve as gatekeepers for recommending and promoting digital health tools. HCPs' awareness and perceptions of the benefits of digital tools have been identified as determinants of mobile app uptake for chronic disease management. While one might assume that HCPs would have more than adequate digital health readiness and literacy, some studies of hospitals in resource-limited settings worldwide (including one from Ethiopia during the COVID-19 pandemic) have found that less than half of HCPs had high digital health literacy. Health care systems must consider the levels of technological awareness, comfort, and competence among their HCPs when considering more equitable digital health implementation.

There are also potential risks and ethical concerns involved in digital health implementation. With studies showing that digital health engagement is lower among older people, those who require an interpreter, and those who live in more deprived areas, efforts to shift more and more health care to digital platforms could exacerbate gaps in care. Furthermore, while the aforementioned evaluation frameworks for digital health tools do consider inclusivity and equity for diverse populations, studies have suggested that only 58% of mobile app evaluation frameworks do so, meaning that vital perspectives on technological tools may still be left behind. Tying back to digital health literacy and health literacy, patients could experience delays in care if they were to choose telehealth or a patient portal message for a condition that warrants in-person evaluation. Personal health data collection and security are also important considerations for making sure that participating in digital health care is safe for all users.

A challenge of aptitude- and analytics-based digital health readiness assessment approaches is that they could amplify societal inequities if not designed carefully and evaluated among minoritized populations. Assessments based solely on aptitude may be biased against other-abled individuals with visual or hearing impairments or people whose primary language is not English. Moreover, given the complex array of factors that impact digital health engagement, digital health readiness assessments cannot be perfectly comprehensive. Digital health literacy is a single digital determinant of health that incorporates a person's underlying literacy, numeracy, and general health literacy—each of which could not be measured or acted upon in a single clinic visit. Using passively collected data carries the risk of perpetuating systemic biases through algorithmic determinism (eg, the perpetuation of systemic bias through algorithms trained on biased data) and underrepresentation of marginalized groups in data overall, which could further contribute to the digital health divide.
It will be important to test and validate digital health readiness assessments among diverse patients. If the evidence for these assessments has not yet been established among certain groups, this should be noted in the electronic health record and factored into how they are deployed and understood.
Assessing and supporting individual patient-level digital health readiness is a crucial step toward maximizing benefits from digital health care and could provide a path toward greater digital health equity. More systematic approaches to support patients with low digital health readiness could ensure that assessments are actionable for clinicians, payors, and health systems. If we can work to increase the reach of health technology to keep up with the evolution of the consumer electronics market, more patients could be empowered to enter the digital health care age and benefit from these new tools.
Scleral buckling in retinal detachment due to retinal dialysis – A vitreoretina fellow's perspective

Methods

We conducted a retrospective consecutive case review at a tertiary eye care center in North India. The study was conducted in accordance with the tenets of the Declaration of Helsinki and institutional research guidelines, and written informed consent was obtained from all the participants. Records of all the patients who had undergone SB by VR fellows for retinal dialysis associated with RRD between January 2017 and January 2020 were reviewed. All the patients had undergone comprehensive clinical examination. The case files, along with colored retinal detachment (RD) charts (modified Amsler-Dubois RD chart), were reviewed to document pertinent details including demographic data, duration of RRD/symptoms, mode of trauma if any, best corrected visual acuity (BCVA; Snellen), intraocular pressure (IOP), anterior segment details, state of the vitreous, location and extent of retinal dialysis, extent of RRD, macular status, proliferative vitreoretinopathy (PVR; modified Retina Society classification), intraoperative steps, follow-up details, need for resurgery, and fellow eye status. Visual acuity was converted to the logarithm of the minimum angle of resolution (logMAR) for statistical analysis. The quadrant-wise location of dialysis was noted (superonasal/superotemporal/inferonasal/inferotemporal); the position was recorded as superior, inferior, nasal, or temporal whenever the dialysis extended over more than one of the aforementioned quadrants. Eyes having additional retinal breaks other than dialysis were excluded from the study, as were eyes undergoing combined SB with vitrectomy for RRD.

All the surgeries were conducted by VR fellows with less than 2 years' experience (after completion of basic ophthalmology training) under the supervision of a VR consultant, under general or peribulbar anesthesia. After initial peritomy, the recti were hooked and bridled, and the margins of the retinal dialysis were marked using indirect ophthalmoscopy. After application of cryopexy to the dialysis, a silicone asymmetric tire (#276) was used to cover the dialysis with an additional 1.5 clock hours on each side. An encircling silicone band (#240) was also used in all the patients. Both were secured in position with 5-0 Ethibond sutures. The decision to drain the subretinal fluid (SRF) intraoperatively was based on surgeon's preference, but drainage was adopted only in high myopes and old RRD. Standard postoperative treatment of topical steroid, antibiotic, and cycloplegic was used in all patients. The patients were followed up on postoperative day 1, day 8, day 30, and at 3 months. Resurgery in the form of VR surgery was done in eyes where SB failed to reattach the retina.

The data were recorded on an Excel spreadsheet and analyzed with Stata 12.1 software. Parametric variables were represented as mean and standard deviation (SD), while nonparametric variables were represented as median and range.
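As a brief aside on the acuity conversion used above, logMAR is the negative base-10 logarithm of the decimal Snellen fraction. A minimal sketch in R, with illustrative example values:

```r
# Snellen-to-logMAR conversion: logMAR = -log10(decimal acuity).
snellen_to_logmar <- function(numerator, denominator) {
  -log10(numerator / denominator)
}

snellen_to_logmar(6, 60)   # 6/60 -> logMAR 1.0
snellen_to_logmar(6, 12)   # 6/12 -> logMAR ~0.3
snellen_to_logmar(6, 6)    # 6/6  -> logMAR 0.0
```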
Results

Fifty-three eyes of 53 patients met the inclusion criteria and were included in the final analysis. Baseline characteristics are summarized in . Mean age was 20.77 years (range 6–55 years). Thirty-nine patients (73.58%) were male and 14 were female. Thirty-eight patients (71.69%) had a history of significant trauma. Mean duration of symptoms was 6.89 months. Twelve (31.5%) of the 38 patients with trauma were diagnosed more than a year after the trauma. Of the 53 eyes, 41 had RD involving the macula and 12 had RD sparing the macula. Of all 53 eyes, 28 (52.83%) had no PVR, three (5.6%) had PVR-A, 12 (22.64%) had PVR-B, and 10 (18.86%) had PVR-C1. Fifty-two eyes had a single dialysis and one eye had two separate dialyses. The quadrant-wise distribution of retinal dialysis in all the eyes and in eyes with trauma is given in . Six eyes had giant retinal dialysis (GRD, dialysis greater than 3 clock hours), all due to trauma. The location of the GRD was superior in four eyes and inferior in two eyes; five eyes with GRD had PVR-B while one eye had PVR-C1. Five patients (three males and two females) had retinal dialysis in the fellow eye as well and were diagnosed with bilateral idiopathic retinal dialysis (BIRD); prophylactic laser barrage of the fellow eye was done in these patients.

A total of 21 fellows operated on these 53 eyes. The median number of surgeries performed per fellow was 2 (range 2–4) under the supervision of a VR consultant. In 25 eyes, external drainage of SRF was done during SB, while in 28 eyes SRF was not drained. After a single surgery, SB achieved retinal attachment in 45 (84.9%) of 53 eyes. Four (14.28%) of the 28 eyes that had undergone SB without drainage and four (16%) of the 25 eyes that had undergone SB with drainage failed to achieve retinal attachment after a single surgery. Successful retinal reattachment was achieved in 24/28 eyes (85.71%) with no PVR, 2/3 eyes (66.66%) with PVR-A, 11/12 eyes (91.66%) with PVR-B, and 8/10 eyes (80%) with PVR-C1. Retinal reattachment was achieved in all eight failed cases with vitreoretinal surgery and oil tamponade. Mean preoperative BCVA was 1.9 ± 1.05 logMAR, which improved postoperatively to 1.07 ± 0.72 logMAR (P < 0.001). The mean follow-up was 7.5 months (range: 2 months–3 years).

Eight eyes had cataract, of which six had posterior subcapsular cataract (PSC), one had anterior subcapsular cataract (ASC), and one had both PSC and ASC; all eyes with cataract had a history of trauma. Two eyes, one operated after a penetrating knife injury and one operated for PSC associated with uveitis, were pseudophakic. One eye, which had undergone lens aspiration with posterior capsulorrhexis and anterior vitrectomy for cataract associated with persistent fetal vasculature, was aphakic at the time of SB. Among the eyes with a history of trauma (n = 38), 11 had greater than 180° of angle recession, four had sphincter tears, two had phacodonesis, one had zonular dialysis without lens subluxation, one had lens subluxation, and three had vitreous hemorrhage (VH) with media haze grade 3 (some retinal vessels visible but not second-order retinal vessels). The three eyes with VH were initially followed up at 2-weekly intervals in a propped-up position. All three eyes had resolution of the VH at the 6-week follow-up, permitting visualization of the retinal periphery, and all three underwent SB at 7 weeks after trauma; each had a superonasal dialysis with PVR-A. Posttraumatic macular hole and choroidal rupture were identified preoperatively in one eye each.
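As an illustrative sketch only, and not the authors' reported analysis, the proportions above can be accompanied by exact binomial confidence intervals and a simple comparison of the drainage groups in base R:

```r
# Exact 95% CI for the overall single-surgery reattachment rate (45/53).
binom.test(45, 53)

# Drainage vs no drainage: 21/25 vs 24/28 single-surgery successes.
tab <- matrix(c(21, 4, 24, 4), nrow = 2,
              dimnames = list(c("attached", "failed"),
                              c("drained", "not drained")))
fisher.test(tab)   # consistent with no significant difference between groups
```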
Six (54.54%) of the 11 eyes with angle recession of more than 180° had high IOP before surgery. Mean IOP was 18.2 mmHg preoperatively and 15.4 mmHg at the last follow-up. Nine eyes had preoperative IOP greater than 20 mmHg; of these, six (66.66%) had greater than 180° angle recession, and the remaining three also had a history of trauma. One to three medications were required to control IOP in these eyes in the postoperative period, and none of the eyes required surgery to control IOP. Intraoperatively, minimal subretinal hemorrhage at the drainage site was noted in three eyes that had undergone intraoperative SRF drainage. Complications such as central retinal artery occlusion, suprachoroidal hemorrhage, iatrogenic retinal breaks, and buckle-related infection were not encountered in any of the eyes.

Discussion

The success rate of SB in retinal detachment associated with retinal dialysis may be as high as 95.8%, but there is a relative paucity of data on its success in the hands of VR fellows. Scleral buckling is one of the first surgeries that a VR fellow performs as part of most fellowship courses offered throughout the country. The success rate of SB in the hands of VR fellows would depend on the number of surgeries performed as well as intraindividual learning effects. Nevertheless, in our study retinal reattachment was achieved in 84.9% of cases with a single surgery (SB), which is consistent with the success rates reported in the literature. A good success rate with low complication risk even in the hands of fellows indicates that SB is an inherently safe procedure in this scenario. Because of their anterior location, retinal dialyses are easy to localize and settle with a minimal buckle indent. SB for RRDs due to retinal dialysis may thus be an ideal procedure for giving fellows surgical exposure in the field of retina. It also makes the trainee proficient in indirect ophthalmoscopy and the drawing of color retina charts, a vital prerequisite for training in the field of retina. Unlike previous studies evaluating the success of SB in patients with RD associated with retinal dialysis, our study has a comparatively large sample size. Several features, such as occurrence in young males, frequent history of trauma (>70%), location of dialysis in the inferotemporal and superonasal quadrants (70%), and a high incidence of idiopathic retinal dialysis in the inferotemporal quadrant, as seen in our series, are similar to the existing literature.

The rate of progression of RRDs associated with retinal dialysis is usually slower because the posterior edge of the dialysis is supported by the vitreous base, which hinders subretinal seepage of fluid. In our study, more than 30% of patients presented after 1 year of alleged trauma. The rate of PVR progression is also slower in such cases for the same reason, which translates to better surgical and visual results than in RRDs due to other retinal tears. Though many surgeons prefer external drainage of SRF in all cases, we usually reserve it for long-standing cases and high myopes, where spontaneous absorption may be unsatisfactory. Iatrogenic retinal breaks and subretinal and suprachoroidal hemorrhage are well documented with external drainage of SRF, and thus may be avoided in fresh RRDs. There was no significant difference in the failure rate of SB with or without drainage in our study, which reinforces that SB without drainage is safer and equally effective in the management of dialysis-associated RRDs. SB alone is usually avoided in eyes with PVR changes beyond PVR-C1; however, in our series, satisfactory success rates were achieved from no PVR to PVR-C1. Eyes with GRD present a unique surgical problem, as a high indent in such cases can result in fish-mouthing of the posterior edge of the tear and subsequent failure of surgery. This is the reason why vitrectomy is almost universally preferred over SB in eyes with giant retinal tears.
However, retinal reattachment was achieved in five of six eyes (83.33%) with GRD, which can be attributed to the vitreous base support over the posterior edge, which might reduce fish-mouthing. A high incidence of raised IOP is well documented in eyes with greater than 180° angle recession. Posttraumatic ciliary body damage, inflammation, and the subsequent fibroblastic response in the ciliary body and adjoining trabecular meshwork are the reasons for decreased aqueous outflow and increased IOP. Resultant scarring from the 360° peritomy done during SB can make future trabeculectomy (if required) difficult. A 2-3 mm frill of conjunctiva may be left around the limbus during peritomy, which minimizes damage to the palisades of Vogt near the limbus housing stem cells, with the resultant scarring hidden by the eyelids, providing a better cosmetic result. In eyes with angle recession, a 5-mm frill of conjunctiva was purposefully left in our series to provide a larger surface area for bleb formation if trabeculectomy were needed in the future. Though IOP was satisfactorily controlled in all cases during the follow-up period in our series, the need for future trabeculectomy in such eyes cannot be ruled out.

Bilateral idiopathic retinal dialysis (BIRD) is a rare entity that characteristically involves the inferotemporal quadrant and leads to slowly progressive RRD. Bilateral involvement, young age at presentation, and case reports in siblings have led to speculation about an underlying genetic factor, though none has been established so far. It accounts for 1.5%–5.6% of all cases of retinal dialysis. In our series, five of 53 patients (9.3%) had BIRD, and the other eye was sufficiently managed by laser photocoagulation or cryotherapy. This high incidence of BIRD in our series can be explained by the fact that all these patients were admitted for evaluation before surgery and underwent indirect ophthalmoscopic examination with scleral indentation in both eyes, without which a small retinal dialysis in the other eye might have been missed.

Limitations of this study include its retrospective nature and the lack of standardization of technique in terms of drainage of SRF and the need for intravitreal air/gas tamponade. However, our results show that these variations had no significant effect on the outcome of SB surgery and hence leave room for variation according to the surgeon's own preference and skill. SB is an effective and safe treatment modality for retinal dialysis-associated RRDs, even in the hands of VR fellows under training.

Conflicts of interest: There are no conflicts of interest.
Detection of airborne wild waterbird-derived DNA demonstrates potential for transmission of avian influenza virus via air inlets into poultry houses, the Netherlands, 2021 to 2022

The introduction of pathogens from wild animals into domesticated or farmed animal populations is an important global issue. From 2016 onwards, outbreaks of highly pathogenic avian influenza (HPAI) in poultry farms in Europe have been recurring. Infections with HPAI viruses have also caused unprecedented mortality in wild bird populations and increasingly affect mammalian species. The main current prevention and control measures for HPAI in poultry in Europe consist of strict biosecurity measures and, in farms affected by an outbreak, culling of all birds. The presence of wild birds near poultry farms, in particular Anseriformes (ducks, geese and swans), is associated with an increased risk of HPAI introduction in poultry. Compliance with biosecurity measures and avoiding direct contact between poultry and wild birds through obligatory indoor housing reduces the risk of HPAI introduction into the flock. However, many indoor-housed poultry flocks, including on farms with apparently high biosecurity standards, have become infected, indicating that the transmission routes of HPAI from wild birds to indoor-housed poultry are still poorly understood.

Entry via air inlets of airborne HPAI-contaminated particles derived from nearby infected wild birds could be a relevant route. Avian influenza virus can survive in outdoor environments for a few months under ideal conditions, but only for shorter periods (up to 7 days) at around room temperature. Feathers can be infectious or easily become contaminated with virus from faecal particles and can function as fomites. Airborne HPAI dispersal may thus be important and potentially play a role in introducing the virus into a flock. This has been indicated by several studies on transmission between farms at close distances, but it might also be relevant for the initial introduction of HPAI virus into the farm from infected waterbirds nearby. Investigating this potential airborne route of introduction by targeting the HPAI virus in air entering the houses is highly challenging: it requires air sampling while wild birds shedding avian influenza virus are present in the proximity of the farm, resulting in a narrow time window with a low probability of virus detection, also considering the relatively low viral loads. Alternatively, capturing host-derived biological materials such as small feathers and faecal particles in air flows entering farms can be used to highlight potential entry pathways of host-associated pathogens. Initial approaches employing nets showed that relatively large materials (several mm) can enter through air inlets; mainly insect and plant materials were observed visually, but wild waterbird materials such as feathers were not seen. To gain better insight, more advanced sampling and analysis approaches are needed to characterise airborne particle transmission.

In this study, we applied environmental DNA (eDNA) metabarcoding (deep sequencing) to the context of infectious disease epidemiology. Metabarcoding has been applied before in biodiversity research; the few studies performed showed that a large diversity of eukaryotic species could be detected simultaneously and over larger distances in airborne particles.
Here we aimed to determine whether wild waterbird DNA can be detected in the airflow entering poultry houses. Since the probability of detecting HPAI virus itself in air samples is low, we targeted its wild host species as an indicator of potential HPAI transmission through air.
Methods

Locations

We performed outdoor and indoor air sampling at three poultry farms in the Netherlands: two broiler farms (B1 and B2) and one layer (L) farm. We selected these farms based on farmers' willingness to participate and a recently experienced HPAI outbreak. These farms tested positive for HPAI virus subtype A(H5N1) at the end of 2021 or early 2022. At that time, all poultry was housed inside, and expert evaluation of these farms indicated a high biosecurity standard. According to the regulations of the national competent veterinary authority (Netherlands Food and Consumer Product Safety Authority (NVWA)), poultry on affected farms must be culled, followed by a long and intensive procedure of thorough cleaning and repeated disinfection. This provided us with a unique opportunity to investigate air in and around farms located in areas with high HPAI risk at a time when no poultry flocks were housed, thus avoiding interference of our sampling with normal farming practices. At the time of the measurements, the broiler houses were completely empty. At the layer farm, the house contained 25 sentinel chickens in Compartment 1 during the first cycle of measurements, which was performed in Compartments 2 and 3. All three poultry farms were located in waterfowl-rich regions, as described previously. We also collected air samples as positive controls at a bird rehabilitation shelter (S), which housed a variety of waterfowl and other birds in both indoor (first phase of care) and outdoor aviaries (later phase of recovery).

Air sampling

At each location, we repeatedly collected indoor and outdoor air samples, using air sampling equipment analogous to that used in earlier studies in and around livestock farms. Teflon filters were used to collect air samples (total suspended particles (TSP)) over 4-5 consecutive days (one measurement cycle) by Harvard impactors operating at a flow of 10 L/min (ca 65 m³ of air sampled per filter). After each measurement cycle, we immediately collected the filters and stored them the same day at −80 °C. Per measurement cycle, we also collected one indoor field blank and one outdoor field blank to assess potential (cross-)contamination issues. Field blanks are unexposed filters (not connected to the sampling pump) that underwent handling similar to that of the exposed filters.

At the three poultry farms, we performed outside air sampling around the poultry house in each of the four wind directions (north, east, south, west) at close distances to the farm (between 12 and 25 m, depending on the local situation and practicalities, eg, avoiding pathways and ditches). Inside the poultry houses, we positioned the air sampling installations to directly sample the air flowing in through the air inlets; an impression of the sampling positions outside the poultry houses, as well as inside the poultry house in the direct air flow from the air inlet, is given in Supplementary Figure S1. During the sampling period, the mechanical ventilation system (regulated by a computer) was programmed such that the air flow through the air inlets was stabilised to represent normal operational conditions with a flock housed in the farm.

Nucleic acids isolation

We extracted DNA and RNA from the filters following a low biomass protocol. Empirically (data not shown), we demonstrated through DNA and RNA virus spike-in experiments that RNA from viruses could also be isolated with high efficiency, making the RNA extracts suitable for qPCR measurements (HPAI diagnostics).
Nucleic acids isolation

We extracted DNA and RNA from the filters following a low biomass protocol. Empirically (data not shown), we demonstrated using DNA and RNA virus spike-in experiments that RNA from viruses could also be isolated with high efficiency, making the RNA extracts suitable for measurements by qPCR (HPAI diagnostics). We included extraction blanks and field blanks for each sampling round and each batch of DNA and RNA extractions. Extractions of positive controls were handled in a final, separate batch to avoid cross-contamination.

Metabarcoding and deep sequencing

We selected PCR primers 18Sa_F 5’-ATAACAGGTCTGTGATGCCCT-3’ and 18Sa_R 5’-CCTTCYGCAGGTTCACCTAC-3’ to target the hypervariable regions V8–V9 of the 18S rRNA gene. Amplification for 25 cycles, indexing for six cycles and sequencing on an Illumina MiSeq sequencer were performed as described for 16S, targeting an 18S amplicon sequencing depth of >100,000 paired-end 300 bp sequence clusters per sample.

Data analysis

Raw sequencing data were primer-clipped, deblurred, error-corrected and annotated using version 1.26.0 of the dada2 R package at default settings except for truncLen=(190,180), minOverlap=10, maxN=2, maxEE=2, minFoldParentOverAbundance=2, chimeraMethod consensus, and the dada2 pseudo pooling strategy. We subsequently annotated amplicon sequence variants (ASVs) with the dada2 naive Bayesian classifier and a custom-built 18S sequence database containing all 1,252 sequences in the National Center for Biotechnology Information (NCBI) non-redundant (NR)/nucleotide database (accessed: 27 Oct 2022) from the taxonomic classes Aves (n = 265) and Mammalia (n = 987). We replaced non-available taxonomic rank values with the first known higher taxonomic rank value, using a prefix indicating the taxonomic rank it originated from. We created a maximum likelihood phylogenetic tree from the detected ASVs (starting from a neighbour-joining tree) using mega-x version 11, which we further curated manually. The evolutionary history of ASVs was inferred using the maximum likelihood method and the Tamura–Nei model.

Highly pathogenic avian influenza virus RNA qPCR

To enable comparisons between our novel eukaryote DNA sequencing approach and traditional direct pathogen detection, we additionally performed qPCR analyses to detect avian influenza virus RNA in available duplicate air samples (n = 15). An accredited diagnostic qPCR targeting the M segment of the avian influenza genome (Wageningen Bioveterinary Research) was used, with a detection limit of 10–100 virus particles as estimated from spike-in benchmark values under the practical total nucleic acid isolation conditions used.
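To make the dada2 settings listed under Data analysis concrete, here is a minimal R sketch of such a workflow; the FASTQ paths and the custom Aves/Mammalia 18S reference FASTA are hypothetical placeholders, and this illustrates the stated parameters rather than reproducing the exact pipeline used.

```r
# Minimal sketch of a dada2 workflow with the settings listed above;
# file names ("sample_R1.fastq.gz" etc.) and the custom reference
# "aves_mammalia_18S.fa" are hypothetical placeholders.
library(dada2)

filtF <- "sample_R1_filt.fastq.gz"
filtR <- "sample_R2_filt.fastq.gz"
filterAndTrim("sample_R1.fastq.gz", filtF, "sample_R2.fastq.gz", filtR,
              truncLen = c(190, 180), maxN = 2, maxEE = c(2, 2))

errF <- learnErrors(filtF)
errR <- learnErrors(filtR)
dadaF <- dada(filtF, err = errF, pool = "pseudo")  # pseudo pooling strategy
dadaR <- dada(filtR, err = errR, pool = "pseudo")

merged <- mergePairs(dadaF, filtF, dadaR, filtR, minOverlap = 10)
seqtab <- makeSequenceTable(merged)
seqtab <- removeBimeraDenovo(seqtab, method = "consensus",
                             minFoldParentOverAbundance = 2)

# Naive Bayesian classifier against the custom Aves/Mammalia 18S database
taxa <- assignTaxonomy(seqtab, "aves_mammalia_18S.fa")
```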
Genetic material in air samples

At the three poultry farms, we detected DNA of waterbirds in four of the 47 indoor air samples collected on the three farms (B1, B2, L) and in three of the 52 samples collected outside at two farms (B2, L). These waterbird DNA-positive air samples (n = 7) were taken at various time points and locations in and around the farms; in the Supplementary Table, we provide more details on the timing and location of these positive samples. Waterbird DNA was present in all indoor and outdoor air samples collected at the bird shelter. Sequencing the 18S-derived amplicons from all 119 air samples resulted in a median of 126,617 (range: 11,538–294,079) paired-end sequencing reads per sample. After dada2 processing and chimera filtering, this revealed a median of 37,314 (range: 17,200–150,990) annotated ASVs per air sample, with the majority (87.1%) between 330 and 335 bp long. In total, we detected 54,436 different ASVs; they were a good representation of the sampled community (probability of completeness by Good's coverage estimator on singletons of 0.99689). All field blanks were negative for ASVs belonging to the orders Anseriformes, Passeriformes, Rodentia, or other closely related species, indicating the absence of cross-contamination with these species from any of the control samples in the procedures. In the additional qPCR analyses performed, all tested samples were negative for avian influenza virus RNA, indicating that the virus was either absent or present in quantities below the limit of detection.

Overview of Aves amplicon sequence variants detected in air samples

The figure shows the relative abundance in our samples of Anseriformes ASVs within all annotated Aves, in addition to mammalian taxa. There was clear variation in the relative abundances of waterbird DNA in the seven positive air samples collected at the three poultry farms. An overall impression of the classes Aves and Mammalia for the same samples is also depicted. This shows that we also captured DNA of other animal species, including other wild birds (i.e. Passeriformes and Columbiformes), chickens and rodents, warranting follow-up in future research. All positive control samples collected at the bird shelter contained Anseriformes DNA, with varying relative abundances, including the expected visually observed species (data not shown).

Phylogeny of a representative selection of detected amplicon sequence variants

The figure shows the phylogenetic tree constructed from all detected ASV sequences belonging to the waterbirds (order Anseriformes), supplemented with detected species that are phylogenetically close (orders Passeriformes, Accipitriformes, Galliformes and Columbiformes) as well as several detected human ASVs. The tree with the highest log likelihood (−1,683) is shown. This analysis involved 30 nucleotide sequences, with a total of 340 aligned positions in the final dataset. The inferred tree shows that the detected waterbird ASVs differed by at least 18 nt (of the median 333 bp ASV length) from the closest other orders. Anseriformes sequences clustered mostly at the species level, while ASVs from other orders clustered more scattered throughout the tree at higher taxonomic levels.
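For reference, the Good's coverage statistic quoted above estimates the probability that one additional read would belong to an already-observed ASV; a minimal sketch of the computation, with illustrative toy counts:

```r
# Good's coverage: 1 - singletons / total reads
goods_coverage <- function(asv_counts) {
  f1 <- sum(asv_counts == 1)  # ASVs seen exactly once (singletons)
  n  <- sum(asv_counts)       # total annotated reads
  1 - f1 / n
}
counts <- c(500, 300, 150, 20, 10, 8, 5, 5, 1, 1)  # toy ASV count vector
goods_coverage(counts)  # 0.998; the study reports 0.99689 overall
```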
This study showed, through innovative application of eukaryote eDNA metabarcoding, that wild waterbird materials were present in the airflow entering poultry farms and also around poultry farms. Detecting these biological materials from potential HPAI hosts in the air flowing via air inlets into the poultry farms indicates that HPAI could be introduced into the flock through this airborne route and subsequently lead to an outbreak. This potential route of entry could be an explanatory factor in the surge of outbreaks in poultry farms caused by HPAI virus introduction after an increase in the presence of infected wild waterbirds. The DNA barcoding method would allow assessment of compromised biosecurity and of the effectiveness of potential intervention strategies without the need to capture HPAI at the time of sampling. Research on the epidemiology of HPAI has intensified over the years, especially as conventional control measures appear insufficient to limit the number of affected farms. Outbreaks in flocks still occur frequently, despite extended periods of mandatory indoor housing and enhanced biosecurity. Measures are aimed at limiting direct and indirect contact of domestic poultry with wildlife. Wild waterbird populations are known reservoirs of avian influenza viruses. They can distribute new strains through migration over long distances and can facilitate recombination of new strains in migratory and resident bird populations, and they are a source of outbreaks in domestic poultry worldwide. The actual host range for avian influenza is broad, but wild birds are regarded as the most important hosts introducing the virus into domestic flocks, especially when water-rich environments are in the vicinity of farms. Studies on HPAI characteristics indicate efficient spread of the virus through the environment on water, dust or larger particles. Research on HPAI strain genome variability in the outbreak in the Netherlands in 2020 and 2021 indicated HPAI from wild birds ranging near poultry farms as the most obvious origin of the virus introduced in poultry houses, but did not unequivocally indicate the actual transmission route. Attempts to detect contact between wild birds and chickens using a microbial proxy, or via changes in the cloacal microbiota of chickens that have free access to an outdoor range, largely failed. Furthermore, initial attempts using nets over the air inlets at three poultry farms did not capture visible wild bird-derived materials such as feathers. Another recent study was able to detect HPAI virus in air collected inside and outside at a time when clinically affected animals were present in the poultry houses. Investigating wild bird transmission via detection of wild bird-derived DNA in a surveillance context has been suggested before but has not yet been implemented. Our study, which combined air sampling with eukaryote DNA metabarcoding, demonstrates that airborne wild bird DNA-containing materials can actually enter the poultry farms via the active airflow through the air inlets under normal commercial operating conditions. Among bacteria and archaea, the ribosomal 16S rRNA gene is typically used as a universal barcoding gene to determine the quantity and taxonomic classification up to species level. The eukaryotic barcoding genes typically include the ribosomal 12S or 18S rRNA genes, the cytochrome oxidase I gene (COI/COX), or the ribosomal internal transcribed spacer (ITS) regions ITS1 and ITS2.
Similar to 16S rRNA gene amplicon sequencing as used in microbiota research, eukaryotic markers are increasingly used to assess fungal composition, mostly up to genus level and in some cases to species level, using the well-curated UNITE database. Inter-spacer regions can in principle also be used for other, non-fungal species; however, the lack of large collections of well-curated, annotated ITS sequences covering the taxonomic class Aves does not allow this. Despite the large barcoding-of-life initiatives targeting single species using the COI/COX gene, its sequence length and variability do not yet optimally support metabarcoding by short-read deep sequencing. Other methods using shotgun metagenomics sequencing or mitochondrial DNA reconstruction (e.g. the Huanan market study attempting to resolve the host species at the origin of COVID-19) are also feasible but are typically restricted to single-species classification strategies. Even though 12S rRNA barcoding has been applied for mammals and birds in airborne dust samples collected at a zoo, it tends to be fairly domain-specific depending on the PCR primer sets used. Since taxonomically accurately annotated 12S rRNA sequences are limited, especially for the order Anseriformes (waterbirds), we selected the hypervariable regions V8–V9 of the 18S rRNA gene for sequencing. For 18S rRNA, several dedicated taxonomic databases exist (such as the highly curated SILVA database), but at the moment of writing their coverage of the class Aves was relatively low, while the NCBI NR/nucleotide database had better coverage of these species. We specifically selected primers spanning hypervariable regions V8–V9 since these performed best in qPCR at our laboratories on a variety of samples from various species and environments, compared with primers targeting regions V4–V5 (data not shown). Furthermore, we refined the dada2 protocol for 18S rRNA amplicons to generate longer ASVs. Consequently, we were able to assign taxonomy mostly at species level, stepping up to higher taxonomic levels when a particular ASV sequence could not be assigned at species level without conflict. Clustering of 18S rRNA amplicon sequences was avoided to limit loss of classification resolution. The phylogenetic tree demonstrated that keeping sequences at ASV level made the overall assessment for the order Anseriformes within the class Aves accurate enough for the current study; prior ASV clustering below 97% identity would have lost this order-separating resolution. To substantiate the accuracy, we found that the diversity of bird species visually observed inside and outside the bird shelter was reflected in the detected eukaryote DNA diversity (data not shown). We detected DNA of Anseriformes in all air samples collected at the bird rehabilitation centre and in at least one of the air samples collected at each farm. The findings at the poultry farms indicate variability of waterbird DNA in air over time (presence and/or load). Even though the Good's coverage indicator suggests that we sequenced deeply enough to capture most of the species variability present, rarefaction curves of the annotated sequences per sample suggest that for roughly half of the samples deeper sequencing would have been beneficial; for additional data linking sequencing depth per sample to the detected diversity within these samples, we refer to the appended Supplementary Figure S2. Nevertheless, we did detect waterbird DNA, albeit at low amounts.
Increasing the sequencing depth would increase the amount of detected Anseriformes DNA, and thus the overall sensitivity, but would most probably still not allow accurate quantification of the amount of waterbird DNA, owing to large variation in gene copy numbers between species, cell types, or even between single cells. Using a mitochondrial marker such as 12S or COI would most probably aggravate this problem because of the large variation in the number of mitochondria present per cell (type). For the current study, we are confident that we detected Anseriformes DNA, but we must be careful in interpreting our results beyond semi-quantitative categories (high, low and absent/not detected). We purposefully selected three poultry farms considered at higher risk for virus introduction from wild bird populations as a proof of concept of our approach. These farms had recent outbreaks of HPAI, were located in water-rich areas and had substantial HPAI-confirmed wild bird mortalities in the vicinity; they were not selected to be representative of all poultry farms in the Netherlands. A current limitation is that we assume airborne spread of avian influenza virus to occur mainly through virus-loaded biological materials from hosts, such as small particles of feathers and faeces. We expect the host DNA to be more stable than HPAI viral RNA in the environment, but this has not yet been investigated. As a follow-up study, linking the detection of eukaryotic DNA to the systematic visual observation of wild birds, mammals, rodents and vegetation around poultry houses is a logical next step to further substantiate our claims. We carefully assessed whether the derived sequencing reads could be the result of PCR or sequencing errors. However, the detected Anseriformes ASVs were at least 18 nt away from the next closest species (20 nt from chicken); therefore, we are confident that the detected Anseriformes DNA originated from waterbirds and represented true positives. In terms of sensitivity, however, we may have missed other eukaryotic species that were present, owing to restrictions in sequence annotation imposed by incomplete annotation databases. We have demonstrated that the eukaryotic DNA metabarcoding approach can be used to detect host-derived materials in air in the context of HPAI and wild waterbirds. This approach can also be extended to other infectious agents and their corresponding hosts to investigate their transmission. It avoids the need to detect a given pathogen travelling by air at the precise moment of introduction. It also provides direct semi-quantitative data for source attribution modelling and supports the assessment of the effectiveness of interventions. To widen the impact and scope, further effort is needed to evaluate the characteristics of the eDNA metabarcoding approach for accurately detecting other species using improved annotation databases. With increasing insight into potential weaknesses in biosecurity related to contaminated airborne biological materials, practical interventions could focus on the air coming in through the air inlets, reducing the risk by, for example, air filtering or micro-organism inactivation. Mesh- or filter-based intervention strategies (for instance in airflow heat-exchange equipment) could supplement biosecurity measures to diminish entry of HPAI virus from wild birds into poultry flocks.
This study demonstrates the entry of wild waterbird DNA into poultry houses through the air inlets, suggesting that airborne HPAI virus could potentially be introduced into poultry houses via the same route. Our eukaryote environmental DNA metabarcoding approach, targeting the actual hosts instead of the pathogen itself, provides a novel tool to monitor, quantify and improve biosecurity measures for pathogens such as HPAI, and provides source attribution modelling possibilities for other pathogens that are difficult to detect at the precise moment of introduction.
Systematic identification of cancer pathways and potential drugs for intervention through multi-omics analysis

Cancer is a family of highly diverse and complex diseases that can occur in almost all organs and tissues of the human body. The occurrence and development of human cancers are associated with many factors, particularly the step-wise accumulation of genetic and epigenetic changes in the genome, which are directly manifested as alterations in the transcript and protein expression profiles. High-throughput omics technologies (e.g., transcriptomics and proteomics) have been applied to identify potential biomarkers and novel therapeutic targets for the diagnosis and treatment of human cancers. In addition, an integrative analysis across multiple omics data is capable of generating valid and testable hypotheses that can be prioritized for experimental validations. Generally, the omics profiles vary with different types of cancer, and cancer research has focused primarily on various oncogenic processes associated with a specific cancer type. However, there are limited integrative multi-omics analyses across different cancer types that may reveal new pathways of cancer genesis and new therapeutic targets. Cancer cell lines have been widely used as in vitro models for the investigation of the cellular and molecular mechanisms underlying tumorigenesis, as well as anti-cancer drug screening and repurposing. The Cancer Cell Line Encyclopedia (CCLE) is a publicly available database that contains multi-level omics data of over 1000 cancer cell lines spanning more than 40 cancer types. It provides RNA sequencing (RNA-Seq) transcriptomics data that measures RNA transcript abundance in the cancer cell lines. In addition, the tandem mass tag (TMT) based quantitative proteomics approach has been used for large-scale protein quantification. Using this method, Nusinow et al. performed quantitative proteomics analysis on 375 cell lines across diverse cancer types, resulting in a rich resource of protein expression levels for the exploration of cellular behavior and cancer research. Transcriptomics and proteomics play pivotal roles in linking genomic transcript sequences and protein levels to potential biological functions. Therefore, integrating these two omics methods (i.e., transcriptomics and proteomics) can provide a more comprehensive and holistic understanding of the biological behaviors of cancer at the transcriptional and translational levels that may reveal new mechanisms of pathogenesis and drug targets for cancer. Understanding molecular targets characteristic of a cancer type is crucial for modern anti-cancer drug discovery and therapeutic development. For example, discoidin domain receptor 1 (DDR1) was identified as a molecular target specific for pancreatic cancer. This discovery enabled the development of a novel series of 2-amino-2,3-dihydro-1H-indene-5-carboxamide derivatives as highly selective DDR1 inhibitors using structure-based drug design. These DDR1 inhibitors showed promising efficacy for pancreatic cancer treatment. Omics analysis, either RNA-Seq or proteomics profiling, has provided a rapidly expanding range of information on new molecular targets for early drug discovery. For example, Swaroop et al.
found that the genes differentially expressed in the most severe Hurler syndrome subgroup, compared to the intermediate Hurler-Scheie or the least severe Scheie syndrome subgroups, based on transcriptome profiling data were extremely valuable in guiding the in vivo animal models and clinical trials in the drug development process. In this study, we integrated the transcriptomics and proteomics data from 16 common human cancer types, including acute myeloid leukemia (AML), breast cancer, colorectal cancer, endometrial cancer, esophageal cancer, glioma, kidney cancer, liver cancer, non-small-cell lung carcinoma (NSCLC), small cell lung carcinoma (SCLC), melanoma, ovarian cancer, pancreatic cancer, stomach cancer, upper aerodigestive cancer, and urinary tract cancer, to identify the biological pathways characteristic of each cancer type and drugs known to target these pathways. The cancer pathways identified in this study can provide insight into the underlying molecular mechanisms for each cancer type, and the drugs targeting these pathways could potentially be repurposed as new cancer therapeutics.

Overview of cancer profiling data

A total of 1023 human cancer cell lines were collected, including 1019 cell lines with RNA-Seq data and 375 cell lines with proteomics data (Fig. , and Supplementary Table ). Of the cancer cell lines collected, 371 had both RNA-Seq and proteomics data (Fig. , and Supplementary Table ). The four cell lines that had only proteomics data were COLO205 (large intestine cancer), PL45 (pancreatic cancer), SKMEL2 (skin cancer), and NB19 (central nervous system cancer) (Supplementary Table ). According to the cancer cell line annotations, these cancer cell lines can be grouped into 16 cancer types: AML, breast cancer, colorectal cancer, endometrial cancer, esophageal cancer, glioma, kidney cancer, liver cancer, NSCLC, SCLC, melanoma, ovarian cancer, pancreatic cancer, stomach cancer, upper aerodigestive cancer, and urinary tract cancer (Fig. , and Supplementary Table ). The number of cancer cell lines with proteomics data for each cancer type was significantly smaller than the number with RNA-Seq data (Fig. ). For cancer types with RNA-Seq data, the number of cancer cell lines ranged from 25 (liver cancer and urinary tract cancer) to 128 (NSCLC) with a median of 41 (Fig. , and Supplementary Table ). For cancer types with proteomics data, the number of cell lines ranged from 10 (upper aerodigestive cancer) to 64 (NSCLC) with a median of 14 (Fig. , and Supplementary Table ).

Transcripts and proteins significantly expressed in each cancer type

According to the optimal combination of Gini purity and FDR-adjusted P value, the number of significant transcripts for each cancer type ranged from 5756 (liver cancer) to 11,143 (melanoma) with a median of 9256 (Fig. , and Supplementary Table ). Transcripts that showed statistically significant differential expression in a specific cancer type compared to all other cancer types are referred to as "significant transcripts" here. The number of significant proteins for each cancer type ranged from 409 (stomach cancer) to 2443 (AML) with a median of 1344 (Fig. ). The number of significant proteins is much smaller than that of the significant transcripts for each cancer type, and the transcript/protein ratio ranged from 2.86 (kidney cancer) to 19.8 (stomach cancer) with a median of 6.79 (Fig. ). Transcript is a collective term that includes various biotypes.
For example, the 5756 significant transcripts found for liver cancer comprised 23 biotypes; the top 10 biotypes in descending number of transcripts were protein coding (2579), pseudogene (1107), lincRNA (890), antisense (539), misc RNA (119), miRNA (94), sense intronic (85), snRNA (74), processed transcript (48), and snoRNA (38), accounting for 96.8% of all the biotypes (Fig. ). Moreover, 234 of the protein-coding transcripts in the significant transcript set (2579 transcripts in total) were also present in the significant protein set (825 proteins in total) for liver cancer (Fig. ), showing that the results from the transcriptomics analysis and the proteomics analysis are consistent. These significantly expressed transcripts and proteins are specific for a particular cancer type and can be used for cancer type-specific pathway analysis.
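A minimal sketch of this cross-omics consistency check, assuming `sig_transcripts` (with gene symbol and biotype columns) and `sig_proteins` are the per-cancer-type significance tables; the object and column names are hypothetical:

```r
# Cross-check the two omics layers for one cancer type: how many
# protein-coding significant transcripts are also significant proteins?
coding  <- subset(sig_transcripts, biotype == "protein_coding")$gene_symbol
overlap <- intersect(coding, sig_proteins$gene_symbol)
length(overlap)  # e.g. 234 shared genes reported above for liver cancer
```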
Biological pathways characteristic of each cancer type

The significant transcripts and proteins for each cancer type were analyzed separately for the enrichment of biological pathways. From the significant transcripts, the number of significant pathways ranged from 36 (ovarian cancer) to 193 (AML and stomach cancer) with a median of 92 (Fig. ). From the significant proteins, the number of significant pathways ranged from 17 (stomach cancer) to 584 (AML) with a median of 174 (Fig. ). The number of overlapping pathways derived from both transcripts and proteins for each cancer type ranged from 4 (stomach cancer) to 112 (AML) with a median of 25.5 (Fig. , and Supplementary Table ). The overlapping significant pathways were considered characteristic of each cancer type. Some pathways were present in multiple cancer types, while others were specific for a particular cancer type (Supplementary Table ). The figure shows the top two significant pathways found for each cancer type, comprising 12 unique biological pathways. For example, the olfactory transduction pathway was significant for AML (score = −20.52), breast cancer (score = −14.18), colorectal cancer (score = −21.92), esophageal cancer (score = −8.37), glioma (score = −22.90), kidney cancer (score = −15.21), liver cancer (score = −4.64), melanoma (score = −26.34), NSCLC (score = −22.46), ovarian cancer (score = −6.32), pancreatic cancer (score = −7.17), SCLC (score = −22.44), stomach cancer (score = −4.97), and upper aerodigestive cancer (score = −25.98). Signaling by the GPCR pathway was significant for breast cancer (score = −5.56), colorectal cancer (score = −10.42), kidney cancer (score = −16.22), melanoma (score = −14.41), NSCLC (score = −7.91), SCLC (score = −10.81), and upper aerodigestive cancer (score = −14.44). The messenger RNA processing pathway was significant for endometrial cancer (score = −19.50) and glioma (score = −14.69). The alpha-6 beta-1 and alpha-6 beta-4 integrin signaling pathway was significant for urinary tract cancer (score = −3.84). The axon guidance pathway was significant for stomach cancer (score = −4.15). The capped intron-containing pre-mRNA processing pathway was significant for endometrial cancer (score = −18.58). The cell cycle pathway was significant for esophageal cancer (score = −4.15). The cytoplasmic ribosomal proteins pathway was significant for AML (score = −11.46). The focal adhesion pathway was significant for urinary tract cancer (score = −3.39). The metabolism pathway was significant for liver cancer (score = −3.22). The oncostatin M pathway was significant for pancreatic cancer (score = −6.08). The tight junction pathway was significant for ovarian cancer (score = −2.09).

Potential anti-cancer drugs identified for each cancer type

The significant cancer pathways can serve as a bridge connecting drugs and cancer types. For each cancer type, we identified the drugs that target genes involved in multiple significant cancer pathways. In turn, these drugs can serve as potential anti-cancer drug candidates. The number of potential anti-cancer drugs varied by cancer type, ranging from 1 (ovarian cancer) to 97 (AML and NSCLC) with a median of 66 (Fig. , Supplementary Table ). For each cancer type, the drugs linked to the maximal number of pathways are shown in Fig. and Supplementary Table , and these drugs can be divided into two categories: those involved with multiple cancer types and those involved with one specific cancer type. The former included S-isoproterenol bitartrate for AML (58 pathways), kidney cancer (11 pathways), NSCLC (24 pathways), melanoma (7 pathways), and upper aerodigestive cancer (9 pathways); afatinib for stomach cancer (3 pathways) and upper aerodigestive cancer (9 pathways); afuresertib for breast cancer (8 pathways) and kidney cancer (11 pathways); bosutinib for endometrial cancer (5 pathways) and esophageal cancer (19 pathways); canertinib for stomach cancer (3 pathways) and upper aerodigestive cancer (9 pathways); dacomitinib for stomach cancer (3 pathways) and upper aerodigestive cancer (9 pathways); dasatinib for colorectal cancer (15 pathways), endometrial cancer (5 pathways), esophageal cancer (19 pathways), kidney cancer (11 pathways), and pancreatic cancer (36 pathways); HA-1077 for stomach cancer (3 pathways) and upper aerodigestive cancer (9 pathways); ipatasertib for breast cancer (8 pathways) and kidney cancer (11 pathways); lithium citrate for endometrial cancer (5 pathways), esophageal cancer (19 pathways), and liver cancer (6 pathways); neratinib for stomach cancer (3 pathways) and upper aerodigestive cancer (9 pathways); saracatinib for endometrial cancer (5 pathways) and esophageal cancer (19 pathways); and varlitinib tosylate for stomach cancer (3 pathways) and upper aerodigestive cancer (9 pathways). The latter included flavopiridol hydrochloride (9 pathways), lapatinib (9 pathways), minocycline HCl (9 pathways), and sorafenib (9 pathways) for upper aerodigestive cancer; cladribine (13 pathways) for SCLC; D-alpha-tocopherol (8 pathways) for breast cancer; lapatinib (3 pathways) for stomach cancer; R-lotrafiban (17 pathways) and tirofiban hydrochloride monohydrate (17 pathways) for glioma; and sotrastaurin (3 pathways) for ovarian cancer (Fig. , and Supplementary Table ). Some anti-cancer drugs identified in this study have been approved as targeted therapies for the treatment of specific cancer types (Fig. ), such as imatinib, bosutinib, and dasatinib for AML; dabrafenib, crizotinib, trametinib, dacomitinib, and gefitinib for lung cancer; regorafenib for colorectal cancer; pazopanib, cabozantinib, sunitinib malate, and sorafenib for kidney cancer; trametinib for skin cancer; and sunitinib malate for pancreatic cancer.
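A minimal sketch of the drug-to-pathway linking that underlies these counts, assuming long-format `drug_targets` (drug, gene) and `pathway_genes` (pathway, gene) tables plus the significant pathway names for one cancer type; all object and column names are hypothetical:

```r
# Rank drug candidates by the number of significant pathways their
# targets participate in (for one cancer type).
hits <- merge(drug_targets, pathway_genes, by = "gene")
hits <- subset(hits, pathway %in% sig_pathways)
pathways_per_drug <- tapply(hits$pathway, hits$drug,
                            function(p) length(unique(p)))
head(sort(pathways_per_drug, decreasing = TRUE))
# e.g. dasatinib is reported above with 36 pathways in pancreatic cancer
```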
Quantitative validation by mean normalized AUC (mnAUC)

A total of 426 potential anti-cancer drugs (~44% of the total) were identified, with mnAUC values ranging from 0.23 to 1.42 and a median of 0.88 (Table ). The number of anti-cancer drugs with available mnAUC values varied across cancer types: breast (7), stomach (17), endometrium (24), liver (25), SCLC (36), colorectal (40), kidney (45), pancreas (48), glioma (49), esophagus (52), and NSCLC (62). A Wilcoxon rank-sum test revealed that the mean mnAUC value (0.87) of the potential anti-cancer drugs identified in this study was significantly lower than that (0.96) of 19,759 potential anti-cancer drugs reported in the literature (p < 2 × 10⁻¹⁶; Fig. ).
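A minimal sketch of this comparison, assuming `mnauc_ours` and `mnauc_lit` are the mnAUC vectors for the drugs identified here and for the literature-reported drugs, respectively:

```r
# Lower mnAUC = fewer surviving cells, i.e. stronger growth inhibition.
# One-sided test: are our candidates' mnAUC values shifted lower?
wilcox.test(mnauc_ours, mnauc_lit, alternative = "less")
# The text above reports means of 0.87 vs 0.96 with p < 2e-16.
```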
In this study, we identified the transcripts and proteins significantly expressed in each of the 16 cancer types through integrated analysis of transcriptomics and proteomics profiling data, resulting in biological pathways characteristic of each cancer type. Moreover, the drugs linked to these biological pathways were identified as potential treatments for human cancer. According to the global cancer statistics in 2020, the cancer types analyzed in our study (Fig.
, and Supplementary Table ) included the most commonly diagnosed cancer (breast cancer, 11.7% of all sites) and the cancer with the leading death rate (lung cancer, 18% of all sites). As proteins are the key executors of gene function, high-throughput proteomics data are important in elucidating the mechanisms of action of many critical cancer-related biological processes. Due to the constrained resolution at the proteome level, the coverage of proteomics data is much lower than that of RNA-Seq data, resulting in a smaller number of significant proteins compared to the number of significant transcripts for each cancer type identified in our study (Fig. ). The protein levels in cells may not correlate with the expression levels of transcripts because of an underlying epigenetic mechanism. In addition to protein-encoding mRNAs, the transcripts also included non-coding RNAs (e.g., long non-coding RNA (lncRNA) and microRNA (miRNA)), some of which often act as oncogenic drivers and tumor suppressors in major cancer types through post-transcriptional regulatory mechanisms. We also identified the significant pathways characteristic of each cancer type (Fig. , and Supplementary Table ), some of which have been reported to be associated with the corresponding human cancer type. For example, the olfactory transduction pathway has been reported to be associated with certain cancer types including breast cancer, pancreatic cancer, lung carcinoids, colorectal cancer, ovarian serous cystadenocarcinoma, stomach cancer, esophageal cancer, and brain lower grade glioma. Furthermore, the olfactory receptor (OR) family is generally considered to play an important role in the olfactory transduction pathway and to be linked to various cancers, such as human melanoma, stomach cancer, and AML. In our study, the olfactory transduction pathway was identified as significant for 16 cancer types (i.e., AML, breast cancer, colorectal cancer, endometrial cancer, esophageal cancer, glioma, kidney cancer, liver cancer, NSCLC, SCLC, melanoma, ovarian cancer, pancreatic cancer, stomach cancer, upper aerodigestive cancer, and urinary tract cancer) (Fig. , and Supplementary Table ). The axon guidance pathway has reported cancer associations, e.g., the axon guidance factor Slit homolog 2 (Slit 2) is known to inhibit neural invasion and metastasis in pancreatic cancer, and to affect the prognosis of AML. Silencing of the axon guidance factor semaphorin 6B gene significantly suppressed adhesion, migration, and invasion of stomach cancer cells in vitro. Consistent with these previous studies, the axon guidance pathway was also found closely related to pancreatic cancer, AML, and stomach cancer in our study (Fig. , and Supplementary Table ). Guanine nucleotide-binding protein (G protein) coupled receptors (GPCRs) are the largest family of membrane receptors that mediate transmembrane signaling via heterotrimeric G protein complexes. GPCR signaling has been implicated in various oncogenic and metastatic processes. Consistent with these previous studies, the GPCR signaling pathway was also found closely related to AML, breast cancer, colorectal cancer, glioma, kidney cancer, NSCLC, SCLC, melanoma, ovarian cancer, and upper aerodigestive cancer in our study (Fig. , and Supplementary Table ). These cancer pathways also led to the identification of existing drugs that could potentially be repurposed as new anti-cancer therapies (Fig. , and Supplementary Table ).
Drugs that target multiple biological pathways simultaneously may produce additive or even synergistic anti-cancer effects, resulting in more effective therapies and reduced side effects. The figure shows the drugs linked to the maximum number of pathways for each cancer type. For example, dasatinib, a small molecule tyrosine kinase inhibitor, has been found to inhibit the growth of AML, breast cancer, liver cancer, melanoma, pancreas tumor, and pre-neoplastic Barrett's esophagus cell lines. Although dasatinib has previously been reported to inhibit the growth of NSCLC but not SCLC, recent studies have found that dasatinib can significantly enhance the therapeutic efficacy of vorinostat in SCLC xenografts. In addition, dasatinib has been reported to induce autophagic cell death in human ovarian cancer. Consistent with these previous studies, we found dasatinib among the drug candidates for AML, breast cancer, colorectal cancer, endometrial cancer, esophageal cancer, glioma, kidney cancer, liver cancer, melanoma, pancreatic cancer, NSCLC, upper aerodigestive cancer, urinary tract cancer, and SCLC (Fig. , and Supplementary Table ). Afuresertib is a potent protein kinase B (AKT) inhibitor that exhibits favorable tumor-suppressive effects on breast cancer cells by inhibiting the phosphatidylinositol 3-kinase (PI3K)/AKT signaling pathway. Consistent with this study, afuresertib is one of the drugs we found linked to breast cancer (Fig. , and Supplementary Table ). D-alpha-tocopherol plays a pivotal role in decreasing the metastasis risk of glioma in cancer patients. We also found D-alpha-tocopherol as one of the drugs linked to glioma (Supplementary Table ). Ipatasertib is a potent small molecule AKT kinase inhibitor currently being tested in Phase III clinical trials for the treatment of triple negative metastatic breast cancer, which is also linked to breast cancer in our study (Fig. , and Supplementary Table ). Consistent with the linkage of midostaurin to glioma by our analysis (Supplementary Table ), midostaurin is a multi-targeted tyrosine kinase inhibitor for the treatment of glioma. In addition to these drugs with confirmed anti-cancer activity in the literature, the other drugs identified in our study could potentially be prioritized and repurposed as new treatments for some cancer types. For example, the Rho-kinase inhibitor HA-1077 suppresses proliferation/migration and induces apoptosis of urothelial cancer cells and MDA-MB-231 human breast cancer cells, while our analysis additionally linked HA-1077 to colorectal cancer and stomach cancer (Fig. , and Supplementary Table ). Moreover, some potential anti-cancer drugs identified in our study have been screened for anti-cancer activities in cell-based assays. For example, dasatinib was associated with 16 significant pathways for colorectal cancer (Supplementary Table ), and inhibited the viability of colorectal cancer cells in vitro (i.e., IC50 = 0.40 μM, efficacy = 57%). Enzastaurin was associated with five significant colorectal cancer pathways (Supplementary Table ), and inhibited colorectal cancer cell viability in vitro (i.e., IC50 = 11 μM, efficacy = 54%). Finally, puromycin, a drug linked to four significant glioma pathways in our study (Supplementary Table ), was also found to reduce the viability of glioblastoma cells in vitro (i.e., IC50 = 2.74 μM, efficacy = 90%). In addition, some drugs identified by our approach are approved targeted therapies for their corresponding cancer type.
These findings provide additional evidence for the utility of our method (Fig. ). The Profiling Relative Inhibition Simultaneously in Mixtures (PRISM) repurposing dataset provides information on the growth inhibitory activity of 4518 drugs tested across 578 human cancer cell lines, and the area under the dose-response curve (AUC) is a metric that represents the fraction of cells left after drug exposure, averaged over all the test concentrations and normalized to cells receiving no drug treatment. Given the variability in cell line testing across different drugs in the PRISM dataset, Koudijs et al. utilized a linear mixed model to separate the effects of cell lines and drugs. They then consolidated the findings into the mean normalized AUC (mnAUC), which represents the average fraction of cells left after drug exposure in a group of cell lines. In this study, mnAUC values for the identified potential anti-cancer drugs were calculated using the methodology of Koudijs et al. to assess drug efficacy (Table ). A Wilcoxon rank-sum test revealed that the mnAUC values of the anti-cancer drugs identified in this study were significantly lower than those reported for potential anti-cancer drugs in the literature (p < 2 × 10⁻¹⁶), indicating that the identified drugs demonstrated robust anti-cancer effects against their respective cancer types (Figure ). To evaluate the efficacy of the method in identifying drugs for specific cancer types, a randomization test was conducted to compare hit rates between our method and randomized selections. A drug-cancer type pair was defined as a hit if the drug is an approved targeted therapy for the corresponding cancer type. In the randomization test, 1000 cancer type-drug pairs were sampled from the raw data 100 times, yielding an average hit rate of 0.2%, which was significantly lower than the 1.5% hit rate for the 974 pairs predicted by our method in this study (Fisher's exact test, p = 0.001). In this study, we employed an integrated multi-omics approach, which has demonstrated numerous advantages over conventional single-omics methods. For example, Deng et al. utilized an integrated approach by incorporating transcriptomic, proteomic, and metabolomic molecular profiles of tumor patients. This data integration strategy facilitated the identification of key pathways and metabolites, surpassing the accuracy achieved by individual transcriptomic analyses. Similarly, Lu et al. conducted a thorough analysis by integrating transcriptomic and proteomic data in glioblastoma. The results revealed a significant enrichment of the gonadotropin-releasing hormone (GnRH) signaling pathway, a finding not discernible through single omics datasets. This highlights the potential of multi-omics research and analyses in providing a more comprehensive understanding of complex cancers. Furthermore, Heo et al. found that the integration of multi-omics data offers a comprehensive depiction of the molecular and clinical profile of cancer patients when contrasted with single-omics approaches. This integration not only enhanced the generation of high-quality, unbiased datasets, but also contributed to a more holistic understanding of the subject. Our study is one of many that have utilized the CCLE database in different ways to achieve various goals in cancer research and drug discovery. For example, Shao et al.
In this study, we employed an integrated multi-omics approach, which has demonstrated numerous advantages over conventional single-omics methods. For example, Deng et al. utilized an integrated approach by incorporating transcriptomic, proteomic, and metabolomic molecular profiles of tumor patients. This data integration strategy facilitated the identification of key pathways and metabolites, surpassing the accuracy achieved by individual transcriptomic analyses . Similarly, Lu et al. conducted a thorough analysis by integrating transcriptomic and proteomic data in glioblastoma. The results revealed a significant enrichment of the gonadotropin-releasing hormone (GnRH) signaling pathway, a finding not discernible through single omics datasets. This highlights the potential of multi-omics research and analyses in providing a more comprehensive understanding of complex cancers . Furthermore, Heo et al. found that the integration of multi-omics data offers a comprehensive depiction of the molecular and clinical profile of cancer patients when contrasted with single-omics approaches. This integration not only enhanced the generation of high-quality, unbiased datasets, but also contributed to a more holistic understanding of the subject .

Our study is one of many that have utilized the CCLE database in different ways to achieve various goals in cancer research and drug discovery. For example, Shao et al. employed a recommendation system learning model with CCLE data (i.e., drug data and multi-omics data in CCLE), focusing on drug-drug functional similarities, unlike our study, which identified cancer type-specific drugs . Hsu et al. developed Scaden-CA, a deep learning model for deconvoluting tumor data into proportions of cancer type-specific cell lines, aiming to bridge the gap in pharmacogenomics knowledge between in vitro and in vivo datasets. The CCLE bulk RNA data were used for their model validation . Carvalho et al. used CCLE data (i.e., copy number and RNA-Seq expression data of colorectal cancer cell lines in CCLE) to identify cell line models and explore drug responses in rectal cancer, revealing significant findings related to the topoisomerase 2A (TOP2A) gene in separate patient cohorts . Mohammadi et al. analyzed proteomics data from 26 breast cancer cell lines in the CCLE to examine the expression patterns of specific antimicrobial and immunomodulatory peptides across various breast cancer subtypes, aiming to facilitate drug repurposing efforts . Rinaldetti et al. used transcriptome expression data from CCLE and BLA-40 cell lines to identify novel subtype-stratified therapeutic approaches for muscle-invasive bladder cancer through high-content screening, revealing distinct drug sensitivities and highlighting the role of CCLE in molecular subtype assignments .

We performed an integrative analysis of large-scale RNA-Seq and proteomics profiling data, resulting in a set of characteristic pathways for 16 human cancer types. These pathways can provide a systematic understanding of the complex underlying mechanisms for each cancer type. Furthermore, through these characteristic cancer pathways, we identified drugs for each cancer type, which could serve as drug repurposing candidates for cancer treatment. Our results provide a rich set of testable hypotheses for the design of future experimental validation and clinical trials.

Data collection

RNA-Seq data (file: CCLE_RNAseq_genes_rpkm_20180929.gct) were retrieved from the CCLE database, and these data contain a total of 1019 cancer cell lines with 56,202 different transcripts . Quantitative proteomics data were obtained from the literature, and these data contain a total of 375 cancer cell lines with 12,755 different proteins . Cancer cell line annotations (file: Cell_lines_annotations_20181226.txt) were downloaded from the CCLE database . To quantitatively validate the results, mean normalized area under the curve (mnAUC) data were utilized from the supplementary materials of a previously published study . The mnAUC values reflect the average fraction of surviving cells after drug exposure across multiple cell lines.

Identification of significant transcripts and proteins for each cancer type

The raw transcriptome data were pre-processed to remove outliers using the capping method (i.e., the maximum RPKM value for each cell line was calibrated to the value that occurs most frequently among the maximum RPKM values for all cell lines), followed by a log2 transformation. The raw proteomics data were not subjected to the same preprocessing steps as the transcriptome data, as they had already undergone a log2 transformation. To identify the transcripts and proteins specific for each cancer type, we first determined if there was any significant difference between their expression levels across different cancer types using one-way analysis of variance (ANOVA).
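A minimal sketch of this first screening step is given below, assuming a genes × cell lines RPKM matrix and a cell line → cancer type mapping (both hypothetical) and a pseudocount of 1 before the log2 transformation (our assumption; the text does not state one).

import numpy as np
import pandas as pd
from scipy.stats import f_oneway

def preprocess_rpkm(expr: pd.DataFrame) -> pd.DataFrame:
    # Cap values at the most frequent per-cell-line maximum (one reading of the
    # capping method described above), then log2-transform with a pseudocount.
    cap = expr.max(axis=0).mode().iloc[0]
    return np.log2(expr.clip(upper=cap) + 1)

def anova_screen(expr: pd.DataFrame, cancer_type: pd.Series, alpha: float = 0.05) -> pd.DataFrame:
    # Keep features whose expression differs across cancer types (one-way ANOVA).
    significant = []
    for feature, values in expr.iterrows():
        groups = [values[cancer_type == t].to_numpy() for t in cancer_type.unique()]
        _, p = f_oneway(*groups)
        if p < alpha:
            significant.append(feature)
    return expr.loc[significant]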
Transcripts or proteins that showed significant differential expression ( P value < 0.05) were further analyzed to see if they were significantly expressed for a specific cancer type. The expression levels for one cancer type were compared with those of the others, and statistical significance was determined by the P value from a two-tailed Student’s t test. For each cancer type, the resulting P values were then corrected for multiple hypothesis testing using the false discovery rate (FDR), and the FDR-adjusted P value cutoffs were set from 10⁻¹⁰ to 10⁻² with a tenfold proportional increase. Each transcript subset at a different FDR-adjusted P value cutoff was subsequently clustered hierarchically using the complete linkage method with the Euclidean distance metric. The clustering results were quantified using Gini purity, a measure of clustering specificity. The value of Gini purity ranged from 0 to 1, with higher values indicating higher specialization in the cluster. Finally, the significant transcripts for each cancer type were prioritized based on the FDR-adjusted P value and Gini purity. For protein expression data, a P value of < 0.05 was used to select the significant proteins for each cancer type.

Biological pathway enrichment analysis

The NCATS BioPlanet pathway database was used to identify the biological pathways characteristic of each cancer type . The pathways enriched in each transcript or protein set for a particular cancer type were determined in two steps: Fisher’s exact test was first applied, and then the FDR was calculated. The statistical significance of the pathways with an FDR-adjusted P value < 0.05 was further assessed via bootstrap with 1000 replications. The bootstrap P value was calculated by counting the number of times the Fisher’s exact P value from the randomly permuted data was smaller than the true observed value, i.e., a bootstrap P value of 0.005 means that five out of the 1000 random P values were smaller than the true observed P value. A bootstrap P value < 0.05 was considered statistically significant. To improve the reliability of the identified pathways, the enrichment P values from the transcripts and proteins were further combined into a significance score (i.e., the average of the logarithms of the FDR-adjusted P values). The significant biological pathways for each cancer type were ranked and prioritized by this combined score (e.g., a smaller score indicates a higher level of significance).
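A minimal sketch of this two-step enrichment test with its bootstrap check is given below; the gene sets (sig_genes), pathway memberships (pathway_genes), and measured-gene background are hypothetical inputs, and random gene sets stand in for the permuted data.

import numpy as np
from scipy.stats import fisher_exact

def enrichment_p(sig_genes, pathway_genes, background):
    # Fisher's exact test for over-representation of a pathway in a gene set.
    sig, bg = set(sig_genes), set(background)
    path = set(pathway_genes) & bg
    a = len(sig & path)                  # significant and in pathway
    b = len(sig - path)                  # significant, not in pathway
    c = len(path - sig)                  # in pathway, not significant
    d = len(bg) - a - b - c              # neither
    return fisher_exact([[a, b], [c, d]], alternative="greater")[1]

def bootstrap_p(sig_genes, pathway_genes, background, n_rep=1000, seed=0):
    # Count how often a random gene set of the same size yields a smaller
    # Fisher P value than the observed one (e.g., 5 of 1000 -> 0.005).
    rng = np.random.default_rng(seed)
    observed = enrichment_p(sig_genes, pathway_genes, background)
    bg = list(set(background))
    k = len(set(sig_genes))
    smaller = sum(enrichment_p(rng.choice(bg, size=k, replace=False),
                               pathway_genes, background) < observed
                  for _ in range(n_rep))
    return smaller / n_rep

def combined_score(fdr_p_rna, fdr_p_protein):
    # Average of the log10 FDR-adjusted P values from the transcript and
    # protein enrichments; smaller scores indicate higher significance.
    return (np.log10(fdr_p_rna) + np.log10(fdr_p_protein)) / 2

FDR adjustment across all pathways could then be applied, e.g. with statsmodels' multipletests(p_values, method="fdr_bh"), before the 0.05 cutoff.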
Identification of potential anti-cancer drugs

Drug target annotations were acquired from the DrugBank database ( https://go.drugbank.com/ ) and the Kyoto Encyclopedia of Genes and Genomes (KEGG) drug database ( https://www.genome.jp/kegg/drug/ ). DrugBank is a bioinformatics and cheminformatics resource that combines detailed drug data with comprehensive target information . The KEGG drug database stores abundant information pertaining to drugs and their interacting molecular targets, which could be useful in the development of new potential anti-cancer drugs . Anti-cancer drug candidates were identified based on the drug-target interactions annotated by the above two databases. Molecular targets involved in multiple biological pathways significant for a cancer type were collected for drug candidate identification. Approved targeted cancer therapies and their corresponding cancer types were retrieved from the National Cancer Institute (NCI) at the National Institutes of Health (NIH) website ( https://www.cancer.gov/about-cancer/treatment/types/targeted-therapies/targeted-therapies-fact-sheet ).
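Putting the pathway and annotation steps together, candidate identification can be sketched as below; all inputs are hypothetical, and the min_pathways threshold is our assumption, since "multiple" pathways is not quantified in the text.

from collections import Counter

def drug_candidates(sig_pathways, pathway_genes, drug_targets, min_pathways=2):
    # Count, for each gene, how many of the cancer type's significant
    # pathways contain it, and keep genes appearing in several of them.
    hits = Counter(g for pw in sig_pathways for g in pathway_genes.get(pw, ()))
    multi = {g for g, n in hits.items() if n >= min_pathways}
    # Keep drugs (from the DrugBank/KEGG annotations) with at least one such target.
    return {drug: set(targets) & multi
            for drug, targets in drug_targets.items() if set(targets) & multi}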
Core values of employed general practitioners in Germany – a qualitative study | 1969b98f-fcca-43f4-a16a-39d0976e9f2f | 10770961 | Family Medicine[mh] | The current definition of general practice by the World Organization of National Colleges, Academies and Academic Associations of General Practitioners/Family Physicians (WONCA) reflects the role of general practitioners (GPs) in Europe . This definition includes established “core values” of general practice, such as holistic, comprehensive care or continuity of care, which are also part of the definition of core values of some European professional associations in General Practice . Definitions of General Practice in Europe are developed within professional networks and are the subject of decision agreed at their general meetings. Core values in this sense, describe a professional consensus on deeply held views of physicians that characterize the profession and guide their beliefs. They shape the profession, determine attitudes to professional responsibility and regulate actions in the professional context . Therefore, consensus on the values of General Practice is documented. It may be based on a particular vision of General Practice, e.g. a model of a self-employed GP in a single practice, which does not fully correspond to the reality in several countries. For example, we observe a feminization of medicine, the choice of General Practice in anticipation of a good work-life-balance or family compatibility and the trend towards better self-care. The latter item was included in the Geneva Declaration of the World Medical Association in 2017 . A change in the professional role and the core values of GPs is being discussed in several countries. This concerns, among other things, the balance between work and private life. Directly care-related aspects are also increasingly discussed when talking about changes of the professional role : In 2016, Hashim abstracted the core values available in the literature worldwide to five principles of family medicine: compassion, generalism, relationship continuity, lifelong learning, and reflective mindfulness . In several countries such as the Netherlands and Ukraine, coordination of care and collaboration among GPs have been identified as core values . The desire to work in groups indicates an increased importance of collaboration. Teamwork is also mentioned in the future positions of the German College of General Practitioners and Family Physicians . Obviously, the number of discussed values and their aspects in General Practice is high. In some European countries, the changing role of GPs has been accompanied by a trend towards practice-based employment: Austria allowed self-employed GPs to employ other GPs in 2019 . In the Netherlands about 12% of the GPs were employed in 2012 . In Germany, the majority of primary care physicians are self-employed. However, between 2006 and 2020, the proportion of employed physicians among GPs in Germany has increased from 3.1% to about 20% . Employed GPs in Germany work directly for one or several self-employed GP(s) and have at least 5 years of training in General Practice training or internal medicine. This means, that employed GPs are paid directly and usually on a regular basis by an employing GP, who is remunerated according to the services provided in the practice. Parents and women in particular prefer the employment status . Employed GPs work fewer hours than self-employed GPs . 
These developments raise questions about continuity, access, and their impact on patients in an international context. Due to the shortage of GPs in several countries, high-quality care is partly dependent on employed GPs. Their value orientation is therefore of interest. So far, employed GPs do not seem to be explicitly considered in the discussion about the core values of GPs and the European definition of General Practice. To our knowledge, there is no research on the value orientation of employed GPs regarding the provision of medical care. The aim of this study is therefore to qualitatively describe the values of employed GPs and the manifestation of these values in Germany. The research questions are: What values do employed GPs have regarding their professional role? To what extent are these values manifested in practice and what factors are associated with their manifestation? The term “values” in this work refers both to core values already established in the definition of General Practice and to aspects of values currently under discussion or emerging from the data.
To explore the values of employed GPs in a professional context, we conducted 17 semi-structured telephone interviews. Due to the lack of previous research on employed GPs, we conducted an exploratory, qualitative study. Ethical approval was obtained from the University of Heidelberg (S-986/2020). For reporting, we followed the Consolidated Criteria for Reporting Qualitative Research Checklist (Additional file ).

Research team

The research was conducted by two master’s students in Health Services Research (LH, LB, both female), a part-time employed GP and researcher (SS, female, MD), a sociologist (CU, female, PhD), an implementation scientist (MW, male, Prof.) and a self-employed GP and researcher (FPK, male, Prof.) at the Department of General Practice and Health Services Research, University Hospital Heidelberg. All authors were experienced in qualitative research, brought their backgrounds to the analysis, and reflected on the data from their own researcher’s perspective.

Participant selection

Approximately 89% of the German population is covered by the statutory health insurance system, while a much smaller proportion is privately insured. All German GPs who are entitled to claim from the statutory health insurance are obligatory members of the Association of Statutory Health Insurance Physicians. The contact details of employed GPs can therefore be obtained directly from a publicly available search function. In our case, we used the website of the Association of Statutory Health Insurance Physicians of Baden-Württemberg . All employed GPs working in two neighbouring districts (city and county) in southern Germany were invited by SS, LH and LB to participate in the study by e-mail or fax on March 31, 2021, and were reminded by telephone from April 12, 2021, in case of non-response. The counties are not identified for privacy reasons. GPs with at least 5 years of training in General Practice or internal medicine, with a license to practice as a GP and working for self-employed GPs in practices, were included. This information could also be obtained from the abovementioned website. Employed GPs working for employing GPs who are not entitled to claim from the statutory health insurance could not be included. A purposive sampling strategy was used, focusing on a maximum variation of gender, age and practice type (single practice, joint practice, medical care center) to allow a broad representation of existing aspects. Groups that were not adequately represented in the first round (male physicians and medical care centers) were subsequently invited by personalized letter, indicating the addition of a convenience sampling strategy. Participants provided written, informed and unsolicited consent. They were informed about the aim of the study, which was to learn about the daily life of employed GPs. There were no dropouts.

Data collection

The interview guide (Additional file ) was developed in an interprofessional team (CU, FPK, LB, LH, MW, SS) based on a literature review including definitions of General Practice by WONCA , and the German College of General Practitioners and Family Physicians . It includes motivation for employment, personal experiences during employment, the relationship with employing physicians and the potential for improvement. In addition, self-reported sociodemographic data on the employed GPs and the practices were collected.
The interview guide was piloted with the help of three personal contacts who met (or had until recently met) the inclusion criteria and was subsequently concretized. The pilot interviews were not included in the further analysis. The interviews were conducted by telephone between April 13, 2021 and May 26, 2021 by LH, LB and SS. They were audio recorded and a postscript was prepared. The interviews were transcribed. During the final interviews, no new topics were addressed by the participants. Further data collection was deemed redundant and data saturation was assumed. Furthermore, in a systematic review identifying studies that used empirical data or statistical modelling to assess saturation, 9–17 interviews were considered sufficient .

Data analysis

We used a qualitative content analysis, which is appropriate for identifying relevant themes in an exploratory study. Because of its flexibility and practicality, we followed Kuckartz’s procedure. The analysis of all initially collected qualitative data was performed by a junior researcher in the study team (LH): After familiarization with the material through repeated reading and case summaries (LH), the main categories were primarily formed and defined deductively based on the frameworks of the European WONCA Definition of General Practice/Family Medicine , the definition of the Specialty General Practice by the German Society for General and Family Medicine , the World Health Organization’s global strategy on people-centered and integrated health services and supplemented by other available literature (Additional file ). The following categories were created within the MAXQDA software: continuity, comprehensive care, collaboration and collegial exchange, waiting times, professional distance, work satisfaction and medical autonomy, job satisfaction, availability and private life. Deviating from Kuckartz, subthemes were also generated inductively. The robustness of the data analysis was increased through researcher triangulation: Group discussions and reflections were conducted in the interprofessional study team with experienced qualitative researchers (SS, FPK) as well as in qualitative method workshops with junior and senior researchers (CU, MW), a summer school (LH), the 55th Congress of General Practice and Family Medicine (LH, SS) and in the Employment Working Group of the General Practitioners’ Association in Baden-Württemberg. The coding and overall analysis of the collected data were discussed and finalized with an experienced member of the study team (SS). After the complete coding of the material, the identified values were subdivided according to their relevance, taking into account the respondents’ subjective classification of each value in relation to the life of employed GPs, patients, the profession and other values. This resulted in values with “high relevance” and values with “heterogeneous relevance”.
Sample

We conducted 17 interviews, which lasted on average 50 min (minimum 22; maximum 83 min). The sociodemographic and work-related characteristics of the employed GPs interviewed are shown in Table .

Results: values of employed general practitioners

We found 12 values of employed GPs in their professional context. They differ in their relevance for the interviewees. First, the identified values are presented from the perspective of the employed GPs and structured according to the observed relevance. Then the implementation of the values (Table ) and associated factors (Table ) are reported from the perspective of the employed GPs. Citations originate from different interviewees.

Values with high relevance

Job satisfaction

According to some employed GPs, job satisfaction leads to better patient care: “If the doctors are more satisfied, I think they can […] sometimes provide better medical care […]” and is therefore not only an issue for employed GPs themselves, but also for patients. Job satisfaction is also mentioned in the context of the type of work, e.g. practicing different specializations and subjects and avoiding bureaucratic work.

Professional distance from patients

Employed GPs seek to maintain a professional distance from patients, mainly for reasons of self-care; patient benefits of professional distance are not mentioned within the interviews. However, the absence of disadvantages of a professional distance for the patient is relevant for employed GPs.

Collaboration and collegial exchange

Employed GPs, employing physicians and patients benefit from teamwork and exchange between colleagues within the practice, and this is seen as valuable: “What we do a lot […]: Please come here and take a look at this rash, or: Can you help me? […] everything that’s collegial is very valuable to me, so it’s more valuable to me than the things that bother me, because I also believe that our patients benefit immensely from it.”

Comprehensive care

Comprehensive care by individual providers is usually considered necessary. The inability to offer meaningful services and existing restrictions on treatment occasions, e.g. exclusive acute consultations of individual providers, are criticized. Comprehensive care by the entire practice is also relevant for the employed GPs, and the necessary range of services is described as “from athlete’s foot to brain tumours” . Multiple views of diseases, specialization and increased specialist expertise can be achieved, and diagnostic possibilities can be used when the entire practice works comprehensively.

Access: sufficient consultation time

When it comes to access, employed GPs see sufficient consultation time as highly relevant. It is described as part of the employed GPs’ philosophy or professional role: “GP just means […] I have many patients with psychosomatic or psychological problems that you just can’t deal with in five minutes”. Employed GPs would like a larger time window per consultation, as longer consultations make medical sense. The time is needed to curb polymedication, treat chronic and mental illness, and translate diagnoses into therapies. They would like to see fewer patients “pushed in” so that their work is more valued.

Availability of employed GPs for their patients versus employed GPs’ private lives

By availability, we mean being available for patients or care-related tasks. Availability has many consequences for employed GPs
(e.g. less free time, family conflicts and difficulty in maintaining emotional distance from patients) and for patients (e.g. shorter waiting times, longer consultations and longer office hours of the practice). It is therefore important for the realization of other values, e.g. consultation time, waiting time, care by a reference provider and private life. By reference provider, we mean a general practitioner who coordinates care, feels responsible for the patient, has the patient on his or her caseload, or to whom the patient feels connected. Limited availability can lead to tensions between values and the need to choose between them. Possible factors for availability are the day of the week, the location of care, the mission and policies of the practice, the reason for and urgency of the consultation, the number of colleagues and the existence of a reference provider. Availability functions as a link between private and directly care-related values, as it determines the feasibility of directly care-related values but has an opposite effect on the feasibility of private life. This means that private life values can become indirectly relevant to patient care through the link of availability: The involvement of employed GPs in private life may affect the provision of adequate consultation time, waiting time or care by a reference provider. Some of the employed GPs consider their own availability less important than that of employing doctors, who also have limited availability. Most employed GPs are more concerned with family obligations than with the consequences of low availability for patients. For example, the dissatisfaction of patients due to low availability is accepted by employed GPs, because the desire to be with the family predominates. Another example is the illness of children of employed GPs, in which normatively the family value prevails over availability: “…it was also very difficult to then hand over the sick child to someone else […] where it was then simply clear to me that it was not worth it to always torture myself [to the practice].”

Values with heterogeneous relevance

Continuity

While coordination between providers and knowing the patient are important parts of continuity, the duration of the doctor-patient relationship is only partially relevant, and care by a reference provider is perceived as primarily irrelevant. Coordination between providers is particularly important for employed GPs because the care of a patient is often provided by different physicians. It contributes to the knowledge about diagnoses: “For that you need time and leisure to really do handovers, […] The patient did not have a follow-up appointment […] and then came back to me at some point and I looked in this lab and hepatitis C was positive. […] where I think that at some points you have to make sure that you’re doing handovers and that things don’t slip up.” Knowing the patients is seen as a resource for needs-based treatment and can reduce the time needed for consultation. It is also a factor in employed GPs’ job satisfaction. It is less relevant in a short-term employment with priority treatment of patients with acute causes. The heterogeneity of the relevance of the duration of the doctor-patient relationship is due to the growing knowledge about patients (especially in the first phase of the relationship) and the desire not to be tied to a practice. Care by a reference provider is mostly irrelevant for employed GPs, as coordination between providers usually works well.
More important are free time, job satisfaction, professional distance and, in some cases, short waiting times. The resulting plurality of providers and their different opinions and specializations are seen as beneficial for care and for the physicians themselves. Exceptions are made for home visits, preventive care, rehabilitation, and changing providers within the same visit.

Waiting times

Short waiting times are particularly important for the interpretation of diagnostic consultations. When choosing between waiting times and care by a reference provider, employed GPs tend to focus on low waiting times, as some suggest reducing them by foregoing care by a reference provider. Round-the-clock care is also sometimes seen as more important than care by reference providers: “We try, if it is somehow possible, not to let the patients fixate so much on the individual physician, […], because we try to ensure this round-the-year and round-the-clock care.” Short waiting times are less important for preventive care and when compared to waiting times for specialists: “If you want to have a check-up with a defined GP, you have to wait four weeks for the appointment. I think that’s really short compared to specialists.”

Medical autonomy

Medical autonomy is most often seen in conflict with economic pressures. Medical autonomy is important to employed GPs when they compare it to other providers or working environments (e.g. the inpatient sector) - especially for newcomers - and when employing doctors do not follow their own guidelines or intervene in medical affairs without being asked: “he says […] cold medicines all go on a green prescription [paid by the patient] and I then represent that to my patients. Then they trot over to the boss, cajole him and then it’s given on a red prescription [paid by the statutory health insurance] and of course that annoyed me massively”. Restrictions of medical autonomy are better accepted when employed GPs understand the reasons for the restrictions. They are also better accepted when discussions with the employing physicians are seen as learning opportunities, when exceptions can be made for special circumstances and when the employed GPs’ working philosophy is similar to that of the employing physicians. This similarity may result from the adaptation of their working methods over time: “In the beginning, there was much more authority, but now I work quite autonomously. After twenty years, of course, you also adapt very much to each other, so there are not so many conflicts in the sense that he has to dictate certain things to me, because we are very similar.”

The manifestation and associated factors from the perspective of employed general practitioners are presented in Tables and . All statements are derived from the responses of the interviewees (n = 17). A comparison of Table and the relevance of availability and private life reveals a discrepancy between intrinsic value orientation and implementation, e.g. when it comes to professional distance from patients. In particular, the value orientation regarding availability and private life (high relevance of family time) differs from the practical implementation, as compromises and consideration of patient needs are taken into account.
Important values for employed GPs in the professional context are job satisfaction, a professional distance from patients, cooperation and exchange with other providers, comprehensive care, sufficient consultation time, availability and family. Continuity, waiting times and medical autonomy are only partly perceived as important (interpersonal and intrapersonal) or less important than other values. Tensions occur between values, especially due to the limited availability of employed GPs for patients. Manifestation factors can be at the practice, patient, and physician levels. The overall view of important values of employed GPs may give the impression that there is little focus on patient-centeredness, with values such as job satisfaction, professional distance from the patients and collaboration and collegial exchange being mentioned. However, from the perspective of employed GPs, these values are also relevant to the patients and lead to better medical care or at least do not result in disadvantages for the patients. In our study, employed GPs perceive a tension between private life and other values due to their limited availability. Availability is thus a central factor in the compatibility of values directly relevant to care but has an opposite effect on compatibility with private life. The importance of availability may give the impression that employed GPs with low availability cannot provide continuity, adequate consultation time and rapid access: employed GPs must prioritize values. However, low availability does not necessarily lead to poorer care from the perspective of employed GPs: reference providers are seen as having little relevance to care when good coordination between providers exists. Similarly, Bodenheimer et al. and Pannatoni et al. show that better patient satisfaction with part-time primary care physicians does not depend solely on waiting times and care provided by referral physicians; a trusting therapeutic relationship can also be established despite relatively low availability (to some extent). Employed GPs need support in making complex value judgments and in providing adequate continuity. Bodenheimer gives some examples . In our study, availability can be influenced, e.g. by good working conditions, leading to less absenteeism. It should not be ignored that GPs increasingly demand a private life, as the realization of a private life is indirectly relevant to patient care. The choice of employment with the motivation to reconcile family and work may suggest a better compatibility than self-employment. However, according to the present results, it can by no means be assumed that this compatibility in employment is comprehensive enough for the employed GPs. Knowing the patients is seen as particularly important for the continuity of the relationship in this study. Care provided by reference providers tends to be perceived as less important, especially when coordination between providers works well. The high relevance of patient knowledge combined with the low relevance of care by referring physicians may be seen as a contradiction, but patient knowledge is built up not only through the quantity but also the intensity and quality of consultations. However, a systematic review has shown an association between less care from reference physicians and patient mortality , and the presence of a reference physician reduces hospitalization and use of out-of-hours care .
Future research could consider organizational aspects such as the quality of coordination between providers. The theme of professional distance to patients was also found in a multicentric qualitative study by Le Floch et al. GPs wanted to “control the level of involvement with their patients” and described an “ability to balance empathy with professional distance” . Employed GPs in our study seem to focus on self-care in the area of tension between distance from the patient and the relationship with the patient. They also find it more difficult to balance professional distance with the patient relationship, describing intrinsic factors and a high sense of responsibility as reasons for this challenge. Regarding medical autonomy, hierarchies exist in practices due to the different roles of self-employed and employed physicians despite equal professional qualifications. The realization of physician autonomy is described as heterogeneous in the present study. A qualitative study from the United Kingdom shows a negative perception of realization by employed primary care providers, namely disempowering and disadvantageous hierarchies . The values of employed GPs found in this study overlap with the professional definitions of WONCA (e.g. comprehensive care, continuity and access), as well as with values currently discussed in the literature (private life, job satisfaction, collaboration and exchange). Thus, the values of employed GPs are partly consistent with the discussion about the professional profile and the definitions of General Practice. They show that employed GPs are partly oriented towards established values . Our study also found values that were not previously included in the discussion of the changing professional role and in the definitions of General Practice: professional distance from patients, availability for patients and medical autonomy of the physicians. Strengths and limitations Strengths of the exploratory, hypothesis-generating study stem from the data collection and analysis: We achieved a wide variance in the sample in terms of practice type and age of the employed GPs. The comprehensibility of the interview questions was increased by piloting of the interview guide. The length of the interviews (Ø 50 min) allows for in-depth insights. The deductive-inductive approach helped to identify central values. The entire research process was complemented by the extensive exchange in the interdisciplinary research team. Limitations arise from the regional focus of the study. Even if urban and rural practices were integrated, other regions may have different care structures, leading to different results. Personal relationships between one interviewee and SS as a recruiter may have increased social desirability. To mitigate this risk, this interview was not conducted by SS. Voluntary participation introduces a selection bias; it is possible that particularly motivated or distressed employed GPs participated. Recruitment of a broad gender variation was only partially successful (3 of 17 interviewees are male), as only 30% of employed GPs working in practices with approval for claiming from statutory health insurance in Germany are male . The centrality of the value availability may have arisen due to the high proportion of part-time workers in this study, given the high percentage of employed GPs working less than 30 h a week in Germany (45% among employed GPs working in practices with statutory health insurance participation (2020)).
For data protection reasons, pseudonymised information on the employed GPs cited cannot be given. The frameworks that were used in clinical practice were not developed for qualitative research. We are not aware of frameworks for core values in General Practice that were developed for research. Due to the exploratory and qualitative character, no generalizations can be made. However, the study design is considered appropriate for the hypothesis-generating approach. Representative quantitative surveys in the group of employed GPs could follow to verify the results.
Several values are important for employed GPs, with availability to patients being crucial. It serves as a link between private life and patient care and involves several areas of tension. Trade-offs in the realization of values are often multifactorial, with factors related to practice organization, physicians, and patients. The values of employed GPs are partly consistent with the professional definition of General Practice and the discussion about the professional profile. The increase in the number of employed GPs implies the need to reflect on the core values of General Practice and to consider employed GPs in the promotion of work-family balance. The extent to which established and new values play a role for the general practice profession remains open and can be explored in further research projects and professional policy discussions. The future will show to what degree employed GPs, employing GPs and patients need to adapt to both changing health care systems and changing professional values in General Practice.
Below is the link to the electronic supplementary material. Additional File 1 Additional File 2 Additional File 3
Topographical mapping of catecholaminergic axon innervation in the flat-mounts of the mouse atria: a quantitative analysis | 973b59ef-9b1f-45fe-b67a-d19a61713cea | 10082215 | Anatomy[mh] | The sympathetic nervous system (SNS) plays a pivotal role in regulating cardiac functions including heart rate, contractility, and conduction velocity, which are essential for our survival , . Contrary to conventional belief, not only does the SNS play a role in the “fight or flight” integrated response, but it also regulates heart rate and contractility in both resting and non-resting conditions . In fact, new emerging roles of cardiac sympathetic innervation were revealed including the regulation of cardiomyocyte size and providing a neurotrophic signal to the heart . Furthermore, any disturbance of the SNS functions, including structural remodeling and overactivity, may promote progression of various cardiovascular diseases . Although the functional roles of the SNS have been well established, a comprehensive organization map of the sympathetic postganglionic innervation of the atria remains insufficiently delineated. In addition, the regional density of the sympathetic innervation of the heart has yet to be quantified. There are numerous unanswered questions related to the detailed anatomy of the heart's sympathetic nervous system and how it is modified by disease states, such as atrial fibrillation, arrhythmia, and heart failure . For example, a complete understanding of the morphology and morphometry to explain the complexity of sympatho-cardiac communication and the differential regional distribution of the atrial nerve plexus remains to be elucidated , , . Previous studies investigated the structure and function of sympathetic neurons and axons in different species – using sectioned heart preparations or focused only on specific regions of the atria, which disrupted the continuity of axons and terminals, preventing large scale morphological characterization of these structures. Great effort has been made to better characterize the intrinsic cardiac plexus in the whole-mount mouse heart, which increased our knowledge on the distribution of noradrenergic innervation of the mouse heart , . Nevertheless, the complete fine details of TH-IR axon terminals and varicosities were not fully visualized in the whole-mount. Additionally, thick regions of the auricle and other structures were partially or completely removed. These structures include right cranial vein (RCV), left cranial vein (LCV), and caudal vein (CV) , which we refer to in this study and our previous work as superior vena cava (SVC), left precaval vein (LPCV), and inferior vena cava (IVC) , , ; respectively. Moreover, the topology of sympathetic neurons and their local communication with the heart, which influence cardiac functions were characterized , . In those studies, it was shown that sympathetic neurons directly communicate with cardiomyocytes in the ventricles and the density of innervation correlates with the size of cardiomyocytes, all of which emphasize the need to determine the differential regional innervation of the heart. Recently, researchers were able to generate two- and three-dimensional reconstructions of the sympathetic innervation of the myocardium. However, these studies provided imaging from only a few myocardial sections and a small segment of the heart . Alternatively, they revealed the big bundles without a clear visualization of the fine axons and terminals or cardiac targets . 
Both studies used tyrosine hydroxylase (TH) as a sympathetic marker and showed that sympathetic nerves and intrinsic cardiac ganglia were distributed in both atria of the heart, predominantly near the SAN, AVN and around the junction of left and right atria , . Despite substantial advances in knowledge on the anatomy and physiology of cardiac nerves that contribute to therapeutic responses, there are still many gaps that need to be filled as neuromodulation treatments move away from pharmaceuticals and non-specific treatments to more guided and specific therapeutic targets for cardiovascular diseases. To facilitate these transitions, the architecture of cardiac sympathetic nerves needs to be carefully and precisely determined. More studies are needed to determine the structural organization of the sympathetic postganglionic innervation of whole-mount preparations of the heart (atria and ventricles) to improve understanding of sympathetic control of the heart. Previously, we have determined the distribution and morphology of parasympathetic afferent and efferent axons in the atria in wild-type rat and mouse preparations – as well as in disease models (e.g., aging, sleep apnea, and diabetes) , , . Collectively, the present work provides a comprehensive topographical map of the catecholaminergic efferent axon distribution, density, and morphology of the atria at the single cell/axon/varicosity resolution. This anatomical map will provide a foundation for future functional studies of sympathetic control of the heart and its remodeling in pathological conditions.
Animals and ethical statement All procedures were approved by the University of Central Florida Animal Care and Use Committee (HURON PROTO202000150) and strictly followed the guidelines established by the National Institutes of Health (NIH) and the ARRIVE 2.0 guidelines. This study was performed on healthy male C57Bl/6 J mice (RRID: IMSR_JAX000664, The Jackson Laboratory, Bar Harbor, ME) (n = 20, age 2–3 months, weighing 20–30 g). Mice were housed in a plastic cage (n = 5/cage) with sawdust bedding (changed three times a week) in a room with controlled environmental conditions of humidity and temperature in which light/dark cycles were set to 12/12 h (6:00 AM to 6:00 PM light cycle) and provided food and water ad libitum. Mice were divided into 3 groups. The connected atria TH-IR axon innervation mapping group (n = 5) was used to show topographical innervation and reconstruction of nerves. The quantification group of separate right and left atria (n = 6) was used to perform regional density analysis. The control group (n = 4) was used to ensure that there was no nonspecific labeling and that labelled structures represented neuronal and axonal structures. This was performed by omitting the primary antibody (n = 1), omitting the secondary antibody (n = 1), or labelling with PGP9.5 (n = 3). Additional animals were used to counter-stain neurons with Fluorogold (n = 4). All efforts were made to minimize the number of mice and their suffering. Tissue preparation Mice were deeply anesthetized with isoflurane (4%) induction in an anesthetic chamber. Absence of the hind paw pinch withdrawal reflex was used as an indicator of sufficient depth of anesthesia. Mice were injected with 0.2 mL heparin into the left ventricle followed by a cut to the inferior vena cava to drain the blood. After 2 min, a needle was inserted into the left ventricle and the mice were perfused with 0.9% saline at 38–40 °C for 5 min, followed by fixation with 4% paraformaldehyde. Hearts along with the lungs and trachea were removed from the chest and postfixed overnight in 4% paraformaldehyde at 4 °C. The heart was placed and pinned into a dissecting dish lined with Sylgard and containing PBS (0.1 M, pH = 7.4), and the specimen was further dissected using a Leica Stereo microscope as described previously , , , , . To reveal the intact network of sympathetic postganglionic atrial innervation, we removed the heart from the surrounding tissues (lungs, aortic arch and trachea). Then, the atria (both right and left atrium connected at the interatrial septum on the ventral side) were separated from the ventricles (n = 5). The whole atria were processed as a montage of several hundred (~ 260) maximal projections of image stacks. To gain more insight into TH-IR axon innervation and regional density, the right and left atria (RA and LA) were separated. The auricles were cut along the boundary into two halves. The part of the auricle facing more exteriorly and connected to the big vessels is referred to in this study as the outer auricle, and the other half is referred to as the inner auricle. Then, flat-mounts were scanned using the confocal microscope at higher magnification (40X oil lens). The separation of the atria was necessary to avoid areas of overlap between RA and LA. Montages of the maximal projections of the right and left atria were prepared (n = 6/group). A detailed experimental protocol is available through Protocol.io: 10.17504/protocols.io.n92ldzbmxv5b/v2.
Immunohistochemistry (IHC) Tissue processing and immunolabeling were performed as described previously . Following dissection, the tissues were washed 6 × 5 min in 0.1 M PBS (pH = 7.4), then immersed for 48 h in a blocking reagent (2% bovine serum albumin, 10% normal donkey serum, 2% Triton X-100, 0.08% NaN 3 in 0.1 M PBS, pH = 7.4) to reduce nonspecific binding of the primary antibody and to promote increased antibody penetration. Primary antibodies (1:100) were added to the primary solution (2% bovine serum albumin, 4% normal donkey serum, 0.5% Triton X-100, 0.08% NaN 3 in 0.1 M PBS, pH = 7.4) and incubated for 48 h. Unbound primary antibodies were removed by 6 × 5 min tissue washes in PBST (0.5% Triton X-100 in 0.1 M PBS, pH = 7.4). Secondary antibodies (1:50 in PBST) were then applied for 24 h. Unbound secondary antibodies were removed by 6 × 5 min tissue washes in PBS. Negative control tests (in which primary antibodies were omitted) were also performed, and these preparations presented no labeling, confirming that nonspecific binding of secondary antibodies did not occur. Lastly, we verified the accuracy of our TH labeling by using PGP 9.5 (ubiquitin carboxyl-terminal hydrolase-1), a general neuronal marker that visualizes different populations and subtypes of nerves. A list of the antibodies used in this study is summarized in Table . Flat-mounts were placed on a microscope slide with their dorsal side against the glass, coverslipped, crushed for 2 days with lead weights, and air-dried under a fume hood for 1 day. Slides were dehydrated by immersion for 2 min in each of 4 ascending concentrations of ethanol (75, 95, 100 and 100%), followed by 2 × 10 min washes in xylene. Slides were then covered with coverslips and DEPEX mounting medium (Electron Microscopy Sciences #13514) and allowed to dry overnight. Fluoro-Gold (FG) counterstaining To evaluate the location of immunolabeled structures relative to cardiac ganglia, FG was used to counterstain neurons in four additional animals. Fluoro-Gold (0.3 mL of 3 mg/mL per mouse; Fluorochrome, LLC, FG 50 mg) was injected (i.p.) to counterstain neurons in the peripheral ganglia. Mice were perfused 3–5 days after FG injection and the hearts were removed and dual labeled with TH. Image acquisition The Nikon 80i fluorescence microscope (Lens: 20X and 40X) was first used to survey the TH labeling in the whole flat-mounts of the atria. Then, a Leica TCS SP5 laser scanning confocal microscope (Lens: 20X and 40X oil) was used to acquire images and assemble image montages of whole connected atria, including left atrium and right atrium flat-mounts. An argon-krypton laser (excitation 488 nm) was used to image TH-IR axons, a helium-neon (HeNe) laser (excitation 543 nm) was used to image PGP9.5-IR axons, and a UV laser was used to detect FG or background autofluorescence of the tissues. The connected atria were scanned using a 20X oil immersion objective lens (Z-step 1.5 μm), to produce approximately 400 confocal image stacks per montage. The confocal projection images of these stacks were used to assemble montages of whole atria flat-mounts using either Mosaic J or Photoshop. To better visualize the topographical distribution and morphology of TH-IR innervation in the atria, the separate whole left atrium and right atrium and regions of interest were scanned at high magnification (40X oil immersion objective lens, Zoom X1 or X1.5, Z-step 1.5 μm). The higher magnification resulted in approximately 800 frames for each atrium.
We were able to overcome the thickness of the flat-mount whole atria with our optimized tissue processing techniques and flattening of the tissue, which allowed us to visualize fine details of TH-IR axon innervation. We also used a Zeiss M2 Imager microscope with an autostage (20X NA 0.8) to scan the samples, which produced images with high quality that were comparable to the images obtained with confocal microscopy (20X objective lens). This approach will make future methodology less laborious and more efficient. Tracing of TH-IR axons was performed using Neurolucida 360 (MBF Bioscience). Additionally, Neurolucida Explorer (MBF Bioscience), an analytical software built within Neurolucida 360, was used to perform morphometric analysis on traced axon reconstructions. Branched structure analysis was performed, and parameters (number of trees, nodes, terminals, total length and surface area) were selected for all connected atria tracings (n = 6). Density and size quantification To quantitate the regional density of TH-IR fibers in the atria, we used Fiji to segregate images into specific regions of interest (ROIs): SAN, AVN, SVC, IVC, right outer and inner auricle, LA-PV junction, left PV, middle PV, right PV, and left outer and inner auricle . The steps of density quantification were as follows (Fig. ):
(1) Subtracted the background with a radius of 80 pixels to reduce noise and enhance contrast.
(2) Applied particle removal to remove small debris.
(3) Applied a binary threshold (Otsu method) to isolate immunoreactive structures.
(4) Quantified the signal above the threshold.
(5) Averaged the signal of different ROI windows using six counting frames.
(6) Ran the Shapiro–Wilk normality test.
The AxonTracer algorithm was used to trace and confirm axon quantification . Axon density was represented as total axon length per ROI. Total axon length in pixels was converted to μm using appropriate conversion factors. Statistical significance of the difference between the means was assessed using one-way ANOVA and Tukey’s HSD (Honestly Significant Difference) test. Data are expressed as means ± SEM. Significance was accepted at P < 0.05. Heatmaps were created after applying a modified version of the freely available open-source automated software algorithm that traces and quantifies axons (AxonTracer plugin, ImageJ) . The percentage of TH-IR neurons was counted using all single optical sections of different ICG image stacks.
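For readers who wish to adapt this workflow outside Fiji, the sketch below reimplements the main steps in Python with scikit-image, SciPy, and statsmodels. It is a minimal illustration under stated assumptions, not the exact macro used in this study: the pixel size, counting-frame area, minimum particle size, and the simulated regional densities are hypothetical placeholders, and particle removal is applied after thresholding here rather than before it.

import numpy as np
from skimage import filters, morphology, restoration
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

PIXEL_SIZE_UM = 0.76   # hypothetical micrometers-per-pixel conversion factor
ROI_AREA_MM2 = 0.25    # hypothetical counting-frame area in mm^2

def th_axon_density(roi_img):
    """Return TH-IR axon length (um) per ROI area (mm^2) for one grayscale ROI."""
    img = roi_img.astype(float)
    # Step 1: background subtraction, analogous to Fiji's rolling ball (radius 80 px)
    img = img - restoration.rolling_ball(img, radius=80)
    # Step 3: binary Otsu threshold to isolate immunoreactive structures
    binary = img > filters.threshold_otsu(img)
    # Step 2 (applied post-threshold in this sketch): drop debris below 20 px
    binary = morphology.remove_small_objects(binary, min_size=20)
    # Step 4: skeletonize so each axon contributes roughly its centerline length
    skeleton = morphology.skeletonize(binary)
    axon_length_um = skeleton.sum() * PIXEL_SIZE_UM   # pixel count -> micrometers
    return axon_length_um / ROI_AREA_MM2

# Steps 5-6 and the group statistics, run on simulated placeholder densities
# (um/mm^2) for three regions with n = 6 counting frames each; NOT study data.
rng = np.random.default_rng(0)
densities = {
    "SAN": rng.normal(690.0, 50.0, 6),
    "AVN": rng.normal(400.0, 50.0, 6),
    "IVC": rng.normal(115.0, 30.0, 6),
}
for region, values in densities.items():
    print(region, "Shapiro-Wilk p =", stats.shapiro(values).pvalue)
print(stats.f_oneway(*densities.values()))            # one-way ANOVA across regions
labels = np.repeat(list(densities), 6)                # region label per counting frame
print(pairwise_tukeyhsd(np.concatenate(list(densities.values())), labels))

In practice, th_axon_density would first be applied to each of the six counting frames per region, and those per-frame densities would replace the simulated values before the normality test, ANOVA, and Tukey comparisons are run.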
Topographical projections of TH-IR axons in the flat-mount of the whole left and right atria (connected): Neurolucida tracing and digitization Four major extrinsic TH-IR axon bundles entered the atria (short yellow arrows in Fig. ), branched into the smaller bundles, and finally ramified into individual axons which covered the entire atria (Fig. ). Across animals, the number of large TH-IR bundles and their entry locations and innervation fields of the atria were quite consistent. In all atrial tissue preparations, most TH-IR bundles were identified consistently at the medial side of the superior vena cava (SVC), the entrance of the pulmonary veins (PVs) to the left atrium, and the left precaval vein (LPCV) (Fig. ). The tracing of TH-IR axons using the Neurolucida system highlighted the trajectory of major bundles effectively. These bundles innervated different regions with a certain degree of overlap (Fig. a). TH-IR bundles projected their axons towards the atria via four main topographical pathways:
● Bundle 1 entered the atria at the medial side of the SVC and branched into smaller bundles that proceeded towards the SAN, conductive fibers, AVN region, right PV and the lower part of the right auricle (Fig. b).
● Bundle 2 formed a loop around the origin of the SVC (probably folded during dissection) and sent projections mainly to the upper part of the right auricle and the junction of LA and RA (Fig. c).
● Bundle 3 entered the atria at the LPCV and ramified into individual axons that projected towards the entire left auricle (Fig. d).
● Bundle 4 entered the atria at the lower edge of the LPCV and projected towards the LA-PV junction, left and middle PVs and the junction of LA and RA (Fig. e).
Most animals showed a similar trend of TH-IR axon distribution. Some of the variations observed could be due to unintentional folding of bundles during dissection and interindividual variation. To confirm that TH-IR axons and neurons were accurately representing neural processes, the pan-neuronal marker PGP 9.5 was used. All TH-IR axons and neurons were also PGP 9.5-IR (Fig. ), indicating that TH-IR fibers (Fig. a–c) and neurons (Fig. d–f) were indeed neural processes. Additionally, negative controls further confirmed the labeling specificity. TH-IR axon innervation of the right and left atrium: density, distribution and morphology The distribution of TH-IR axons in the whole right atrium was consistent in all animals . A couple of large TH-IR bundles entered the right atrium through the SVC and LPCV (Fig. ). These large bundles branched into smaller bundles that either passed through the intrinsic cardiac ganglia (ICG) or extended directly to other cardiac targets and ramified into individual axons. The overall density heatmap (Fig. a) revealed that TH-IR axon innervation was significantly higher within the region of the SAN compared to other areas ( P < 0.05, n = 6). The steps for the quantification of TH-IR axon density are delineated in Fig. . TH-IR axon density at several regions of interest (ROIs) in the RA is shown in Fig. b–g. The inner and outer walls of the auricles were separated due to their thickness. The density of TH-IR axon innervation in these regions was in the following order from high to low: SAN (687.3 μm/mm² ± 21.63), AVN region (401.7 μm/mm² ± 51.03), inner auricle (303.1 μm/mm² ± 36.78), outer auricle (243.4 μm/mm² ± 27.22), SVC (239.5 μm/mm² ± 33.09), and IVC (113.6 μm/mm² ± 14.19) (Fig. h). The distribution of TH-IR bundles and axons in the flat-mount of the whole left atrium was determined (Fig. ).
A couple of TH-IR bundles entered the left atrium through the LA-PV junction, then bifurcated into smaller bundles. These bundles either extended towards the ICG or directly to other cardiac targets and eventually ramified into numerous axon terminals covering the entire left atrium. This montage clearly showed a holistic view of the sympathetic innervation of the left atrium at single axon/cell/varicosity scale. The overall heatmap of a representative mouse (Fig. a) showed the highest density of TH-IR immunoreactivity in the regions of the left atrium within the LA-PV junctions and the roots of pulmonary veins. Regional density analysis of ROIs in the LA (Fig. b–g) showed the density of TH-IR axon innervation as follows, from high to low: LA-PV junction (mean 348.2 μm/mm² ± 26), inner auricle (217 μm/mm² ± 19.17), outer auricle (197 μm/mm² ± 17.42), and pulmonary veins (left PV 179 μm/mm² ± 5.25, middle PV 165 μm/mm² ± 28.44, right PV 144.8 μm/mm² ± 11.85) (Fig. h). There was a significantly higher density of TH-IR axons in the middle area of the left atrium, represented by the LA-PV junction, than in the auricles or pulmonary veins ( P < 0.05, n = 6). A comparison of the TH-IR axon density in the RA and LA showed that the highest density of innervation was at the SAN. Of note, TH-IR bundles and ICG were excluded from the density calculations, and the ROIs selected contained only TH-IR axons to avoid any bias in the quantitative analysis. In the LA, the junction of LA-PVs showed very dense innervation of TH-IR axons in most samples (Fig. h). Interestingly, even though the density of TH-IR axons in the PVs was less than that at the LA-PV junction, the axons in the PVs were more continuous and had a more defined pattern. The bundles seen on the LA are most likely branches of the large TH-IR bundles on the RA that were dislocated during the separation of the RA and LA. TH-IR neurons and SIF cells and TH-IR axons in ICG In the whole atrial flat-mounts, several intrinsic cardiac ganglia were distributed in the epicardium. The majority of these ganglia were identified near the SAN region, AVN region, and interatrial groove in the connected atria (Fig. ). When separated, the left atrium had the majority of intrinsic cardiac ganglia in the middle area of the left atrium at the attachment points with the right atrium in the SAN and AVN regions and the entrance of the pulmonary veins (Fig. ). Some ganglia were also located in the right atrium around the SAN region and the epicardial bundles on the LPCV (Fig. ). ICG were mostly located on the dorsal surface of the mouse LA, and TH-IR neurons comprised 18–30% of total ICG neurons in maximal intensity projections (Fig. a–c) and optical sections (Fig. a’–c’). TH-IR fibers were mostly observed passing through the individual ICG (Fig. ). Even though maximal projection images showed TH-IR axons near the ICG (Fig. a), a more detailed evaluation of single optical sections (Fig. a’) or partial projections of different ICG (Fig. b–e) showed that no TH-IR axon terminals wrapped tightly around the individual ICG neurons. Additionally, small intensely fluorescent (SIF) cells were strongly TH-IR (Fig. ) and were observed in clusters of 3–8 cells, usually dispersed within ICG or near big TH-IR bundles. Optical sections of SIF cells in selected clusters (Fig. a’,a”) showed that they have a smaller diameter (< 10 μm) compared to TH-IR neurons in the ICG (~ 20 μm).
TH-IR axon innervation of vasculature and fat cells In addition to the major veins (SVC, IVC, PVs and LPCV), we identified clearly contoured blood vessels (arterioles) in the left and right atria with TH-IR axons running in parallel to the blood vessel walls or across them (Fig. ). In the montages, the blood vessels were much less apparent due to the overlays of multiple layers in the maximal projection masking the detailed vascular structures. TH-IR fibers also densely innervated the fat tissues at different layers of the atrial wall. White adipose tissue (WAT) and brown adipose tissue (BAT) were identified by their morphological characteristics using brightfield (Fig. a,b) or autofluorescence (Fig. d,e). Figure c showed TH-IR axons innervating the fat cells in a cluster with numerous varicose terminals. Additionally, the optical sections of the same region showed that TH-IR axons specifically targeted individual adipocytes (Fig. c’). TH-IR axon terminals were observed around the boundaries of and in between WAT cells, recognized as spherical cells with most of the volume occupied by cytoplasmic lipid droplets and a peripherally located nucleus (Fig. a’,d). On the other hand, BAT was recognized by multiple vacuoles and a darker shade, and showed higher innervation by TH-IR axon terminals compared to WAT (Fig. b’,e).
Here, we show that several TH-IR axon bundles (presumably sympathetic postganglionic efferent projections) entered the atria from the right and left sides, branched out into individual axons and projected to different fields of the atria with a certain degree of overlap. There was a clear lateralization, with the right bundles projecting mainly to the right atrium, whereas the left bundles preferably projected to the left atrium. Asymmetry and regional differences in the cardiac sympathetic distribution were observed in many physiological studies in mice , pig , and humans . Our study provides anatomical evidence for differential regional distribution in mouse atria. TH-IR axon bundles were distributed in the epicardium, then bifurcated and formed a terminal network in the myocardium. Moreover, TH-IR axons were observed along/encircling small blood vessels and around WAT and BAT. Regional density analysis showed that the SAN had the highest TH-IR axon innervation. To our knowledge, this work, for the first time, provides a topographical map with quantitative assessment of the TH-IR axon innervation of the mouse whole atria at single cell/axon/varicosity scale. Topographical distribution of TH-IR axon innervation in the flat-mount of the whole atria at single cell/axon/varicosity scale Innervation field of TH-IR axons Several studies have reported the distribution of catecholaminergic nerve fibers utilizing sectioned or whole mounts of partial atrial preparations , – . The main limitation of such approaches is that the experimental approach damaged the intricate three-dimensional structures of axons and terminals in these tissues. Additionally, sections or partial flat mounts did not provide a comprehensive topographical map to assess the distribution and morphology of sympathetic postganglionic efferent axons and terminals across the entire atria. Recently, tissue clearing procedures have permitted an enhanced 3D view of the whole heart innervation . However, visibility of fine axons and terminals in the whole heart remained restricted with tissue clearing procedures. In addition, tissue clearance diminished the visibility of other cardiac targets such as ganglion cells, muscles, blood vessels, and adipocytes. In order to highlight the complex patterns of TH-IR axons and their terminal networks in atrial tissue and its targets, greater resolution imaging is required. Our study has addressed these limitations by providing a comprehensive topographical map of the distribution and morphology of TH-IR axons and terminals in the atria of mice using flat-mounts of the whole atria. Consistent with previous studies on mouse and other species , – , we found a very dense TH-IR axon innervation in the atria. Additionally, the entrance points of the major TH-IR bundles to the atria, which were determined in our study, are similar to those that were ascertained previously , , . Different from prior reports, our study provided a complete, comprehensive map of TH-IR axons in the atria at single cell/axon/varicosity scale. In the connected atria, we observed that several TH-IR axon bundles (4–5) entered the atria through the SVC and LPCV and bifurcated into smaller bundles that eventually ramified into individual axons forming different projection fields with a certain degree of overlap. Presumably, these bundles were mostly from the left and right sympathetic stellate ganglia.
Previous studies using retrograde tracers and stellate ganglionectomy showed that the majority of sympathetic postganglionic innervation originates from the stellate ganglia , . Our tracing of TH-IR axons showed clear lateralization, as bundles from the right mainly projected towards the right atrium and SAN, while bundles from the left side showed preferential innervation of the left atrium. Our findings reveal detailed regional differences of TH-IR innervation in the entire atria, which enriches our knowledge regarding the differential sympathetic control over distinct regions. Quantitative analysis of TH-IR regional density Catecholaminergic axon innervation of the atria displays significant anatomical heterogeneity, and several studies have attempted to assess the density of cardiac sympathetic nerves at different sites of the heart , . Although previous studies quantified the density of TH-IR axons at specific sites, they only utilized sections or partial atrial preparations. Thus, a more complete quantitative analysis of TH-IR axon density in the whole heart has not been determined. In our study, we addressed the mentioned shortcomings and analyzed the distribution and density of TH-IR axons in the flat-mount of the whole RA and LA at a high resolution (40X oil lens). The density of TH-IR axons showed regional differences across the atrial wall. In the RA, TH-IR axons and terminals were the densest in the SAN region, followed by the AVN region and other regions, which is similar to what was found in other studies – . In the LA, the density of TH-IR axons was the highest at the LA-PV junction, which was pointed out to be an area richly innervated with sympathetic nerves . The auricles, one of the most prominent structural features of the right and left atrium, play an important role in pumping the blood within the heart with their capacity to expand during each heartbeat . The differential regional distribution of TH-IR axon innervation indicated by our density assessment gives insight into the localized effects of catecholaminergic innervation of the atria. Our results could set the foundation for future physiological studies of anatomical remodeling in pathological conditions. TH-IR ICG neurons and TH-IR axons Traditionally, it was thought that all ICG neurons in guinea pigs and rats were exclusively cholinergic , . However, recent studies demonstrated that ICG neurons exhibit diverse neurochemical phenotypes (including TH, ChAT, nNOS, VIP, NPY) , that extend beyond the traditional concept of cholinergic neurons. A subpopulation of the ICG neurons was also found to be TH-IR in mice, which aligns with our findings , . Similar to previous studies, we have observed the ICG being located primarily on the outer surface of the atria near the entrance of the pulmonary veins to the LA and near the SAN and AVN , . Our work in mice showed TH-IR neurons in the ICG, with TH-IR axons going through the ganglia without apparent innervation. This differs from what was found in guinea pigs and rats, where some TH-IR varicosities were seen around ICG neurons , . These previous findings may be somewhat overestimated by their use of partial preparations that cannot be extrapolated to all ICG neurons. In this study, we aimed to assess TH-IR axons that cross through all ICG located on the RA and LA. We found only a few TH-IR axons (if any) were in close contact with ICG neurons. Higher magnification should be used in the future to ensure there is no underestimation of TH-IR axon presence around ICG neurons.
In support of this finding in mice, our recent study in pigs showed that TH-IR axons traveled through the ICG without forming varicosities surrounding the principal neurons (PNs) . The lack of TH-IR varicosities wrapping tightly around TH-IR neurons in the ICG contrasts with what was observed in the gastrointestinal tract where TH-IR varicosities tightly surround the PNs in the myenteric ganglia . Prior research indicated that mice ICG are immunoreactive to dopamine-beta-hydroxylase (DBH) and norepinephrine transporter (NET), but they lack vesicular monoamine transporter 2 (VMAT2) . This is in contrast to the nerve fibers and stellate neurons which are positive for DBH, NET, and VMAT2. The lack of VMAT2 renders the neurons in the mice ICG functionally non-noradrenergic due to their inability to transport dopamine and norepinephrine into synaptic vesicles . However, there were limited studies on the function of TH-IR neurons in the ICG, and further studies are needed to explore the functions of TH-IR neurons in the ICG of different species. TH-IR innervation of fat cells and vasculature The sympathetic nervous system plays a crucial role in BAT thermogenesis and WAT lipolysis through its direct innervation of peripheral fat depots – . Epicardial adipose tissue is an unusual visceral fat depot and has been shown to express its own specific transcriptomic signature . Epicardial fat was described as white adipose tissue with brown-fat-like features , . We noticed the presence of both types of adipose tissue at multiple locations with predominance of WAT on the atrial epicardium. Similar to our study, recent work that utilized iDISCO tissue clearance, confocal and light sheet microscopy showed a differential density of TH-IR axonal varicosities in BAT and WAT . Further functional studies to investigate the physiological effects of sympathetic innervation of both BAT and WAT in the atria would be highly valuable. As expected, TH-IR axons were observed in close proximity (running parallel or wrapping around) the vasculature . Identification of the ultrastructure to confirm TH-IR axons formed contacts with the blood vessels using electron microscopy or physiological studies will be needed. It has been demonstrated that the sympathetic nerves have a major influence on the control of blood flow, blood pressure, and total vascular resistance via its innervation of small arteries . In particular, the sympathetic nervous system has an essential role in maintaining cardiovascular homeostasis and normal physiological activities, including vascular tone and blood pressure. Functional implications Although several studies have described the atrial sympathetic innervation, comprehensive studies that delineate the topographical TH-IR axon innervation of the whole atria and regional differences are currently lacking . Our tracing of the TH-IR axon innervation of the whole atria unraveled the complex axonal network and preferential innervation of distinct regions. The mapping data could be utilized to understand the sympathetic specific control of different regions of the atria and their autonomic responses. In our map, the bundles entering the right side of the atria provided the majority of the sympathetic innervation to the right auricle, right PV, SAN and conductive fibers while the left bundles provided the majority of the sympathetic innervation to the left auricle, interatrial groove (junction of LA and RA) and PVs. 
Regional and lateral differences in the function of the heart have been indicated previously via the functional studies (mainly in humans) of cardiac sympathetic innervation by the right and left stellate ganglia (SG) . SG block revealed that the right SG is largely responsible for increasing heart rate, slowing atrioventricular conduction, and primarily affects the right atrium as opposed to the left atrium. In contrast, the left SG has a lesser effect on heart rate and atrioventricular conduction and primarily affects the left atrium as opposed to the right atrium , . Modulating the sympathetic innervation of the atria is becoming an increasingly important therapeutic approach , for example, neuromodulation therapy by electrical stimulation or renal denervation has shown great success in treating diseases like atrial fibrillation via remodeling of stellate ganglion and reducing sympathetic output . Therefore, selective targeting of sympathetic innervation of either side of the heart can have different effects. Our topographical map of TH-IR axon innervation in the atria could be used as a cardiac sympathetic atlas to navigate more precise control of different heart regions. Knowledge of cardiac sympathetic postganglionic innervation location and density may also help to elucidate the normal physiology and abnormal patterns in certain pathological conditions. Our quantitative analysis shed light onto the atrial regions that received the highest TH-IR axon innervation that could potentially indicate a more precise control in these areas. In the RA we found the highest innervation density of TH-IR axons in the SAN, which supports the fact that the sympathetic nervous system has a role in the fine tuning of heart rate. This could also indicate potential therapeutic targets as blockade of neuronal input with propranolol (beta blocker) leads to a decrease in heart rate – . In the LA, the highest density of TH-IR axons were observed at the entrance of PV to the LA. The junction of the left atrium and pulmonary veins has been indicated to be a focal source which is responsible for the initiation of atrial fibrillation . Therefore, further functional studies of these great vein-atrial junction regions, which were the most dense with TH-IR axons in our quantitative analysis, are valuable to better understand the physiology and pathology of atrial fibrillation. Considering that understanding how sympathetic neurons communicate to their cardiac targets is essential for understanding how the heart works , our results provide a basis for understanding the role that TH-IR axons specific innervation play in the control of the normal heart as well as in the diseased heart. Limitations A couple of limitations must be acknowledged: Neurolucida 360 TH-IR axon tracing: Despite our effort to trace TH-IR axon bundles and their projection field, it was not feasible to trace the smallest branches and individual axons in the whole atria. Our continuous collaboration with MBF Bioscience in SPARC MAP-CORE to improve the customized settings for autotracing of our labeled axons should ensure more precise and faster tracing. Density of single or double layers: Due to great differences in thickness of atria in different regions, some areas had to be separated into single layers to ensure fair comparison of the density. Moreover, our regional density analysis of TH-IR axon innervation in the axon was performed using 2D projection images that present the dense structures along the z-axis in a single bidimensional image. 
To gain a more accurate representation of the innervation considering the depth of the tissue, 3D representation of the entire image stack of the atria should be reconstructed to quantify the density for each image stack. Summary and future directions We have determined the topographical innervation of TH-IR axons in the flat-mount of the whole atria at single cell/axon/varicosity scale. Several TH-IR axon bundles entered the atria through the SVC and LPCV, and these bundles had different projection fields. A clear lateralization preference was found: the right and left bundles preferably innervated the right and left atrium, respectively. In addition, the regional density analysis showed that TH-IR axon innervation in the RA was more abundant than in the LA. In the RA, The SAN, AVN region and internodal conducting fibers showed higher density than the other regions. LA-PV junction had the densest TH-IR axon innervation in the LA. Furthermore, TH-IR bundles and axons passed through the ICG with very limited innervation around ICG neurons, but densely innervated the blood vessels and fat cells. A schematic diagram that summarizes our main findings is shown in Fig. . This work contributes to the cardiac-sympathetic brain connectome. However, anterograde tracer injections into the stellate ganglia to specifically map sympathetic postganglionic projections to the heart should be conducted in the future to address some limitations, including identifying the source of postganglionic TH-IR axons and characterizing terminal structures. In addition, our work provides an anatomical foundation for functional mapping of sympathetic control for the heart as well as evaluation of the remodeling of cardiac sympathetic innervation in chronic disease models (hypertension, diabetes, sleep apnea, heart failure, aging).
Innervation field of TH-IR axons Several studies have reported the distribution of catecholaminergic nerve fibers utilizing sectioned or whole mounts of partial atrial preparations , – . The main limitation of such approaches is that the experimental approach damaged the intricate three-dimensional structures of axons and terminals in these tissues. Additionally, sections or partial flat mounts did not provide a comprehensive topographical map to assess the distribution and morphology of sympathetic postganglionic efferent axons and terminals across the entire atria. Recently, tissue clearing procedures have permitted an enhanced 3D view of the whole heart innervation . However, visibility of fine axons and terminals in the whole heart remained restricted with tissue clearing procedures. In addition, tissue clearance diminished the visibility of other cardiac targets such as ganglion cells, muscles, blood vessels, and adipocytes. In order to highlight the complex patterns of TH-IR axons and their terminal networks in atrial and targets, greater resolution imaging is required. Our study has addressed these limitations by providing a comprehensive topographical map of the distribution, and morphology of TH-IR axons and terminals in the atria of mice using flat-mounts of the whole atria. Consistent with previous studies on mouse and other species , – , we found a very dense TH-IR axon innervation in the atria. Additionally, the entrance points of the major TH-IR bundles to the atria, which were determined in our study, are similar to those that were ascertained previously , , . Different from prior reports, our study provided a complete, comprehensive map of TH-IR axons in the atria at single cell/axon/varicosity scale. In the connected atria, we observed that several TH-IR axon bundles (4–5) entered the atria through the SVC and LPCV and bifurcated into smaller bundles that eventually ramified into individual axons forming different projection fields with a certain degree of overlap. Presumably, these bundles were mostly from the left and right sympathetic stellate ganglia. Previous studies using retrograde tracer and stellate ganglionectomy showed that the majority of sympathetic postganglionic innervation originates from the stellate ganglia , . Our tracing of TH-IR axons showed clear lateralization as bundles from the right mainly projected towards the right atrium and SAN, while bundles from the left side showed preferential innervation of the left atrium. Our findings reveal detailed regional differences of TH-IR innervation in the entire atria, which enriches our knowledge regarding the differential sympathetic control over distinct regions. Quantitative analysis of TH-IR regional density Catecholaminergic axon innervation of the atria displays significant anatomical heterogeneity and several studies have attempted to assess the density of cardiac sympathetic nerves at different sites of the heart , . Although previous studies quantified the density of TH-IR axons at specific sites, they only utilized sections or partial atrial preparations. Thus, a more complete quantitative analysis of TH-IR axon density in the whole heart has not been determined. In our study, we addressed the mentioned shortcomings and analyzed the distribution and density of TH-IR axons in the flat-mount of the whole RA and LA at a high resolution (40X oil lens). The density of TH-IR axons showed regional differences across the atrial wall. 
In the RA, TH-IR axons and terminals were the densest in the SAN region, followed by the AVN region and other regions, which is similar to what was found in other studies – . In the LA, the density of TH-IR axons was the highest at the LA-PV junction which was pointed out to be an area richly innervated with sympathetic nerves . The auricles, one of the most prominent structural features of the right and left atrium, play an important role in pumping the blood within the heart with its capacity to expand during each heartbeat . The differential regional distribution of TH-IR axon innervation indicated by our density assessment gives insight to localized effects of catecholaminergic innervation of the atria. Our results could set the foundation for future physiological studies of anatomical remodeling in pathological conditions. TH-IR ICG neurons and TH-IR axons Traditionally, it was thought that all ICG neurons in guinea pigs and rats were exclusively cholinergic , . However, recent studies demonstrated that ICG neurons exhibit diverse neurochemical phenotypes (including TH, ChAT, nNOS, VIP, NPY) , that extend beyond the traditional concept of cholinergic neurons. A subpopulation of the ICG neurons were also found to be TH-IR in mice which aligns with our findings , . Similar to previous studies, we have observed the ICG being located primarily on the outer surface of the atria near the entrance of the pulmonary veins to the LA and near the SAN and AVN , . Our work in mice showed TH-IR neurons in the ICG with TH-IR axons going through the ganglia without apparent innervation. This differs from what was found in guinea pigs and rats where some TH-IR varicosities were seen around ICG neurons , . These previous findings may be somewhat overestimated by their use of partial preparations that cannot be extrapolated to all ICG neurons. In this study we aimed to assess TH-IR axons that cross through all ICG located on the RA and LA. We found only a few TH-IR axons (if any) were in close contact with ICG neurons. Higher magnification should be used in the future to ensure there is no underestimation of TH-IR axons presence around ICG neurons. In support of this finding in mice, our recent study in pigs showed that TH-IR axons traveled through the ICG without forming varicosities surrounding the principal neurons (PNs) . The lack of TH-IR varicosities wrapping tightly around TH-IR neurons in the ICG contrasts with what was observed in the gastrointestinal tract where TH-IR varicosities tightly surround the PNs in the myenteric ganglia . Prior research indicated that mice ICG are immunoreactive to dopamine-beta-hydroxylase (DBH) and norepinephrine transporter (NET), but they lack vesicular monoamine transporter 2 (VMAT2) . This is in contrast to the nerve fibers and stellate neurons which are positive for DBH, NET, and VMAT2. The lack of VMAT2 renders the neurons in the mice ICG functionally non-noradrenergic due to their inability to transport dopamine and norepinephrine into synaptic vesicles . However, there were limited studies on the function of TH-IR neurons in the ICG, and further studies are needed to explore the functions of TH-IR neurons in the ICG of different species.
Several studies have reported the distribution of catecholaminergic nerve fibers utilizing sectioned or whole mounts of partial atrial preparations , – . The main limitation of such approaches is that the experimental approach damaged the intricate three-dimensional structures of axons and terminals in these tissues. Additionally, sections or partial flat mounts did not provide a comprehensive topographical map to assess the distribution and morphology of sympathetic postganglionic efferent axons and terminals across the entire atria. Recently, tissue clearing procedures have permitted an enhanced 3D view of the whole heart innervation . However, visibility of fine axons and terminals in the whole heart remained restricted with tissue clearing procedures. In addition, tissue clearance diminished the visibility of other cardiac targets such as ganglion cells, muscles, blood vessels, and adipocytes. In order to highlight the complex patterns of TH-IR axons and their terminal networks in atrial and targets, greater resolution imaging is required. Our study has addressed these limitations by providing a comprehensive topographical map of the distribution, and morphology of TH-IR axons and terminals in the atria of mice using flat-mounts of the whole atria. Consistent with previous studies on mouse and other species , – , we found a very dense TH-IR axon innervation in the atria. Additionally, the entrance points of the major TH-IR bundles to the atria, which were determined in our study, are similar to those that were ascertained previously , , . Different from prior reports, our study provided a complete, comprehensive map of TH-IR axons in the atria at single cell/axon/varicosity scale. In the connected atria, we observed that several TH-IR axon bundles (4–5) entered the atria through the SVC and LPCV and bifurcated into smaller bundles that eventually ramified into individual axons forming different projection fields with a certain degree of overlap. Presumably, these bundles were mostly from the left and right sympathetic stellate ganglia. Previous studies using retrograde tracer and stellate ganglionectomy showed that the majority of sympathetic postganglionic innervation originates from the stellate ganglia , . Our tracing of TH-IR axons showed clear lateralization as bundles from the right mainly projected towards the right atrium and SAN, while bundles from the left side showed preferential innervation of the left atrium. Our findings reveal detailed regional differences of TH-IR innervation in the entire atria, which enriches our knowledge regarding the differential sympathetic control over distinct regions.
Catecholaminergic axon innervation of the atria displays significant anatomical heterogeneity and several studies have attempted to assess the density of cardiac sympathetic nerves at different sites of the heart , . Although previous studies quantified the density of TH-IR axons at specific sites, they only utilized sections or partial atrial preparations. Thus, a more complete quantitative analysis of TH-IR axon density in the whole heart has not been determined. In our study, we addressed the mentioned shortcomings and analyzed the distribution and density of TH-IR axons in the flat-mount of the whole RA and LA at a high resolution (40X oil lens). The density of TH-IR axons showed regional differences across the atrial wall. In the RA, TH-IR axons and terminals were the densest in the SAN region, followed by the AVN region and other regions, which is similar to what was found in other studies – . In the LA, the density of TH-IR axons was the highest at the LA-PV junction which was pointed out to be an area richly innervated with sympathetic nerves . The auricles, one of the most prominent structural features of the right and left atrium, play an important role in pumping the blood within the heart with its capacity to expand during each heartbeat . The differential regional distribution of TH-IR axon innervation indicated by our density assessment gives insight to localized effects of catecholaminergic innervation of the atria. Our results could set the foundation for future physiological studies of anatomical remodeling in pathological conditions.
Traditionally, it was thought that all ICG neurons in guinea pigs and rats were exclusively cholinergic , . However, recent studies demonstrated that ICG neurons exhibit diverse neurochemical phenotypes (including TH, ChAT, nNOS, VIP, NPY) , that extend beyond the traditional concept of cholinergic neurons. A subpopulation of the ICG neurons were also found to be TH-IR in mice which aligns with our findings , . Similar to previous studies, we have observed the ICG being located primarily on the outer surface of the atria near the entrance of the pulmonary veins to the LA and near the SAN and AVN , . Our work in mice showed TH-IR neurons in the ICG with TH-IR axons going through the ganglia without apparent innervation. This differs from what was found in guinea pigs and rats where some TH-IR varicosities were seen around ICG neurons , . These previous findings may be somewhat overestimated by their use of partial preparations that cannot be extrapolated to all ICG neurons. In this study we aimed to assess TH-IR axons that cross through all ICG located on the RA and LA. We found only a few TH-IR axons (if any) were in close contact with ICG neurons. Higher magnification should be used in the future to ensure there is no underestimation of TH-IR axons presence around ICG neurons. In support of this finding in mice, our recent study in pigs showed that TH-IR axons traveled through the ICG without forming varicosities surrounding the principal neurons (PNs) . The lack of TH-IR varicosities wrapping tightly around TH-IR neurons in the ICG contrasts with what was observed in the gastrointestinal tract where TH-IR varicosities tightly surround the PNs in the myenteric ganglia . Prior research indicated that mice ICG are immunoreactive to dopamine-beta-hydroxylase (DBH) and norepinephrine transporter (NET), but they lack vesicular monoamine transporter 2 (VMAT2) . This is in contrast to the nerve fibers and stellate neurons which are positive for DBH, NET, and VMAT2. The lack of VMAT2 renders the neurons in the mice ICG functionally non-noradrenergic due to their inability to transport dopamine and norepinephrine into synaptic vesicles . However, there were limited studies on the function of TH-IR neurons in the ICG, and further studies are needed to explore the functions of TH-IR neurons in the ICG of different species.
The sympathetic nervous system plays a crucial role in BAT thermogenesis and WAT lipolysis through its direct innervation of peripheral fat depots – . Epicardial adipose tissue is an unusual visceral fat depot and has been shown to express its own specific transcriptomic signature . Epicardial fat was described as white adipose tissue with brown-fat-like features , . We noticed the presence of both types of adipose tissue at multiple locations with predominance of WAT on the atrial epicardium. Similar to our study, recent work that utilized iDISCO tissue clearance, confocal and light sheet microscopy showed a differential density of TH-IR axonal varicosities in BAT and WAT . Further functional studies to investigate the physiological effects of sympathetic innervation of both BAT and WAT in the atria would be highly valuable. As expected, TH-IR axons were observed in close proximity (running parallel or wrapping around) the vasculature . Identification of the ultrastructure to confirm TH-IR axons formed contacts with the blood vessels using electron microscopy or physiological studies will be needed. It has been demonstrated that the sympathetic nerves have a major influence on the control of blood flow, blood pressure, and total vascular resistance via its innervation of small arteries . In particular, the sympathetic nervous system has an essential role in maintaining cardiovascular homeostasis and normal physiological activities, including vascular tone and blood pressure.
Although several studies have described the atrial sympathetic innervation, comprehensive studies that delineate the topographical TH-IR axon innervation of the whole atria and regional differences are currently lacking . Our tracing of the TH-IR axon innervation of the whole atria unraveled the complex axonal network and preferential innervation of distinct regions. The mapping data could be utilized to understand the sympathetic specific control of different regions of the atria and their autonomic responses. In our map, the bundles entering the right side of the atria provided the majority of the sympathetic innervation to the right auricle, right PV, SAN and conductive fibers while the left bundles provided the majority of the sympathetic innervation to the left auricle, interatrial groove (junction of LA and RA) and PVs. Regional and lateral differences in the function of the heart have been indicated previously via the functional studies (mainly in humans) of cardiac sympathetic innervation by the right and left stellate ganglia (SG) . SG block revealed that the right SG is largely responsible for increasing heart rate, slowing atrioventricular conduction, and primarily affects the right atrium as opposed to the left atrium. In contrast, the left SG has a lesser effect on heart rate and atrioventricular conduction and primarily affects the left atrium as opposed to the right atrium , . Modulating the sympathetic innervation of the atria is becoming an increasingly important therapeutic approach , for example, neuromodulation therapy by electrical stimulation or renal denervation has shown great success in treating diseases like atrial fibrillation via remodeling of stellate ganglion and reducing sympathetic output . Therefore, selective targeting of sympathetic innervation of either side of the heart can have different effects. Our topographical map of TH-IR axon innervation in the atria could be used as a cardiac sympathetic atlas to navigate more precise control of different heart regions. Knowledge of cardiac sympathetic postganglionic innervation location and density may also help to elucidate the normal physiology and abnormal patterns in certain pathological conditions. Our quantitative analysis shed light onto the atrial regions that received the highest TH-IR axon innervation that could potentially indicate a more precise control in these areas. In the RA we found the highest innervation density of TH-IR axons in the SAN, which supports the fact that the sympathetic nervous system has a role in the fine tuning of heart rate. This could also indicate potential therapeutic targets as blockade of neuronal input with propranolol (beta blocker) leads to a decrease in heart rate – . In the LA, the highest density of TH-IR axons were observed at the entrance of PV to the LA. The junction of the left atrium and pulmonary veins has been indicated to be a focal source which is responsible for the initiation of atrial fibrillation . Therefore, further functional studies of these great vein-atrial junction regions, which were the most dense with TH-IR axons in our quantitative analysis, are valuable to better understand the physiology and pathology of atrial fibrillation. Considering that understanding how sympathetic neurons communicate to their cardiac targets is essential for understanding how the heart works , our results provide a basis for understanding the role that TH-IR axons specific innervation play in the control of the normal heart as well as in the diseased heart.
A couple of limitations must be acknowledged: Neurolucida 360 TH-IR axon tracing: Despite our effort to trace TH-IR axon bundles and their projection field, it was not feasible to trace the smallest branches and individual axons in the whole atria. Our continuous collaboration with MBF Bioscience in SPARC MAP-CORE to improve the customized settings for autotracing of our labeled axons should ensure more precise and faster tracing. Density of single or double layers: Due to great differences in thickness of atria in different regions, some areas had to be separated into single layers to ensure fair comparison of the density. Moreover, our regional density analysis of TH-IR axon innervation in the axon was performed using 2D projection images that present the dense structures along the z-axis in a single bidimensional image. To gain a more accurate representation of the innervation considering the depth of the tissue, 3D representation of the entire image stack of the atria should be reconstructed to quantify the density for each image stack.
We have determined the topographical innervation of TH-IR axons in the flat-mount of the whole atria at single cell/axon/varicosity scale. Several TH-IR axon bundles entered the atria through the SVC and LPCV, and these bundles had different projection fields. A clear lateralization preference was found: the right and left bundles preferably innervated the right and left atrium, respectively. In addition, the regional density analysis showed that TH-IR axon innervation in the RA was more abundant than in the LA. In the RA, The SAN, AVN region and internodal conducting fibers showed higher density than the other regions. LA-PV junction had the densest TH-IR axon innervation in the LA. Furthermore, TH-IR bundles and axons passed through the ICG with very limited innervation around ICG neurons, but densely innervated the blood vessels and fat cells. A schematic diagram that summarizes our main findings is shown in Fig. . This work contributes to the cardiac-sympathetic brain connectome. However, anterograde tracer injections into the stellate ganglia to specifically map sympathetic postganglionic projections to the heart should be conducted in the future to address some limitations, including identifying the source of postganglionic TH-IR axons and characterizing terminal structures. In addition, our work provides an anatomical foundation for functional mapping of sympathetic control for the heart as well as evaluation of the remodeling of cardiac sympathetic innervation in chronic disease models (hypertension, diabetes, sleep apnea, heart failure, aging).
|
Virtual reality vs. Tablet video for venipuncture education in children: A randomized clinical trial | c0efcc6e-d8bc-404b-b440-873aa41ab6b4 | 11349209 | Patient Education as Topic[mh] | Venipuncture is one of the most distressing and painful medical procedures for pediatric patients . Unpleasant and painful experiences may contribute to high anxiety levels before the procedure and result in uncooperative behavior during the procedure . Additionally, pain from medical procedures may result in long-lasting negative effects among pediatric patients, including the development of fear in adulthood, avoidance of medical care, missed medical appointments, and inadequate health care follow-up . Therefore, it is crucial to ensure effective management of pain and distress during venipuncture procedures for the physical and psychological well-being of pediatric patients . Non-pharmacological or behavioral interventions are necessary to reduce pain and distress during venipuncture because pharmacological interventions are typically administered through the intravenous line after venipuncture. Among non-pharmacological behavioral strategies, pre-venipuncture education via various interventions, such as preparation programs and procedural information, has been reported to relieve procedure-related pain and distress in children . These behavioral approaches to pediatric medical pain are based on the gate control theory, which explains the transmission and modulation of pain signals . According to the gate control theory, pain signals can be inhibited by the closing of “gates” in the spinal cord and familiarity with preprocedural education may reduce anxiety and distress, which in turn decreases pain during painful medical procedures in children . Advancements in information technology (IT) have enabled the application of immersive virtual reality (VR) in pediatric patients, and clinical studies related to VR-based education for pediatric patients are rapidly increasing so as to alleviate anxiety and distress . Studies on VR-based education have outperformed standard care involving the communication of simple verbal information in terms of mitigating anxiety and distress among pediatric patients before anesthesia or non-invasive procedures . Investigations on the effect of VR as a preprocedural educational tool for painful procedures, such as venipuncture, showed that VR is useful in reducing pain and anxiety compared with standard care involving the communication of simple verbal information . Furthermore, the audio–visual parts included in VR-based education may have caused the observed effect irrespective of VR. Therefore, we hypothesized that preprocedural education with more immersive VR could more effectively decrease the pain and discomfort experienced by children during venipuncture compared with tablet videos conveying the same content but without a VR component. To this end, in this prospective randomized controlled trial, we evaluated the pain and distress of pediatric patients during venipuncture and procedure-related outcomes after they received VR- or video-based preprocedural education with identical content. To our knowledge, this is the first such study aimed at elucidating the authentic effect of immersive VR-based education on venipuncture-related pain and distress through a comparison with education via a tablet video.
Study design This randomized clinical trial was approved by the institutional review board (IRB) of Seoul National University Bundang Hospital (SNUBH; IRB number: B-2211-791-301; approval date: October 24, 2022). The protocol was registered in the University Hospital Medical Information Network Clinical Trials Registry (registration number: UMIN 000049307; registration date: October 25, 2022). Written informed consent was obtained from the parents of children younger than 7 years of age and from both a parent and the child for children aged 7 years or higher. This prospective study was performed from October 31, 2022, to April 20, 2023, at SNUBH. Patients This study included children aged 4–8 years who were scheduled to undergo venipuncture at the phlebotomy unit of SNUBH. Children with congenital problems, hearing or vision impairments, intellectual developmental difficulties, cognitive deficiency, seizure history, psychoactive medicine prescriptions, or a history of venipuncture in the previous year were excluded from the study. Of the 127 children assessed for eligibility, 37 were excluded owing to their refusal to participate. The remaining 90 children participated in the study; none dropped out . Randomization Using a computer-generated randomization code (Random Allocation Software, version 1.0; Isfahan University of Medical Sciences), the enrolled participants were randomly assigned in a 1:1 ratio to either a VR or a video group. An independent researcher performed this randomization 10 min before venipuncture. The researcher also asked the patients to predict procedural pain by assigning a score from 0 to 10 by using a visual analogue scale (score 0: no pain; score 10: the worst pain). Another independent researcher received a sealed envelope with the randomization number and performed the intervention in an independent room separated from the phlebotomy unit. Intervention The VR group received VR-based preprocedural education for 4 min, as described by Ryu and co-workers. In brief, cartoon characters from “Hello Carbot” (a famous Korean animation movie; ChoiRock Contents Factory, Seoul, South Korea) welcomed the children at the phlebotomy unit in a 360º three-dimensional virtual universe. After the patient chose one of the characters based on his/her preference, the character kindly explained the purpose and process of venipuncture to the child. The child also experienced venipuncture at a virtual phlebotomy desk and learned to position himself or herself appropriately during the procedure. The cartoons enthusiastically encouraged the child to cooperate properly . We secured the permission to use the cartoon characters through a licensing agreement with ChoiRock Contents Factory. The virtual education was provided using MetaQuest 2 (Meta, Menlo Park, CA, USA; ), the graphics quality in which was superior to that in the previously used version. The content was produced in partnership with a VR software development company (FormalWorks, Inc., Seoul, South Korea). The video group received video-based education for 4 min via a tablet (iPad, Apple Inc., Cupertino, CA, USA; ). The content was identical to that used for the VR group, i.e., the content used in the VR group was transformed into a two-dimensional video. Study outcomes Immediately after the VR or video session, the patients were moved to the phlebotomy unit for venipuncture. The interval from the end of education to the positioning at the phlebotomy desk did not exceed 5 min. 
An independent assessor blinded to the group assignment observed the children’s behavior and determined the Children’s Hospital of Eastern Ontario Pain Scale (CHEOPS) scores . The CHEOPS score, the primary outcome of this study, was calculated from the scores for each of the six categories: crying, facial expression, verbal response, torso, hands, and legs (score range: 4–13; ). The scores were proportional to the children’s pain and distress levels . The total time for venipuncture (from positioning at the phlebotomy desk to needle insertion for successful blood sampling) and the incidence of repeated venipunctures performed owing to poor cooperation were recorded. After the procedure, the assessor asked the participants’ parents to indicate their satisfaction with the venipuncture process by using a numerical rating scale (NRS; score 0 = extremely dissatisfied; score 10 = extremely satisfied). Immediately after the patients and parents exited the phlebotomy unit, the phlebotomy technicians were asked to evaluate their perception of the difficulty experienced in performing the procedure by using an NRS (score 0 = extremely easy; score 10 = extremely difficult). Statistical analysis Continuous variables are indicated as median (interquartile range) or mean (standard deviation) values according to the normality of the data. Categorical variables are presented as numbers . The Mann–Whitney U test was used to compare continuous study outcomes between the VR and video groups. Categorical outcomes were compared between the study groups by using Fisher’s exact test or the chi-square test, as appropriate. Multiple linear regression analysis was performed to determine independent factors associated with the CHEOPS pain score. All statistical analyses were performed using SPSS software (version 21.0; SPSS Inc., Chicago, IL, USA). Statistical significance was defined as a two-sided P -value of <0.05. Sample size In a pilot study of 40 pediatric patients (20 pairs) undergoing venipuncture, the CHEOPS scores (mean [standard deviation]) for the video and VR groups were 7.7 (2.0) and 6.5 (1.7), respectively. Power analysis was performed using G*Power 3.1.2 (Heinrich-Heine University, Düsseldorf, Germany). Based on the pilot data, a sample size of 45 patients per group was calculated to be necessary, with a power of 0.8, significance level of 0.05, and an assumed dropout rate of 10%.
This randomized clinical trial was approved by the institutional review board (IRB) of Seoul National University Bundang Hospital (SNUBH; IRB number: B-2211-791-301; approval date: October 24, 2022). The protocol was registered in the University Hospital Medical Information Network Clinical Trials Registry (registration number: UMIN 000049307; registration date: October 25, 2022). Written informed consent was obtained from the parents of children younger than 7 years of age and from both a parent and the child for children aged 7 years or higher. This prospective study was performed from October 31, 2022, to April 20, 2023, at SNUBH.
This study included children aged 4–8 years who were scheduled to undergo venipuncture at the phlebotomy unit of SNUBH. Children with congenital problems, hearing or vision impairments, intellectual developmental difficulties, cognitive deficiency, seizure history, psychoactive medicine prescriptions, or a history of venipuncture in the previous year were excluded from the study. Of the 127 children assessed for eligibility, 37 were excluded owing to their refusal to participate. The remaining 90 children participated in the study; none dropped out .
Using a computer-generated randomization code (Random Allocation Software, version 1.0; Isfahan University of Medical Sciences), the enrolled participants were randomly assigned in a 1:1 ratio to either a VR or a video group. An independent researcher performed this randomization 10 min before venipuncture. The researcher also asked the patients to predict procedural pain by assigning a score from 0 to 10 by using a visual analogue scale (score 0: no pain; score 10: the worst pain). Another independent researcher received a sealed envelope with the randomization number and performed the intervention in an independent room separated from the phlebotomy unit.
The VR group received VR-based preprocedural education for 4 min, as described by Ryu and co-workers. In brief, cartoon characters from “Hello Carbot” (a famous Korean animation movie; ChoiRock Contents Factory, Seoul, South Korea) welcomed the children at the phlebotomy unit in a 360º three-dimensional virtual universe. After the patient chose one of the characters based on his/her preference, the character kindly explained the purpose and process of venipuncture to the child. The child also experienced venipuncture at a virtual phlebotomy desk and learned to position himself or herself appropriately during the procedure. The cartoons enthusiastically encouraged the child to cooperate properly . We secured the permission to use the cartoon characters through a licensing agreement with ChoiRock Contents Factory. The virtual education was provided using MetaQuest 2 (Meta, Menlo Park, CA, USA; ), the graphics quality in which was superior to that in the previously used version. The content was produced in partnership with a VR software development company (FormalWorks, Inc., Seoul, South Korea). The video group received video-based education for 4 min via a tablet (iPad, Apple Inc., Cupertino, CA, USA; ). The content was identical to that used for the VR group, i.e., the content used in the VR group was transformed into a two-dimensional video.
Immediately after the VR or video session, the patients were moved to the phlebotomy unit for venipuncture. The interval from the end of education to the positioning at the phlebotomy desk did not exceed 5 min. An independent assessor blinded to the group assignment observed the children’s behavior and determined the Children’s Hospital of Eastern Ontario Pain Scale (CHEOPS) scores . The CHEOPS score, the primary outcome of this study, was calculated from the scores for each of the six categories: crying, facial expression, verbal response, torso, hands, and legs (score range: 4–13; ). The scores were proportional to the children’s pain and distress levels . The total time for venipuncture (from positioning at the phlebotomy desk to needle insertion for successful blood sampling) and the incidence of repeated venipunctures performed owing to poor cooperation were recorded. After the procedure, the assessor asked the participants’ parents to indicate their satisfaction with the venipuncture process by using a numerical rating scale (NRS; score 0 = extremely dissatisfied; score 10 = extremely satisfied). Immediately after the patients and parents exited the phlebotomy unit, the phlebotomy technicians were asked to evaluate their perception of the difficulty experienced in performing the procedure by using an NRS (score 0 = extremely easy; score 10 = extremely difficult).
Continuous variables are indicated as median (interquartile range) or mean (standard deviation) values according to the normality of the data. Categorical variables are presented as numbers . The Mann–Whitney U test was used to compare continuous study outcomes between the VR and video groups. Categorical outcomes were compared between the study groups by using Fisher’s exact test or the chi-square test, as appropriate. Multiple linear regression analysis was performed to determine independent factors associated with the CHEOPS pain score. All statistical analyses were performed using SPSS software (version 21.0; SPSS Inc., Chicago, IL, USA). Statistical significance was defined as a two-sided P -value of <0.05.
In a pilot study of 40 pediatric patients (20 pairs) undergoing venipuncture, the CHEOPS scores (mean [standard deviation]) for the video and VR groups were 7.7 (2.0) and 6.5 (1.7), respectively. Power analysis was performed using G*Power 3.1.2 (Heinrich-Heine University, Düsseldorf, Germany). Based on the pilot data, a sample size of 45 patients per group was calculated to be necessary, with a power of 0.8, significance level of 0.05, and an assumed dropout rate of 10%.
Patient characteristics were comparable between the VR and video groups. The venipuncture-related pain expected by the participants before the procedure was similar between the two groups . Children’s pain and distress assessed using CHEOPS were significantly lower in the VR group (median [interquartile range, IQR], 5.0 [5.0–8.0]) than in the video group (median [IQR], 7.0 [5.0–9.0]) ( P = 0.001; ). When the pain score was classified into three grades (mild, CHEOPS score of 4–6; moderate, CHEOPS score of 7–10; and severe CHEOPS score of 11–13), the proportion of children with mild procedural pain was significantly higher in the VR group than in the video group ( P = 0.02; ). Parental satisfaction with the venipuncture procedure and procedure-related outcomes, including procedure time, incidence of repeated venipuncture, and procedural difficulty score evaluated by the phlebotomy technicians, were not significantly different between the two groups . Multiple linear regression analysis revealed that the expected pain score before the procedure (measured using the visual analogue scale [VAS]) and each group (VR or video) were independent predictors of the CHEOPS score during venipuncture . This study revealed that immersive VR is more effective than non-immersive videos for preprocedural education regarding procedural pain and distress in pediatric patients undergoing venipuncture. To the best of our knowledge, this is the first study to demonstrate that immersive VR per se is an effective tool for preprocedural education in pediatric patients undergoing painful medical procedures. Our finding of immersive VR being more effective than video in preprocedural education is in line with previous pedagogical research reports. Immersive VR is a more effective tool than nonimmersive videos in pre-training education because immersive VR increases enjoyment, intrinsic motivation, and knowledge transfer . Our results, which are in line with those of previous studies, suggest that the pedagogical strength of immersive VR can be adopted in preprocedural education. Most previous studies that presented the positive effects of VR in pediatric medical care failed to show the effects of VR per se . This is because these studies investigated the effectiveness of VR by comparing the content presented via VR and those presented as part of standard care. The VR content used in these studies consisted of a combination of audio and visual components. However, standard care involved education via simple verbal communication. It is well known that multiple sensory modalities (e.g., audio and visual stimulation together) enhance learning more effectively compared with a single modality (e.g., audio or visual stimulation alone) and that multimedia education outperforms verbal education in terms of learners’ behavior change or knowledge achievement, which is called the modality effect theory . In this regard, the most recent research regarding the effect of immersive VR in pediatric patients may have proven the effect of multimedia or the modality effect rather than the effect of immersive VR technology itself. In this study, we excluded any confounding bias related to educational modalities or quality of educational content by using identical audio–visual information in the two study groups. Several studies have compared the effects of immersive VR and video in pediatric patients. However, none of these studies have revealed the effect of immersive VR on the preprocedural education of pediatric patients. 
Most of these studies targeted patient distractions but not education . In addition, some studies adopted different content between the VR and video groups . To the best of our knowledge, only one study has compared the effects of immersive VR and video on patient education by evaluating the effect of VR per se in the chest radiography setting . In contrast to the present study, this previous study involved non-invasive and painless procedures . The present study demonstrated the effectiveness of immersive VR in pediatric patients undergoing painful and distressing medical procedures. The multiple linear regression analysis in our study also suggested that the expected pain scores before the procedure and the group assigned were independent predictors of the CHEOPS score during venipuncture. Our findings are consistent with those of previous studies that reported positive associations between anticipatory anxiety and procedural pain in children . They found that anticipatory anxiety was strongly associated with pain intensity through various stimuli, including thermal, pressure, and cold pain tasks, in healthy children and adolescents . The total time for venipuncture and the number of venipunctures performed due to poor cooperation were recorded. After the procedure, the phlebotomy technicians were also asked about the subjective difficulty experienced when performing venipuncture. All procedure-related outcomes were comparable between the two groups, which contradicts the results of a previous investigation involving chest radiography, a non-invasive and painless procedure . During chest radiography, the procedure time and degree of difficulty for the radiologist were lower in the VR group than in the tablet group . This may be explained by the difference between painful and painless procedures. The parents of the participants were asked about their satisfaction with the venipuncture process; there was no significant difference in this regard between the two groups. The mean score of parental satisfaction was 10 in both groups, indicating that the parents were highly satisfied with the preprocedural education using VR or tablets. These results are contrary to those of a previous study that showed a significant difference in parental satisfaction between the VR group and the control group which received simple verbal education . Active multimedia education to reduce procedural pain in pediatric patients might increase the satisfaction of the parents, in both groups. Our study has some limitations. First, a considerable number of children did not want to participate in this study; most of them were reluctant to wear a head-mounted display for a virtual experience. Since VR devices are not yet widespread, children experiencing them for the first time may be reluctant to wear them. Therefore, the study only included children who had a positive attitude toward the VR experience and were expected to show positive outcomes, which may have resulted in selection bias. However, in our study, a blinded researcher performed the observational measurements; selection would not significantly bias our conclusions for children who received VR or video education. Second, the VR experience has been reported to cause complications, such as motion sickness and eye strain . Therefore, researchers should try to prevent these side effects and manage them appropriately. 
In this study, no complications due to the virtual education, which might be attributed to the relatively short play time and minimal virtual movement during the experience. Compared with adults, children are also known to experience fewer side effects from the VR experience .
In conclusion, this randomized controlled trial showed that immersive VR using a head-mounted display is more effective than a non-immersive video for preprocedural education regarding procedural pain and distress in pediatric patients receiving venipuncture.
S1 Fig Distribution of pain grades based on CHEOPS score. CHEOPS, Children’s Hospital of Eastern Ontario Pain Scale. (TIF) S1 Table Children’s Hospital of Eastern Ontario Pain Scale (CHEOPS). (DOCX) S1 Checklist CONSORT 2010 checklist of information to include when reporting a randomised trial*. (DOC) S1 Protocol Study protocol. (DOCX)
|
Targeted prevention in primary care aimed at lifestyle-related diseases: a study protocol for a non-randomised pilot study | bd0e2f83-08d1-4f5d-bef2-4a063c4661cd | 6054846 | Preventive Medicine[mh] | In this paper we report on a non-randomized pilot study examining the efficacy of a preventive healthcare intervention. The intervention has been designed to systematically identify patients at high risk of developing lifestyle-related disease, and provide targeted and coherent preventive services to these individuals . Lifestyle-related disease refers to health conditions that are predominantly caused by health-risk behaviors, such as poor diet, smoking, high consumption of alcohol, or lack of exercise. The consequences of lifestyle-related disease represent a major challenge for the individual as well as for society at large . In Denmark, people who smoke tobacco, consume excessive amounts of alcohol, and have a sedentary lifestyle are nearly seven times as likely to die from lifestyle-related diseases than physically active non-smokers with a moderate intake of alcohol . It is estimated that 80% of cardio-vascular disease (CVD), type 2-diabetes mellitus (T2DM), and chronic obstructive pulmonary disease (COPD), and 40% of all cancers may be averted by maintaining healthy dietary habits, regularly exercising, and refraining from smoking . Indeed, preventable lifestyle-related diseases account for approximately 50 to 60% of all hospital admissions . It is expected that increasing rates of obesity and physical inactivity will lead to a surge in the number of patients with lifestyle-related diseases in the decades to come . In light of these trends, there is a substantial need to advance and implement evidence-based health strategies and interventions that facilitate the identification and management of people at risk of developing these diseases . Disease prevention is a central task in general practice in Denmark and the Nordic countries . Two recent systematic reviews of general practice health checks suggest that people at high risk of chronic disease may benefit from targeted preventive health checks . Indeed, targeted, or selective, preventive healthcare is a generally accepted and well-integrated part of healthcare systems worldwide (e.g. treatment of hypertension and hyperlipidemia). Other studies, however, suggest that systematic screening of the general population does not improve clinical endpoints above and beyond those associated with opportunistic screening. These studies indicate that, at a population level, systematic screening of the general population does more harm than good . Overall, however, the evidence on targeted and systematic screening of chronic disease is very limited, possibly providing an explanation for the apparent contradictions in the literature. To this end, projects in the Netherlands and Great Britain are currently underway, testing different approaches to targeted and systematic intervention in general practice . There is an even greater lack of evidence when it comes to targeted preventive interventions that comprise both general practice and community health services. In such an approach the general practitioner (GP) targets patients at high risk for lifestyle-related diseases and engages in risk-management of biomarkers and disease with behavior change and pharmaceutical interventions when needed. 
Community health services, on the other hand, focus primarily on the prevention of health-risk behaviors - including tobacco use, poor diet, excessive alcohol consumption, and sedentary lifestyles - and provide behavior-change interventions such as smoking cessation assistance and dietary advice. Danish studies suggest a potential to enhance the collaboration and cohesiveness of the various components that comprise the preventive healthcare services in the Danish primary care system – especially between GPs and community health services . Outside of the Danish context, the benefits of a more unified and coherent healthcare service have also been advanced in peer-reviewed studies . However, effectiveness studies of a unified approach, such as that described above, seem to be lacking. In 2012, we carried out a feasibility study, testing a novel approach to population-based risk stratification at four Danish GP clinics . The intervention combined lifestyle survey data with health record information in order to identify presumably healthy individuals who nonetheless were at high risk of developing lifestyle-related diseases. These individuals were then offered a health check at their GP for a more definitive assessment of their general health as well as their risk of developing lifestyle-related diseases. Results indicated that this approach to preventive action was indeed feasible, and thus ultimately inspired the development of a large randomized study, the present TOF-project (TOF is a Danish acronym for Early Detection and Prevention). The principal aim of the upcoming TOF-project is to examine the efficacy of a preventive healthcare intervention that systematically identifies individuals at high risk of lifestyle-related disease, and provides targeted and coherent preventive services. We expect that significant changes in the targeting and systematization of disease prevention in the Danish primary care sector, including earlier detection and more coherent preventive services, will diminish the individual and societal burden of chronic disease. Due to the complexity of the TOF intervention, and the relatively high number of stakeholders, a pilot study needs to be conducted before full-scale implementation and evaluation . The aim of the pilot study is to test the acceptability, feasibility, and short-term effects of a selective preventive program, designed to systematically help patients evaluate their individual risk of lifestyle-related disease. The program also offers targeted and coordinated preventive services in the primary healthcare sector. The pilot study was designed as a population based non-randomized study in the Region of Southern Denmark, comprising 22 municipalities, 787 GPs, and a general population of 1,2 million. The Danish health care system is a tax-based system comprising three levels: A national level responsible for, among other things, public health, planning, and patient safety; a regional level responsible for the hospitals and the primary care sector; and a municipal level responsible for primary prevention, rehabilitation, and patient education. General practice and the municipalities have shared responsibility for preventive services aimed at the individual. Specifically, GPs assess patient health and implement disease-specific secondary prevention. The municipalities, however, are tasked with primary prevention such as smoking cessation, alcohol treatment, and other lifestyle related services. 
GPs are organized in clinics with an average of two GPs per clinic. While most clinics comprise a single GP, some have up to ten. Almost all Danish citizens (98%) are registered with a GP. Each GP has an average of 1600 registered patients.

Recruitment

The pilot study targets adults born between 1957 and 1986. All 22 municipalities in the Region of Southern Denmark were invited to participate in TOF. Ten municipalities (Esbjerg, Haderslev, Varde, Sønderborg, Aabenraa, Middelfart, Kerteminde, Nyborg, Svendborg, Langeland) submitted expressions of interest to participate in the study and were approved for participation by the Regional Council. Two of the municipalities (Haderslev and Varde) volunteered to participate in the pilot study. The municipalities of Haderslev and Varde comprise 55,971 and 50,110 citizens, and 37 and 29 GPs, respectively. All GPs from each municipality were invited to an information meeting before being formally invited to participate in the pilot study. The invitation was followed up with telephone calls to the individual GP clinics. All patients were invited at baseline, and the intervention was taken up by the patients at their own convenience during the intervention period. See Additional file for a more detailed project flow showing the recruitment, intervention, and evaluation phases.

Organization and development of the intervention

The intervention was planned during a two-year combined effort involving all stakeholders. End users, including patients, GPs, and municipal health professionals, were involved in the design of the intervention. A group of seven GPs developed the targeted intervention at the general practice level during five workshops. Similarly, a group of 10 municipal health workers, one from each of the participating municipalities, developed the targeted intervention at the municipal level during 10 workshops. The workshops lasted between two hours and two days. A digital support system was created and tested by user populations, including patients, non-government patient organizations, GPs, and municipal health professionals.

A steering committee was established at the start of the project, consisting of managers or board members from the Region of Southern Denmark (project owner), the Organization of General Practitioners in Denmark (PLO), the 10 participating municipalities, the Research Unit for General Practice at the University of Southern Denmark (FEA), and the Danish Quality Unit for General Practice (DAK-E). The chair of the committee is the health director of the Region of Southern Denmark. A research committee with participation from the steering committee chair and the primary investigator has been established. A mission statement has been approved by the steering committee, and an agreement of co-operation has been signed between the Region of Southern Denmark and the University of Southern Denmark. The agreement states that the University of Southern Denmark holds all rights, intellectual as well as legal, to the research data, and that the Region of Southern Denmark has no right to oppose publication of results. The research committee approves all access to research data from affiliated researchers.

Prior to study commencement, all enrolled GPs, practice nurses (PN), and health professionals from the municipalities were invited to a joint three-hour training course (August 2016).
The course focused on the assigned intervention activities and tasks within the GP clinics and the municipalities, respectively, as well as between GPs and the municipalities.

Invitation and consent

The source population received an invitation to participate, sent on behalf of the GP and the municipality to the individual's digital mailbox. All permanent residents in Denmark are obligated to have a digital mailbox, which is essentially a digital mail system provided by the government for secure and direct communication between individuals and public authorities and other trusted organizations (e.g. banks and insurance companies). People may opt out of the digital mail system, citing low IT literacy (usually elderly persons), cognitive impairment, or other complicating factors. To enroll in the study, individuals were asked to follow a link in the invitation to a digital support system protected by a two-phase NemID password. NemID is a password system that provides exact identification of the user. The system is utilized by Danish public and non-public institutions to provide secure access to personal information, such as health and financial data. Through digital mail and NemID, we were able to reach and identify 97% of the target population.

In April 2016, participants received an invitation with an embedded hyperlink to a digital consent form on a secure webpage in their digital mailbox. The consent form outlined study participation and disclosure of data from the GP's electronic patient record (EPR) and was supplemented with short videos describing the purpose of the study and the intervention. Participants were asked to read the information and electronically sign the consent form. Two reminders were sent after one and two weeks if participants failed to sign the form. Enrollment closed after 6 weeks. At this time, information on relevant diagnoses (International Classification of Primary Care (ICPC-2) codes) and prescribed medicine (Anatomical Therapeutic Chemical Classification (ATC) codes, including text fields with the indication for treatment) was collected from the GPs' EPR systems (see Table for the ICPC-2 codes and ATC codes that were accessed based on the consent). Five months after consent (September 2016), participants received another digital invitation in the digital mailbox, this time to fill in a questionnaire and access a personal health profile. Participants could opt out at any time during the intervention period by clicking an "opt-out" button on the digital support system.

Intervention

The intervention lasted 3 months, from September 2016 to December 2016. It comprised a two-pronged approach:

- a joint intervention applied to the entire sample, regardless of whether the participants were healthy, at risk, or already in treatment for T2DM, COPD, CVD, hypercholesterolemia, or hypertension
- a targeted intervention offered only to participants who presumably would benefit from either further examinations at the GP (high risk) or from receiving community health services, such as smoking cessation, dietary advice, or physical activity (health-risk behavior).

The joint intervention consisted of:

- stratification to one of four risk groups, determined by use of risk algorithms and EPR information
- a digital support system with user interfaces for all users, including the patient, the GP, and the municipal health professional
- an individual health profile.

The targeted intervention consisted of:

- a focused clinical examination and a subsequent health dialogue with a GP (targeting patients at high risk), and/or
- a short telephone-based health dialogue with a municipal health professional; for patients with limited capability to care for their own health, this initial talk could be followed up with a subsequent face-to-face health dialogue (targeting patients with health-risk behavior).

For present purposes, the term health dialogue refers to a consultation that includes the elements of the 5As model (see Table) and the techniques used in motivational interviewing.

The joint intervention

All participants gained access to the digital support system and were invited to fill in a questionnaire. The participant questionnaire contained 15 items on height, weight, self-perceived health status, family history of lifestyle-related diseases, COPD-related symptoms, smoking status, leisure activity level, alcohol consumption, diet, and osteoarthritis risk factors. Questions about family history of diabetes and leisure activity level were taken from the Danish Diabetes Risk model. Similarly, questions on COPD-related symptoms and smoking status were derived from the COPD-PS screener and the Heartscore BMI score. Items tapping dietary habits were from the Swedish National Guidelines on Disease Prevention. The questionnaire took approximately 5 min to complete. Based on the questionnaire and information from the individual EPR, participants were stratified into four distinct risk groups:

- Group 1 - participants with a pre-existing diagnosis of and/or in current treatment for a lifestyle-related disease.
- Group 2 - participants at high risk of developing lifestyle-related disease, and thus eligible for the offer of a targeted intervention at the GP.
- Group 3 - participants engaging in health-risk behavior, and thus eligible for the offer of a targeted intervention at the municipality.
- Group 4 - participants with a healthy lifestyle and no need for further intervention.

Stratification to group 1

EPR data was collected via certified EPR suppliers. We used International Classification of Primary Care-2 (ICPC-2) codes registered by the GP and/or Anatomical Therapeutic Chemical Classification (ATC) codes for medicine prescribed within the past 2 years, together with the indication for prescribing the medicine, to identify Group 1 participants (see Table). Given their pre-existing diagnosis and/or treatment, Group 1 participants were excluded from the subsequent risk estimation and stratification into Groups 2, 3, and 4.

Stratification to group 2

Next, participants at risk of lifestyle-related disease were identified using three validated risk scores: the Chronic Obstructive Pulmonary Disease Population Screener (COPD-PS), the Danish Diabetes Risk model, and a modified Heartscore BMI score. The COPD-PS uses an algorithm accounting for age, lifetime use of cigarettes, and smoking-related symptoms to identify at-risk patients who may benefit from a spirometry to test for COPD (Table). The Danish Diabetes Risk score is based on an algorithm that incorporates age, sex, BMI, known hypertension, leisure activity level, and family history of diabetes (Table).
The modified Heartscore BMI score accounts for age, sex, body mass index (BMI), and smoking status (Table). Consistent with the criteria of the four stratification groups defined above, participants were categorized into Group 2 when one or more of the risk assessment algorithms indicated a high likelihood of developing lifestyle-related disease (see Tables).

Stratification to group 3 and 4

Finally, participants engaging in one or more health-risk behaviors were categorized into Group 3. Health-risk behavior was defined by the presence of at least one of the following: smoking tobacco on a daily basis, consuming more than the recommended maximum of standard units of alcohol per week (14 for women, 21 for men), sustaining an unhealthy diet (a score ≤ 4 on a 12-point diet score drawn from the Swedish National Guidelines on Disease Prevention), maintaining a BMI ≥ 35, and/or engaging in a generally sedentary lifestyle. Lastly, participants with no lifestyle-related disease or risk thereof were stratified into Group 4.

Digital support system

All users had access to a digital support system in the form of a web page with a common database and specific user interfaces for the GP, the municipal health professionals, and the patient. No apps were developed. The system design drew inspiration from Krist and colleagues' research on preventive EPRs, and was further informed by the results of a Delphi process carried out to identify factors for optimal development of health-related websites. Due to challenges in terms of interoperability between the eight suppliers of EPR systems used by GPs, and at least three suppliers of electronic care records (ECR) in the municipalities, it was not feasible to develop a support system that completely integrated the EPR and ECR systems. Instead, the digital support system was developed as a parallel system with additional functionality facilitating the transfer of information (e.g. relating to lifestyle and/or prevention plans) to the EPR and ECR systems using Electronic Data Interchange (EDIfact) messages. The patient controlled access to personal health information on the system, such that the GP and municipal health professional were only able to access this information with the explicit consent of the patient.

The digital support system was developed iteratively in collaboration with the users, during the aforementioned workshops with municipal health professionals and GPs and in the form of usability tests with patients. The user interface for the patient was responsive and compatible with most devices, including mobile phones, tablets, laptops, and desktop computers. Due to technical constraints in the secure log-in provided by NemID, the user interface for health professionals was only developed for laptops and desktop computers. In order to make the user interface for the patient as intuitive and user-friendly as possible, the digital support system made extensive use of simple visualizations, icons, and short information videos (Fig.). The primary text-based messages were kept short and concise, with the possibility of accessing secondary in-depth information retrieved from the Danish Health Portal, sundhed.dk. Beyond facilitating the intervention, the digital support system also enabled data collection for research purposes.
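To make the stratification rules concrete, the logic described above can be summarized in a short sketch. This is a minimal illustration, not the study's actual implementation: the cut-off values and attribute names are placeholders, since the real cut-offs and ICPC-2/ATC code lists are specified in the referenced tables.

```python
from dataclasses import dataclass

# Placeholder cut-offs; the actual values are defined in the tables
# referenced above for COPD-PS, the Danish Diabetes Risk score, and
# the modified Heartscore BMI score.
COPD_PS_CUTOFF = 5
DIABETES_RISK_CUTOFF = 31
HEARTSCORE_CUTOFF = 5.0

@dataclass
class Participant:
    sex: str                     # "male" or "female"
    smokes_daily: bool
    alcohol_units_per_week: int
    diet_score: int              # 0-12, Swedish national guidelines
    bmi: float
    sedentary: bool
    has_lifestyle_disease: bool  # from ICPC-2/ATC codes in the EPR
    copd_ps: int                 # the three risk scores, computed
    diabetes_risk: int           # from questionnaire and EPR data
    heartscore: float

def stratify(p: Participant) -> int:
    """Return the risk group (1-4) for a participant."""
    # Group 1: pre-existing diagnosis and/or current treatment.
    if p.has_lifestyle_disease:
        return 1
    # Group 2: high risk on at least one of the validated scores.
    if (p.copd_ps >= COPD_PS_CUTOFF
            or p.diabetes_risk >= DIABETES_RISK_CUTOFF
            or p.heartscore >= HEARTSCORE_CUTOFF):
        return 2
    # Group 3: at least one health-risk behavior.
    alcohol_limit = 21 if p.sex == "male" else 14
    if (p.smokes_daily
            or p.alcohol_units_per_week > alcohol_limit
            or p.diet_score <= 4
            or p.bmi >= 35
            or p.sedentary):
        return 3
    # Group 4: healthy lifestyle, no further intervention needed.
    return 4

# Example: a daily smoker with no diagnosis and low risk scores -> Group 3.
p = Participant(sex="female", smokes_daily=True, alcohol_units_per_week=4,
                diet_score=7, bmi=24.0, sedentary=False,
                has_lifestyle_disease=False, copd_ps=2, diabetes_risk=12,
                heartscore=1.0)
print(stratify(p))  # 3
```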
A number of questionnaires were sent from the digital support system to the participants at specific time points: immediately after consent, after receipt of the personal health profile, following the health dialogue at the GP, and at the end of the implementation period. Questionnaire reminders were sent by e-mail with a link to the digital support system. The GPs and municipal health professionals received audits in the form of short questionnaires immediately after each consultation as well as before and after the study period (GPs only).

Personal health profile

Based on the results of the stratification process, each patient received a personal health profile on the digital support system. The purpose of the health profile was to encourage patients to change their health-risk behavior and follow the tailored advice provided by the system. Patients at increased risk of developing a lifestyle-related disease (Group 2) were advised to consult their GP for further examination and advice. Similarly, patients engaging in health-risk behavior (Group 3) were offered lifestyle counseling or lifestyle courses from the municipal health services. By definition, Group 4 patients lead a relatively healthy life with no need for health-risk behavior change. Group 1 patients were advised to continue their treatment and use the information provided to change health-risk behavior.

The personal health profile included individualized information on current health-risk behavior and risk of disease. The information was tailored based on the questionnaire, the information from the EPR, and the risk scores for COPD, T2DM, and CVD. It also included general health information and information about preventive health services concerning smoking, diet, exercise, and alcohol consumption. This information was provided by the municipality, the Region of Southern Denmark, or national health services, and targeted the individual (e.g. via links to apps and webpages) based on his/her specific health-risk behavior.

The targeted intervention

The intervention at the GP

The intervention at the general practice level consisted of a focused clinical examination and a subsequent health dialogue, and was offered to patients at increased risk of developing a lifestyle-related disease (Group 2). Group 2 patients accepted the offer by scheduling an appointment at the GP (either by phone or via the GP's webpage). Whether patients participated in the intervention was thus determined by their motivation and capabilities, as well as by the extent to which the content of the personal health profile motivated them to take action. The intervention was applied within the framework of the 5As model (see Table). The content of the focused clinical examination was based on the patient's health profile and might include measurements of blood glucose (HbA1c) and cholesterol levels, as well as height, weight, blood pressure, and lung function measurements and an electrocardiogram (ECG). Results from the examinations were registered in the digital support system, where both the patient and the GP could access them at any time. After the focused clinical examination, all patients were given the opportunity to prepare for the subsequent health dialogue by answering a questionnaire inspired by three systematic reviews on the determinants of behavior change.
These included questions about motivation, resources, former experiences with behavior change, social network, and mental health (WHO-5 for stress and the Major Depression Inventory (MDI) for depression), together with a scheme for qualitative self-report on facilitators of and barriers to behavior change (a so-called balance sheet). The questionnaire results were shared with the GP on the digital support system. Based on the health dialogue, the GP and the patient developed a prevention plan that included a goal, a time frame, and identification of the appropriate means to fulfill the plan (e.g. referral to a smoking cessation course, or follow-up at the GP). The prevention plan was registered on the digital support system by the GP and was accessible to both the GP and the patient.

The intervention at the municipal level

The intervention at the municipal level was offered to patients exhibiting health-risk behavior (Group 3) and consisted of a short telephone consultation with a health professional - for example a nurse, a dietician, or a physiotherapist. A subsequent face-to-face health dialogue was offered to patients who were deemed likely to benefit from more extensive support. Group 3 patients requested the intervention on the digital support system by filling in a short form, which was sent by e-mail to the municipality. A municipal health professional would then call the patient within the following week. Similar to the GP intervention, uptake of the municipal intervention was thus also determined by patient motivation and capabilities, as well as by the extent to which the content of the personal health profile motivated the patient to take action. Immediately after the intervention, a participation form was sent to the municipality. Patients could prepare for the upcoming call from a municipal health professional in the same way as Group 2 patients prepared for the health dialogue - that is, by answering a short questionnaire. Ultimately, a prevention plan, including concrete details on its execution, was developed based on the telephone consultation and the face-to-face health dialogue. The prevention plan was registered by the municipal health professional and presented on the user interfaces of both the municipality and the patient.

Sample size calculation

While aiming to test the acceptability, feasibility, and short-term effects of the pilot, we estimated a sample size for each GP that would allow the GP to become familiar with the intervention without an unnecessary increase in workload during the intervention period. In agreement with the GP representative in the Region of Southern Denmark, we set a target of four health checks for each GP. From the feasibility study, we estimated that 60% of invitees would consent to the study, and that 75% of these participants would receive a personal digital health profile. Also based on the feasibility study, we estimated that 12% of the study population would be recommended to consult their GP (Group 2). From results obtained in similar Dutch studies, we finally estimated that 35% of these patients (Group 2) would eventually consult the GP. Given these figures, we calculated that a total sample of approximately 200 patients from each GP would be required to reach the target of four completed health checks per GP.
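As a worked check of these figures, the expected yield per GP follows directly from multiplying the sample by the estimated proportions (a minimal sketch; the proportions are those stated above):

```python
# Expected number of completed health checks per GP, given a random
# sample of 200 patients and the proportions estimated above.
sample_per_gp = 200
p_consent = 0.60   # consent to the study (feasibility study)
p_profile = 0.75   # receive a personal digital health profile
p_group2 = 0.12    # recommended to consult their GP (Group 2)
p_attend = 0.35    # eventually consult the GP (Dutch studies)

expected_checks = sample_per_gp * p_consent * p_profile * p_group2 * p_attend
print(f"{expected_checks:.1f} completed health checks per GP")  # 3.8, i.e. ~4
```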
Data collection and analysis

Evaluation outcomes

Evaluation of the study will be carried out using quantitative as well as qualitative research methods (Table). All outcome measures are based on validated instruments and aim to provide results pertaining to intervention acceptability, feasibility, and short-term effects. In addition, outcomes related to other associated topics will be included. The specific instruments used will be described in detail in later publications.

Qualitative data

Qualitative data will be derived from interviews (individual and focus group interviews comprising GPs, practice staff members, municipal staff members, patients from Groups 2 and 3, stakeholders, project leaders, and researchers) and participant observations (during the health dialogues at the GP). The estimated number of participants is shown in Table.

Quantitative data

Quantitative data will be derived from questionnaires as well as Danish national registers (see section below). Table shows the content of the questionnaires applied, while a diagram, attached as Additional file, shows the flow of the entire intervention and the timing of the questionnaires during the intervention.

Register-based data

Data from the Danish national registers concerning demographic information, prescriptions, and healthcare usage of the target population (n = 9,400) will be obtained from Statistics Denmark ( https://www.dst.dk/da ). Information from the different registers will be linked by the patients' Danish Personal Identification Number.

Socio-demographic variables

Information on socio-demography encompasses educational level, occupation, income, cohabitation status, ethnicity, and residency. Education is defined as the highest formal educational attainment obtained on the first of October in each calendar year. Occupation is defined as the occupational status on the first of November in each calendar year. OECD-adjusted income level is defined as the individual's/family's disposable income, adjusted for family size and categorized in relative terms (low/middle/high income). Cohabitation status is defined as cohabiting or living alone. Ethnicity is based on country of origin and descent.

Morbidity

Information on health/disease status (hypertension, hypercholesterolemia, type 2 diabetes, cardiovascular disease) is defined in terms of ICD-10 diagnosis codes and medication use. The National Patient Registry will provide information on ICD-10 diagnostic codes. The Register of Medicinal Product Statistics will provide information on medication use.

Contextual variables

Contextual variables include information on study site and neighborhood social deprivation. Neighborhood social deprivation will be derived at the census district level and is principally defined in terms of three variables: educational attainment, employment status (employed/social welfare), and income (mean family disposable income). Educational, employment, and income deprivation thus refer, respectively, to the proportion of citizens within each census district who have no more than basic education (up to high school), who are unemployed (e.g. students, unemployed workers), and who belong to the lowest income quartile. Each variable is ranked, grouped into quartiles, and given a value between 0 and 3 (3 = high deprivation). This results in an aggregated score ranging from 0 (low deprivation) to 9 (high deprivation). The aggregated rank is then grouped into quartiles. A neighborhood social deprivation score will be calculated for all census districts in Denmark in order to obtain local deprivation scores that mirror the relative social deprivation of the individual census districts.
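To illustrate, the deprivation score could be computed along the following lines. This is a minimal sketch with made-up proportions; the column names are hypothetical, and with only four example districts each quartile contains a single district:

```python
import pandas as pd

# One row per census district, with the proportion deprived on each
# of the three indicators (illustrative values only).
df = pd.DataFrame({
    "district": ["A", "B", "C", "D"],
    "basic_education_only": [0.18, 0.35, 0.52, 0.27],
    "unemployed": [0.05, 0.12, 0.21, 0.08],
    "lowest_income_quartile": [0.15, 0.30, 0.45, 0.22],
})

indicators = ["basic_education_only", "unemployed", "lowest_income_quartile"]

# Rank each indicator and group into quartiles scored 0-3 (3 = high deprivation).
for col in indicators:
    df[col + "_q"] = pd.qcut(
        df[col].rank(method="first"), 4, labels=[0, 1, 2, 3]
    ).astype(int)

# Aggregate to a 0-9 score, then group the aggregate into quartiles.
df["deprivation_0_9"] = df[[c + "_q" for c in indicators]].sum(axis=1)
df["deprivation_quartile"] = pd.qcut(
    df["deprivation_0_9"].rank(method="first"), 4, labels=[1, 2, 3, 4]
)

print(df[["district", "deprivation_0_9", "deprivation_quartile"]])
```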
This pilot study will provide a solid empirical base from which to plan and implement a full-scale randomized study with the central aim of determining the efficacy of a preventive health intervention. The intervention was designed to systematically identify persons at risk of developing lifestyle-related disease or engaging in health-risk behavior, and to provide targeted and coherent preventive services to these individuals.

Strengths and limitations

Much effort has been made to define the specific nature and objective of pilot and/or feasibility studies.
In a scoping review of optimization strategies for complex interventions prior to randomized trials, Levati et al. assert that different frameworks for intervention development, such as intervention mapping and the MRC framework for complex interventions, call for different approaches to pilot and feasibility studies. As a common feature when developing complex randomized trials, the authors suggest "that the acceptability of the intervention to those directly involved in the delivery and receipt of the final intervention, together with the anticipated effect of the intervention, are important elements to take into account as early as possible in the pre-trial stage."

Eldridge et al. used a Delphi survey to arrive at distinct definitions of feasibility and pilot studies. They suggest that "feasibility study" is an overarching term, with "pilot study" representing a subset of feasibility studies. Generally, feasibility studies ask whether something can be done, whether we should proceed with it, and if so, how. Pilot studies ask the same questions, but with a specific design feature of a larger study, conducted on a smaller scale. According to the authors, pilot studies can be separated into two distinct types: non-randomized and randomized. Non-randomized pilot studies do not include a control group and are usually external to the subsequent randomized controlled trial (RCT); that is, the participants are not included in the effect analysis of the RCT. Randomized pilot studies, on the other hand, randomize participants to an intervention or control group and can be internal to the subsequent RCT. Bowen et al. complement the work of Eldridge et al. and propose eight foci (design features) of feasibility studies: acceptability, demand, implementation, practicality, adaptation, integration, expansion, and limited efficacy.

According to Eldridge et al., the study presented in this paper is a non-randomized pilot study. We chose a non-randomized design in order to examine the specific design features of a stepped-wedge cluster randomized design for the full-scale randomized study. A stepped-wedge design is a type of cluster randomized design that meets the specific ethical and logistical demands of a delayed intervention performed in routine care, where all participants will eventually be offered the intervention. The pilot resembles one cluster in a stepped-wedge cluster randomized study, and will thus allow us to ascertain whether the intervention can be delivered during a three-month period, or whether more time is required to avoid carry-over effects. If more time is necessary to deliver the intervention, it will be difficult, if not impossible, to accurately determine the optimal duration of a cluster, which will in turn complicate the stepped-wedge design. One way to compensate for incomplete knowledge of the optimal timeframe for the intervention may be to include a "wash-out" period after every cluster, allowing for any delay or lag in implementation before the next cluster is commenced. The length of the "wash-out" period can be estimated from the results of the pilot study.

We have randomly sampled 200 patients from each GP in order to have a source population that is representative of the target population. We have chosen to target people born between 1957 and 1986 in order to assess the risk of lifestyle-related disease and health-risk behavior at an age interval where changes in lifestyle will provide significant health effects and be cost-effective.
We have randomly sampled 200 patients from each GP in order to have a source population that is representative of the target population. We have chosen to target people born between 1957 and 1986 in order to assess the risk of lifestyle-related disease and health-risk behavior at an age interval where changes in lifestyle will provide significant health effects and be cost-effective. To this end, we have chosen to assess variation in the proportion of patients at increased risk of lifestyle-related disease between baseline and the 12-week follow-up as our primary health-related outcome.
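The protocol does not fix the statistical test for this primary outcome. One plausible analysis, assuming the same patients are classified at both time points, is a paired comparison of the at-risk proportions with McNemar's test; the sketch below uses simulated data throughout.

```python
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

# Simulated paired risk classifications (1 = increased risk) for the same
# patients at baseline and at the 12-week follow-up.
rng = np.random.default_rng(7)
baseline = rng.integers(0, 2, size=150)
followup = np.where(rng.random(150) < 0.15, 1 - baseline, baseline)  # some change status

# 2x2 table of paired outcomes: rows = baseline status, columns = follow-up status.
table = np.zeros((2, 2), dtype=int)
for b, f in zip(baseline, followup):
    table[b, f] += 1

print("at risk, baseline:  %.2f" % baseline.mean())
print("at risk, follow-up: %.2f" % followup.mean())
print(mcnemar(table, exact=True))  # tests whether the paired proportions differ
```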
Further, given that complex interventions, such as the one described here, usually have concurrent endpoints , we also collect data on a variety of other variables – both questionnaire- and register-based – related to both lifestyle and disease. We have yet to determine which of these endpoints to include in the full-scale randomized study. We have planned the intervention in collaboration with the stakeholders, patients, and service providers in order to run a pilot study that is both acceptable and relevant for all user groups. We use quantitative as well as qualitative research methods to assess the acceptability, demand, implementation, and practicality of the pilot, from the viewpoint of both users and service providers. In addition to evaluating the intervention, we assess the organizational challenges of planning and implementing IT-supported pilot studies . At the same time, we test different methods of data collection, including electronic collection of data from the digital support system and participant observations at the GP clinics. We also test various types of questionnaires, including ones that involve simple items with binary outcomes as well as others in a more complex discrete-choice format. The pilot will hence enable us to assess whether the intervention can be executed, and whether the organizational approach taken fits the purpose. We will further be able to make an informed decision about how we can collect data during the full-scale study in the most efficient and cost-effective way that is also acceptable to both users and service providers. From pilot to full-scale randomized study Another issue raised by Levati et al. and Eldridge et al. concerns pinpointing the appropriate time to move from piloting to a full-scale RCT. That is, should we proceed with the project, and if so, how? Proceeding from pilot to a full-scale randomized study is probably the most under-researched part of the implementation of complex interventions. Bugge et al. suggest a three-step process to establish the best possible foundation on which to base a decision to advance to a full-scale randomized study . First, any problems should be categorized into three distinct types: issues that are likely to complicate the full-scale study, issues that are likely to complicate both trial and real-world situations, and issues that are likely to complicate real-world situations only. Next, potential solutions should be identified for the expected issues, ideally with lay participation. Finally, the best of these solutions should be selected to determine the best way to proceed. With this strategy in mind, we will make a thorough assessment of the problems encountered in the pilot before advancing to the full-scale study. We will thus identify solutions in collaboration with the service providers (GPs and municipal health professionals) who participated in the pilot study, as well as with those who took part in the design of the intervention. We will also seek patient feedback on the technical and communicative properties of the digital support system before defining its final specifications. The final assessment will be presented to the steering committee, which will make the decision on the way forward.
Additional file 1: Detailed project flow showing the recruitment, intervention, and evaluation phases. It shows how participants will be recruited, how they will be stratified using algorithms, and what intervention elements each participant will receive. Furthermore, it shows when quantitative data will be collected for evaluative purposes. (TIF 1491 kb)
Integrated Metabolomics and Proteomics Analysis of the Myocardium in a Mouse Model of Acute Viral Myocarditis | 504beb46-2497-43bd-b55f-887b5412e7c5 | 11800238 | Biochemistry[mh] | Introduction Acute viral myocarditis (AVMC) is an immune-mediated acute inflammatory disease of the myocardium, mainly caused by cardiotropic virus infection. Coxsackievirus B3 (CVB3), an enterovirus of the Picornaviridae family, is considered the most common etiological agent . CVB3 has been widely used in AVMC research since it was first employed to induce myocarditis in mice by Woodruff in 1974 . CVB3 infection in mice closely mimics the clinical and pathological features of AVMC caused by enteroviruses in humans, making it an effective model for studying the mechanisms of AVMC and testing potential therapeutic interventions . Although most patients with AVMC have mild symptoms, a proportion of cases progress to fulminant myocarditis (FM), dilated cardiomyopathy (DCM), and even sudden death . Extensive studies suggest that the pathogenic mechanisms underlying AVMC primarily involve direct viral invasion, immune dysregulation, and diffuse myocardial injury and remodeling . However, advances in our understanding of AVMC pathophysiology have not yet translated into improved clinical treatment options. The heart is the most metabolically demanding organ in the human body; it must continuously produce large amounts of adenosine triphosphate (ATP) to sustain contractile function by metabolizing an array of fuels, such as fatty acids, glucose, lactate, pyruvate, and amino acids . Using high-throughput RNA sequencing, we found that the most significantly enriched pathway for differentially expressed mRNAs in AVMC was the metabolic pathway . Viral infection damages mitochondrial ultrastructure, causing metabolic disorders and metabolite accumulation in cardiomyocytes, ultimately leading to an imbalance in energy supply and demand that accelerates the deterioration of cardiac function . Recently, several metabolites have been demonstrated to serve essential roles in AVMC development. For example, kynurenine 3-monooxygenase deficiency reduces mortality in mice with AVMC by increasing serum kynurenine pathway metabolites and decreasing chemokine production . Nano-α-linolenic acid exerts a protective effect against AVMC in a dose-dependent manner . The latest study by Zhou et al. found that epoxyeicosatrienoic acids can prevent the progression of CVB3-induced AVMC, particularly by increasing IFN production to promote viral resistance. Therefore, a full understanding of metabolite alterations in AVMC may help to uncover new therapeutic targets and provide new mechanisms for clinical intervention. As a powerful analytical tool, metabolomics has been widely used to explore changes in the global cardiometabolic profile of individuals with cardiovascular diseases. Currently, the main metabolomics analysis platforms include ultra-high-performance liquid chromatography-tandem mass spectrometry (UHPLC-MS/MS), gas chromatography-mass spectrometry (GC-MS), and nuclear magnetic resonance (NMR), each with its own advantages and shortcomings . Kong et al. recently identified differential metabolites (DMs) in the myocardium of AVMC, chronic viral myocarditis (CVMC), and DCM mice using NMR-based metabolomics. UHPLC-MS/MS is the most commonly used platform for metabolic profiling, as it offers higher sensitivity and superior detection capability than NMR and, unlike GC-MS, can detect thermally unstable, nonvolatile, and polar compounds .
However, myocardial metabolomics studies based on UHPLC-MS/MS in AVMC are still lacking. Proteomics, an important complement to metabolomics, represents the summative effects of gene function and can be used to identify differential proteins (DPs) that directly affect metabolic processes . The integrated analysis of metabolomics and proteomics contributes to a more comprehensive systematic assessment of physiological states, especially for elucidating pathogenesis and identifying biomarkers through modern data analysis . To fill this gap, we established a CVB3-induced AVMC mouse model. Subsequently, an untargeted UHPLC-MS/MS-based metabolomics approach and a data-independent acquisition (DIA)-based proteomics method were applied to the myocardium. Finally, we analyzed the Kyoto Encyclopedia of Genes and Genomes (KEGG) metabolic pathways shared by DMs and DPs. Our research will provide a deeper understanding of the pathophysiology underlying AVMC. Materials and Methods 2.1 Animal Handling All experimental protocols followed the Principles of Laboratory Animal Care (People's Republic of China) and were approved by the Institutional Animal Care and Use Committee of Fujian Medical University (License No. IACUC FJMU 2023-0278). The research staff received special training in animal care and handling provided by Fujian Medical University. Thirty-six specific pathogen-free (SPF) male BALB/c mice aged 6 weeks were purchased from Beijing SiPeiFu Biotechnology Co. Ltd. [License No. SCXK (Jing) 2019-0010]. The animals were housed in a temperature-controlled environment at 24°C with 12-h day-night cycles and had free access to food and water. After a 3-day adaptation period, the mice were assigned to the Control ( n = 13) or AVMC group ( n = 23) according to a random number table, and no blinding was performed. CVB3-induced AVMC mouse models were constructed by intraperitoneal injection of CVB3 (Nancy strain; 2 × 10⁵ PFU per mouse), while the Control mice were injected with an equal volume of phosphate-buffered saline (PBS). The overall experimental period lasted 7 days (from Day 0 to Day 7). Daily observations were performed to evaluate the external signs and behavioral activities of the mice. Mice that died during the experiment were excluded from further analysis. After 7 days, the surviving mice were euthanized via cervical dislocation under isoflurane (2%) anesthesia. Their hearts were collected for histological examination, untargeted metabolomics, and DIA proteomics. All efforts were made to minimize the suffering of the mice. 2.2 Hematoxylin and Eosin (HE) Staining Five mice were randomly selected from each group for cardiac histology. After examining the gross appearance, the heart was cut transversely. HE staining was conducted according to routine protocols. In brief, the collected heart tissues were fixed, dehydrated, and embedded in paraffin. The tissue blocks were then cut into 5-μm sections and stained with HE. The sections were observed and scanned using a Motic EasyScan Digital Slide Scanner, and the severity of inflammation was scored using the method described previously . 2.3 Metabolite Extraction and UHPLC-MS/MS Analysis The hearts of the remaining mice in each group were horizontally divided into two portions: one for untargeted metabolomics and the other for proteomics. For heart metabolite extraction, tissue samples were ground in liquid nitrogen, and the homogenates were resuspended in 500 μL of prechilled 80% methanol and vortexed for 4 min.
The suspensions were kept on ice for 5 min and centrifuged at 15,000 g for 20 min at 4°C. A portion of each supernatant was then diluted with MS-grade water to a final concentration of 53% methanol. Subsequently, the samples were centrifuged at 15,000 g for 20 min at 4°C, and the supernatants were applied to UHPLC-MS/MS analysis . Analyses were performed using a Vanquish UHPLC system coupled with an Orbitrap Q Exactive HF-X mass spectrometer (Thermo Fisher Scientific). The detailed experimental conditions for UHPLC-MS/MS are described in Supporting Information S2: Table . For quality control (QC) and preprocessing, a pooled sample was prepared to assess the analytical variability by mixing equal volumes (20 μL) of the supernatant from each sample. 2.4 Data Processing, DMs Identification, and Functional Prediction The raw data files generated by UHPLC-MS/MS were processed using Compound Discoverer v3.1 software (Thermo Fisher Scientific) for peak alignment, peak selection, and quantification of each metabolite. The main parameter settings were the same as those described previously . Then, the peak area was quantified, the target ions were integrated, and the molecular formula was predicted based on the molecular ion peak and fragment ions. This information was then compared with the MassList, mzCloud, and mzVault databases to identify the metabolites. The background ions were removed with blank samples (53% methanol solution containing 0.1% formic acid), and the quantitative results were normalized with QC samples. Statistical analyses were performed using R (v3.4.3) and Python (v2.7.6) under CentOS (v6.6). Principal component analysis (PCA) and orthogonal projection to latent structures-discriminant analysis (OPLS-DA) were performed to visualize changes in metabolic profiles between the Control and AVMC groups . A 200-cycle permutation test was conducted to assess the robustness and predictive ability of the OPLS-DA model. The variable importance in the projection (VIP) value of each variable in the OPLS-DA model was calculated to indicate its contribution to the classification. DMs were screened between the two groups using Student's t test as a univariate analysis, and those with VIP > 1.0 and p value < 0.05 were considered differentially expressed. Meanwhile, volcano plots and heatmaps were created with R packages to visualize the DMs. Finally, these DMs were annotated with the KEGG compound database ( http://www.kegg.jp/kegg/compound/ ) and the Human Metabolome Database (HMDB, http://www.hmdb.ca/ ), and then mapped to the KEGG pathway database ( http://www.kegg.jp/kegg/pathway.html ). The metabolic pathway analysis was also performed using the MetPA tool in MetaboAnalyst 5.0 ( http://www.metaboanalyst.ca/MetaboAnalyst/ ) .
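To make the screening step concrete, the sketch below applies the VIP > 1.0 and p < 0.05 criteria to a simulated metabolite-by-sample intensity matrix, with VIP values assumed to have been exported from the fitted OPLS-DA model; all names and numbers are invented.

```python
import numpy as np
import pandas as pd
from scipy import stats

# Simulated intensity matrices (metabolites x samples) and VIP values;
# in practice these come from Compound Discoverer and the OPLS-DA model.
rng = np.random.default_rng(0)
metabolites = [f"M{i:04d}" for i in range(200)]
control = pd.DataFrame(rng.lognormal(3.0, 0.4, (200, 8)), index=metabolites)
avmc = pd.DataFrame(rng.lognormal(3.1, 0.4, (200, 10)), index=metabolites)
vip = pd.Series(rng.uniform(0.2, 2.5, 200), index=metabolites)

# Univariate screen: Student's t test per metabolite, as in the paper.
t_stat, p_val = stats.ttest_ind(avmc, control, axis=1)
fold_change = avmc.mean(axis=1) / control.mean(axis=1)

dms = pd.DataFrame({"VIP": vip, "p_value": p_val, "fold_change": fold_change})
dms = dms[(dms["VIP"] > 1.0) & (dms["p_value"] < 0.05)]      # screening criteria
dms["direction"] = np.where(dms["fold_change"] > 1, "up", "down")
print(dms.sort_values("VIP", ascending=False).head())
```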
2.5 Protein Extraction, Preparation, and Nano-HPLC-MS/MS Analysis Myocardial tissue samples were suspended in protein lysis buffer (1% SDS, 8 M urea) supplemented with protease inhibitors, and the mixture was vortexed thoroughly and processed twice through a high-throughput tissue grinder. After incubation for 30 min at 4°C (with vortexing every 10 min), the sample was centrifuged at 15,000 g for 20 min at 4°C. The supernatant was collected, and the protein concentration was determined by the bicinchoninic acid (BCA) method. Finally, equal amounts of protein (15 μg/lane) were subjected to 12% SDS-PAGE gel electrophoresis. The gels were stained with Coomassie Brilliant Blue R-250 and destained until the bands were clearly visible. Sample preparation included the processes of protein denaturation, reduction, alkylation, trypsin digestion, and peptide cleanup, according to the protocol provided in the iST Sample Preparation kit (PreOmics). Briefly, 50 µL of lysis buffer was added, and the sample was heated at 95°C with mixing (200 g) for 10 min. After cooling to room temperature (RT), trypsin digestion buffer was added, and the mixture was incubated at 37°C with shaking (100 g) for 2 h. Subsequently, the samples were cleaned and desalted, and the peptides were eluted with elution buffer (2 × 100 µL) and lyophilized using SpeedVac. A high-pH reversed-phase chromatography step was used to separate the complex mixture of peptides before nano-HPLC-MS/MS analysis. The mixed peptide samples were resuspended in buffer A (20 mM ammonium formate, pH 10.0, adjusted with ammonium hydroxide), loaded onto a reverse-phase column (XBridge C18 column, 4.6 × 250 mm, 5 μm, Waters Corporation) using an Ultimate 3000 system (Thermo Fisher Scientific), separated, and eluted using a linear gradient of 5% to 45% buffer B (20 mM ammonium formate in 80% ACN, pH 10.0, adjusted with ammonium hydroxide) for 40 min. The column flow rate was maintained at 1 mL/min, and the column temperature was maintained at 30°C. The collected fractions were then concatenated into 10 fractions and dried in a vacuum centrifuge. The peptides were redissolved in 30 μL solvent A (0.1% formic acid aqueous solution) and analyzed by online nanospray LC-MS/MS on an Orbitrap Fusion Lumos mass spectrometer coupled to an EASY-nLC 1200 system (Thermo Fisher Scientific). Three microliters of the sample was separated on the analytical column (Acclaim PepMap C18, 75 μm × 25 cm) using a 120-min gradient, from 5% to 35% in solvent B (0.1% formic acid in ACN). The column flow rate was maintained at 200 nL/min with a column temperature of 40°C, and the electrospray voltage was set to 2 kV. The mass spectrometer automatically switched between MS and MS/MS modes under the DIA configuration, and the detailed parameter settings are shown in Supporting Information S2: Table . 2.6 Data Processing, DPs Identification, and Bioinformatics Analysis The DIA raw data were analyzed using Spectronaut 18 (Biognosys AG) with the default settings, and the ideal extraction window was dynamically determined based on the iRT calibration and gradient stability. A Q-value (FDR) cutoff on the precursor and protein level was applied at 1%. Decoy generation was set to apply a random number of amino acid position swaps (min = 2, max = length/2). All selected precursors passing the filters were used for MS1 quantification. Proteins were quantified using the average of the top three peptide MS1 areas, yielding raw protein abundances. The thresholds of |Fold Change| > 1.5 and p value < 0.05 were used to identify DPs. The DPs were further assigned to the Gene Ontology (GO) database ( http://www.geneontology.org/ ), where the proteins were divided into three main categories: biological process (BP), molecular function (MF), and cellular component (CC). Pathway enrichment analysis was conducted using the KEGG database. Differences were considered to be statistically significant at a p value < 0.05.
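The "top three" quantification rule is simple enough to illustrate directly. The sketch below is a toy stand-in for what the DIA software does internally; the peptide table and intensities are invented.

```python
import pandas as pd

# Toy long-format DIA result: one row per peptide precursor with its parent
# protein and MS1 peak area (all values invented for illustration).
peptides = pd.DataFrame({
    "protein":  ["P1"] * 5 + ["P2"] * 2 + ["P3"] * 4,
    "peptide":  [f"pep{i}" for i in range(11)],
    "ms1_area": [5e6, 9e6, 1e6, 7e6, 2e6, 4e6, 3e6, 8e6, 6e6, 2e6, 1e6],
})

# Protein abundance = mean of the three most intense peptide MS1 areas
# (fewer if the protein has fewer than three peptides).
abundance = peptides.groupby("protein")["ms1_area"].apply(lambda s: s.nlargest(3).mean())
print(abundance)
```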
2.7 Integrated Analysis of Metabolomics and Proteomics All DMs and DPs were queried and mapped to KEGG-based pathways. R version 3.4.1 was used to combine KEGG annotation and enrichment results of metabolomics and proteomics. Venn diagrams and bar graphs were plotted to combine the results of the two omics approaches. 2.8 Statistical Analysis Data analysis was conducted using GraphPad Prism software (v8.0.1, GraphPad Software Inc.). The Mann–Whitney U test was used to evaluate differences in cardiac pathology scores between the two groups. A p value less than 0.05 was considered statistically significant.
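A minimal sketch of this score comparison, with invented pathology scores in place of the real readouts (scipy shown here in place of GraphPad):

```python
from scipy.stats import mannwhitneyu

# Invented 0-4 pathology scores for the five HE-stained hearts per group.
control_scores = [0, 0, 1, 0, 0]
avmc_scores = [3, 2, 4, 3, 2]

stat, p = mannwhitneyu(avmc_scores, control_scores, alternative="two-sided")
print(f"Mann-Whitney U = {stat}, p = {p:.4f}")
```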
Results 3.1 CVB3-Induced AVMC in Mice The animal grouping and experimental design are shown in Figure .
During the experiment, the mice in the Control group were in good condition, exhibiting shiny fur, increasing weight, a normal diet, and quick responses. In contrast, the AVMC mice exhibited obvious signs of viral infection, such as weakness, irritability, lusterless hair, anorexia, and weight loss, and eight of them died before the end of the experiment. Upon examining the gross appearance of the mouse hearts on Day 7 postinfection, we noted reduced heart size and apparent fibrinous exudation on the epicardial surface of the AVMC mice (Figure ). To evaluate the severity of AVMC in more detail, HE staining and histological analysis were performed on transverse sections of the hearts (Figure ). The AVMC group displayed significant focal necrosis, inflammatory cell infiltration, myocardial fiber collapse, and higher cardiac pathological scores, while no obvious histological abnormalities were observed in the Control group. These findings suggest that the CVB3-induced AVMC mouse models were successfully established. 3.2 Altered Myocardial Metabolomic Profiles in AVMC Mice Subsequently, we applied an untargeted UHPLC-MS/MS-based metabolomics approach to determine the metabolome alterations between the Control (C-1 ~ 8) and AVMC (A-1 ~ 10) groups. After data processing and filtering, a total of 2671 metabolites were obtained, of which 2117 and 554 metabolites were annotated according to MS1 and MS2, respectively. A correlation heatmap was used to show the relationship between samples, indicating that samples within the same group were highly correlated (Supporting Information S1: Figure ). The 3D PCA plot demonstrated clear separation of metabolites between the two groups, with 26.4%, 16.5%, and 8.7% of the variation attributed to the principal components PC1, PC2, and PC3, respectively (Figure ). Similarly, the OPLS-DA model was constructed to screen the DMs, and a clear separation was observed between the two groups (R2X = 0.575, R2Y = 0.979, Q2Y = 0.954; Figure ). The permutation test indicated that the OPLS-DA model was reliable and not overfitted, as the permuted R2 and Q2 values were lower than the original values (Figure ). To screen the DMs, the VIP in the OPLS-DA model (VIP > 1.0) and the p value from Student's t test ( p value < 0.05) were used as the criteria. Collectively, 149 DMs were identified between the two groups, including 64 upregulated and 85 downregulated metabolites (Supporting Information S2: Table and Figure ). Of these, 7 metabolites in positive mode and 71 metabolites in negative mode were nonredundantly assigned at the MS1 level. Similarly, 28 metabolites in positive mode and 43 metabolites in negative mode, assigned at the MS2 level, were differentially expressed and displayed in a hierarchical clustering heatmap for better visualization (Figure ). Based on the HMDB classification, the top four superclasses to which the DMs belonged were Lipids and lipid-like molecules (36.84%), Organic acids and derivatives (21.93%), Organoheterocyclic compounds (11.40%), and Organic oxygen compounds (8.77%) (Supporting Information S1: Figure ). Additionally, Pearson correlation analysis of the top 40 DMs at the MS2 level revealed some correlation between different DMs (Supporting Information S1: Figure ), suggesting that the identified DMs might cooperate with each other to participate in the pathogenesis of AVMC.
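The logic of the label-permutation test can be sketched in a few lines. Because scikit-learn implements no OPLS-DA, the sketch below substitutes plain PLS regression on class labels (PLS-DA) and simulated data; it reproduces the idea of comparing the observed fit against 200 label-permuted refits rather than the exact SIMCA-style diagnostics.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)
X = rng.normal(size=(18, 50))            # 18 samples x 50 metabolite features (simulated)
y = np.array([0] * 8 + [1] * 10, float)  # Control (n=8) vs. AVMC (n=10) labels
X[y == 1, :5] += 1.0                     # inject a real group difference

def model_r2(X, y):
    """Fit a two-component PLS-DA model and return its R2 for the class labels."""
    pls = PLSRegression(n_components=2).fit(X, y)
    return r2_score(y, pls.predict(X).ravel())

observed = model_r2(X, y)
permuted = np.array([model_r2(X, rng.permutation(y)) for _ in range(200)])

# The model passes the test when the permuted fits stay below the observed fit.
p_perm = (np.sum(permuted >= observed) + 1) / (permuted.size + 1)
print(f"observed R2Y = {observed:.3f}, permutation p = {p_perm:.3f}")
```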
3.3 Pathways Associated With Myocardial DMs Pathway analysis was conducted based on the KEGG database for the DMs. As shown in Supporting Information S2: Table and Figure , several key pathways were significantly enriched ( p value < 0.05), such as Sulfur metabolism, Nitrogen metabolism, Taurine and hypotaurine metabolism, Lysine degradation, Metabolic pathways, Arginine and proline metabolism, and Propanoate metabolism. To identify significantly altered metabolic pathways in AVMC mice, we also performed metabolic pathway analysis using the MetaboAnalyst 5.0 webserver. A total of 51 metabolic pathways were identified, of which six pathways were significant ( p value < 0.05 and impact value > 0.2), including Butanoate metabolism, β-Alanine metabolism, Linoleic acid metabolism, Glycerophospholipid metabolism, D-Glutamine and D-glutamate metabolism, and Taurine and hypotaurine metabolism (Supporting Information S2: Table and Figure ). These significantly altered metabolic pathways were related to Energy metabolism, Amino acid metabolism, and Carbohydrate metabolism, suggesting that CVB3 infection resulted in significant myocardial metabolic disturbances in mice. 3.4 Altered Myocardial Proteomics Profiles in AVMC Mice Myocardial tissue samples were also collected and analyzed using a DIA-based proteomic method. A total of 6793 proteins were screened for further analysis, of which 6752 proteins were identified in both groups (Figure ). PCA and Pearson correlation analysis showed a clear distinction between the Control and AVMC groups, indicating significant differences in proteomic profiles between the two groups (Figure and Supporting Information S1: Figure ). DPs were then identified based on the criteria of |Fold Change| > 1.5 and p value < 0.05. Volcano plot analysis showed that there were 1385 DPs (1092 upregulated and 293 downregulated) between the two groups (Supporting Information S2: Table and Figure ). Meanwhile, the DPs were grouped by unsupervised hierarchical clustering (Figure ), supporting the reliability and credibility of the AVMC mouse model for investigating the DPs between the two groups. 3.5 GO and KEGG Enrichment Analyses of DPs Next, GO and KEGG enrichment analyses were performed to determine the biological functions of the identified DPs. The significantly enriched GO terms for DPs are shown in Supporting Information S2: Table and Figure . Specifically, these DPs were enriched in a total of 46 GO terms, classified into the BP category with 25 GO terms, the MF category with 19 GO terms, and the CC category with 2 GO terms. The top three enriched terms in the BP category were Cellular process, Metabolic process, and Biological regulation, with 1155, 874, and 838 DPs, respectively. For the MF category, these DPs were mainly enriched in terms related to Binding, Catalytic activity, and Molecular function regulator. In the CC category, Cellular anatomical entity and Protein-containing complex were the only two significantly enriched terms. To investigate the pathways involving the DPs, we performed KEGG pathway annotation. As shown in Supporting Information S2: Table and Figure , metabolism-related pathways, including Global and overview maps, Amino acid metabolism, Carbohydrate metabolism, Lipid metabolism, Metabolism of cofactors and vitamins, Glycan biosynthesis and metabolism, Nucleotide metabolism, Xenobiotics biodegradation and metabolism, Energy metabolism, Metabolism of other amino acids, and Biosynthesis of other secondary metabolites, were important components involved in the AVMC process. Thus, metabolic disturbances in AVMC might be accompanied by differential expression of many proteins.
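Enrichment p values of the kind reported here are conventionally computed from the hypergeometric distribution. The sketch below reuses the study's overall counts (554 MS2-annotated metabolites, 149 DMs) but assumes an invented pathway of 12 members with 7 DM hits:

```python
from scipy.stats import hypergeom

def enrichment_p(n_background: int, n_pathway: int, n_hits: int, n_pathway_hits: int) -> float:
    """P(>= n_pathway_hits of the n_hits selected molecules fall in a pathway of
    size n_pathway, drawn from n_background annotated molecules)."""
    return hypergeom.sf(n_pathway_hits - 1, n_background, n_pathway, n_hits)

p = enrichment_p(n_background=554, n_pathway=12, n_hits=149, n_pathway_hits=7)
print(f"pathway enrichment p = {p:.4f}")
```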
3.6 Integrated Metabolomics and Proteomics Pathway Analysis To associate the results of our metabolomics and proteomics analyses, we used KEGG pathways as the common framework and conducted a mapping analysis based on the DMs and DPs. As shown in Supporting Information S2: Table and Figure , the Venn diagram revealed 95 KEGG pathways in which both DMs and DPs were involved, and a total of 54 metabolism-related KEGG pathways were identified. Based on a DM p value of less than 0.05, the most enriched shared KEGG pathways included Sulfur metabolism, cAMP signaling pathway, Nitrogen metabolism, Neuroactive ligand–receptor interaction, GABAergic synapse, Taurine and hypotaurine metabolism, Lysine degradation, Glutamatergic synapse, Metabolic pathways, Alcoholism, Arginine and proline metabolism, Propanoate metabolism, and Protein digestion and absorption (Figure ). Metabolic pathways contained the largest number of DMs and DPs.
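At its core, this integration step is a set intersection over pathway annotations. In the sketch below the KEGG identifiers are real, but the assignment of pathways to the metabolome and proteome hit lists is purely illustrative:

```python
# Hypothetical KEGG pathway annotations mapped from the DMs and the DPs.
dm_pathways = {"mmu00920 Sulfur metabolism", "mmu00910 Nitrogen metabolism",
               "mmu00430 Taurine and hypotaurine metabolism", "mmu01100 Metabolic pathways"}
dp_pathways = {"mmu00910 Nitrogen metabolism", "mmu01100 Metabolic pathways",
               "mmu04024 cAMP signaling pathway", "mmu00310 Lysine degradation"}

shared = dm_pathways & dp_pathways            # pathways supported by both omics layers
metabolome_only = dm_pathways - dp_pathways   # the three regions of the Venn diagram
proteome_only = dp_pathways - dm_pathways

print(f"shared ({len(shared)}):", sorted(shared))
print(f"metabolome only ({len(metabolome_only)}):", sorted(metabolome_only))
print(f"proteome only ({len(proteome_only)}):", sorted(proteome_only))
```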
Discussion Myocardial metabolic disturbances have been shown to be strongly associated with the development of myocarditis . In this study, we successfully constructed a CVB3-induced AVMC mouse model and performed myocardial untargeted metabolomics analysis using UHPLC-MS/MS. The results showed distinctly altered metabolic profiles and significantly disturbed metabolic pathways in the myocardium of AVMC mice, particularly Global and overview maps, Energy metabolism, Amino acid metabolism, and Carbohydrate metabolism. By integrating the proteomics data, we further mined many DPs that may be involved in the regulation of these metabolic pathways. To the best of our knowledge, this is the first study to reveal the association between the myocardial metabolome and proteome in AVMC mice. Our study provides a new perspective for elucidating the molecular mechanisms of AVMC. Metabolites are the final products of all cellular activity and are closely related to phenotypes . As the end point of biological information transmission, they can reflect the physiological and pathological states of the heart. In this study, 149 DMs were identified in the myocardium of AVMC mice, with Lipids and lipid-like molecules accounting for the largest proportion. This is in contrast to the study by Kong et al.
, who identified a smaller number of DMs and mainly categorized them as Organic acids and derivatives, which may be related to the different detection methods and time points. Notably, 7 of the top 10 DMs with the highest VIP values were markedly downregulated. For example, L-Lactic acid, which had the highest VIP value, is the end product of glycolysis, generated from the reduction of pyruvate by lactate dehydrogenase . Previous studies have shown that L-Lactic acid is not only an important energy source but also serves as a multifunctional signaling molecule involved in a variety of physiological and pathological processes, including angiogenesis, neoplasia, inflammation, and immune regulation . Interestingly, L-Lactic acid can act as a natural suppressor of RIG-I-like receptor signaling by targeting mitochondrial antiviral signaling . L-Lactic acid reduction heightens type I IFN production to protect mice from viral infection, suggesting an antagonistic effect of glycolysis on antiviral immunity . Linoleic acid, the most abundant fatty acid found in cardiac mitochondria, is essential for the maintenance of mitochondrial function . Mitochondrial damage caused by CVB3 infection is a key driver of cardiomyocyte death during AVMC progression and perhaps a major reason for Linoleic acid downregulation . The cardioprotective effects of Linoleic acid intake have been demonstrated, and it may be a therapeutic agent for AVMC treatment in the future . Adenosine, the most downregulated metabolite at the MS2 level, is a naturally occurring breakdown product of ATP and exerts multiple physiological effects, such as regulation of blood flow, heart rate, and cardiac contractility . Adenosine preconditioning has been shown to protect the heart from ischemia-reperfusion injury through diverse mechanisms, including increased antioxidant enzyme production, decreased inflammation, interaction with opioid receptors, and activation of various kinases (e.g., PKC, MAPK, Akt, and tyrosine kinase) . Adenosine also inhibits IL-2 production, thereby reducing CD4+ T cell activation and proliferation . Thus, reduced levels of Adenosine in AVMC may lead to uncontrolled activation of lymphocytes and trigger persistent myocardial inflammation. These DMs suggest promising avenues for future mechanistic studies to identify novel targets for AVMC therapy. Global and overview maps (Metabolic pathways) was the pathway with the most enriched DMs and contained 152 DPs, implying a complex metabolic regulatory network during AVMC. Cardiac contractile performance is tightly coupled to Energy metabolism. CVB3 infection impairs energy homeostasis in cardiomyocytes, leading to decreased ATP production, increased reactive oxygen species (ROS), heightened glycolysis, and disturbances in amino acid and lipid metabolism, which in turn exacerbate inflammation and cell death . In this study, Energy metabolism was one of the most disturbed metabolic pathways, including Sulfur metabolism and Nitrogen metabolism. Hydrogen sulfide (H2S) is an important compound produced during Sulfur metabolism. Tst and Ethe1, key enzymes involved in the mitochondrial oxidative catabolism of H2S , were significantly downregulated in the Sulfur metabolism pathway. It has been shown that they oxidize H2S to sulfite in mitochondria for detoxification and possibly for the production of extra ATP .
Combined with the metabolomic pathway enrichment analysis, it was also found that the metabolic pathway associated with L-Glutamic acid and L-Glutamine was Nitrogen metabolism, the basis of Energy metabolism. DPs involved in the Nitrogen metabolism pathway were annotated as Carbonic anhydrases (Car), including Car1, 3, 13, and 14, suggesting a strong correlation between Nitrogen metabolism and Carbon metabolism in AVMC . Targeted regulation of key enzymes involved in Sulfur and Nitrogen metabolism may help improve Energy metabolism in cardiomyocytes, thereby alleviating AVMC progression. The massive death of cardiomyocytes during AVMC results in protein degradation, in which most of the amino acids are reused to synthesize new proteins for tissue repair, while some also act as metabolic substrates to provide energy . Hence, disordered Amino acid metabolism is an important component of the AVMC process. In this study, the enrichment of DMs in Taurine and hypotaurine metabolism, Lysine degradation, and Arginine and proline metabolism was the most significant. Taurine is the major osmolyte in cardiomyocytes, and it mainly affects osmoregulation, bile acid conjugation, cell proliferation, viability, and prevention of oxidant-induced tissue damage. Hypotaurine, a precursor of taurine synthesis, is another metabolite possessing antioxidant capacity. This metabolic pathway is essential for the cellular stress response, as taurine and hypotaurine are responsible for protection during osmotic stress and oxidative stress . Notably, Ggt5, a key protein involved in this pathway, was significantly highly expressed in AVMC. It has been shown that overexpression of Ggt5 disturbs glutathione homeostasis and affects heme oxygenase-1 levels, leading to excessive oxidative stress . Thus, disturbances in Taurine and hypotaurine metabolism may be associated with an imbalance in the intracellular oxidative status of CVB3-infected cardiomyocytes. Lysine is an essential amino acid in humans that can be degraded if present in excess. Razquin et al. reported an association between excessive lysine levels and a high risk of diabetes-concomitant cardiovascular diseases. Lysine degradation is beneficial because of the production of acetyl-CoA for the citric acid cycle, and lysine catabolites also contribute to the relief of osmotic stress . Lysine degradation primarily occurs in the mitochondria. We found that the mitochondrial matrix enzyme Gcdh, which is involved in this metabolic process, was significantly downregulated in AVMC, suggesting that the Lysine degradation process may be suppressed by CVB3 infection . Arginine and proline metabolism is one of the central pathways in the biosynthesis of amino acids. Among them, arginine is a precursor for the synthesis of many important biomolecules, including nitric oxide (NO), polyamines, creatine, agmatine, proline, and glutamate . Arginine also acts as a key regulator of multiple biological processes, including gene expression, signal transduction, inflammation, and immune response . This metabolic pathway may be significantly weakened in AVMC, supported by decreased levels of Creatine, Creatinine, and L-glutamic acid, as well as elevated levels of Arginases (Arg1 and Arg2) and Glycine amidinotransferase (Gatm) . However, the detailed mechanisms of action of Arginine and proline metabolism in AVMC remain unknown and require further elucidation.
In addition to fatty acids, Carbohydrate metabolism is another source of energy for the heart, accounting for about 10%–40% of the heart's total energy supply. In response to various stresses, the heart shifts its fuel substrate preference from fatty acids to carbohydrates through altered cardiac metabolic gene expression. KEGG signaling pathway analysis in the present study revealed that Propanoate metabolism, which is involved in Carbohydrate metabolism, was significantly impacted. Succinic acid is an important intermediate in Propanoate metabolism and produces ATP via the gluconeogenesis pathway. It is also a crucial signal connecting Carbohydrate metabolism with other metabolic pathways. Therefore, decreased levels of succinic acid indicate reduced ATP biosynthesis. Additionally, most of the proteins involved in the regulation of Propanoate metabolism, including Bckdha, Mcee, Suclg1, Mmut, Aldh6a1, and Dbt, were significantly downregulated, further confirming the marked inhibition of this pathway by CVB3 infection. Like other viruses, CVB3 manipulates lipids in host cells to facilitate viral replication. A recent study has shown that obesity exacerbates CVB3 infection through lipid‐induced mitochondrial ROS generation, suggesting a strong link between CVB3 infection and Lipid metabolism. Based on the MetPA pathway analysis, we found that, in addition to Linoleic acid metabolism, Glycerophospholipid metabolism was also significantly affected. Among these changes, the most notable was the upregulation of Glycerophosphocholine, a major component of cell membranes. The increase in Glycerophosphocholine likely reflects membrane degradation and elevates the risk of cardiovascular diseases. However, its role in AVMC remains unclear. Some limitations of this study are worth noting. First, since the information revealed by metabolomics is much narrower than that of proteomics, the integrated analysis cannot cover every aspect of the BP. Second, the integrated metabolomics and proteomics analysis of myocardium at different time points after CVB3 infection was not investigated here, and multiomics studies in female AVMC mice are another direction of interest. Third, quantitative validation (e.g., western blot analysis and targeted metabolomics) of the molecules identified in this study is necessary before they can be generalized to other studies. Finally, targeted regulation of key DMs and DPs within specific metabolic pathways will aid in identifying potential therapeutic interventions. This can be achieved through methods such as gene editing (e.g., CRISPR‐Cas9), RNA interference (e.g., siRNA/shRNA), pharmacological agents, epigenetic regulation, transcription factor modulation, and alterations in the microenvironment. Future integrated multiomics analyses in patients with AVMC, combined with longitudinal or interventional studies, are warranted to strengthen and broaden our findings.

Conclusion

In summary, this study presents a comprehensive analysis of metabolic and proteomic profiles in AVMC mice. Our results suggest that CVB3‐induced AVMC is closely related to several metabolic pathways that are accompanied by changes in DP expression. These data provide further insights into the pathogenesis of AVMC and may help identify potential targets for improved clinical treatment.

Yimin Xue: conceptualization, data curation, formal analysis, investigation, methodology, writing – original draft, writing – review and editing.
Jiuyun Zhang: conceptualization, investigation, methodology, writing – original draft. Mingguang Chen: data curation, investigation. Qiaolian Fan: data curation, investigation. Tingfeng Huang: investigation, software. Jun Ke: resources, software, supervision. Feng Chen: conceptualization, funding acquisition, supervision, validation, writing – review and editing. All animal protocols were evaluated and approved by the Institutional Animal Care and Use Committee of Fujian Medical University (Permit No. IACUC FJMU 2023‐0278). The authors have nothing to report. The authors declare no conflicts of interest. Supporting information. |
Expanded trade: tripartite interactions in the mycorrhizosphere | 0407ffc7-2ee3-497b-9a50-c8bbf0865300 | 11265408 | Microbiology[mh] | Similar to the microbiota residing in the digestive tracts of vertebrates , microbes proliferating at the interface of plant roots and soil, also called the rhizosphere, can help improve plant health and agricultural productivity . Particularly, the interactions between plant roots, mycorrhizal fungi (obligate root endosymbionts), and the greater rhizospheric community (bacteria, archaea, protists, and viruses), also called the mycorrhizosphere, can increase nutrient availability in soil and its uptake by plants . The United Nations predicts the global population will increase to ~9.5 billion by 2050, requiring an ~ 70% increase in food production . A major limiting factor to agricultural productivity is plants’ ability to acquire and use soil nutrients, particularly phosphorus (P) and nitrogen (N). Although synthetic fertilizers increase available nutrients to boost crop yields , about 40%–60% of applied N and varying P amounts are often lost in agricultural runoff water . This runoff causes harmful algal blooms and pollutes groundwater . Furthermore, genetic and environmental factors also impact the plants’ ability to acquire applied N and P , making it difficult for growers to apply the right amount of fertilizers at the right time. To reduce further environmental damage, we need to adopt practices that can reduce the dependence on synthetic fertilizers. With decades of research on plant-associated microbes, we can harness the benefits of soil microbial relationships with plants to improve crop nutrient uptake. The exemplary plant–microbe nutrient relationship involves N-fixing bacteria known as diazotrophs that provide a variety of crops with atmospherically derived N in forms accessible for plant uptake. Plants in the Fabaceae family, known as legumes, form a symbiotic relationship with root-nodulating diazotrophs such as as rhizobia to acquire N in the form of NH 3 /NH 4 + in exchange for carbon (C) . Farmers use this relationship to provide crop rotations, hence reducing chemical N input . However, in case of non-legumes such as wheat, maize, and rice, non-rhizobial free-living or root-associative diazotrophs only loosely associate with roots and hence provide little N compared to rhizobia. Diazotrophs are also subject to tight feedback regulations of N-fixation, resulting in little to no ammonia excretion . Since cereal crops form a major fraction of human calorie intake compared to legumes , improving biological nitrogen fixation (BNF) in non-legumes is of great interest . Over 70% of land–plant roots form symbiotic partnerships with obligate biotrophs called arbuscular mycorrhizal fungi (AMF), and about 13% of land plants are colonized by other mycorrhizal fungi, including ectomycorrhizal, ericoid, and orchid mycorrhizal fungi . In this review, we focus on AMF interactions owing to their intimate relationship with a vast majority of land plants. Followed by biochemical exchanges, AMF invaginate their hyphae directly into cortical cells of plant roots, creating “arbuscules” where nutrient exchanges occur . AMF transport P and N to the plant in exchange for resources, usually C . 
With limited ability to access and process organic forms of P and N, AMF rely on other soil microbes to free up these valuable resources in a distinct focal environment called the "mycorrhizosphere", a complex nexus of plant roots, extraradical hyphae of the mycorrhizal fungi, and other microbes in the rhizosphere. This review looks at the mycorrhizosphere through the perspective of tripartite interactions between plants, AMF, and the microbial community. While there is extensive research on nutrient dynamics at the AMF–plant interface and relatively less at the interface of the AMF and the soil microbial community, these research perspectives address two separate interfaces of a whole system. Here, we consider these different interfaces simultaneously and review research that has focused on this wider perspective. Finally, we discuss methods that could provide a more comprehensive understanding of the tripartite network. We aim to reveal an interconnected system of nutrient exchange among plants, AMF, and microbes, presenting these tripartite interactions holistically for a better understanding of nutrient exchange.
Phosphorus

While plants can acquire inorganic and organic P in close proximity to roots, hyphae of AMF can extend into the soil far beyond the root surface and access inorganic P located much farther from the plant. AMF's successful colonization of roots depends on soil P levels, cementing P as an important factor in the establishment and maintenance of symbiosis. AMF allocate inorganic P to the most advantageous places with precision, providing more P to newly forming lateral roots that offer more C to the AMF because of their more immediate need for P. We now understand that AMF-sourced inorganic P in plants depends on the photosynthate provided to the AMF, suggesting that control of P flow is mediated by a C price. This so-called "exchange rate" has favored evolutionary fitness-enhancing strategies in both organisms. Importantly, AMF do not mobilize organic P but recruit and interact with soil bacteria, creating a tripartite system involving nutrient trade. Phosphorus-solubilizing bacteria (PSB) help break down P-rich and chemically complex phytate in optimum P conditions, allowing for an increase in plant-shoot P when bacteria and AMF are present together, relative to AMF or bacteria alone. This suggests that AMF can acquire bacterially solubilized P and transfer it to the plant. However, this interaction is complex and qualitatively dependent on inorganic P in the soil. Fructose exuded from extraradical hyphae induces the expression of phosphatases and P transporters in the PSB, leading to phytate mineralization. This induction suggests that hyphal exudates act as a cue to initiate P acquisition from bacteria, presenting a possible inverse relationship between the PSB and AMF. The plant–AMF nutrient exchange would then be influenced by the AMF–PSB nutrient exchange. Research on P and C allocation and exchange strategies at the AMF–PSB and AMF–plant interfaces simultaneously would help characterize the nutrient value and fitness dynamics in the system.

Nitrogen

AMF can also acquire and transfer N to their host. Transfer of C from the host into AMF tissue directly induces N uptake and transport in the AMF, suggesting an "exchange rate" similar to that observed with P. AMF acquire and transfer exogenous N to the plant from different sources, including decomposed organic matter and ammonium. Hestrin et al. tracked the flow of ¹⁵N, derived from labeled organic matter, through AMF hyphae into plant roots. They also tracked the flow of ¹³C and visualized photosynthate in hyphae and hyphal-associated bacterial decomposers, qualitatively showing the movement of C from plants to bacteria through AMF. Even if this transfer is passive, it may change the value of N in tripartite nutrient dynamics. When N competition is considered, more microbial players become important. Bukovska et al. observed the suppression of specific bacterial communities, including ammonium oxidizers, in the presence of AMF. In contrast, protist populations were uninhibited. Based on this observation, Bukovska et al. proposed that protist grazing of bacterial decomposers, and the subsequent release of ammonium ions, provided N for AMF without consequences to the protists. While this interaction needs more investigation, experiments that consider only plants, AMF, and N sources may exclude multiple kingdoms of taxa that perform N-cycling processes.
Since AMF cannot acquire C in the absence of plants or mobilize different forms of organic N, AMF-related nutrient dynamics are influenced by C provided by the host plant and inorganic N released by soil microbes, regardless of any byproduct mutualism arising from microbial interactions. The availability and exchange of organic forms of nutrients change the rules of previously identified nutrient exchanges between the plants, AMF, and the microbes associated with hyphae. Therefore, we must design our experiments using appropriate tools to reassess the rules governing these exchanges and underpinning the ecophysiology of AMF.

Tools to unravel the tripartite network

We have relatively more information on the microbes present in the mycorrhizosphere than on the processes they perform. These interactions happen in close proximity to roots and extraradical hyphae, making it challenging to identify the microbes responsible for a specific nutrient-cycling process. In this section, we will review how the research field is well poised to achieve this identification through technology and creative methodology.

Compartmented systems

Since AMF are obligate biotrophs, and mycorrhizosphere microbes live in close proximity to hyphae and roots, discerning between the tripartite members' physiology is challenging. A simple and elegant approach to overcoming this concern of proximity is building compartmented systems that create root- or microbe-free zones. Further addition of air gaps and micron mesh to these compartmented chambers can allow interaction of AMF with bacterial communities in root-free zones (RFZs), while still allowing for nutrient flow between the members. Various designs of compartmented systems are in use, each tailored to the specific question being asked. The simplest method to separate tripartite members is a compartmented petri plate. The plate's raised wall separates two compartments containing different media. Only AMF hyphae in these systems can grow over the wall. The root compartment (RC) side of the plate contains carrot root-organ cultures inoculated with AMF. The AMF grows over the wall, creating the hyphal compartment (HC), a root-free zone. Labeled nutrients or bacterial species can be added to the HC, and if the bacterially derived or labeled nutrient is found within the RC, it suggests AMF-dependent transport of the nutrient. These plates are effective at elucidating mechanisms, but the one-dimensional medium and lack of photosynthetic tissue limit our ability to extrapolate system processes to more realistic conditions. Micron meshes can effectively isolate microbial interactions with hyphae from roots, avoiding confounding between kingdoms in more realistic settings. Hestrin et al. created an RFZ in a mesocosm by wrapping ¹⁵N-labeled organic material in a micron mesh. This simple addition to the experiment prevented the roots from directly accessing this N source, suggesting N was transported out of the RFZ by AMF hyphae. Hence, the mesh becomes a powerful tool for separating roots and hyphae while tracking nutrient flow within the tripartite system. In another study, use of in-growth cores also created RFZs within the core, allowing for extra-hyphal microbiome assessment (53). The core could be used to study the metabolome and transcriptome of the AMF–microbe interface.
Untargeted mass spectrometry from a core could provide information on AMF exudates that may be important in recruiting other microbes. Hyphal transcriptomics within the core would investigate gene-expression changes of only the extraradical mycelium of AMF, whereas core metatranscriptomics could elucidate processes within the extra-hyphal microbiome. Thus, in-growth cores provide a simple and cost-effective tool to propel our understanding of the tripartite system forward. The addition of an air gap to an RFZ can further isolate the tripartite members. Kakouridis et al. created a mesocosm with two compartments separated by a 1-mm air gap that prevents water transfer between the compartments. Each connecting wall of the air gap had a micron mesh that created an RFZ. This allowed AMF to proliferate in both compartments, so any intercompartmental water flow would have to be through hyphae. The authors then added H₂¹⁸O in the RFZ and observed H₂¹⁸O in the plant, suggesting AMF hyphae-mediated water transport. This combination of the air gap and micron mesh could also prevent bacterial transfer between compartments, creating a bacteria-free zone (BFZ). The combination of the BFZ, RFZ, and the heavy-isotope techniques discussed in the next section would allow for controlled tracking of bacterially derived nutrients and could help discern the partners responsible for transport of those nutrients. Overall, compartmented chambers are useful in isolating the different tripartite members and can be enhanced through the addition of well-established biochemical techniques. In the following section, we will explore how these biochemical techniques and experimental design can help identify the processes performed by the tripartite members.

Listening among the noise

Soil is an inherently complex chemical matrix. Diverse mineral makeups, life forms, and organic materials create challenges due to adsorption, hydrological variability, ion exchange, pH variability, and more when characterizing nutrient processes in the soil. These challenges make techniques such as comparative mass spectrometry (CMS) less useful on their own. However, combining CMS with heavy isotopes can track nutrient flow through the tripartite system with precision. Different mass spectrometry approaches can be used with stable-isotope tracking to identify how nutrients move throughout this system. Hestrin et al. used a well-established technique, isotope-ratio mass spectrometry (IRMS), to assess ¹⁵N levels in shoots and nanoscale secondary-ion mass spectrometry (NanoSIMS) with both ¹³C and ¹⁵N to visualize nutrient exchange within the tripartite system under different levels of soil N. Kakouridis et al. recorded transpiration and translated the data with an isotopic mixing model to quantify the H₂¹⁸O transferred by AMF hyphae. Smith et al. used ³³P to show the importance of fungi in the P-nutrient trade, which was independent of the plant growth response. The radioactivity of P isotopes makes environmental application unsafe, so labeling PO₄³⁻ with ¹⁸O has been attempted, but this is limited by O transfer from PO₄³⁻ to H₂O in many biological processes. Tracking different stable isotopes can help elucidate mechanisms of nutrient dynamics driven by the respective microbes precisely within the noisy environment of the mycorrhizosphere.
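As a concrete reference, a two-end-member isotopic mixing model of the kind used to quantify hyphal water transfer can be written as follows. This is a generic sketch in our own notation, not necessarily the exact model used by Kakouridis et al.:

\[
f_{\mathrm{AMF}} = \frac{\delta^{18}\mathrm{O}_{\mathrm{plant}} - \delta^{18}\mathrm{O}_{\mathrm{unlabeled}}}{\delta^{18}\mathrm{O}_{\mathrm{label}} - \delta^{18}\mathrm{O}_{\mathrm{unlabeled}}}
\]

where f_AMF is the fraction of plant water derived from the labeled, hyphae-only compartment, and the δ¹⁸O terms are the isotopic compositions of the plant water, the unlabeled background water, and the labeled source water, respectively.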
Stable isotope probing (SIP) can identify the species receiving labeled nutrients. Following a pulse of ¹³CO₂, researchers isolated microbes that had assimilated the heavy isotope into their DNA through density-gradient centrifugation. Sequencing of these fractions revealed the metabolically active microbes. Recently, Nuccio et al. enhanced SIP with semi-automated, high-throughput sequencing (HT-SIP), identifying AMF-associated taxa enriched in ¹³C after ¹³CO₂ exposure. Combining this pipeline with compartmented mesocosms would allow HT-SIP sampling of different compartments to simultaneously answer who receives a nutrient and track its flow in the system. The combination of stable-isotope tracking and mechanical isolation enhanced the precision and accuracy of these experiments. Organic matter covered by a micron mesh ensured that the transfer of ¹³C to bacteria and the uptake of ¹⁵N were not direct from roots but through hyphae. The combination of the air gap and mesh likewise showed water transport through hyphae. Overall, these studies demonstrate the impact of simple tweaks in experimental design.

Deconstructing and reconstructing interactions

The practical application of these findings necessitates large-scale experiments. In vitro experiments provide mechanistic information about microbial activity in the mycorrhizosphere. However, species diversity and abiotic variables increase significantly in the field, leading to confounding results and interpretations. So, how can we overcome these complex experimental hurdles to make findings more relevant in a field context? Below, we review recent methods that can bridge the gap between in vitro and in situ experiments. Statistical modeling can help find impactful variables within the myriad of field data. Lutz et al. used a combination of well-known methods to identify variables correlated with the mycorrhizal growth response (MGR) in AMF-inoculated fields. They reduced the soil parameters through pairwise correlation, filtering out parameters that did not correlate with the MGR. They fed the filtered parameters into a random forest model, a stepwise model, and an exhaustive model screening using "glmulti", and found 15 parameters that correlated with MGR in each method. These 15 parameters were then used as vectors in a principal component analysis with MGR values, plotted to assess each parameter's importance in separating the different MGR groups: high, medium, and low. The same technique was used with microbiome composition data to identify MGR-correlated taxa. This multimodal approach suggested correlations whose predictive power then needed to be assessed. Lutz et al. used the correlated parameters in a generalized linear model and found microbial taxa that predicted the MGR more accurately than any other parameter. This approach narrowed down the number of variables that are important to consider when assessing the performance of an AMF inoculum in the field. Furthermore, it could improve in vitro experimentation by introducing only the most impactful variables and taxa from the field into a controlled setting, avoiding needless complexity while ensuring experiments are contextually relevant.
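A minimal sketch of this screening logic is shown below. Lutz et al. worked in R (including "glmulti"); this Python version only mirrors the overall correlation → random-forest → PCA → linear-model flow, and all function names, variable names, and thresholds are illustrative rather than theirs.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

def screen_parameters(X: pd.DataFrame, mgr: pd.Series, r_cut: float = 0.2, top_k: int = 15):
    """Screen soil/microbiome parameters against the mycorrhizal growth response (MGR)."""
    # Step 1: pairwise correlation filter -- drop parameters with little
    # linear association with MGR (the threshold r_cut is illustrative).
    corr = X.apply(lambda col: abs(np.corrcoef(col, mgr)[0, 1]))
    kept = X.loc[:, corr > r_cut]

    # Step 2: rank the surviving parameters by random-forest importance and keep top_k.
    rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(kept, mgr)
    top = kept.columns[np.argsort(rf.feature_importances_)[::-1][:top_k]]

    # Step 3: project samples onto the first two principal components of the
    # selected parameters, e.g., for plotting high/medium/low MGR groups.
    scores = PCA(n_components=2).fit_transform(kept[top])

    # Step 4: fit a simple linear model (a GLM with identity link) to gauge
    # the predictive power of the selected parameters.
    model = LinearRegression().fit(kept[top], mgr)
    return top, scores, model
```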
Co-occurrence network analyses identify core microbial members correlated with the stability and resilience of the soil microbiome. These findings helped develop consortia of microbes, referred to as synthetic communities (SynComs), that can be used in vitro and in the field to provide important core community processes. These SynComs can help bridge the gap between the sterile environment of the lab and the complex environment of the field, enabling scientists to infer causal relationships in the mycorrhizosphere. Exometabolomic assays combine field taxa and laboratory sterility to provide insights into more prominent chemical phenomena. The different members of the tripartite network excrete many metabolites that impact the mycorrhizosphere. Zhalnina et al. collected root exudates at different plant growth stages and grew field-isolated bacteria in media supplemented with these root exudates to observe the effects of the metabolites on the microbial community. This reductionist approach can help us understand field-isolated microbial responses to certain metabolite profiles with fewer complications. Taking the information gained through these modeling and metabolomic approaches to compartmented-apparatus experiments would increase confidence in extrapolating in vitro results to the field. We can assess how tripartite member processes influence community ecology by introducing treatments inoculated with a core SynCom. Additionally, we can simulate field conditions by treating soils in the compartmented chambers with metabolic profiles that closely resemble those found in fields. Using statistical correlations and models, we can ensure impactful variables are present in experiments without erroneous complications. Building upon this work, we can enrich in vitro experiments with in situ findings and vice versa.

The importance of tripartite perspective

In this review, we highlighted research suggesting that emphasis is needed on all the kingdoms involved in the tripartite nutrient exchange. While P-related research is splintered between bipartite interactions, research focused on the exchange between all tripartite members will be valuable. We also compiled the growing body of research suggesting that this tripartite exchange occurs in the "nitrogen market" as well. Furthermore, we must consider that P dynamics influence N dynamics and vice versa, and integration of N-related research with that of P is vital for accurate interpretation. Careful methodology can uncover the ecophysiological phenomena underpinning these exchanges. Here we presented a framework and the tools to piece together small portions of this tripartite network, informing increasingly scalable research that will enhance our ability to improve nutrient-use efficiency by harnessing the microbes residing in the mycorrhizosphere.
|
Impella malrotation affects left ventricle unloading in cardiogenic shock patients | 0e917b11-9b0b-4e3b-b3fa-5c29584a769a | 11769641 | Surgical Procedures, Operative[mh] | The percutaneous trans‐aortic microaxial‐flow Impella pump (Abiomed, Danvers, MA) is a powerful temporary mechanical circulatory support (MCS). The unique Impella mechanism generates continuous ejection of blood from the left ventricle (LV) into the ascending aorta, thus unloading the LV. If the aortic valve (AV) is competent and the device is properly positioned, this translates into reduced LV end‐diastolic pressure (LVEDP) and volume (LVEDV), with favourable myocardial mechanical and metabolic effects for the failing LV. Optimal positioning is thus warranted to avoid pump‐related adverse events. , Across the spectrum of device positions inside the LV, Impella malrotation has been defined as a condition characterized by (1) normal pressure and motor current waveforms on the device console, (2) proper depth of the device across the AV, but (3) abnormal orientation of the inlet away from the LV apex and towards the LV lateral wall. Malrotation was reported in up to 32% of patients supported with Impella for cardiogenic shock (CS). Preliminary data suggest that malrotation might lead to worsening aortic regurgitation (AR) and mitral regurgitation (MR) during Impella support and might be associated with adverse in‐hospital outcomes. However, these findings need further confirmation and the true haemodynamic impact needs to be investigated because whether malrotation causes suboptimal LV unloading still remains unclear. Because impaired LV unloading would affect the expected benefit of Impella support, understanding of the haemodynamic consequences of malrotation is urgently warranted, particularly considering that pump rotation might represent a potentially actionable therapeutic target. The aim of this study was to explore the impact of Impella malrotation upon pulmonary and systemic haemodynamics and to assess its clinical outcomes in a larger cohort of CS patients.
Study design

We retrospectively reviewed all consecutive patients who received Impella support for CS at our cardiac intensive care unit at the IRCCS 'San Raffaele Hospital' (Milan, Italy). The study period ranged from January 2019 to September 2023. Patients who received at least one echocardiographic examination within 12 h of Impella implantation, with appropriate views to assess Impella placement, were included in this study. Invasive pulmonary and systemic haemodynamic measurements were obtained using a pulmonary artery catheter (PAC) and an arterial line at the time of support initiation and after 48 h and were collected for this study, as available from the medical reports. Cardiac intensive care unit (CICU) data, including invasive haemodynamic assessment, lactate clearance, and clinical, laboratory and imaging findings, were retrospectively reviewed by two authors (D.R. and M.F.) prior to malrotation outcome assessment. Malrotation was adjudicated by two authors (D.R. and L.B.), with disagreements resolved by consensus with a third (A.B.). The study cohort was dichotomized according to the occurrence of Impella malrotation for the purpose of this analysis. The analysis was performed as a sub-study of the IMPELLA ECO project approved on 7 June 2018 (ID: 100/int/2018) and according to the subsequent amendments approved by our institutional Ethical Committee. This study represents an extended analysis of our previous preliminary publication on Impella malrotation.

Impella malrotation definition

Impella malrotation was defined as follows: (1) correct pressure and motor current waveforms on the device console; (2) correct depth of the device across the AV according to the manufacturer (catheter inflow 'teardrop' 3.5–4.5 cm below the AV and catheter outflow area above the AV); and (3) abnormal orientation of the pig-tail away from the LV apex and directed towards the lateral LV wall, plus at least one among (a) Impella catheter concavity not facing the interventricular septum, (b) device impingement on the mitral subvalvular apparatus, and (c) Impella inlet in close proximity to the mitral valve leaflets (Figure ). Impella malrotation status was assessed with either trans-thoracic (TTE) or trans-esophageal echocardiography (TEE) within 12 h after Impella insertion.

Study outcomes

The primary study aim was to analyse the impact of Impella malrotation on pulmonary haemodynamics [pulmonary artery wedge pressure (PAWP), systolic pulmonary artery pressure (sPAP), diastolic pulmonary artery pressure (dPAP), and mean pulmonary artery pressure (mPAP)], as assessed by PAC at 48 h. Other relevant endpoints were longitudinal changes in additional invasive haemodynamic parameters and in perfusion markers from admission to 48 h of support, change in LV diameters, and AR and MR severity during support. In addition, clinical outcomes were also assessed (extended definitions in the supporting information), including in-hospital stroke; in-hospital major bleeding and haemolysis during Impella support; and durable left ventricular assist device (LVAD) implant. For this analysis, major bleeding was defined as an MCS Academic Research Consortium (MCS ARC) grade 3b–5 event, and major haemolysis was defined according to the MCS ARC consensus. Laboratory tests were obtained 48 h after Impella implant, according to the MCS ARC consensus.

Statistical analysis

Continuous variables are described as sample averages and sample standard deviations. Permutation tests were applied for comparing both paired and independent continuous variables.
These tests represent a non-parametric alternative to traditional statistical tests such as the t-test, avoiding the assumption of normality in the data without compromising statistical power. The absolute value of the difference between group means served as the test statistic, highlighting any significant discrepancies between the group means. Categorical variables are expressed as proportions and compared with independence permutation tests, which employ the Pearson chi-squared test statistic and represent a non-parametric version of chi-squared tests. All permutation tests were performed with one million iterations, using the same random initialization throughout. A P < 0.05 was considered statistically significant. All analyses were performed with RStudio (version 2023.12.0+369) and R (version 4.2.1).
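To make the procedure concrete, a minimal sketch of the two-sample permutation test described above is given below. The analyses in this study were run in R; this Python version is only illustrative, the toy data are invented, and a production run would use the full one million iterations.

```python
import numpy as np

def permutation_test_mean_diff(x, y, n_iter=10_000, seed=0):
    """Two-sample permutation test with |mean(x) - mean(y)| as the statistic."""
    rng = np.random.default_rng(seed)
    x, y = np.asarray(x, float), np.asarray(y, float)
    observed = abs(x.mean() - y.mean())      # observed group difference
    pooled = np.concatenate([x, y])
    n_x = len(x)
    hits = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)                  # random relabelling of patients
        perm = abs(pooled[:n_x].mean() - pooled[n_x:].mean())
        if perm >= observed:
            hits += 1
    return (hits + 1) / (n_iter + 1)         # permutation P value

# Toy example: PAWP during support, malrotated vs. non-malrotated (invented values)
p_value = permutation_test_mean_diff([16, 18, 14, 17], [13, 12, 14, 13])
```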
Study population

From 175 consecutive CS patients receiving Impella support and initially screened, we included 100 patients (Figure ). Mean cohort age was 60 ± 12 years, and 79 (79.0%) were males. The majority of the patients (73%) presented with acute myocardial infarction-related CS (AMI-CS). On CICU admission, the CS SCAI stage was C or higher in 88 (88.0%) patients, and a cardiac arrest modifier was present in 29 (29.0%). Various Impella devices were used: CP (82%), 5.0 (11%), 2.5 (4%) and 5.5 (3%). A combined VA-ECMO and Impella configuration (ECPella) was adopted in 40 (40.0%). Overall, prior to Impella support, patients had overt signs of tissue hypoperfusion due to cardiac dysfunction: serum lactate was 6.20 ± 4.69 mmol/L, and the cardiac index (CI) was 2.22 ± 0.80 L/min/m². Impella malrotation was identified in 36 (36.0%) patients. Patients with malrotation were more often males, but no other significant differences were found in echocardiographic data and invasive haemodynamic profile prior to Impella support initiation. These findings are summarized in Table .

Haemodynamics according to Impella malrotation

Admission and repeated invasive haemodynamic assessments with a pulmonary artery catheter (PAC) at 48 h were evaluated. Paired invasive haemodynamic data at the pre-specified time points were available for 85% of patients. Before Impella insertion, PAWP, sPAP, dPAP, mPAP and right atrial pressure (RAP) were similar between groups. However, during Impella support, patients with malrotation demonstrated higher PAWP (16.0 ± 8.2 vs. 13.0 ± 4.6 mmHg; P = 0.033), higher sPAP (35.0 ± 11.3 vs. 29.5 ± 9.0 mmHg; P = 0.015), higher dPAP (19.3 ± 8.1 vs. 15.1 ± 6.1 mmHg; P = 0.007) and higher mPAP (25.7 ± 9.1 vs. 20.8 ± 6.8 mmHg; P = 0.005); this was also associated with worse parameters of static and pulsatile right ventricular (RV) afterload, as highlighted by a higher pulmonary vascular resistance index (PVRi, 4.78 ± 2.75 vs. 3.49 ± 1.94 WU·m²; P = 0.020) and higher pulmonary artery elastance (PaE, 0.91 ± 0.60 vs. 0.67 ± 0.40 mmHg/mL; P = 0.045) in the malrotation cohort. In addition, PAC assessment revealed worse metrics of RV adaptation, as highlighted by a higher RAP (10.3 ± 4.8 vs. 7.7 ± 4.3 mmHg; P = 0.009) and a lower pulmonary artery pulsatility index (PAPi, 3.10 ± 2.91 vs. 1.95 ± 1.48; P = 0.037). These findings are summarized in Table . In addition, serum lactate at 48 h was higher among patients with malrotation (6.63 ± 6.25 vs. 3.60 ± 4.21 mmol/L; P = 0.004), mirrored by a greater serum lactate decrease in the non-malrotated cohort (−2.44 ± 6.10 vs. 0.13 ± 5.04 mmol/L; P = 0.034). These findings are summarized in Figure . The haemodynamic variations observed from before Impella insertion to during Impella support, and the respective paired-measures tests, are provided in Table , demonstrating more favourable haemodynamic trends in patients without Impella malrotation. No interaction was found between the haemodynamic effects of malrotation and the CS aetiology (AMI-CS vs. non-AMI-CS), as reported in Table . Finally, no relevant differences were found in haemodynamic parameters during Impella support in patients supported only by Impella as compared with ECPella patients (Table ).
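For reference, the derived indices reported above are conventionally computed from PAC measurements as follows. These are standard textbook definitions rather than formulas confirmed by the authors, and the pulmonary arterial elastance formulation in particular varies between studies; we state a common simplified form:

\[
\mathrm{PVRi} = \frac{\mathrm{mPAP} - \mathrm{PAWP}}{\mathrm{CI}}\ \mathrm{(WU{\cdot}m^2)},\qquad
\mathrm{PAPi} = \frac{\mathrm{sPAP} - \mathrm{dPAP}}{\mathrm{RAP}}\ \mathrm{(dimensionless)},\qquad
\mathrm{PaE} \approx \frac{\mathrm{sPAP}}{\mathrm{SV}}\ \mathrm{(mmHg/mL)}
\]

where SV is the stroke volume.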
Echocardiographic and valvular function assessment according to Impella malrotation

Admission AR and MR distribution, LV ejection fraction (LVEF) and LV end-diastolic diameter (LVEDD) were not significantly different between the two cohorts (Table ). The abnormal position of the Impella inlet away from the LV apex, resulting from the abnormal pig-tail direction, was confirmed by a higher inlet-to-apex distance in the malrotation cohort (55 ± 12 vs. 47 ± 7 mm; P < 0.001). Consistently with the proposed definitions, a higher proportion of patients in the malrotation cohort demonstrated impingement of the Impella inlet area on the posterior papillary muscle (73.1% vs. 8.9%; P < 0.001). However, no significant impact on MR was observed. Similarly, no significant differences were found in echocardiographic parameters of RV geometry and function. The malrotation cohort demonstrated a higher LVEDD during Impella support (52 ± 10 vs. 46 ± 11 mm; P = 0.006) (Figure ). In addition, LVEDD decreased significantly during support in patients without malrotation, from 52 ± 12 mm to 46 ± 11 mm (P < 0.001), whereas it did not significantly diminish in the malrotation cohort (from 55 ± 11 mm to 52 ± 10 mm; P = 0.351); accordingly, the change from admission LVEDD to LVEDD during support was −6 ± 7 vs. −2 ± 7 mm (P = 0.039) in the non-malrotated versus malrotated cohort. As compared with the non-malrotated cohort, AR presence was more common in the malrotated cohort (86.1% vs. 56.2%; P = 0.004), patients in the malrotation group more often demonstrated an AR severity increase during support (P < 0.001), and the mean variation in AR severity tier from admission to during support was +0.46 ± 0.95 versus +0.94 ± 0.92 (P = 0.016) in the non-malrotated versus malrotated cohort (Figure and Figure ). Notably, no difference in the AR severity distribution was found between admission and discharge (P = 0.999). No interaction was found between the effects of malrotation on echocardiographic metrics and the CS aetiology (AMI-CS vs. non-AMI-CS), as reported in Table . No differences were found in echocardiographic parameters during Impella support in patients supported only by Impella as compared with ECPella patients (Table ).

In-hospital outcomes according to Impella malrotation

No significant differences were observed in major adverse cardiovascular events between the malrotated and non-malrotated cohorts in terms of in-hospital death (28.1% vs. 27.8%; P = 1.000), stroke or transient ischaemic attack (10.9% vs. 16.7%; P = 0.538), major bleeding (26.6% vs. 30.6%; P = 0.817), major vascular complications (14.1% vs. 5.6%; P = 0.318) and major haemolysis (40.6% vs. 41.7%; P = 1.000). Similarly, the need for any valvular surgery was low and comparable between groups (1.6% vs. 5.6%; P = 0.551). Major adverse in-hospital outcomes are summarized in Table .
The main findings of this study can be summarized as follows ( Figure ): (i) Impella malrotation was associated with suboptimal LV unloading, as highlighted by higher pulmonary artery wedge pressure and pulmonary artery pressures during support; (ii) Impella malrotation was associated with higher indexes of pulsatile and steady RV afterload and worse indexes of RV adaptation; and (iii) Impella malrotation was associated with higher serum lactate during support and worse serum lactate clearance from admission to 48 h. Optimal Impella positioning requires a combined assessment of device depth and rotation within the LV. Device malrotation (defined by normal pressure and motor current waveforms on the device console, proper depth of the device across the AV, but abnormal orientation of the inlet away from the LV apex) may occur in up to 40% of cases and may often be overlooked in CS patients. The mechanistic and prognostic implications of this condition remain poorly characterized, as data are limited to a single‐centre retrospective cohort. This report, combining echocardiographic with invasive haemodynamic assessments, extends the observations of the previous study and frames the haemodynamic effect of Impella malrotation. On echocardiography, we observed less effective unloading among patients with device malrotation, as they featured a higher LVEDD (an indirect metric of LV loading conditions) during support and demonstrated less decrease in LVEDD after Impella initiation. Notably, PAC invasive haemodynamic data provide unique adjunctive information on the LV loading conditions, complementary to the echocardiographic assessment. Higher values of PAWP, sPAP, mPAP and dPAP during support were consistent with greater pulmonary circulation overload in patients with Impella malrotation. While in our study no significant differences were observed in early adverse in‐hospital outcomes, these findings need additional investigation, as the observed higher static pressure in the pulmonary circulation likely indicates a higher LVEDP during support, which may still have relevant consequences over the longer term. As LVEDP reduction, coupled with antegrade flow increase, is the ultimate goal of trans‐aortic axial‐flow MCS and, specifically in the AMI‐CS setting, aggressive and early LVEDP reduction (LV unloading) may have relevant biological consequences related to myocardial protection and infarct size limitation, the worse unloading observed in cases of malrotation might negatively affect this therapeutic objective. However, this would affect outcomes chiefly in the mid‐ to long‐term, and our study was not powered to capture such endpoints. The observed derangement of pulmonary haemodynamics, coupled with worse metrics of steady and pulsatile right ventricular afterload (namely, PVRi and PaE) and also with worse indexes of RV adaptation (such as RAP and PAPi), suggests uncoupling of the RV from the pulmonary circulation. So far, the effect that malrotation exerts upon the RV remains largely unexplored, but this observation reinforces the need for additional data focusing on this common issue. In addition, the higher serum lactate value at 48 h and the worse lactate clearance observed in this timeframe in the malrotation cohort lend further support to the possibility of a lower antegrade flow in this subset. With this study, we confirmed the previously observed association of malrotation with higher degrees of AR during support, especially in the moderate and moderate‐to‐severe tiers.
Increasing degrees of AR lessen the haemodynamic benefit of the Impella device by short‐circuiting the ascending aorta and the LV across the AV and might also contribute to the observed suboptimal unloading, reducing the antegrade aortic flow and creating back‐flow to the LV. Indeed, severe AR is considered a contraindication to Impella use. The trans‐aortic profile of the Impella pumps enhances their ease of insertion (as compared with trans‐septal or apical cannulas) but, at the same time, causes interaction with the aortic valve: for example, in our cohort, a few patients' AR worsened to the severe tier after support initiation. Interestingly, however, the effect of Impella on the AV appeared to be transient and possibly related to the presence of the device across the valve leaflets: indeed, the distribution of AR severity at discharge did not differ from that at admission in either study cohort. The AR worsening effect may be particularly evident with a malrotated device, as a consequence of the abnormal angle at which the shaft crosses the AV, thereby potentially deforming the cusps and/or interfering with their excursion and coaptation. Finally, we could not confirm the association of Impella malrotation with haemocompatibility‐related adverse events (HRAE), including stroke and major bleeding, that we observed in our previous study. As compared with the former cohort, the present study population was sicker at presentation, as highlighted by a higher rate of cardiac arrest (29.0% vs. 14.0%) and a higher use of the ECPella configuration (40.0% vs. 30.0%), and indeed experienced an overall higher number of stroke events (13.0% vs. 4.0%). In addition, the current population had almost double the proportion of AMI‐CS patients (73.0% vs. 38.1%): these observations may suggest an intrinsically higher baseline risk both for stroke and bleeding, independently of the malrotation status. Therefore, we recommend conducting larger, multicentre studies to clarify the implications of malrotation on HRAE. The need to identify suboptimal Impella positioning early, to avoid unwarranted detrimental effects, strengthens the role of dedicated multi‐disciplinary teams (including interventional cardiologists, CICU intensivists, echocardiographers and cardiac surgeons) and hospital algorithms capable of delivering timely multimodality imaging and troubleshooting both at the time of device insertion and during the CICU stay, as key elements for a successful Impella program. In conclusion, malrotation of the Impella device within the LV may cause suboptimal unloading, less pulmonary congestion relief, and impaired systemic perfusion. These data highlight the need for careful device insertion and positioning, as well as a greater awareness of the impact of a proper rotation of the catheter within the LV, in the pursuit of maximizing the advantages offered by a powerful MCS device.
Limitations
The study has some limitations, including its retrospective nature and single‐centre design. The sample size is relatively small, although it represents the largest cohort analysed for this purpose. In addition, we initially screened 175 patients but excluded 75 due to the lack of available echocardiographic images, owing to several logistical factors (hour of the exam, echocardiographic machine used, focused/urgent examination). A complete haemodynamic profile was available only for a subset of patients, although this group represented the vast majority of the study population.
Additionally, we could not assess the impact of device repositioning/reinsertion in this retrospective cohort due to lack of systematic reporting of these data. Finally, the study analysis was limited to a 48‐h time‐window and the impact of malrotation on haemodynamics beyond this threshold is not known.
Impella malrotation is associated with suboptimal unloading of the LV during support, with detrimental haemodynamic effects on pulmonary circulation. Careful attention to Impella positioning and rotation within the LV is strongly warranted to maximize the haemodynamic benefits expected from this device.
None declared.
No funding was required for this study.
Figure S1. Study consort diagram. Table S1. Paired analysis for hemodynamic and echocardiographic indexes according to the malrotation status. The difference columns report the mean variation observed after Impella support initiation, calculated as values measured during Impella support minus values measured before Impella support. The p‐values refer to the paired‐measures statistical testing (before vs. during Impella). Table S2. Interaction analysis for CS etiology (AMI vs. non‐AMI‐CS). The reported p‐values are the results of a two‐way ANOVA testing for the effects of the malrotation variable, the AMI‐CS etiology variable, and their interaction. All the hemodynamic measures were obtained during Impella support. Table S3. Comparison of Impella‐only and ECPella patients. All the hemodynamic and echocardiographic measures were obtained during Impella support. Table S4. Interaction analysis for CS etiology (AMI vs. non‐AMI‐CS). The reported p‐values are the results of a two‐way ANOVA testing for the effects of the malrotation variable, the AMI‐CS etiology variable, and their interaction. All the echocardiographic measures were obtained during Impella support.
Comparing the efficacy and safety of bridging therapy vs. monotherapy in patients with minor stroke: a meta-analysis
Stroke remains one of the leading causes of mortality and long-term disability worldwide . With an aging population, the global incidence of stroke is expected to increase . Acute ischemic stroke occurs due to thrombus formation, and the primary treatment strategies include endovascular thrombectomy (EVT), which physically removes the thrombus, and bridging therapy, in which intravenous thrombolytics (IVT) are administered before EVT . The severity of stroke can be assessed through various methods, including the National Institutes of Health Stroke Scale (NIHSS) score . According to the NIHSS, a stroke is considered 'minor'
if the score is below 5 . Despite its classification as 'minor', about 30% of these patients experience lasting disabilities after 90 days . Stroke centers and countries vary in how they approach the clinical management of minor ischemic strokes. Intravenous thrombolysis (IVT) remains the recommended treatment for disabling acute ischemic stroke, regardless of the NIHSS score . Although large vessel occlusion (LVO) typically results in severe strokes, approximately 10–20% of patients with minor strokes have LVO, with symptoms kept mild by strong collateral circulation . Neurological deterioration occurs in about 20–40% of these patients, which increases their risk of a poor outcome . The current recommendation for patients with LVO and NIHSS scores above 5 is to combine endovascular thrombectomy with IVT . However, few randomized trials have included patients with NIHSS scores of 5 or less, and results from both single-center and multicenter studies have been inconclusive . Consequently, the benefit of combination therapy versus IVT alone in these patients remains unclear. A meta-analysis by the HERMES study group found no significant advantage of EVT over standard therapy, including IVT, in patients with NIHSS scores below 10 . Nevertheless, observational studies suggest that early thrombectomy may lead to better outcomes in mild stroke compared to optimal medical treatment followed by rescue thrombectomy in cases of deterioration . There is also a potential for increased risk of intracerebral hemorrhage (ICH) with combined therapy. Therefore, we performed this systematic review and meta-analysis to evaluate the efficacy and safety of monotherapy (EVT or IVT) versus bridging therapy (IVT+EVT) in patients with minor ischemic stroke.
This systematic review and meta-analysis followed the Cochrane Handbook for Systematic Reviews and the guidelines outlined by the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement. The study protocol was registered in the International Prospective Register of Systematic Reviews (ID: CRD42024548143) .
Database searching
We systematically searched PubMed, Web of Science, Google Scholar, and Scopus for eligible articles from inception to 2023. The search strategy employed the following keywords: “Thrombolysis” AND “Thrombectomy” AND “Stroke” AND (“Minor” OR “Mild”).
Screening process
After conducting the database search, we eliminated duplicates using EndNote version 7 . The remaining articles were uploaded into Rayyan software to facilitate screening. Two authors independently screened the titles and abstracts to assess eligibility, followed by a full-text review of the selected studies. Any disagreements were resolved by a third author .
Eligibility criteria
We applied predefined inclusion and exclusion criteria during the screening process. We included observational studies and randomized controlled trials (RCTs) that compared monotherapy, whether IVT or EVT, with bridging therapy (IVT+EVT) in patients with minor or mild ischemic stroke (NIHSS score 1–4). Studies that did not compare these two treatment strategies, that involved higher NIHSS scores, or that were case reports or reviews were excluded.
Quality assessment
For the included observational cohort studies, the Newcastle–Ottawa Scale (NOS) was employed to evaluate quality. Studies scoring between 0 and 3 were classified as low quality, 4–6 as moderate, and 7–9 as high quality .
Data extraction
Four independent authors used Microsoft Excel to extract baseline information such as study design, sample size, age, and gender, along with outcomes like the Modified Rankin Score (mRS) 0–1, mRS 0–2, mortality, symptomatic intracranial hemorrhage (sICH), and ICH. Any discrepancies were addressed by an author not involved in the data extraction process.
Statistical analysis
We conducted a meta-analysis using Review Manager (RevMan) version 5.4, pooling dichotomous variables to calculate odds ratios (ORs) with corresponding 95% confidence intervals (CIs). A P value ≤ 0.05 was considered statistically significant. Heterogeneity was assessed using the I² statistic, with significance determined by the P value.
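Although the pooling itself was performed in RevMan, the underlying fixed-effect computation is simple enough to verify by hand; the sketch below reproduces inverse-variance pooling of log odds ratios with a 95% CI and the I² heterogeneity statistic in Python. The 2×2 counts are hypothetical and do not come from the included studies.

```python
# Minimal sketch of fixed-effect (inverse-variance) pooling of odds ratios
# with an I^2 heterogeneity estimate, mirroring what RevMan reports.
# The 2x2 counts below are hypothetical and for illustration only
# (a continuity correction would be needed for zero cells).
import math

# (events_tx, total_tx, events_ctl, total_ctl) per study -- hypothetical data
studies = [(12, 80, 20, 85), (30, 150, 33, 140), (8, 60, 15, 66)]

log_ors, weights = [], []
for a, n1, c, n2 in studies:
    b, d = n1 - a, n2 - c                      # non-events in each arm
    log_or = math.log((a * d) / (b * c))       # log odds ratio
    var = 1 / a + 1 / b + 1 / c + 1 / d        # Woolf variance of the log OR
    log_ors.append(log_or)
    weights.append(1 / var)                    # inverse-variance weight

pooled = sum(w * y for w, y in zip(weights, log_ors)) / sum(weights)
se = math.sqrt(1 / sum(weights))
lo, hi = math.exp(pooled - 1.96 * se), math.exp(pooled + 1.96 * se)

# Cochran's Q and I^2 for heterogeneity
q = sum(w * (y - pooled) ** 2 for w, y in zip(weights, log_ors))
i2 = max(0.0, (q - (len(studies) - 1)) / q) * 100 if q > 0 else 0.0

print(f"pooled OR = {math.exp(pooled):.2f}, 95% CI {lo:.2f}-{hi:.2f}, I2 = {i2:.0f}%")
```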
Database searching and screening
The database search yielded 176 articles, of which 77 were duplicates and subsequently removed. A total of 99 articles were screened by title and abstract, and 87 articles were excluded during this process. A full-text review was conducted on 12 articles, and 8 articles were included for the qualitative synthesis and meta-analysis. The total number of patients in both treatment arms across the 8 included studies was 3,117 patients .
Quality assessment
According to NOS, five studies were classified as high quality, while three were considered moderate quality .
Baseline characteristics
All included studies were cohort studies, comparing monotherapy (IVT or EVT) versus bridging therapy (IVT+EVT). The baseline characteristics of the included articles are summarized in .
Meta-analysis
For mRS 0–1, no significant difference was found when comparing IVT monotherapy to bridging therapy (IVT+EVT), with an odds ratio of 0.79 (95% CI, 0.46–1.38; P = 0.41). Similarly, no significant difference was detected between EVT monotherapy and bridging therapy (OR = 0.88; 95% CI, 0.66–1.18; P = 0.4) . For mRS 0–2, no statistically significant differences emerged between IVT monotherapy and bridging therapy, with an OR of 0.86 (95% CI, 0.69–1.08; P = 0.19), and EVT monotherapy versus bridging therapy, which yielded an OR of 1.08 (95% CI, 0.41–2.9; P = 0.87) . In terms of symptomatic intracerebral hemorrhage, IVT was associated with a lower risk of sICH compared to bridging therapy, with an OR of 0.51 (95% CI, 0.29–0.89; P = 0.02), whereas EVT was linked to a higher risk of sICH when compared to bridging therapy, with an OR of 8.33 (95% CI, 1.52–45.71; P = 0.01) . IVT was also associated with a reduced risk of ICH compared to bridging therapy, with an OR of 0.5 (95% CI, 0.29–0.88; P = 0.02) . Mortality rates were similar between IVT monotherapy and bridging therapy, as well as EVT monotherapy and bridging therapy. Although there was a slight trend favoring bridging therapy, it was not statistically significant (OR = 1.3; 95% CI, 0.92–1.84; P = 0.14) .
The objective of this study was to evaluate the efficacy and safety of monotherapy with either IVT or EVT in comparison to bridging therapy (IVT+EVT) for patients with minor ischemic stroke. In terms of efficacy, the results indicated no significant differences between the treatment approaches for mRS 0–1 and mRS 0–2. However, the incidence of sICH and ICH was significantly higher in the group receiving bridging therapy compared to those treated with either IVT or EVT alone. Although EVT was associated with an elevated risk of sICH compared to bridging therapy, this finding was based on a very small sample size from a single study. The meta-analysis revealed that bridging therapy may not provide the same benefits as IVT and poses a higher risk. The optimal treatment strategy for mild strokes remains uncertain and lacks standardization. Most patients diagnosed with mild stroke receive IVT alone, while a small subset is excluded from IVT due to their condition being perceived as too favorable to receive treatment . Additionally, recent RCT meta-analyses have shown that patients with an NIHSS score below 10 do not gain significant benefit from EVT . Consequently, the use of EVT in patients with LVO and NIHSS ≤ 5 has only been documented in a limited number of case series . Vessel recanalization appears to play a crucial role even in minor strokes, as failure to achieve acute recanalization may result in approximately one-third of minor stroke patients being unable to walk independently at hospital discharge and facing a higher likelihood of neurological decline and poor outcomes at the 90-day follow-up . Feil et al. analyzed data from patients enrolled between June 2015 and December 2019 in the Safe Implementation of Treatments in Stroke–International Stroke Thrombolysis Registry (SITS-ISTR) and the German Stroke Registry–Endovascular Treatment (GSR-ET). Their findings indicated that combining EVT with IVT did not significantly enhance functional outcomes compared to IVT alone in patients with minor strokes, specifically those with NIHSS scores ≤5. Although 81.6% of GSR-ET patients treated with EVT or IVT achieved successful reperfusion (mTICI scores 2b–3), follow-up imaging at 24 hours showed a higher point estimate of sICH in patients who underwent both EVT and IVT. Nevertheless, even when performed in extended time windows, thrombectomy was carried out safely in three retrospective single-center studies, with favorable clinical outcomes of 64%, 75%, and 60%, respectively . These studies included 33 patients (NIHSS score ≤8, varying occlusion sites), 41 patients (NIHSS score ≤5, M1 occlusions), and 88 patients (NIHSS score ≤4, different occlusion sites) with LVO and mild stroke symptoms . Feil et al. further reported that patients who underwent thrombectomy had notably worse functional outcomes when comparing EVT, with or without IVT, to IVT alone. Additionally, those treated with EVT had a higher median NIHSS score at the 24-hour follow-up. Logistic regression analysis revealed that IVT, but not EVT, was a strong predictor of favorable outcomes. These results differ from earlier case series, one of which reported superior outcomes for EVT patients compared to those receiving only IVT, while another case series examined 24 IVT patients alongside 32 interventional cases (19 EVT only and 13 EVT plus IVT) . In the latter study, a greater shift in NIHSS scores was observed in the group undergoing endovascular procedures compared to those receiving only medical therapy.
However, the interpretation of these findings may be biased, as 40% of the thrombectomy patients were ineligible for IVT . Another case series involving 32 thrombectomy patients showed a greater improvement in NIHSS scores, whereas 25% of those primarily managed with medical therapy did not reach functional independence at follow-up . In a study of 169 patients with M2 occlusion and mild stroke symptoms, no significant difference in favorable outcomes was found among those treated with IVT alone, EVT alone, or a combination of EVT and IVT. However, when analyzing only patients treated after 2015, the shift in mRS scores was significantly better in the EVT group compared to the IVT-only group . Another study involving 96 patients with mild stroke found no difference in favorable clinical outcomes between the IVT group and those receiving standard medical care, although early neurological improvement was observed in IVT patients . A study based on the Swiss Stroke Registry indicated that patients with mild acute ischemic stroke and LVO who underwent either IVT or EVT achieved favorable functional outcomes at three months . However, further research is required to clarify the necessity of both IVT and EVT in patients with acute LVO stroke. A meta-analysis of individual patient data from five randomized trials demonstrated that EVT was more effective than standard medical treatment in cases of acute ischemic stroke caused by proximal anterior circulation artery occlusion . However, the SKIP Randomized Clinical Trial did not show functional differences between the EVT and bridging groups . Subsequent trials suggested that EVT alone might yield similar results to bridging therapy for patients with acute ischemic stroke due to major artery occlusions . Interestingly, improved functional outcomes were observed in patients with large vessel occlusion stroke who received adjunct intra-arterial thrombolysis after a successful angiographic thrombectomy . Moreover, findings from the Italian registry on endovascular treatment for acute stroke suggest that bridging therapy may reduce the risk of death or severe disability three months after a stroke, particularly in cases of major artery occlusion . A meta-analysis involving three RCTs and six observational studies concluded that direct EVT might be as effective as bridging therapy, with a lower likelihood of intracerebral hemorrhage (ICH) and clot migration in patients with acute ischemic stroke . Likewise, another meta-analysis of five observational studies reported that bridging therapy and EVT might be equally effective in managing acute anterior circulation strokes . In contrast, a single-center retrospective study of 90 consecutive patients found that bridging therapy was associated with substantially higher direct and overall hospital costs than EVT alone, without demonstrating superior clinical outcomes . Furthermore, Qureshi et al. suggested that EVT alone may be more cost-effective than bridging therapy for treating acute ischemic stroke patients within 4.5 hours of symptom onset, although the study did not establish the cost-effectiveness of bridging compared to direct thrombectomy. The limitations of the present study include the fact that all the articles were cohort studies, with some having small sample sizes. Additionally, certain outcomes were based on limited sample sizes, which may have reduced the statistical power to detect significant differences.
Therefore, future large-scale randomized controlled trials and an updated systematic review in the next five to ten years are recommended.
The present results regarding efficacy outcomes revealed no statistically significant differences between the two treatment approaches in terms of mRS 0–1 and mRS 0–2. However, when bridging therapy was used instead of IVT, rates of the safety outcomes, such as sICH and ICH, were statistically significantly higher. Furthermore, there were no discernible differences in mortality rates between the two treatment modalities.
Discussing menstrual health in family medicine
Menstrual health is a general biological marker for many cisgender women, transgender men and non-binary people. Despite more than half of the population being people who menstruate, stigma, lack of conversation and pressing social needs around menstrual health persist throughout medicine. Discussions around menstruation and menstrual management can be difficult for individuals, whether it is with friends or family, or in the healthcare setting. Patients who have never discussed menstruation with a clinician may not know what is healthy, may assume that an abnormal experience is normal and may endure periods that negatively affect their life, career or well-being. Menstruation plays a vital role in overall well-being and contributes significantly to an individual's quality of life. Given their scope of care, family medicine clinicians are poised to identify red-flag menstrual symptoms in their routine visits with patients, reducing time to diagnosis of menstrual disorders. We urge family medicine clinicians to have renewed conversations surrounding menstrual health with their patients. The purpose of this report is to supply a brief overview of the importance of menstrual communication in primary care and serve as a resource to enhance menstrual communication between patient and clinician, with the ultimate goal of decreasing menstrual stigma and promoting improved menstrual health and experiences for patients. Menstrual health is a nuanced topic that can vary greatly from person to person and region to region. There are differences in menstrual health and wellness measures and norms between high-income countries (HIC) and low-income and middle-income countries (LMICs). This paper focuses on menstrual health in HIC specifically. This is not to say that the points described do not apply to LMIC, but to acknowledge that the topic of menstrual health in these areas is greater than the scope of this singular work.
Family medicine clinicians care for people of all ages and life stages. They are on the front lines of preventative medicine and can be a great resource for patients trying to improve their quality of life through medical and lifestyle means. By caring for people across their lifespans, family medicine clinicians are well poised to address menstrual health with patients who menstruate. They may see their patients who menstruate when they first begin menstruating in adolescence, when they become sexually active, when they are trying to become pregnant, when they are trying not to become pregnant, when their periods begin to slow at perimenopause, and at all other times in between. Family medicine clinicians are trained to provide comprehensive women’s health services and should include menstrual history as a vital sign to be addressed at routine visits. The scope of menstruation and menstrual wellness is expansive, with connections to family planning, sexual wellness, diet and exercise habits, and mental health. Patients can feel more in control of their bodies by understanding their menstrual cycle and its full-body impacts overall.
A discussion on the exact timing of symptoms in relation to the menstrual period can help guide prompt diagnosis and treatment. Menstrual concerns may include weight changes, abdominal pain, back pain, headache, swelling and tenderness of the breasts, nausea, change in appetite, constipation, mental health concerns (including an increase in anxiety, irritability, anger, fatigue and mood swings) and heavy or painful bleeding. Additional concerns may be leakage, physical activity limitations, fear of toxic shock syndrome, cost of products, painful or irregular bleeding, adolescents worried that they have not started menstruating because their friends have, or concerns about the ability to get pregnant. For menstruators who identify as non-binary or transgender, menstruation can be a trigger for gender dysphoria. Another concern is period poverty. Period poverty, the lack of access to menstrual products, clean facilities and health education, impacts nearly one in four menstruating people at least once in their life. Menstruators may perceive heavy bleeding as normal, and may fear embarrassment and dismissal by clinicians who do not validate their concerns. Family medicine clinicians should proactively engage in these discussions by eliciting a detailed menstrual history. Family medicine clinicians can help normalise these conversations, provide reassurance regarding the common physiological symptoms of menstruation while also addressing abnormal symptoms, and work with patients to optimise their menstrual health for their individual circumstances at various stages across the lifespan.
There are excellent resources available online that detail birth control options; however, there are no sites that provide the same level of detailed information about options for menstrual products. Some menstruators may only be familiar and comfortable with what they first started using when they began menstruating as an adolescent or what they were introduced to by their parent/caregiver. The choice of a menstrual product that fits best with a patient's lifestyle can impact their quality of life. Products such as tampons and menstrual cups/discs might allow a patient to be more active. Cups and discs can allow for much longer wear without changing compared with tampons and pads. Period underwear allows the patient to essentially free bleed without fear of leakage. A period tracker app, though not a traditional menstrual product, can be useful for both patient and clinician to notice changes in a patient's menstrual cycle throughout their life as well as predict when any symptoms may start, regardless of whether they align with bleeding. Below is a table of available period products as well as some discussion points for talking to patients about each one .
Asking a patient when their last menstrual period was is fairly common practice to gauge regularity and risk of pregnancy. It should not be assumed that because a patient does not bring up their menstrual cycle it is not negatively affecting their life. For example, premenstrual dysphoric symptoms affect 75%–90% of women and are often seen as normal and something that people who menstruate just cope with. However, many of the symptoms can be treated with hormonal contraceptive options, anti-inflammatory medications or even just general counselling from a clinician. Despite this, menstrual history-taking varies among primary care clinicians and is often incomplete. Detailed history-taking can help assess how a patient's menstrual cycle is affecting their life. Here are several such questions that you can ask any menstruating patient to assess whether they have any symptoms that you can help treat . With these simple conversation starters, it is possible to identify patients with menstrual concerns that may not have otherwise been reported. Each of these prompts could segue into an opportunity for counselling and/or medical treatment that can easily be provided and improve the patient's quality of life. These questions can be incorporated as part of wellness visits and as needed for related problem visits. Use the International Federation of Gynecology and Obstetrics table of normal and abnormal menstrual symptoms to guide clinical decision-making . Consider having patients complete a questionnaire regarding their menstrual health, as it may provide privacy and help avoid embarrassment around this sensitive topic. Family medicine clinicians should continue to emphasise the normalcy of these issues with patients, reassure them that these conversations are appropriate and necessary and that their clinician is a readily available resource for questions surrounding all these issues.
Menstrual health is a vital part of primary care for menstruating patients. It is important for clinicians to have these conversations with patients. These conversations may be uncomfortable for both clinicians and patients due to societal stigma on menstruation. We encourage clinicians to work to fight this stigma and build strong and long-lasting relationships with their patients so conversations can be as comfortable as possible for both parties. It is important to recognise and acknowledge that family physicians do not have a lot of time to discuss everything they would ideally like to discuss with patients during a single session. Thus, the purpose of this work is to bring heightened awareness to menstrual health in primary care. Our synthesis is simply a starting place for downstream conversation and investigation into menstrual health and wellness in clinical practice. Clinicians must be educated on normal menstruation and trained in identifying their own implicit biases in what they perceive as 'normal', so they do not brush off any patient's menstruation-related concern as normal. We also recognise that having access to a primary care clinician, let alone one with whom you can develop a strong relationship, is a privilege that many people do not have. Those heavily impacted by the social determinants of health are less likely to be able to have these conversations. Moreover, there are many different cultural understandings of menstruation and menstrual care, as well as limited trust in healthcare systems based on current and past influences of racism, sexism and their intersections, which impact the physician–patient relationship. While this paper focuses on conversations in HIC between clinicians and patients, future work should examine different approaches or practices that may be needed in LMIC, where topics such as menstruation-related violence and less access to information may be more prevalent. Family medicine clinicians can work closely with the communities they care for, building trust, reducing stigma and providing culturally sensitive care. We call on family medicine clinicians to bring menstrual health into the focus of primary care visits for the holistic wellness of all menstruators.
Senolytic Treatment Alleviates Corneal Allograft Rejection Through Upregulation of Angiotensin-Converting Enzyme 2 (ACE2)
Animals
All animal experiments were approved by the Ethics Committee of Shandong Eye Institute (approval nos. G-2015-020 and 2019S007). All procedures were conducted in compliance with the ARVO Statement for the Use of Animals in Ophthalmic and Vision Research. Male wild-type (WT) C57BL/6 mice ( n = 126) and BALB/c mice ( n = 308) were obtained from Charles River Laboratories (Beijing, China) and housed in a specific pathogen-free facility under a 12-hour light/12-hour dark cycle. Cdkn2a/p16 knockout (KO) mice ( n = 16, 01XB1:129-Cdkn2a tm1Rdp ) were procured from the Frederick National Laboratory for Cancer Research (Frederick, MD, USA) and were used as corneal donors. All mice were given ad libitum access to commercial rodent chow and water.
Murine Corneal Transplantation and Drug Treatment
Murine penetrating keratoplasty was conducted between fully mismatched C57BL/6 (donor) and BALB/c (recipient) mice under a surgical microscope (Carl Zeiss Microscopy, Oberkochen, Germany). Recipient mice were randomly divided into three groups: normal (Nor), BALB/c mice without surgery; syngeneic (Syn), BALB/c mice receiving corneal grafts from BALB/c donors; and allogeneic (Allo), BALB/c mice receiving corneal grafts from C57BL/6 donors. The surgical procedures followed established protocols. Briefly, mice were anesthetized by intraperitoneal injection of 0.6% pentobarbital sodium (75 mg/kg) and placed on a flat surgical bed. Central corneas (2.25-mm diameter) from C57BL/6 donors were excised and sutured onto graft beds (2.00-mm diameter) prepared in BALB/c recipients using a 2.00-mm trephine. Donor corneas were secured with eight interrupted 11-0 nylon sutures (Mani, Inc., Tochigi, Japan). Ofloxacin eye ointment (Santen Pharmaceutical, Osaka, Japan) was applied postoperatively to prevent ocular infection. Grafts with severe complications were excluded. All surgeries were performed by the same experienced surgeon. Corneal allografts were evaluated twice weekly using slit-lamp microscopy. Rejection was assessed based on an opacification scoring system. The scoring system was as follows: 0, clear; 1, minimal superficial opacity with pupil margin and iris vessels clearly visible; 2, minimal deep (stromal) opacity with the pupil margin and iris vessels visible; 3, moderate stromal opacity with only the pupil margin visible; 4, intense stromal opacity with only a portion of the pupil margin visible; and 5, maximum stromal opacity with the anterior chamber not visible. An opacity score ≥ 3 after suture removal was considered indicative of rejection. The scoring was performed by two independent observers in a double-blind manner, and the mean score was recorded. To investigate the effect of transplantation stress–induced senescence on allograft rejection, the senolytic drug ABT-263 (50 mg/kg; Selleck Chemicals, Houston, TX, USA) or vehicle was administered intraperitoneally on postoperative days 2, 4, 6, 8, 10, and 12. Additionally, a genetic approach was employed to evaluate the impact of donor corneal senescence. Corneal allografts from WT or p16 KO C57BL/6 mice were transplanted onto BALB/c recipient beds, and rejection was assessed via slit-lamp microscopy.
To determine the role of ACE2 in ABT-263–mediated anti-rejection effects, the ACE2 inhibitor MLN-4760 (10 µM; MedChemExpress, Monmouth Junction, NJ, USA) was administered subconjunctivally, with or without ABT-263, on postoperative days 3, 6, 9, 12, and 15. Corneal allograft survival was evaluated using slit-lamp microscopy twice a week.
Additional Materials and Methods
Details regarding senescence-associated β-galactosidase (SA-β-Gal) staining, adoptive transfer experiments, immunofluorescence staining, western blot, RNA sequencing and analysis, real-time PCR, flow cytometry (FC) analysis, and Luminex cytokine assays are provided in the Supplementary Materials.
Statistical Analysis
Data are presented as mean ± SD. Statistical analyses were conducted using Prism 5 (GraphPad, Boston, MA, USA). Multiple comparisons were performed using one-way ANOVA followed by Tukey–Kramer post hoc tests. Graft survival was analyzed using Kaplan–Meier curves. A two-tailed Fisher's exact test was used to assess functional enrichment of differentially expressed genes (DEGs). All experiments were duplicated three times. P < 0.05 was considered statistically significant. The data that support the findings of this study are available from the corresponding authors upon reasonable request. RNA-sequencing data supporting the findings of this study have been deposited in the National Center for Biotechnology Information (accession code CRA014662).
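While graft survival was analyzed in Prism, the same Kaplan–Meier estimate and a log-rank comparison can be reproduced with the Python lifelines package, as sketched below; the day counts, event indicators, and group labels are hypothetical and only illustrate the workflow, not the study data.

```python
# Minimal sketch of a Kaplan-Meier graft-survival comparison using lifelines.
# Durations (days to rejection) and event flags (1 = rejected, 0 = censored)
# below are hypothetical.
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

days_vehicle = [14, 17, 18, 21, 21, 24]; events_vehicle = [1, 1, 1, 1, 1, 0]
days_abt263  = [21, 28, 30, 30, 30, 30]; events_abt263  = [1, 1, 0, 0, 0, 0]

kmf = KaplanMeierFitter()
kmf.fit(days_vehicle, event_observed=events_vehicle, label="vehicle")
print(kmf.survival_function_)                 # stepwise survival estimates

result = logrank_test(days_vehicle, days_abt263,
                      event_observed_A=events_vehicle,
                      event_observed_B=events_abt263)
print(f"log-rank p = {result.p_value:.3f}")   # between-group comparison
```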
Stress-Induced Senescence Is Increased in Corneal Allografts During Transplantation Rejection
Previous studies have demonstrated that cellular senescence caused by donor aging or cold storage is strongly associated with poor organ transplantation outcomes, suggesting that cellular senescence is a significant contributor to graft failure. However, the extent to which surgical injury induces stress-induced graft senescence and subsequent immune rejection remains unclear. Using an age-matched murine corneal transplantation model, we evaluated the effect of transplantation injury on allograft senescence. Compared with normal and syngeneic controls, corneal allografts on postoperative day 10 exhibited a senescence-like phenotype, characterized by increased SA-β-Gal–positive staining in both the corneal endothelium and stroma ( A, B), as well as elevated expression of senescence markers p16 and p21 ( C, D).
Double immunofluorescence staining further identified the senescent cell types as vimentin+ fibroblasts and CD45+ immune cells in the allogeneic group ( E). These findings indicate the presence of stress-induced senescence in age-matched corneal allografts at an early stage post-transplantation, consistent with observations in high-risk rabbit corneal allografts.
Stress-Induced Senescence Accelerates Corneal Transplantation Rejection
To assess the impact of stress-induced senescence on corneal allograft rejection, WT and p16 KO C57BL/6 mice were used as allogeneic donors. Recipient mice with p16 KO corneal allografts exhibited reduced immune rejection and prolonged graft survival compared with recipients of WT corneas ( A, B). These results suggest a pathogenic role of senescent donor corneas in allograft rejection. To further investigate this, adoptive transfer experiments were performed to mimic graft senescence. Senescent donor mouse corneal fibroblasts (MCFs) were transferred into the anterior chamber and coated onto recipient corneal endothelia. This procedure accelerated corneal graft senescence ( A) and exacerbated rejection, as indicated by reduced survival time and severe edema ( C, D). However, when senescent MCFs pretreated with ABT-263 were transferred, the pro-rejection effects were significantly reversed ( C, D). These findings demonstrate that transplantation stress–induced senescence aggravates corneal allograft rejection.
Targeted Clearance of Senescent Cells Significantly Mitigates Allograft Rejection
The potential of selective senescent cell clearance to mitigate corneal transplantation rejection was subsequently investigated. The senolytic drug ABT-263 was administered intraperitoneally following a specified protocol ( A). As shown in , corneal grafts treated with ABT-263 exhibited reduced cellular senescence, as indicated by fewer SA-β-Gal–positive cells in the endothelium and stroma, alongside decreased expression of senescence-associated markers (p16 and p21). These findings confirmed the efficacy of ABT-263 in eliminating senescent cells in corneal allografts. Building on these results, the anti-rejection effects of ABT-263 were evaluated. Corneal grafts in vehicle-treated mice displayed more severe immune rejection than those in ABT-263–treated recipients, characterized by pronounced corneal opacity and edema ( B) and shortened graft survival time ( C). Quantitative reverse transcription polymerase chain reaction (qRT-PCR) and FC revealed increased levels of pro-inflammatory cytokines, including IL-1β, IL-17A, TNF-α, and IFN-γ, in the rejected grafts ( D– F), contributing to aggravated rejection. In contrast, ABT-263–treated recipients exhibited reduced immune rejection, as evidenced by transparent corneas without noticeable edema ( B), prolonged graft survival time ( C), and significantly decreased levels of pro-inflammatory cytokines in the grafts ( D– F). Additionally, treatment with ABT-263 lowered the production of pro-inflammatory cytokines in intraocular tissues . These findings collectively demonstrate that senolytic therapy represents a promising strategy for mitigating corneal graft rejection.
Corneal Allografts Following Senolytic ABT-263 Treatment Show Significant Pathological Alterations on Postoperative Day 10 To further assess the impact of senolytic treatment on corneal allograft rejection, corneal samples under different conditions were subjected to RNA sequencing. Principal component analysis (PCA) revealed that the gene expression profiles of corneal grafts in the allogeneic group differed significantly from those in normal corneas ( A), indicating clear group partitioning. Analysis identified 7727 DEGs in corneal allografts on postoperative day 10 in the allogeneic versus normal group, comprising 5469 upregulated genes and 2258 downregulated genes ( B). Biological process (BP) analysis showed that the upregulated genes were enriched in inflammation- and immunity-related pathways, such as inflammatory response, immune response, positive regulation of angiogenesis, neutrophil chemotaxis, and chemokine-mediated signaling ( C). Gene set enrichment analysis (GSEA) further confirmed the enrichment of immune/inflammatory pathways, including antigen processing and presentation, leukocyte migration, and vasculogenesis ( D). These results suggest that heightened immune and inflammatory responses in corneal allografts at early stages post-transplantation likely contribute to subsequent rejection. Following ABT-263 treatment, PCA indicated distinct gene expression profiles in corneal grafts compared to the untreated allogeneic group ( A). In the ABT-263–treated group, 338 upregulated and 292 downregulated genes were identified ( B). Functional analysis revealed significant enrichment of inflammation- and immunity-related BPs in the downregulated genes, including inflammatory response, neutrophil chemotaxis, lymphocyte chemotaxis, leukocyte chemotaxis, and monocyte chemotaxis ( C). GSEA identified five significantly enriched pathways in the downregulated genes, such as inflammatory response, chemotaxis, leukocyte chemotaxis, and angiogenesis ( D). Moreover, decreased expression of several inflammation-associated factors was observed in corneal tissues following ABT-263 treatment ( E). Notably, in addition to cellular response to IFN-β and activation of innate immune response, the upregulated genes in ABT-263–treated grafts showed enrichment in negative regulation of innate immune responses, suggesting reduced inflammatory activity ( F). These findings indicate that ABT-263 treatment facilitates inflammation resolution at an early stage post-transplantation, thereby attenuating corneal allograft rejection. Pharmacological Inhibition of Transplantation Stress–Induced Senescence Suppresses Early Ocular Alloimmune Responses Based on RNA sequencing findings, it was hypothesized that senolytic therapy could attenuate ocular alloimmune responses during early post-transplantation stages. As shown in , ABT-263 effectively cleared senescent cells. FC analysis revealed a lower proportion of MHCII+ CD11c+ dendritic cells (DCs) in ABT-263–treated corneal allografts compared to the untreated group ( A, B). Quantitative RT-PCR analysis confirmed reduced transcriptional levels of DC activation–related genes, including CD80, CCR7, IL-6, IL-12p40, and S100A8, in the grafts after ABT-263 treatment ( C).
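As an aside before the cytokine findings continue, the DEG tallies and PCA partitioning reported above can be reproduced with a generic workflow of roughly the following shape. The paper does not name its differential-expression tool, so the results table (gene, log2FC, padj), file names, and significance cutoffs below are assumptions.

```r
# Tally up-/downregulated DEGs from a hypothetical results table and run a
# PCA on log-scaled expression to visualize group partitioning
res <- read.csv("deg_allogeneic_vs_normal.csv")       # gene, log2FC, padj
sig <- subset(res, padj < 0.05 & abs(log2FC) >= 1)    # assumed cutoffs
c(up = sum(sig$log2FC > 0), down = sum(sig$log2FC < 0))
# reported above: 5469 up and 2258 down (7727 DEGs in total)

expr <- as.matrix(read.csv("log_expression.csv", row.names = 1))  # genes x samples
pca  <- prcomp(t(expr))                               # samples in rows
plot(pca$x[, 1], pca$x[, 2], pch = 19,
     xlab = "PC1", ylab = "PC2")                      # group separation
```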
Additionally, pro-inflammatory cytokines in the aqueous humor, such as granulocyte colony-stimulating factor (G-CSF), IFN-γ, IL-1β, IL-6, IFN-γ-inducible protein 10 kDa (IP-10), keratinocyte-derived chemokine (KC), monocyte chemoattractant protein-1 (MCP-1), and RANTES, were significantly reduced in ABT-263–treated recipients ( D). Furthermore, the expression of DC activation–related genes in inflamed corneal tissues from ABT-263–treated mice was markedly reduced compared to untreated allogeneic controls ( E). Notably, adoptive transfer of senescent donor MCFs into the anterior chamber of recipient mice elevated the expression of DC activation–associated genes, including CD80, CCR7, and S100A8 ( B). However, transferring ABT-263–pretreated senescent donor MCFs significantly reduced the expression of these genes compared to untreated senescent MCFs ( B). These findings demonstrate that targeting transplantation stress–induced senescence effectively downregulates ocular alloimmune responses during the early post-transplantation period. The Anti-Rejection Effect of Senolytic ABT-263 Depends on Elevated ACE2 Although ABT-263 treatment significantly ameliorated allograft rejection, the underlying mechanism remained unclear. Gene Ontology (GO) analysis revealed substantial enrichment of cellular components among the upregulated DEGs in ABT-263–treated corneal allografts, including the symbiont-containing vacuole membrane, omegasome, extracellular region, plasma membrane, and basement membrane ( A). These findings likely reflect the notable remodeling of corneal allografts following senolytic treatment. Among these top five terms, the heatmap primarily illustrated the DEGs associated with the extracellular region and plasma membrane, notably demonstrating higher ACE2 expression in ABT-263–treated corneal allografts compared to untreated counterparts ( B). Additionally, protein–protein interaction (PPI) analysis indicated that ACE2 was closely linked to several DEGs with immunomodulatory properties, including immunity-related GTPase family M member 1 (IRGM-1) ( C). , We also observed higher protein levels of ACE2 in senolytic-treated corneal allografts than in untreated counterparts ( D). Based on these findings and the known role of ACE2 in mitigating inflammation, ACE2 was identified as a critical molecule contributing to the anti-rejection effect of ABT-263. To further investigate the role of ACE2 in this context, we employed a pharmacological approach. In recipient mice treated with ABT-263, topical application of the ACE2 inhibitor MLN-4760 markedly reversed the anti-rejection effect of ABT-263, manifesting as increased corneal opacity, edema, and reduced graft survival time ( E, F). Quantitative RT-PCR analysis revealed elevated levels of pro-inflammatory cytokines, such as IFN-γ, IL-17A, and IL-1β, in ABT-263–treated corneal allografts subjected to ACE2 inhibition compared to those without MLN-4760 ( G). These cytokines are known contributors to severe corneal transplantation rejection. , Collectively, these results demonstrate that the anti-rejection effect of ABT-263 is functionally dependent, at least partially, on increased ACE2 expression.
Previous studies have demonstrated that aging and cold storage contribute to poor allograft outcomes via cellular senescence. – Our findings revealed that transplantation stress induces cellular senescence even in age-matched corneal allografts, and the pharmacological and genetic elimination of senescent cells effectively prevents immune rejection.
Through transcriptomic analysis and loss-of-function experiments, we identified a close association between the anti-rejection effect of senolytic ABT-263 and elevated ACE2 expression. Thus, these findings suggest that transplantation stress–induced senescence is a key pathological driver of corneal allograft rejection, highlighting senolytic therapy as a novel approach to mitigate transplant rejection. Extensive evidence has shown that older organs, burdened with senescent cells, exhibit reduced functionality and heightened immunogenicity, exacerbating adverse outcomes post-transplantation. , Senescent CD4+ T cells with enhanced glutaminolysis exhibit greater activation potential, increased pro-inflammatory cytokine production, and proliferation, thereby driving rejection of aged organs. Similarly, overactive senescent dendritic cells promote Th1 and Th17 responses, accelerating the rejection of aged cardiac and skin transplants. , Cold storage has also been identified as a significant catalyst for donor organ and tissue aging, ultimately contributing to transplantation failure. In this study, we identified stress-induced senescence-like phenotypes in age-matched corneal grafts post-transplantation, indicating that transplantation stress accelerates cellular senescence. This phenomenon aligns with findings in other organ transplants , , and various pathological contexts. , Both genetic and pharmacological approaches underscored the pathogenic role of stress-induced senescence during corneal transplantation. Accordingly, transplantation stress–induced senescence should be recognized as a critical factor in transplantation failure, alongside donor/recipient aging and organ preservation. The activation and propagation of innate immune response cascades, including the NLRP3 inflammasome, cGAS/STING signaling, and Toll-like receptor pathways, are integral to adaptive alloimmune responses and subsequent allograft rejection. – However, the mechanisms sustaining these responses remain unclear. According to the "danger hypothesis," damage-associated molecular patterns (DAMPs) released from donor cells establish a direct link between innate immunity and allograft failure. , Increased release of DAMPs and inflammatory cytokines under various preservation conditions has been linked to allograft rejection and dysfunction, including during machine preservation and cold ischemia storage. , Mitochondrial DNA from senescent cells of aged donors has been implicated in worsened cardiac transplantation outcomes. In contrast, early senolytic treatment of corneal allografts reduced innate immune responses and pro-inflammatory cytokine levels. Given their pro-inflammatory nature and secretion of SASP factors, stress-induced senescent cells amplify allograft rejection; however, the precise mechanisms warrant further investigation. ACE2 was initially identified as a key regulator of the renin–angiotensin system through its ability to hydrolyze angiotensin II (Ang II), influencing angiogenesis, inflammation, and fibrosis. Subsequent studies expanded its roles to multiple organs, including regulating intestinal amino acid homeostasis and serving as the primary receptor for severe acute respiratory syndrome coronavirus 1 (SARS-CoV-1) and SARS-CoV-2. ACE2 deficiency has been linked to pathological conditions such as cardiac dysfunction, diabetic kidney injury, and acute lung injury.
In this study, we observed increased ACE2 expression in ABT-263–treated corneal allografts compared to untreated counterparts, and ACE2 inhibition reversed the anti-rejection effect of ABT-263. Previous studies have also linked ACE2 deficiency to spontaneous corneal clouding and inflammation, as well as accelerated aging, potentially explaining our findings. Senolytic drugs such as ABT-263 eliminate senescent cells by inducing apoptosis, overcoming their characteristic anti-apoptotic defenses. , Donor apoptotic cells have been shown to promote antigen-specific immune tolerance in allografts. , Thus, immune tolerance induction may contribute to the anti-rejection effects of senolytic treatment. Nevertheless, further studies are required to elucidate the relationships among cellular senescence, allograft rejection, and allograft tolerance. Although this study has demonstrated the efficacy of targeting stress-induced senescent cells to mitigate corneal allograft rejection, several limitations remain. First, although transplantation stress–induced senescence was observed in the age-matched murine corneal transplantation model, its underlying mechanisms are yet to be elucidated. Second, although RNA sequencing established a link between reduced inflammatory immune responses and senolytic therapy, further investigations are necessary to clarify the underlying mechanisms. Third, additional mechanisms, such as the involvement of IRGM-1, require exploration to provide a more comprehensive explanation of the anti-rejection effects of ABT-263. Finally, although the anti-rejection effects of ABT-263 were achieved via intraperitoneal injection, future studies should aim to develop formulations, such as eye drops, that enhance the solubility and corneal permeability of the drug. Despite these limitations, this study provides proof-of-concept evidence that stress-induced senescence plays a critical pathogenic role in the age-matched murine corneal transplantation model. Targeting stress-induced senescence significantly alleviates allograft rejection through ACE2 upregulation. Integrating our findings with prior studies, we propose that cellular senescence, induced by donor age, organ preservation, and transplantation stress, constitutes a major risk factor for transplantation failure and allograft rejection . Collectively, this study underscores stress-induced senescence as a key pathogenic mechanism in corneal allograft rejection and highlights senolytic therapy as a promising strategy for mitigating transplant rejection in corneal and potentially other organ transplants. Supplement 1 |
BARRIERS TO THE USE OF BASIC HEALTH SERVICES AMONG WOMEN IN RURAL SOUTHERN EGYPT (UPPER EGYPT) | 1ea067d0-cd67-4d67-9d29-72fe0eaa00d7 | 4345669 | Preventive Medicine[mh] | Access to health services has to be guaranteed for all people throughout the world; however, it is not yet fully achieved in many developing countries, particularly in rural areas. In addition, it is often difficult for women in developing countries, such as Pakistan, where gender-biased traditional values still prevail, to use health services unless the provided services are culturally acceptable in practice. , ) Egypt had reached the stage of developing an extensive basic health service delivery network: over 95 % of the population lived within 5 kilometers of a health facility. , ) Nevertheless, women’s use of the services was still at a low level, especially in the underprivileged southern part of the country (Upper Egypt, as the region located upstream of the Nile River). According to the 2008 Egypt Demographic and Health Survey (DHS), maternal health service coverage in rural Upper Egypt was at the lowest level in the whole country: e.g. , regular antenatal care (ANC) attendance was 49 %; delivery assisted by skilled professionals was 59 %, while the national averages were 66 % and 79 % respectively. ) Even though the health service provision, or the geographical access, is improved, local women may not use the services unless the provided services meet their demands in quality and cultural manners. In other words, demand-side barriers are as important as supply-side factors in deterring people from obtaining appropriate health services among vulnerable groups of population including rural women. ) Our previous studies in northern part of Egypt suggested that increased access to maternal health services was positively related to the empowered status of women in the households (higher age at first marriage, higher education level of husbands, less experience of physical assault by husbands) and availability of family support (living with an extended family). ) However, it was not related to economic independency or the decision-making power of women, contrary to the reports from previous studies in Nepal and India. , ) Since we studied access to the maternal health services, the results might be confounded by the facts that pregnancy and childbirths were not merely women’s own health issues but important family events in Egypt. Moreover, the study suggested that women’s behaviors and attitudes toward ANC (preventive aspect) were different from those toward delivery care (curative aspect). ) The objective of this study is to identify possible demand-side barriers to the use of basic health services. The study investigated barriers to the use of preventive services (ANC) and curative services (medical treatment of common illness) among local women in rural Upper Egypt, where cultural constraints and economic conditions were much tougher than those in the northern part of Egypt, our previous study site.
This cross-sectional study was carried out in 3 purposively selected villages adjacent to Assiut City in Upper Egypt in November 2009. The 3 villages were similar with respect to geographical and socio-demographic conditions. Although they were located in an agricultural area, the main sources of income for most households were from paid jobs, instead of traditional farming. The total population of the villages was close to 50,000. A stratified sampling approach was adopted to recruit a total of 205 women for our study. Since accurate household maps were not available in the villages, we utilized the "health service sections" for the sampling, which had originally been delineated for health service activities such as immunization. We randomly selected approximately 7 currently-married women with at least one child under the age of 5 years from each of the 30 sections across the 3 villages. Face-to-face interviews with all the participants were conducted with a structured questionnaire, which consisted of 3 major parts: (1) women's demographic and social backgrounds, including age, education, age at first marriage, parity, family structure and cash income, (2) the use of preventive and curative basic health services, and (3) perceived barriers to the use of the health services. In this study, ANC was chosen as the proxy indicator of preventive health services for women, since it is commonly available at health facilities in rural Egypt and well known by local women. Whether a participant had received ANC by trained health providers at least 4 times during her last pregnancy, which is regarded as regular ANC by the World Health Organization (WHO), ) was transformed into a dichotomous value for our statistical analyses. To examine the use of curative health services, we designed questions asking about medical treatment of common illness as follows: when you felt very sick and suffered from (1) high fever, (2) long-lasting cough, and (3) persistent diarrhea, did you seek help from any medical professionals? Women were asked to give the answers according to their most recent experiences. If a respondent had no experience of such symptoms, she was asked to answer regarding the actions which would be taken in case of need. We defined respondents as having good access to curative health services if they chose two or more "yes" answers for these 3 symptoms. Those who chose one or fewer were defined as having poor access. Although it has been acknowledged that there is no universally accepted model of access to health services, monitoring the use of health services is one of the most common ways to determine its level. , ) We adopted a model of access to personal health care services proposed by the Institute of Medicine (IOM) Access Monitoring Project, ) which categorized barriers to the use of health services into 3 primary dimensions: structural, financial, and personal/cultural barriers. In accordance with these 3 dimensions and the questionnaire of the Egypt DHS, we selected 6 potential barriers for this study: (1) distance to preferred health facilities, (2) transportation to health facilities, (3) payment for health services, (4) time allocation, (5) family permission, and (6) concern about lack of female physicians. We asked women whether each of the 6 potential barriers was a big problem for them when they were sick and wanted to see a doctor.
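Before turning to the regression models, the two outcome definitions above can be made explicit. The sketch below codes them in R (the study itself used Stata); all variable names are hypothetical survey fields.

```r
# Dichotomize the two outcomes exactly as defined above
d <- read.csv("survey.csv")  # hypothetical questionnaire data

# Regular ANC: >= 4 antenatal visits by trained providers during the last
# pregnancy (the WHO definition adopted in the text)
d$regular_anc <- as.integer(d$anc_visits >= 4)

# Good access to curative services: sought professional help for >= 2 of
# the 3 symptoms (high fever, long-lasting cough, persistent diarrhea)
d$n_sought    <- d$sought_fever + d$sought_cough + d$sought_diarrhea
d$good_access <- as.integer(d$n_sought >= 2)
```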
Logistic regression models were used to compute odds ratios (ORs) and 95 % confidence intervals (CIs) to assess the associations between the outcomes (regular ANC and the use of medical treatment services) and each of the predictor variables (the 6 potential perceived barriers). The final models were adjusted for potential confounding variables such as age, education, age at first marriage, parity, family structure and cash income. P <0.05 was considered statistically significant via the Wald test. All statistical analyses were performed with Stata statistical software (Release 12). Ethical clearances for the study were obtained from the Ethics Review Committee of Nagoya University School of Medicine in Nagoya, Japan, and the Faculty of Nursing, Assiut University in Assiut, Egypt. Written informed consent was obtained from all participants of the interviews after adequate explanations of the objectives and procedures of the study.
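A minimal sketch of the adjusted models described above, written here in R rather than the authors' Stata; the barrier and covariate names mirror the questionnaire items and are hypothetical.

```r
# Adjusted OR (95% CI) for one perceived barrier against one outcome;
# the same model form is refit for each barrier-outcome pair
m <- glm(regular_anc ~ barrier_distance + age + education +
           age_first_marriage + parity + family_structure + cash_income,
         family = binomial, data = d)
exp(cbind(OR = coef(m), confint.default(m)))  # Wald 95% CIs, as in Stata
```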
The 205 respondents ranged in age from 18 to 45 years, and the median age was 29 years. The regular ANC attendance rate was 41 %, and about 29 % of women had good access to medical treatment services across the 3 study villages. Distance and transportation to health facilities were considered barriers to the use of health services by about 30 % of the respondents . More than 42 % of them complained about the high costs of health services, and approximately a quarter of the participants said that gaining family permission, allocating time to go, and concern about the lack of female physicians were big problems when seeking basic health services. After controlling for the potential confounding covariates, women who mentioned distance to preferred health facilities or transportation as potential barriers were less likely to use both regular ANC and medical treatment services. Perceived financial barriers showed a significant association with the use of medical treatment services ( P <0.001) but not with regular ANC. With regard to personal/cultural barriers, time allocation, family permission, and concern about the lack of female physicians were not statistically associated with either regular ANC or the use of medical treatment services .
Our findings showed that only 41 % of women received regular ANC and only 29 % had good access to medical treatment services, even though public health facilities were located within accessible distance. Twenty-four to forty-two percent of women recognized that each of the structural, financial, and personal/cultural barriers was preventing them from seeking health services. Furthermore, the 3 primary dimensions of barriers were shown to have different patterns of association with preventive (regular ANC) and curative (treatment of common illness) services in the current study. It was reported that structural barriers, namely distance and transportation to health facilities, commonly impeded the use of maternal health services in many low- and middle-income countries. , ) Our findings also indicated that perceived distance to health facilities was associated with lower use of regular ANC and medical treatment services, suggesting an insufficiency of geographical access to health services. Considering that geographical access to public health units/centers was guaranteed in our target villages, the results might be explained by the fact that women did not often see a doctor at a nearby public health center, but instead visited preferred health service providers away from their residential areas. The quality and contents of the health services provided by local public health units/centers might be poorly accepted by those health service users. ) Our findings showed that the financial barrier was strongly associated with the use of medical treatment services, but not with regular ANC. Private health service providers played a major role in the Egyptian health service system, as shown by the fact that private providers cover more than half of the use of health services. , ) Although the services of public health centers were provided at a nominal fee, even the poor were increasingly seeking outpatient care in private facilities, whose services were perceived as being of better quality than those of public facilities. ) This could explain the inverse association we found between the financial barrier and the use of medical treatment services. In contrast, regular ANC did not show a statistically significant association with the financial barrier. Similar findings were reported from a previous study performed in South Africa, where a removal of user fees increased the use of medical treatment of common illness but not ANC. Congestion and long waiting times at clinics might have discouraged some women from attending for ANC. ) Moreover, unlike the use of medical treatment services, women in Egypt might have preferred to go to public health facilities for ANC, or private ANC services were as affordable as public services. Personal/cultural barriers to women's use of health services have been investigated in various studies, which pointed out that women could not always have access to appropriate health services because of social and cultural constraints. , ) In this study, we selected time allocation, family permission, and concern about lack of female physicians as indicators of potential personal/cultural barriers. Our findings showed that fewer women recognized personal/cultural barriers than recognized structural and financial barriers. Time allocation was not a major barrier statistically associated with the use of health facilities, although women in developing countries often have difficulty leaving their daily work and sparing time to visit health facilities even if they feel sick.
) Availability of family support might have helped women in our study to allocate time to visit health facilities. Extended families are common in rural Egypt, and even women who live in nuclear families usually have relatives living nearby. Our preliminary findings from concurrently conducted focus group discussions also showed that family members were willing to support women by assisting with their daily household chores during pregnancy or illness. It was reported that women in India and Pakistan had to ask permission from their husbands or the head of the household to leave their home, including making a visit to health facilities. , ) In Pakistan, the lack of female health service providers has hindered women's use of appropriate and timely medical care. , ) However, our findings indicated that gaining permission and concern about the lack of female physicians might not be significant barriers to women's use of health services in the area. This may reflect changes in women's status in Egyptian society; further anthropological studies may be designed to clarify such social changes and the context behind them. This study was carried out on a small scale in 3 purposively selected villages; thus, the results might not be generalizable to a broader region. Barriers that prevent women from seeking health services are difficult to measure in a study. Although researchers commonly choose self-perceived indicators to estimate the situation, women's questionnaire responses may not always be consistent with their actual behaviors and situations. Objective indicators, such as measured distance and travel time to health facilities, need to be used in further studies. Our results revealed that structural and financial barriers were standing in the way of improving women's access to basic health services in rural Upper Egypt, while associations between personal/cultural barriers and use of the services were not verified. The findings from our research might offer an insight into the problems of the health service delivery system and give health policymakers some clues about how to enable the whole population to benefit fully from the health resources of the nation.
The authors wish to thank faculty members of the Assiut University Faculty of Nursing, Dr. Leo Kawaguchi, and Ms. Ayumi Ohashi for assistance in data collection and valuable advice during the process of the research. This study was supported in part by a Grant-in-Aid for Scientific Research (B, 19406024) to A.A. from the Japan Society for the Promotion of Science and by the International Cooperation Research Grant (17-3) to A.A. from the Ministry of Health, Labour and Welfare, Government of Japan.
|
Whole‐transcriptome sequencing in advanced gastric or gastroesophageal cancer: A deep dive into its clinical potential | 66a9d10e-6f07-4459-b0e6-669dc2d21253 | 11093190 | Anatomy[mh] | INTRODUCTION Gastric and gastroesophageal junctional neoplasms (GC/GEJC) rank as the fifth most prevalent cancer and the fourth most common cause of oncological mortality worldwide. Despite significant advancements in treatment strategies for advanced unresectable or metastatic GC/GEJC, such as immune checkpoint inhibitors (ICIs) and molecular-targeted agents such as anti-human epidermal growth factor receptor 2 (HER2), the prognosis remains poor, with median life expectancies of approximately 1 year. , , , , This implies that although combining these targeted therapies with chemotherapy can enhance survival rates, the scarcity of validated molecular targets leaves certain populations without suitable treatment options. To address this critical issue, substantial progress has been made in the development of novel molecular-targeted therapies. Multiomics analyses, including immunohistochemistry (IHC) and comprehensive genome profiling through whole-exome sequencing (WES) or whole-transcriptome sequencing (WTS), have been employed to reveal the intricate molecular landscape of solid tumors, including GC/GEJC. , In advanced GC/GEJC, HER2 is a validated treatment target because trastuzumab and trastuzumab deruxtecan, an antibody-drug conjugate consisting of an anti-HER2 antibody and a cytotoxic topoisomerase I inhibitor, have shown impressive therapeutic efficacy. , Additionally, several other receptor tyrosine kinases (RTKs), including fibroblast growth factor receptor 2 (FGFR2), epidermal growth factor receptor (EGFR), and mesenchymal–epithelial transition (MET) factor, have been examined for their potential as targetable molecules in several studies. , , Moreover, claudin-18 isoform 2 (CLDN18.2), an important component of the tight-junction proteins that regulate tissue permeability, paracellular transport, and signal transduction, is a promising therapeutic target molecule based on the results of two global, randomized, phase III trials: SPOTLIGHT and GLOW. , However, the landscape of these molecules in advanced GC/GEJCs remains unclear. Traditionally, the assessment of these molecular targets, including RTKs, CLDN18, and programmed death-ligand 1 (PD-L1), relies mainly on IHC analysis. Nonetheless, in GC/GEJC, which is considered highly heterogeneous, some of these molecules are difficult to evaluate using IHC or other evaluation methods, and thresholds have not yet been established. In this context, WTS does not fully resolve the complexities of tumor heterogeneity, but by averaging expression across a specimen it can provide quantitative data even for samples with strong heterogeneity. More specifically, some tumors that are evaluated as negative by IHC, and are consequently disqualified from targeted therapy, may demonstrate elevated expression by WTS, which would render them potentially eligible for such treatments. However, few studies have comprehensively evaluated the correlation between IHC and WTS and the contrast in their prognostic impact. This study aimed to elucidate the correlation between IHC and WTS and highlight the utility of WTS in GC/GEJC. Our overarching objective included the identification of potential therapeutic targets, particularly for patients with GC/GEJC whose molecular targets are undetectable by IHC alone.
To achieve this, we performed a comprehensive biomarker analysis in patients with advanced GC/GEJC treated at our hospital who were concurrently enrolled in the nationwide genome-screening study MONSTAR-SCREEN-2. This study was approved by the Institutional Review Board of the National Cancer Center and conducted in accordance with the ethical guidelines of the Declaration of Helsinki.
PATIENTS AND METHODS 2.1 Patients This study included patients with advanced GC/GEJC treated at the National Cancer Center Hospital East between May 2021 and April 2023, who were registered in the immunological profiling study (UMIN000019129) and participated in MONSTAR-SCREEN-2, a multicenter study on biomarker development utilizing artificial intelligence multiomics for patients with advanced solid malignant tumors (UMIN000043899). MONSTAR-SCREEN-2 was launched in May 2021, and multiomics analyses of tumor tissues and plasma were conducted for all participating patients with advanced solid tumors. The eligibility criteria in the present study were as follows: (1) age 18 years or older; (2) histologically or cytologically confirmed advanced or recurrent GC/GEJC; (3) Eastern Cooperative Oncology Group performance status of 0 or 1; (4) no curative treatment available; (5) adequate organ function; (6) received systemic chemotherapy from May 2021 to April 2023 in our hospital; and (7) underwent key biomarker analysis using IHC. All patients provided written informed consent for biomarker analysis. The study protocol was approved by the Institutional Review Board of the National Cancer Center, Japan. 2.2 Immunohistochemistry In the present study, molecular characteristics, including RTKs such as HER2, FGFR2, and EGFR, and MET, CLDN18, mismatch repair (MMR) status, PD-L1, and Epstein–Barr virus (EBV), were analyzed using formalin-fixed paraffin-embedded tissue specimens from primary tumors. The antibodies used for IHC were HER2 (4B5; Ventana), CLDN18 (43-14A; Ventana), FGFR2 (K-sam; IBL), EGFR (3C6; Ventana), MET (SP44; Ventana), PD-L1 (SP142 or SP263; Ventana), and MMR (mutL homolog 1, ES05; mutS homolog 2, FE11; postmeiotic segregation increased 2, EP51; mutS homolog 6, EP49; Dako), as reported previously. , HER2 positivity was defined as IHC 3+, or as IHC 2+ with fluorescence in situ hybridization positivity. CLDN18 positivity was evaluated on three levels: IHC 2+/3+ ≥75%, ≥50%, or ≥25% membranous staining of tumor cells. For the concordance with WTS of CLDN18, we adopted IHC 2+/3+ ≥50% membranous staining of tumor cells. FGFR2 positivity was evaluated on two levels: IHC 2+/3+ ≥10% or ≥1% membranous staining of tumor cells. For the concordance with WTS of FGFR2 and molecular profiling, we adopted IHC 2+/3+ ≥1% membranous staining of tumor cells. EGFR positivity was evaluated on two levels: IHC 2+/3+ ≥50% or ≥1% membranous staining of tumor cells. For concordance with WTS of EGFR and molecular profiling, we adopted IHC 2+/3+ ≥50% membranous staining of tumor cells. MET was evaluated on two levels: IHC 2+/3+ ≥1% membranous staining of tumor cells or H-score ≥20; the H-score is calculated by multiplying the percentage of cells at each staining intensity (0, 1+, 2+, 3+) by that intensity and summing the products. , For the concordance with WTS of MET and molecular profiling, we adopted H-score ≥20. PD-L1 expression was measured using the combined positive score (CPS), which is defined as the ratio of the number of PD-L1-positive cells (tumor cells, lymphocytes, and macrophages) to the total number of tumor cells multiplied by 100. PD-L1 positivity was evaluated on two levels: CPS ≥10 or ≥5. , For concordance with the WTS of PD-L1, we adopted a CPS of 10. Tumors lacking nuclear staining for one or more MMR proteins (MLH1, MSH2, PMS2, or MSH6) were considered MMR deficient (dMMR). Chromogenic in situ hybridization for EBV-encoded RNA (EBER) using fluorescein-labeled oligonucleotide probes was performed to assess the EBV status.
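As worked arithmetic for the two summary scores defined above (the description of the pathologists' assessment continues below), the H-score and CPS can be computed as follows in R; the cell percentages and counts are illustrative only.

```r
# H-score (0-300): sum over intensity categories of intensity x percentage
# of tumor cells at that intensity; the MET cutoff above is H-score >= 20
pct_1plus <- 30; pct_2plus <- 10; pct_3plus <- 5
h_score <- 1 * pct_1plus + 2 * pct_2plus + 3 * pct_3plus  # 65 -> MET positive

# CPS: PD-L1-positive cells (tumor cells, lymphocytes, macrophages) per 100
# tumor cells; the positivity cutoffs above are CPS >= 10 or >= 5
pdl1_positive_cells <- 48; tumor_cells <- 400
cps <- 100 * pdl1_positive_cells / tumor_cells            # 12 -> CPS >= 10
```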
The IHC assessment was performed by two trained pathologists (N.S. and T.K.) who were blinded to the diagnoses and/or other identifying information. 2.3 Whole-transcriptome sequencing and whole-exome sequencing analysis This study used the CARIS MI Profile assay for tumor tissue samples, which was utilized in MONSTAR-SCREEN-2. This technology includes WTS analysis using the Illumina NovaSeq System, which enables the comprehensive analysis of tumor-specific mRNA expression, splice variant transcripts, and gene fusions. This assay allows the quantification of mRNA expression levels in tissues by sequencing all coding exons. Transcript levels were normalized to transcripts per million (TPM) values for each gene. To determine the concordance between the IHC status and WTS expression, we dichotomized the TPM values at the mean or the third-quartile expression level for each molecule. The CARIS MI Profile assay also included WES of genomic DNA using an Illumina NovaSeq 6000 sequencer. For copy number amplifications, if all exons within the gene of interest have an average of ≥3 copies and the average copy number of the entire gene is ≥6 copies, the gene result is reported as amplified. Tumor specimens collected at different time points were used for IHC and WTS analyses in some patients. More specifically, initial biopsy samples obtained before chemotherapy were often used for IHC, whereas additional biopsy samples obtained during chemotherapy were assessed by WTS in some patients. 2.4 Outcomes and statistical analysis We conducted a comparative analysis of clinical characteristics using the chi-square test for categorical variables and the Mann–Whitney U test for continuous variables. We evaluated the correlation between the IHC and WTS results for each biomarker. Moreover, patients were classified into two groups based on RTK positivity by IHC and molecular profiles by IHC and WTS. Treatment response was evaluated according to the Response Evaluation Criteria in Solid Tumors (RECIST) version 1.1. Overall response rate (ORR) was defined as the proportion of patients whose best overall response was a complete response (CR) or partial response (PR). Progression-free survival (PFS) was calculated for each treatment after enrolment and was defined as the time from the date of treatment initiation to either the date of disease progression or death from any cause. The median PFS of each group was estimated using the Kaplan–Meier method and compared using the log-rank test. Cox proportional hazards models were used for both univariate and multivariate analyses. Statistical significance was set at p < 0.05. Statistical analyses were performed using R version 4.2.3.
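Returning to the sequencing rules in section 2.3, the TPM dichotomization and the WES copy-number call can be sketched as follows; `tpm` (a gene-by-patient matrix) and the exon-level copy numbers are hypothetical inputs.

```r
# Dichotomize WTS expression at the mean or third quartile, per molecule
tpm_erbb2 <- tpm["ERBB2", ]
high_mean <- tpm_erbb2 >= mean(tpm_erbb2)
high_q3   <- tpm_erbb2 >= quantile(tpm_erbb2, 0.75)

# Amplification call: every exon averages >= 3 copies AND the gene-level
# average is >= 6 copies
is_amplified <- function(exon_cn) all(exon_cn >= 3) && mean(exon_cn) >= 6
is_amplified(c(6.2, 7.1, 6.8, 5.9))  # TRUE under this rule
```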
RESULTS 3.1 Baseline characteristics and the prevalence of molecular targets using IHC In this study, 140 patients with advanced GC/GEJC were assessed for the prevalence of molecular markers by IHC (Figure ). The baseline characteristics of all participating patients are shown in Table , and the prevalence of molecular markers on IHC according to their respective thresholds is shown in Figure . The study population was predominantly male (71.4%), with a primary diagnosis of GC in 86.4% of patients; most (70%) presented with initially unresectable/metastatic disease, and the diffuse type predominated (59.3%). MET IHC was not performed for 33 patients because this assay was added only in April 2022. Our findings revealed 16.4% HER2 positivity, with CLDN18 detected in 39.3%, 58.6%, and 62.9% of cases at thresholds of 75%, 50%, and 25% for the 2+ and 3+ regions, respectively. FGFR2 was prevalent in 12.9% and 19.3% at thresholds of ≥10% and ≥1% for the 2+ or 3+ region, respectively. MET was detected in 33.6% and 52.3% at thresholds of ≥1% and H-score ≥20 for the 2+ or 3+ region, respectively. EGFR was observed in 12.1% and 30.0% at thresholds of ≥50% and ≥1% for the 2+ or 3+ region, respectively. dMMR was noted in only 3.6% of the cases. Additionally, PD-L1-positive status was prevalent in 15.8% and 41.0% at thresholds of CPS ≥10 and ≥5, respectively. 3.2 Relationships between IHC and WTS Whole-transcriptome sequencing data were not available for 30 patients due to inadequate sample quality or volume. Of the 110 patients, 35 presented with a discrepancy exceeding 2 months between the collection dates of tissue specimens subjected to WTS and IHC analysis. We delineated the correlations between IHC and WTS expression of biomarkers, including HER2, CLDN18, FGFR2, EGFR, MET, and PD-L1 (Figure ). A significant correlation between the IHC assessments and TPM values calculated from the WTS was observed for all biomarkers. Specifically, the median TPM values for IHC-positive versus IHC-negative cases were as follows: ERBB2, 121.9 versus 24.0 ( p < 0.001); CLDN18, 22.7 versus 2.81 ( p < 0.001); FGFR2, 201.5 versus 24.7 ( p = 0.002); EGFR, 65.9 versus 13.0 ( p < 0.001); MET, 69.9 versus 47.1 ( p = 0.013); and PD-L1 (CD274), 10.77 versus 6.01 ( p = 0.013). In subsequent analyses, we explored the concordance across IHC statuses, WTS outputs, and amplifications detected via WES or fusions detected via WTS in a patient-specific manner (Figure ). The demarcation of WTS expression was established using either the median or third-quartile values of TPM. Within the HER2 evaluation, of the 11 patients with ERBB2 amplification and elevated WTS expression, two were paradoxically IHC negative. Notably, barring the two exceptions for ERBB2, all patients with specific amplifications exhibited both IHC positivity and high WTS expression. Among the six patients with CLDN18/ARHGAP26 fusion, four exhibited high WTS expression, and the IHC status varied among these patients. Although there was a significant correlation between the IHC and WTS analyses, some patients showed discrepancies between the IHC and WTS expression of target molecular markers. Of the three patients who displayed HER2 positivity on IHC but low ERBB2 expression on WTS, two had WTS performed on a different sample from that used for IHC.
Furthermore, among the six patients demonstrating CLDN18 positivity by IHC (2+ or 3+ region ≥75%) but negativity by WTS, four had samples for IHC and WTS collected at different time points. To understand the impact of histopathological features, we expanded our analysis to examine the concordance between WTS and IHC across histological types (diffuse vs. intestinal); however, no significant association between histological type and concordance was observed (Table ). Furthermore, we analyzed the impact on these discrepancies of specific gene mutations observed in five or more patients, including APC, ARID1A, CDH1, ERBB2, KMT2D, KRAS, PIK3CA, RHOA, SMAD4, and TP53. This analysis likewise did not reveal any significant association between these mutations and concordance (Table ).

3.3 Prognostic impact of WTS expression in patients diagnosed as HER2 positive in IHC

Following our initial analyses, we focused on 19 patients who were HER2 positive via IHC, all of whom were treated with a combination of cytotoxic agents and anti-HER2 therapeutics. To discern the prognostic impact of IHC and WTS, these 19 patients were stratified into ERBB2-high or ERBB2-low groups based on WTS, using the median TPM value of ERBB2 as the threshold, and PFS was evaluated according to WTS expression status. To accurately assess the efficacy of the anti-HER2 treatment, three patients treated with a combination of ICIs were excluded from the analysis. Notably, among the remaining 16 patients, those with ERBB2-high expression showed significantly prolonged PFS compared with those with ERBB2-low expression (median PFS, 9.0 vs. 5.5 months; hazard ratio = 0.27, 95% confidence interval = 0.07–0.98, log-rank p = 0.046) (Figure ). The ERBB2-high cohort had one CR and five PRs, yielding an ORR of 66.7%, which was higher than that of the ERBB2-low cohort with three PRs (ORR, 42.8%; p = 0.34).

3.4 Therapeutic targets identified through molecular profiling with IHC and WTS

To ascertain the mutual expression status of the target molecules and identify potential therapeutic biomarkers, we classified 110 patients into two groups according to RTK expression by IHC: RTK positive (N = 59) and RTK negative (N = 51). A detailed molecular landscape of IHC and WTS, including RTKs, CLDN18, and PD-L1, was charted based on this classification (Figure ). Subsequently, the baseline characteristics of the two groups were compared (Table ). A higher prevalence of the diffuse type and of recurrent disease was observed in the RTK-negative group (diffuse type, 70.6% vs. 49.2%, p = 0.037; recurrent tumor, 33.3% vs. 15.3%, p = 0.045). In contrast, the proportion of patients with liver metastases was higher in the RTK-positive group than in the RTK-negative group (44.1% vs. 15.7%, p < 0.01). Regarding IHC status, the proportion of patients with CLDN18 positivity (2+/3+ ≥50%) was significantly higher in the RTK-negative group (p = 0.008), whereas the proportions of patients positive for other markers, including EBV, dMMR, and PD-L1, were similar between the two groups. Within the RTK-negative subgroup, 27 (52.9%), 37 (72.5%), and 39 (76.5%) patients exhibited CLDN18 positivity at 2+/3+ thresholds of ≥75%, ≥50%, and ≥25%, respectively. Further, 6 (11.8%) and 18 (35.3%) patients demonstrated PD-L1 positivity at CPS thresholds of ≥10 and ≥5, respectively.
Remarkably, 41 (78.4%) of the patients classified as RTK negative displayed positivity for either CLDN18 (2+/3+ ≥50%) or PD-L1 (CPS ≥5). Furthermore, within the cohort of patients negative for targetable biomarkers by IHC, WTS revealed upregulated expression of specific biomarkers, including RTKs, CLDN18, and PD-L1. Consequently, every patient had at least one molecular target identified by the IHC or WTS analyses.
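To make the stratification and comparison logic used in sections 3.2 and 3.3 concrete, here is a minimal, hypothetical R sketch. The data frame expr and its columns are illustrative placeholders; only the cut-off rule (median or third-quartile TPM) reflects the approach described above.

```r
# Hypothetical sketch: TPM-based expression grouping and IHC comparison.
# `expr`, `erbb2_tpm`, and `ihc_status` are illustrative placeholders.

# Candidate cut-offs for "high" WTS expression: median or third-quartile TPM
med_tpm <- median(expr$erbb2_tpm, na.rm = TRUE)
q3_tpm  <- quantile(expr$erbb2_tpm, probs = 0.75, na.rm = TRUE)

# Stratify patients into ERBB2-high vs. ERBB2-low using the median cut-off
expr$erbb2_group <- ifelse(expr$erbb2_tpm >= med_tpm, "high", "low")

# Mann-Whitney U test: TPM values in IHC-positive vs. IHC-negative patients
wilcox.test(erbb2_tpm ~ ihc_status, data = expr)
```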
DISCUSSION

The present study compared IHC and WTS, including their clinical implications, in the context of molecular profiling of advanced GC/GEJC. Our findings revealed a statistically significant correlation between IHC results and WTS expression. Furthermore, among patients with HER2 positivity by IHC, improved PFS was observed in patients classified as ERBB2-high by WTS compared with those classified as ERBB2-low. This observation underscores the potential utility of WTS for ascertaining the positivity of molecules for which IHC evaluation is challenging or lacks well-established criteria. Moreover, approximately 80% of patients with RTK-negative IHC were positive for CLDN18 or PD-L1, highlighting these as among the most prominent therapeutic targets for advanced GC/GEJC. The integration of IHC and WTS analyses revealed that no patient lacked a molecular target. To the best of our knowledge, this is the first report assessing the correlation between IHC and WTS across multiple molecular targets and elucidating the prognostic impact of WTS analysis in patients with unresectable advanced GC/GEJC. A salient aspect of the current investigation is its scrutiny of the correlation between IHC and WTS across multiple molecular targets. This dataset suggests a high degree of concordance between IHC and WTS results, underscoring the utility of WTS-based determination. In light of the substantial upsurge in molecular-targeted therapies, including antibodies, antibody-drug conjugates, bispecific antibodies, and other compounds in recent years, it has become arduous to subject all target molecules to IHC analysis. Concerns have arisen over potential delays in the therapeutic development of molecules lacking standardized IHC protocols or established evaluation criteria. Moreover, the subjective nature of current IHC assessments conducted by pathologists, without numerical quantification, poses challenges in cases of tumor heterogeneity. Indeed, two patients exhibited ERBB2 amplification and remarkably high WTS expression but did not exhibit HER2 positivity by IHC. Furthermore, two patients underwent HER2-targeted therapy following a second IHC analysis on an alternate specimen; this re-testing was prompted by the WTS status, and the tumors were subsequently reaffirmed as HER2 positive by IHC. This discrepancy is partly attributable to the use of distinct specimen sets for the IHC and WTS analyses, and efficacy in WTS-positive, IHC-negative patients can be attributed mainly to intratumoral heterogeneity. Emerging evidence suggests that RNA expression values derived from WTS may be indicative of treatment efficacy and may guide therapeutic choices, and recent studies have reported the potential utility of WTS-derived RNA expression in treatment prediction. Furthermore, in the context of HER2 therapies, trastuzumab deruxtecan, a HER2-targeted antibody-drug conjugate, demonstrated remarkable efficacy in patients with breast cancer characterized by HER2-low expression. A higher response rate to trastuzumab deruxtecan in patients with high ERBB2 expression by WTS but no HER2 IHC expression in a phase 2 study for breast cancer also supports our finding. Nevertheless, there are no extensive studies on treatment effectiveness specifically in cases that are IHC negative but show high WTS expression of specific biomarkers. Further studies are required to evaluate the benefits of HER2-targeted therapy guided by ERBB2 WTS expression.
The molecular characterization of advanced GC/GEJC has recently evolved. The Cancer Genome Atlas (TCGA) introduced a classification scheme encompassing four distinct subtypes: EBV, microsatellite instability-high, chromosomal instability (CIN), and genome stable (GS), the last so classified because of the absence of any specific signature. Within this framework, the CIN subtype is associated with the amplification of RTKs, which serve as promising biomolecular targets. The GS subtype has been associated with features such as a diffuse histological type, mesenchymal gene expression patterns, CLDN18-ARHGAP26 gene fusion events, and stem cell-like properties. In alignment with this conceptual framework, our study segregated patients into two categories, RTK positive and RTK negative, as determined by IHC analysis. This division facilitated the exploration of molecular profiles and the identification of potential therapeutic targets within the RTK-negative subgroup, which bears resemblance to the GS subtype delineated in TCGA. Consistent with a previous report, our findings revealed a notable predominance of diffuse-type tumors within the RTK-negative subgroup. Moreover, among the six patients exhibiting CLDN18-ARHGAP26 fusion, five were in the RTK-negative group. The RTK-positive subtype had a higher prevalence of the intestinal histological type, which may help explain the higher proportion of patients with liver metastases in this subgroup. Moreover, RTK-negative subtypes tend to lack distinct signatures and therapeutic targets. However, our study revealed that PD-L1 was similarly widely expressed across the two subtypes, and CLDN18 exhibited significantly higher expression within the RTK-negative subgroup. Consequently, approximately 80% of the patients categorized as RTK negative were positive for either CLDN18 or PD-L1. This underscores their potential as viable therapeutic targets in this subtype and emphasizes the prospective utility of zolbetuximab and ICIs as molecular targeting strategies. Furthermore, although FGFR2, MET, and EGFR did not exhibit gene amplification, they demonstrated varying degrees of elevated expression in both the IHC and WTS analyses. Agents targeting these molecules have not yet received clinical approval but have attracted significant attention as potential molecular targets. Additionally, patients who lacked targetable biomarkers, as determined by IHC, exhibited notable elevations in the expression of certain biomarkers in the WTS analysis. Patients who have previously remained ineligible for targeted therapy owing to weak expression by IHC may derive potential benefit from targeted therapy employing antibody-drug conjugates, provided their expression is assessed via WTS and a discernible level of expression is ascertained. These observations underscore the importance of comprehensive molecular profiling to establish the potential of these markers as pivotal targets for such populations. This study had certain limitations. First, it was a single-institution study with a limited sample size. In particular, the sample size was small in the context of HER2 treatment, necessitating additional investigations. Second, although we assessed WTS using TPM values, it is imperative to acknowledge that this constitutes a bulk-sample analysis, and the spatial distribution of these expression values remains unclear.
Moreover, in tissues comprising a substantial stromal component, bulk WTS analysis may complicate the interpretation of expression values, as it encompasses both tumor and stromal constituents. Resolving tumor-derived from stroma-derived expression could be achieved through methods such as multiplex IHC or spatial single-cell transcriptome analysis, both of which have the potential to address this concern. Third, simultaneous sample collection for IHC and WTS was not feasible for all patients. Nonetheless, a noteworthy level of concordance was observed, suggesting the potential to elucidate the clinical utility of WTS. However, we must acknowledge that mRNA expression does not always correlate with protein expression, which is a crucial aspect of molecular-targeted therapies. The translation of ERBB2 mRNA into HER2 protein should be further investigated to interpret the WTS data. Finally, considering the absence of established criteria for the IHC assessment of multiple biomarkers and the undetermined optimal threshold for WTS expression, the exact proportion of patients with positive IHC results, and the precise classification of WTS-high or -low expression, remain elusive. While this study assessed the relationship between WTS and IHC, with particular emphasis on the potential of WTS as an innovative therapeutic strategy, it is imperative to acknowledge that IHC is extensively utilized for the selection of contemporary targeted therapies, with each methodology possessing distinct advantages and drawbacks. WTS is notably adept at identifying fusion genes, whereas IHC stands out for its cost-effectiveness and practicality, primarily due to its shorter turnaround time. Future developments in sequencing technologies may bridge these gaps. Further studies focusing on the association between WTS and IHC, the utility of WTS for predicting therapeutic efficacy, and, ultimately, the comparison of their prognostic impact are warranted. In conclusion, the present study highlights a significant correlation between IHC status and WTS data, and the clinical impact of WTS. These findings suggest that in cases where the evaluation of molecular targets by IHC is challenging owing to heterogeneity or a lack of established criteria, the utilization of WTS to determine positivity can be a sensible approach. In patients with RTK-negative IHC, CLDN18 and PD-L1 are useful biomarkers, and the comprehensive analysis provided by WTS opens new avenues for the exploration of novel molecular targets. These results hold promise for the development of innovative therapies targeting multiple molecules, even those traditionally challenging to assess using conventional IHC methods. Such initiatives may propel the advancement of precision medicine and molecular-targeted therapies.
Tadayoshi Hashimoto: Conceptualization; data curation; formal analysis; funding acquisition; investigation; methodology; project administration; visualization; writing – original draft. Yoshiaki Nakamura: Conceptualization; data curation; funding acquisition; investigation; methodology; project administration; resources; supervision; validation; visualization; writing – original draft. Saori Mishima: Resources; writing – review and editing. Izuma Nakayama: Resources; writing – review and editing. Daisuke Kotani: Resources; writing – review and editing. Akihito Kawazoe: Resources; writing – review and editing. Yasutoshi Kuboki: Resources; writing – review and editing. Hideaki Bando: Resources; writing – review and editing. Takashi Kojima: Resources; writing – review and editing. Naoko Iida: Data curation; software; visualization; writing – review and editing. Taro Shibuki: Resources; writing – review and editing. Mitsuho Imai: Writing – review and editing. Takao Fujisawa: Visualization; writing – review and editing. Michiko Nagamine: Validation; writing – review and editing. Naoya Sakamoto: Investigation; validation; visualization; writing – review and editing. Takeshi Kuwata: Supervision; validation; visualization; writing – review and editing. Takayuki Yoshino: Conceptualization; funding acquisition; methodology; project administration; resources; software; supervision; writing – review and editing. Kohei Shitara: Conceptualization; investigation; methodology; project administration; resources; supervision; validation; writing – review and editing.
This study was supported by SCRUM-Japan Funds (http://www.scrum-japan.ncc.go.jp/index.html) and the Japan Agency for Medical Research and Development (AMED: 23ck0106890h0001).
Hashimoto T. has no conflicts of interest. Nakamura Y. reports grants from Taiho Pharmaceutical Co., Ltd.; Chugai Pharmaceutical Co., Ltd.; Daiichi Sankyo Co., Ltd.; Guardant Health, Inc.; Genomedia, Inc.; Roche Diagnostics; and Seagen, Inc. and honoraria from Chugai Pharmaceutical Co., Ltd. and Guardant Health AMEA outside the submitted work. Mishima S. reports honoraria from Chugai, Lilly, and Merck Biopharma outside the submitted work. Nakayama I. has no conflicts of interest to declare. Kotani D. reports honoraria from Takeda, Chugai, Lilly, MSD, Ono, Taiho, Bristol-Myers Squibb, Daiichi-Sankyo, Pfizer, Novartis, Eisai, Seagen, Merck Biopharma, and Sysmex and research funding from Ono, MSD, Novartis, Servier, Janssen, IQVIA, Syneos Health, Cimic, and Cimicshiftzero outside the submitted work. Kawazoe A. reports receiving personal fees from Daiichi Sankyo, Lilly, Ono, Taiho, Bristol Myers Squibb, Merck Serono Biopharma, Sumitomo Dainippon, Zymework, and AstraZeneca outside the submitted work. Kuboki Y. reports receiving personal fees for consulting and advisory roles from Incyte, Takeda, Boehringer Ingelheim, Amgen, and Abbie; honoraria from Taiho, Lilly, and Takeda; and research funding (all to institution) from Taiho, Astellas, Lilly, Takeda, Daiichi-Sankyo, AstraZeneca, Boehringer Ingelheim, Chugai, Genmab, Incyte, Abbie, Amgen, Merck, Hengrui, and Novartis outside the submitted work. Bando H. reports research funding from Ono Pharmaceutical and honoraria from Ono Pharmaceutical, Eli Lilly Japan, and Taiho Pharmaceutical outside of the submitted work. Kojima T. reports research grants from Beigene Ltd., AstraZeneca, Chugai Pharmaceutical, Parexel International, Shionogi, Taiho Pharmaceutical, Astellas Amgen BioPharma, MSD, and Ono Pharmaceutical; honoraria from Ono Pharmaceutical, Covidien Japan, MSD, Boehringer Ingelheim, Kyowa Kirin, EA Pharma, Bristol-Myers Squibb, 3H Clinical Trial, AstraZeneca, Taiho Pharmaceutical, Liang Yi Hui Healthcare Oncology News China, Japanese Society of Pharmaceutical Health Care and Sciences, Oncolys BioPharma, and BMS; and advisory roles for Ono Pharmaceutical, Taiho Pharmaceutical, Japanese Society of Pharmaceutical Health Care and Sciences, and Liang Yi Hui Healthcare Oncology News China outside the submitted work. Iida N. declares no conflicts of interest. Shibuki T. declares no conflicts of interest. Imai M. reports receiving honoraria from Caris Life Sciences and consulting fees from Sumitomo Corp. outside of the submitted work. Fujisawa T. received honoraria from Amelief outside of the submitted work. Nagamine M. declares no conflicts of interest. Sakamoto N. has no conflicts of interest. Kuwata T. reports receiving research grants from Roche Diagnostics and honoraria from Astellas, Bayer, Bristol-Myers Squibb Japan, Falco Biosystems, Daiichi-Sankyo, MSD, Ono Pharmaceutical, and Roche Diagnostics outside the submitted work. Yoshino T. reports honoraria from Chugai Pharmaceutical Co., Ltd., Takeda Pharmaceutical Co., Ltd., Merck, Bayer Yakuhin, Ono Pharmaceutical, and MSD K.K.; consulting fees from Sumitomo Corp.; and research grants from Amgen, Chugai Pharmaceutical Co., Ltd., Daiichi Sankyo Co., Ltd., Eisai, FALCO Biosystems, Genomedia Inc., Molecular Health, MSD, Nippon Boehringer Ingelheim, Ono, Pfizer, Roche Diagnostics, Sanofi, Sysmex, and Taiho Pharmaceutical Co., Ltd., outside the submitted work. Shitara K.
reports receiving personal fees for consulting and advisory roles from Bristol Myers Squibb, Takeda, Ono Pharmaceutical, Novartis, Daiichi Sankyo, Amgen, Boehringer Ingelheim, Merck Pharmaceutical, Astellas, Guardant Health Japan, Janssen, AstraZeneca, Zymeworks Biopharmaceuticals, ALX Oncology Inc., and Bayer; receiving honoraria from Bristol‐Myers Squibb, Ono Pharmaceutical, Janssen, Eli Lilly, Astellas, and AstraZeneca; and receiving research funding (all to institution) from Astellas, Ono Pharmaceutical, Daiichi Sankyo, Taiho Pharmaceutical, Chugai, Merck Pharmaceutical, Amgen, Eisai, PRA Health Sciences, and Syneos Health, outside the submitted work.
The study protocol was approved by the Institutional Review Board of the National Cancer Center Hospital East. Informed Consent: All subjects provided written informed consent. Registry and Registration No. of the study/trial: UMIN000019129. Animal Studies: N/A.
Table S1.
|
Microbial ecology of the deep terrestrial subsurface | 3b64ce7c-d89a-4f43-acad-0352698acd92 | 11170664 | Microbiology[mh] | The terrestrial subsurface is one of Earth’s largest environments and predicted to host as many microbial cells as global surface soils and more than all oceans combined . Especially given the massive volume of this ecosystem, subsurface microbes play an important role in global biogeochemical cycling. The deep terrestrial subsurface is a source of valuable compounds such as ores, minerals, oil, and natural gas. It is also of interest to nuclear waste management organizations for its potential to host deep geological repositories for long-term storage of materials such as used nuclear fuel and other radioactive waste and for its potential in carbon capture and storage of hydrogen for use as an energy vector . Further, certain deep subsurface environments on Earth can serve as analogues to saline subsurface environments on other planets like Mars . Nonetheless, the deep terrestrial subsurface remains underexplored, particularly because of logistical challenges of sampling such inaccessible locations. Microorganisms are diverse in their metabolic needs, but there are several common requirements for all known life on Earth: water, carbon, nutrients, physical space, and energy for growth and reproduction. In many of Earth’s environments, these requirements are met readily, but the deep subsurface is typically nutrient poor. As availability of the necessities of life tends to decrease with depth , so do the average abundances of microbial cells . In these nutrient-deprived conditions, life in the deep subsurface operates at a slower pace than it does in most surface environments. For example, the average generation time for microbial cells in terrestrial deep subsurface environments has been estimated to be centuries . This, coupled with relatively small population sizes, may lead to evolution driven by stochastic processes, like genetic drift, rather than deterministic factors, such as selection . In the deep subsurface, water exists in the form of groundwater, which is a broad term used to describe fluid located below the surface in pore spaces of rocks and soil, in the fractures between rocks, and in aquifers . Aquifers typically occur in the first 100 m below the surface but can also extend to much greater depths . They are commonly composed of unconsolidated porous rock/sediment (e.g. sand, gravel) or consolidated porous rock (e.g. sandstone). Other aquifers consist of water within interconnected fractures, cracks, or joints in solid rock . The salinity of groundwater increases with depth and can result in hypersaline environments at some of the greatest depths sampled . Deep groundwater may also host high concentrations of heavy metals, which can be toxic to microorganisms . Given an absence of sunlight, and a lack of associated primary production from photosynthesis, access to organic carbon in the deep subsurface is more limited for microorganisms than it is in surface environments. Some organic carbon in the deep subsurface was included with sediments at the time of their deposition and now through diagenesis exists primarily as oil and petroleum deposits. Subsurface organic carbon also exists in clay, shales, coal, and other deposits. Living near organic carbon deposits can be advantageous for microorganisms, especially heterotrophs, but it is not the only strategy. 
Organic carbon can also be produced in situ by chemolithoautotrophs that fix inorganic carbon, which allows for microbial life in the subsurface beyond carbon reservoirs. In addition to a lack of organic carbon, deep subsurface environments are often anoxic and nutrient limited; thus, most subsurface microorganisms rely on non-oxygen electron acceptors and inorganic electron donors for metabolism. However, some deep subsurface environments have access to oxygen via oxidizing water originating from the surface. Alternatively, a recent study demonstrated higher than expected concentrations of dissolved oxygen in old groundwaters that may have been produced in situ via microbial dismutation, a process termed "dark oxygen production". Physical space for microorganisms to inhabit the deep subsurface is highly variable, ranging from pore spaces smaller than the size of a microbial cell to larger fractures and faults that are sometimes interconnected. Rock type influences both pore size and organic carbon availability. Sedimentary rocks are generally more porous than igneous and metamorphic rocks, providing more space for microorganisms to grow and interact. They also generally have not been exposed to the same high-temperature and -pressure conditions as igneous and metamorphic rocks; thus, microbial populations found within them could theoretically have been present since the rock's deposition. In contrast, igneous and metamorphic rocks, which together represent most of the deep subsurface, rely on nutrient and energy source transport via fractures and are usually devoid of organic matter. Because the pore spaces of these rocks are usually too small for microbial cells, fractures provide the most likely habitats. There is no universal depth that defines the deep terrestrial subsurface biome. Previous publications have described the terrestrial subsurface as deeper than 8 m, and the deep terrestrial subsurface as deeper than 100 m. Temperature prevents microbial growth beyond a certain depth, increasing by ~25°C per kilometer below the surface in terrestrial environments. This means that any currently known microorganisms could not survive below depths of ~5 km. For the purposes of this review, the deep terrestrial subsurface (also referred to simply as "subsurface" throughout) comprises rocks and groundwater at least 100 m below the surface of continents.

Historical context

The first documented evidence for subsurface life on Earth was the description of fungi and algae in subterranean gold mines of Guanajuato, Mexico by Alexander von Humboldt in the late 18th century. Despite this early observation, the microbiology of terrestrial environments in general only began with studies of soil in the late 1800s, with researchers initially searching for pathogens. Using the techniques available at the time, Robert Koch first observed that below ~1 m, soil samples were nearly free of bacteria. This conclusion was supported by others studying soil microbiology at the time and into the 1900s, when the lower numbers of microorganisms cultured from deeper soils on highly nutritious, organic carbon-containing media were attributed to a lack of air and food. Because this early work on soil microbiology showed very low numbers of microorganisms at the bottom of the soil zone, it was believed that microbial growth below this zone was very limited or non-existent.
Coupled with the technical challenges of sampling the deep subsurface, these early conclusions meant there was relatively little interest in pursuing the study of deep subsurface microbiology. Around the 1920s, the presence of hydrogen sulfide in oil reservoirs ("oil souring") led to predictions that subsurface-associated sulfate-reducing bacteria (SRB) could be responsible. Ernst Georg Wolzogen-Kühr, a German microbiologist, showed the presence of a specific sulfate-reducing bacterium, then referred to as Microspira desulfuricans, up to 70 feet below the Earth's surface. Despite these observations, the geology community believed that sulfate reduction in oil deposits was due entirely to abiotic chemical reactions, and the prevailing opinion remained that the subsurface was sterile. This paradigm was again refuted in a 1926 publication reporting the presence of Microspira in crude oil samples from depths of 500 m, and again in a 1930 publication. In 1931, Charles Lipman at the University of California, Berkeley presented evidence for microorganisms living in coal samples extracted from 600 m belowground, and he claimed to be the first to postulate that the microorganisms had been there for millions of years, since the deposition of the plant matter that became coal. Over the next few decades, SRB were isolated from several other subsurface, oil-well-associated environments. The dominance of SRB in the literature on subsurface microbiology at this time was likely due to the use of targeted cultivation methods that favored their discovery over other types of microorganisms, as there was interest at the time in confirming their suspected role in oil souring. Nonetheless, additional types of microorganisms were found in oil-deposit samples and other subsurface environments, including Pseudomonas, denitrifiers, sulfur oxidizers, and microorganisms capable of using petroleum-associated compounds. It was postulated that subsurface soil samples were inhabited by microorganisms with less nutrient adaptability, although non-chemoorganoheterotrophic metabolisms were not discussed. Investigations into subsurface microbiology at this time were still largely limited to spring or well water and rarely examined subsurface core material directly, owing to difficulties with obtaining such samples. In the 1970s, agricultural and industrial activities led to groundwater contamination, and one possibility was that subsurface microorganisms could help degrade these contaminants. Several years later, subsurface microbiology gained additional relevance in the context of belowground disposal of radioactive and heavy metal waste. Initial work exploring a potential influence of microorganisms on long-term nuclear waste storage began in the late 1970s in Canada, Switzerland, the UK, and the USA, and soon after in Finland, France, Italy, Japan, and Sweden. By the end of the 20th century, adequate controls and aseptic sampling techniques were employed to convince the scientific community that there was indeed microbial life in the subsurface.

Chemical energy for primary production

In the absence of sunlight, subsurface communities must rely on non-photosynthetic primary production. It was originally thought that subsurface life must be supported by organic carbon deposits that were formed by ancient photosynthetic events.
Although subsurface microbial communities that are near organic carbon deposits, such as oil, do take advantage of these carbon sources, other communities rely entirely on chemolithoautotrophic metabolism and fix their own carbon from inorganic sources available in the subsurface. The first deep terrestrial subsurface microbial community shown to be completely supported by chemolithoautotrophic primary production was discovered in 1995. Since then, geogenic gases such as dihydrogen (H2), methane (CH4), and carbon dioxide (CO2) have been linked with belowground primary production. For primary production to occur, microorganisms must have the capacity to fix inorganic carbon into biomass. Several different carbon fixation pathways exist in microorganisms, but perhaps the most important for deep subsurface metabolism is the reductive acetyl-CoA pathway, or Wood–Ljungdahl pathway, because it is the preferred pathway for microorganisms living in low-energy environments near the thermodynamic limit of life. This pathway is commonly used by acetogens, methanogens, and sulfate-reducing microorganisms that, in addition to fixing inorganic carbon, use the pathway for energy production. Metagenomic studies have demonstrated that the reductive acetyl-CoA pathway dominates within deep terrestrial subsurface microbial communities.

Hydrogen-driven ecosystems

A common feature of deep subsurface microbial communities is a reliance on H2 for energy. Hydrogen gas is present in subsurface environments through processes like radiolysis of water and serpentinization. Although hydrogen-fueled microbial metabolisms in the deep terrestrial subsurface were demonstrated prior to the availability of metagenomics, subsequent metagenomic studies have further reinforced a prevalence of genes involved in H2 oxidation in deep subsurface samples. For example, metagenomes generated from samples of three different borehole depths showed a significant enrichment of hydrogenases in borehole samples from 2.3 km compared to those from 0.6 or 1.5 km, suggesting that hydrogen becomes increasingly important with distance below the Earth's surface. Hydrogen gas can be coupled to the reduction of many different electron acceptors that are relevant to deep terrestrial subsurface metabolism, supporting methanogenesis, homoacetogenesis, sulfate/sulfite reduction, and iron reduction; simplified net reactions for these processes are shown below.
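For orientation, these hydrogen-consuming metabolisms can be summarized by their net stoichiometries. The following are standard textbook reactions written in simplified form (the actual species and balances depend on local pH and mineralogy); they are provided for illustration, not taken from the studies cited above.

```latex
% Simplified net reactions for H2-driven subsurface metabolisms
\begin{align*}
\text{Methanogenesis:}    &\quad \mathrm{CO_2 + 4\,H_2 \longrightarrow CH_4 + 2\,H_2O}\\
\text{Homoacetogenesis:}  &\quad \mathrm{2\,CO_2 + 4\,H_2 \longrightarrow CH_3COOH + 2\,H_2O}\\
\text{Sulfate reduction:} &\quad \mathrm{SO_4^{2-} + 4\,H_2 + H^+ \longrightarrow HS^- + 4\,H_2O}\\
\text{Iron reduction:}    &\quad \mathrm{H_2 + 2\,Fe^{3+} \longrightarrow 2\,Fe^{2+} + 2\,H^+}
\end{align*}
```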
Common microbial subsurface communities

Prior to high-throughput sequencing and metagenomics, deep terrestrial subsurface microbial community characterization generally involved either culturing approaches or clone library analysis (16S rRNA gene amplicon sequencing of selected clones). Using such traditional approaches, deep subsurface communities were commonly reported to be dominated by iron-reducing bacteria, SRB, methanogenic archaea, and acetogens. When subsurface samples were taken from locations near hydrocarbon reservoirs, fermentative microorganisms were also detected. Because a subset of microorganisms is favored by cultivation conditions, microbial abundance estimates obtained by these techniques can be much lower than those from microscopy-based techniques, sometimes by orders of magnitude. With the advent of high-throughput amplicon and metagenomic sequencing, it has been possible to study deep biosphere microbial communities with increased resolution, circumventing the biases of culture-based approaches. For additional information about novel techniques for studying deep subsurface environments beyond DNA sequencing-based approaches, see a recent review. A survey of existing amplicon sequencing data from global terrestrial deep subsurface environments showed a universal dominance of the phyla Proteobacteria (Pseudomonadota) and Firmicutes (Bacillota). It was proposed that the vast metabolic diversity within these phyla could account for their dominance in deep terrestrial subsurface environments, seemingly independent of underlying geology and environmental factors. Metagenomic studies of deep terrestrial subsurface environments commonly reveal microbial communities with diverse metabolisms. For example, a study of deep subsurface samples from the Horonobe Underground Research Laboratory (Japan) found a diverse microbial community consisting of 29 phyla, including 13 uncultured representatives that had never been detected at this site. The most abundant metabolic functions encoded by the metagenomes were sulfate reduction, sulfur oxidation, nitrate reduction, iron reduction, methane oxidation, and methanogenesis. Almost all reconstructed genomes showed the potential for fermentation, several had genes for nitrogen fixation, many encoded the Calvin–Benson–Bassham or reductive acetyl-CoA pathways for carbon fixation, and more than half had genes involved in hydrogen oxidation. The functions detected in these metagenomes are common for deep terrestrial subsurface microorganisms; however, not all deep subsurface environments host such diversity.

Low-diversity microbial communities

Some deep terrestrial subsurface microbial communities have very low diversity. For example, water circulating within igneous rocks ~200 m belowground in Idaho was sampled and shown to be dominated by methanogens, at >90% of all detected taxa. A similarly low-diversity community was discovered in groundwater from 2.8 km belowground in the Mponeng gold mine in South Africa, which had a microbial community dominated by a single SRB population belonging to the Firmicutes (Bacillota) phylum. Metagenomic sequencing of fracture fluid recovered from this same environment revealed a metagenome with >99% of reads belonging to this same population's genome; additional reads in the metagenome were considered to be laboratory or drilling contaminants. The organism was named Candidatus Desulforudis audaxviator, Latin for "bold traveler in search of sulfur," and its assembled genome suggested complete self-sufficiency for this subsurface bacterium. In addition to the ability to couple sulfate reduction with H2 (e.g. derived from radioactive decay of uranium) or formate oxidation for energy metabolism, the genome of Ca. D. audaxviator contains all genes necessary for carbon and nitrogen fixation and encodes all necessary amino acid biosynthesis pathways. Metabolically flexible, Ca. D. audaxviator can switch from heterotrophy to autotrophy as conditions change. Adaptations such as this could help to explain its ability to thrive in such a harsh environment independently. Since its discovery, Ca. D. audaxviator has been reported in other global subsurface samples. A similarly low-diversity microbial community was later discovered in porous sandstone near an oil deposit, dominated (>98%) by Halomonas sulfidaeris, a heterotroph capable of using aromatic organic compounds.
Microeukaryotes

Most research exploring microorganisms in deep terrestrial subsurface environments has focused on bacteria and archaea, but microeukaryotes have been detected as well. In bedrock fracture water from Finland, fungi were detected at all tested depths (300–800 m), with the phylum Ascomycota being the most prevalent. This study demonstrated a depth-independent distribution of fungal community diversity and several reads associated with potentially novel fungal species. Despite low abundance overall, several fungal species ("mold" and yeast) were detected in groundwater from the Äspö Hard Rock Laboratory. Heat-tolerant taxa from the phylum Nematoda have also been detected in subsurface fracture water at depths approaching 3.6 km within the Beatrix gold mine, South Africa, where they were suggested to be feeding on prokaryotes. Their heat tolerance may be linked to heat-shock proteins that are transcriptionally induced when these subsurface nematodes grow under heat stress conditions. Additional eukaryotes from the phyla Platyhelminthes, Rotifera, Annelida, and Arthropoda have been detected in South African mines at approximate depths of 1.5 km belowground. Microeukaryotes in subsurface environments may originate from surface water recharge and, predictably, their subsurface persistence is likely governed by food availability.

Factors influencing subsurface microbial community composition

The factors that affect the microbial community composition and diversity of deep terrestrial subsurface environments remain poorly understood. Although the least diverse microbial communities discovered have been in some of the deepest sampled environments, other deep subsurface environments host relatively diverse microbial communities. Decreasing diversity with depth likely reflects a combination of related factors that influence microbial community composition, such as water recharge and origin, water activity (e.g. salinity), organic matter availability, and electron donor and acceptor diversity. Several 1–5-km-deep samples taken from boreholes in South Africa had microbial communities dominated by either the Firmicutes (Bacillota) or Proteobacteria (Pseudomonadota) phyla. In general, Proteobacteria (Pseudomonadota) taxa tend to dominate fracture fluids that have more recently mixed with meteoric (i.e. precipitation-derived) waters, which are relatively shallow subsurface fluids. In contrast, representatives of the Firmicutes (Bacillota) dominate deeper subsurface communities, which tend to be fed from deep groundwaters with little or no meteoric water input. This trend could be explained by selection for microorganisms, often Firmicutes (Bacillota) members, capable of using the reductive acetyl-CoA pathway for carbon fixation in lower energy deep environments with less fluid input from meteoric sources. Indeed, a metagenomics study observed a higher relative abundance of Firmicutes (Bacillota) members in fracture fluids with little mixing of meteoric waters, which was associated with a higher abundance of protein-coding genes for the reductive acetyl-CoA pathway. A correlation between water origin and microbial community composition has been reported for other environments, including the Fennoscandian Shield and serpentinite springs in Canada. Water recharge, as well as organic matter availability, is also reported to be positively correlated with subsurface microbial community diversity.
Beyond the factors addressed by these studies, additional traits could favor the persistence of certain microorganisms at greater depths compared to others, such as the ability of some microorganisms, including members of the Bacillota, to form spores and withstand unfavorable conditions. In addition to carbon fixation pathways, other adaptations to the nutrient-poor conditions of the deep subsurface could help explain the persistence of certain microorganisms in these environments. For example, H. sulfidaeris, which was found to dominate (>98%) a microbial community in sandstone, is well adapted to use the various aromatic organic compounds available nearby due to oil deposit proximity. It also has adaptations for survival in the hypersaline subsurface, including transmembrane transporters for ions, heavy metal and ion efflux pumps, and various other osmotic regulators. As a facultative anaerobe, it can also adapt to changes in oxygen availability and is tolerant to high temperature and pressure. The microorganisms detected at the deepest depth sampled in a borehole in Finland had similar adaptations to the high salt and metal concentrations. Some obligate fermenting microorganisms can use the osmoprotectant compounds produced by other organisms as a carbon and energy source. It was observed that the microbial community composition in 2.5-km-deep shale wells in Pennsylvania shifted in response to the increasing salt concentration associated with hydraulic fracturing of shale to favor halotolerant bacterial and archaeal species: Candidatus Frackibacter, which was discovered at the site, Halanaerobium, Halomonadaceae, Marinobacter, Methanohalophilus, and Methanolobus. All genomes had evidence of an osmoprotectant strategy, including use of the molecule glycine betaine, proposed to be produced by other microorganisms present to fuel their fermentative metabolisms. Another proposed adaptation to oligotrophic deep subsurface conditions is small cell size. Approximately 50% of the cells in microbial communities of groundwater collected from the Äspö Hard Rock Laboratory passed through a 0.22-μm filter. These small cells often had genomes that were assigned to the phylum Proteobacteria (Pseudomonadota), and all had matches to known representative species reported to have cell sizes larger than 0.3 μm. Another factor that has been shown to influence microbial community composition is the underlying geology of deep terrestrial subsurface environments. Microorganisms often make use of the molecules and ions available in the rocks they inhabit, either as electron sources or as sources of limiting minerals. These include metal sulfides like pyrite, metals such as iron and manganese and their oxides, silicate rocks like feldspar that provide a source of phosphorus, and gypsum-derived sulfate, which are not evenly distributed across rock types. Profiles of available electron donors in subsurface ecosystems correlate with microbial community composition, but host rock lithology has rarely been directly linked to the microorganisms living within that rock. Nonetheless, one study compared the lithology and microbial community compositions of 15 types of host rock taken from many different locations and showed that host rock lithology was a primary driver of microbial community structure.
A study out of the Deep Mine Microbial Observatory (South Dakota) examining biofilms in fluid-filled fractures supports these results and suggests that the types of minerals present could be an important factor in determining which microorganisms colonize rock surfaces. Similarly, microbial communities within granite were dependent on mineral inclusions, especially those containing aluminum, silica, and calcium. Another study showed that aquifer fluid type (e.g. gabbro, hyperalkaline peridotite, and alkaline peridotite) was correlated with microbial community composition. Although no single geochemical parameter accounted for the correlation, differing pH, Eh, and availability of carbon and electron acceptors among rock types were predicted to be key factors. As microorganisms use the minerals present in the rock, they chemically transform them. While this process has been studied in surface environments, such as clay minerals in soil, it is an important consideration in deep subsurface environments, especially when they will be modified and potentially amended with non-native materials (e.g. clay, concrete) through the construction of underground repositories, such as for long-term storage of used nuclear fuel, carbon capture, and hydrogen storage. A recent study showed that stochastic geological activity may play a role in microbial community structure and succession, with a stronger influence than environmental selection in deep hard rock aquifer systems. The findings suggest that geological activity creating or altering fractures, which leads to the isolation or mixing of fracture fluids and the nutrients and microbial communities within them, plays a significant role in microbial community turnover and the establishment of new microbial communities even when the environmental conditions and underlying geology of the rock formation remain unchanged. Further understanding of the factors that determine microbial community composition and drive succession in deep terrestrial subsurface environments will be critical for the planning of deep subsurface activities that could be impacted by microbial activity, such as the construction of underground repositories for used nuclear fuel storage.

Ecological interactions within the subsurface

Biofilms

As is the case in most environments, many deep terrestrial subsurface microorganisms exist in biofilms. The proximity of different groups of biofilm microorganisms makes many of the interactions discussed below possible. In the deep subsurface, biofilms can form on rock fractures and in pore spaces; such biofilms are very poorly studied compared to deep subsurface fluids like groundwater, owing to the difficulty of obtaining samples. Biofilms have been shown to be naturally present on rock fractures, and their microbial community composition differs from that of the surrounding groundwater. Initial studies on deep subsurface biofilms have shown that the mineral composition of the rock plays a role in biofilm formation, size, and composition. Deep subsurface biofilms could be an important environment for continued study to build our understanding of microbial interactions in the deep subsurface.

Interconnectedness of microbial metabolisms

Most studied deep terrestrial subsurface environments have microbial community members capable of metabolic processes that are often interdependent. Metabolic end products from one population can be used as electron sources for another.
For example, interspecies hydrogen transfer is a key interaction that has been observed or suggested for various anoxic environments. This process can reduce the partial pressure of hydrogen in the immediate environment sufficiently for H2-producing metabolic reactions, such as acetogenesis, to become thermodynamically favorable (a worked example of this energetic argument appears below). In the subsurface context, a simple community consisting of Pseudomonas and an SRB belonging to the family Peptococcaceae was discovered in Opalinus Clay borehole water via metagenomic sequencing. It was proposed that Pseudomonas ferments organic macromolecules, potentially leached from the clay, releasing organic acids and H2 gas; in turn, the SRB population couples organic acid oxidation to sulfate reduction. In fermentative communities, sequential fermentation steps performed by multiple different syntrophs can prevent the build-up of fermentation products. Although the roles of anaerobic fungi in deep subsurface environments are poorly understood, the discovery of fossilized fungi in deep anoxic fracture water from crystalline rock suggests that they may also be involved in interspecies hydrogen transfer in deep terrestrial subsurface systems, similar to their well-studied rumen counterparts. In some cases, microorganisms with "complementary" metabolisms living in close association with one another can produce cryptic cycles that make it challenging to detect metabolic activity, because the concentrations of electron acceptors and donors remain low despite active cycling. With sulfur in particular, this can have the added advantage of preventing the accumulation of toxic end products; sulfide produced by SRB does not reach toxic concentrations when it is rapidly depleted by sulfide oxidizers. Evidence for such cryptic sulfur cycling in the subsurface includes metagenomic sequencing of deep subsurface sediments from the Horonobe Underground Research Laboratory, which revealed a high relative abundance of microorganisms capable of sulfur cycling despite consistently low concentrations of sulfate and sulfide in the associated groundwater. A similar observation was made for groundwater from ~300 m belowground in Sweden, where sulfide was undetectable in the water, but sulfate-reducing and sulfide-oxidizing bacteria were both abundant in the metagenomes, further suggesting that cryptic sulfur cycling could be occurring. Results such as these highlight the importance of combining multiple experimental techniques to study these poorly understood ecosystems. Another less well-understood form of syntrophy in deep terrestrial subsurface environments is the sharing of electrons between anaerobic methanotrophic (ANME) archaea and other groups of microorganisms, such as sulfate-reducing bacteria, which has been suggested to occur directly via a nanowire structure rather than through the exchange of electron donors. In subsurface environments where both ANME archaea and methanogens are present, a cryptic carbon cycle can exist in which methane produced by the methanogen is used by the methanotroph, which, in turn, produces carbon dioxide that can be used by the methanogen. Microorganisms with interconnected metabolisms may be even more prevalent in subsurface environments than currently recognized.
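To make the thermodynamic argument above concrete, here is a worked example using the standard free-energy relation. The butyrate stoichiometry and the ΔG°′ value of roughly +48 kJ mol^-1 are commonly cited textbook figures quoted for illustration, not values reported by the studies discussed here.

```latex
% Free energy under non-standard conditions, and a representative
% H2-producing (syntrophic) acetogenic reaction:
\begin{align*}
\Delta G &= \Delta G^{\circ\prime} + RT \ln Q\\
\mathrm{CH_3CH_2CH_2COO^- + 2\,H_2O} &\longrightarrow \mathrm{2\,CH_3COO^- + H^+ + 2\,H_2},
\qquad \Delta G^{\circ\prime} \approx +48~\mathrm{kJ\,mol^{-1}}
\end{align*}
```

Because Q scales with p(H2)^2 for this reaction, lowering the hydrogen partial pressure from 1 atm to roughly 10^-5 atm contributes RT ln(10^-10) ≈ -57 kJ mol^-1 at 25°C, turning the reaction from endergonic (+48 kJ mol^-1) to exergonic (about -9 kJ mol^-1); this is why hydrogen-consuming partners make such fermentations possible.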
A recent metagenomics study suggested that most microorganisms within subsurface groundwater communities were incapable of performing multiple sequential redox transformations, including complete sulfide oxidation to sulfate and complete denitrification to N 2 gas, and instead, the pathways were performed through multiple different species living in close association with one another . Although metagenomics can provide predictions about potential interactions, future studies will need to couple metagenomics with techniques such as enrichment cultivation, microscopy, and isotope labeling techniques to demonstrate such syntrophic relationships. Episymbiosis The recent discovery of the candidate phyla radiation (CPR) of bacteria as well as DPANN (an acronym of the first five phyla included in the superphylum: Diapherotrites , Parvarchaeota , Aenigmarchaeota , Nanoloarchaeota , Nanoarchaeota ) archaea in deep terrestrial subsurface environments suggests an important role for episymbiosis in deep subsurface environments. Both CPR bacteria and DPANN archaea are relatively abundant in groundwater, and are generally episymbiotic, attaching to host cells . Studies also show that CPR bacteria can be detected in some of the deepest sampled environments . For the DPANN archaea, metagenomes obtained from several years of samples from a deep aquifer system demonstrate consistent co-occurrence patterns for a DPANN symbiont, Candidatus Huberiarchaeum crystalense, and its host, Candidatus Altiarchaeum hamiconexum, with several characteristics similar to the well-studied relationship between Nanoarchaeum equitans (also DPANN) and its host Ignicoccus hospitalis . Although the presence of Ca. H. crystalense and its host has not been reported for many deep subsurface environments, likely due to their recent discovery, it may well be that they are difficult to detect using traditional sampling methods due to their small size (i.e. passing through sample filters), unusual ribosome structure, and missing ribosomal proteins . The metabolic and ecological roles of CPR and DPANN are not yet well known, but many members possess genes for fermenting carbon compounds to produce acetate, lactate, formate, and ethanol, possibly using polysulfides as terminal electron acceptors . Other studies suggest that some episymbiotic taxa could play metabolic supporting roles in nitrite reduction to ammonia and sulfate reduction . Both CPR and DPANN representatives likely benefit from their hosts by scavenging vitamins, sugars, nucleotides, and reduced redox equivalents , as well as membrane lipids . Others have speculated that S-layer production by several of these episymbionts could play a protective role against viruses for host cells . Additional metagenomics studies of deep subsurface environments are necessary to develop an improved understanding of the impact of DPANN and CPR members on microbial community ecology and biogeochemical cycling within the deep subsurface. Viruses It has long been known that viruses play an important role in driving microbial diversification and controlling the balance of microbial communities in well-studied environments. Until recently, little was known about the role of viruses in deep terrestrial subsurface environments. To first determine if viruses were present in the deep subsurface, granitic groundwater samples from 69- to 450-m deep in the Äspö Hard Rock Laboratory (Sweden) were analyzed . 
Overall, cell abundances and viral counts indicated that viruses from seven different families, including several known lytic viruses, were present and were about 10-fold more abundant than bacterial and archaeal cells. This suggests that viruses have a similarly important role in controlling the abundance of subsurface microbial populations as they do in more well-characterized aquatic, terrestrial, and host-associated environments. A single-celled genomics approach showed evidence for viral infection of a Firmicutes ( Bacillota )-dominated community in fracture water from 3-km deep in South Africa and a recent study discovered two new bacteriophages native to groundwater , together suggesting that subsurface environments host diverse and yet to be discovered populations of bacteriophages.
The first documented evidence for subsurface life on Earth was the description of fungi and algae in subterranean gold mines of Guanajuato, Mexico, by Alexander von Humboldt in the late 18th century. Despite this early observation, the microbiology of terrestrial environments in general only began with studies of soil in the late 1800s, with researchers initially searching for pathogens. Using the techniques available at the time, Robert Koch first observed that below ~1 m, soil samples were nearly free of bacteria. This conclusion was supported by others studying soil microbiology at the time, and into the 1900s, when the lower numbers of microorganisms cultured from greater soil depths on highly nutritious, organic carbon-containing media were attributed to a lack of air and food. Because this early work on soil microbiology showed very low numbers of microorganisms at the bottom of the soil zone, it was believed that microbial growth below this zone was very limited or non-existent. Coupled with the technical challenges of sampling the deep subsurface, there was relatively little interest in pursuing the study of deep subsurface microbiology. Around the 1920s, the presence of hydrogen sulfide in oil reservoirs (“oil souring”) led to predictions that subsurface-associated sulfate-reducing bacteria (SRB) could be responsible. Ernst Georg Wolzogen-Kühr, a German microbiologist, showed the presence of a specific sulfate-reducing bacterium, then referred to as Microspira desulfuricans, up to 70 feet below the Earth’s surface. Despite these observations, the geology community believed that sulfate reduction in oil deposits was due entirely to abiotic chemical reactions, and the prevailing opinion remained that the subsurface was sterile. This paradigm was again refuted in a 1926 publication reporting the presence of Microspira in crude oil samples from depths of 500 m, and again in a 1930 publication. In 1931, Charles Lipman at the University of California, Berkeley presented evidence for microorganisms living in coal samples extracted from 600 m belowground, and he claimed to be the first to postulate that the microorganisms had been there for millions of years, since the deposition of the plant matter that became coal. Over the next few decades, SRB were isolated from several other subsurface oil-well-associated environments. The dominance of SRB in the literature on subsurface microbiology at this time was likely due to the use of targeted cultivation methods that favored their discovery over other types of microorganisms, as there was interest at the time in confirming their suspected role in oil souring. Nonetheless, additional types of microorganisms were found in oil-deposit samples and other subsurface environments, including Pseudomonas, denitrifiers, sulfur oxidizers, and microorganisms capable of using petroleum-associated compounds. It was postulated that subsurface soil samples were inhabited by microorganisms with less nutrient adaptability, although non-chemoorganoheterotrophic metabolisms were not discussed. Investigations into subsurface microbiology at this time were still largely limited to spring or well water and rarely looked directly at subsurface core material due to the difficulty of obtaining such samples. In the 1970s, agricultural and industrial activities led to groundwater contamination, and one possibility was that subsurface microorganisms could help degrade these contaminants.
Several years later, subsurface microbiology gained additional relevance in the context of belowground disposal of radioactive and heavy-metal waste. Initial work exploring a potential influence of microorganisms on long-term nuclear waste storage began in the late 1970s in Canada, Switzerland, the UK, and the USA, and soon after in Finland, France, Italy, Japan, and Sweden. By the end of the 20th century, adequate controls and aseptic sampling techniques were employed to convince the scientific community that there was indeed microbial life in the subsurface.
In the absence of sunlight, subsurface communities must rely on non-photosynthetic primary production. It was originally thought that subsurface life must be supported by organic carbon deposits formed by ancient photosynthetic events. Although subsurface microbial communities that are near organic carbon deposits, such as oil, do take advantage of these carbon sources, other communities rely entirely on chemolithoautotrophic metabolism and fix their own carbon from inorganic sources available in the subsurface. The first deep terrestrial subsurface microbial community shown to be completely supported by chemolithoautotrophic primary production was discovered in 1995. Since then, geogenic gases such as dihydrogen (H2), methane (CH4), and carbon dioxide (CO2) have been linked with belowground primary production. For primary production to occur, microorganisms must have the capacity to fix inorganic carbon into biomass. Several different carbon fixation pathways exist in microorganisms, but perhaps the most important for deep subsurface metabolism is the reductive acetyl-CoA pathway, or Wood–Ljungdahl pathway, because it is the preferred pathway for microorganisms living in low-energy environments near the thermodynamic limit of life. This pathway is commonly used by acetogens, methanogens, and sulfate-reducing microorganisms that, in addition to fixing inorganic carbon, use the pathway for energy production. Metagenomic studies have demonstrated that the reductive acetyl-CoA pathway dominates within deep terrestrial subsurface microbial communities.
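To make the dual carbon-fixing and energy-yielding role of this pathway concrete, the overall stoichiometry of hydrogenotrophic acetogenesis can be written out; this is textbook bioenergetics rather than a result from the studies cited above, and the free-energy value is a standard-condition figure only:

$$\mathrm{4\,H_2 + 2\,CO_2 \rightarrow CH_3COO^- + H^+ + 2\,H_2O}, \qquad \Delta G^{\circ\prime} \approx -95\ \mathrm{kJ\ per\ mol\ acetate}$$

Because in situ concentrations of H2 and CO2 in the deep subsurface are often far from standard conditions, the actual energy yield can be much smaller, which is consistent with this pathway being favored by organisms operating near the thermodynamic limit of life.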
A common feature of deep subsurface microbial communities is a reliance on H2 for energy. Hydrogen gas is present in subsurface environments through processes like radiolysis of water and serpentinization. Although hydrogen-fueled microbial metabolisms in the deep terrestrial subsurface were demonstrated prior to the availability of metagenomics, subsequent metagenomic studies have further reinforced a prevalence of genes involved in H2 oxidation associated with deep subsurface samples. For example, metagenomes generated from samples of three different borehole depths showed a significant enrichment of hydrogenases in borehole samples from 2.3 km compared to those from 0.6 or 1.5 km, suggesting that hydrogen becomes increasingly important with distance below the Earth’s surface. Hydrogen gas can be coupled to the reduction of many different electron acceptors that are relevant to deep terrestrial subsurface metabolism, supporting methanogenesis, homoacetogenesis, sulfate/sulfite reduction, and iron reduction.
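As an illustration of abiotic H2 generation, the often-quoted idealized reaction for serpentinization of an Fe(II)-rich olivine end-member (fayalite) is shown below; real serpentinization involves mixed Mg–Fe silicates and a range of products, so this is a simplified, hypothetical end-member case rather than a reaction reported by the studies cited here:

$$\mathrm{3\,Fe_2SiO_4 + 2\,H_2O \rightarrow 2\,Fe_3O_4 + 3\,SiO_2 + 2\,H_2}$$

The oxidation of ferrous iron in the mineral to magnetite is what reduces water to H2, linking rock chemistry directly to the electron donor budget available to subsurface life.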
Prior to high-throughput sequencing and metagenomics, deep terrestrial subsurface microbial community characterization generally involved either culturing approaches or clone library analysis (16S rRNA gene amplicon sequencing of selected clones). Using such traditional approaches, deep subsurface communities were commonly reported to be dominated by iron-reducing bacteria, SRB, methanogenic archaea, and acetogens. When subsurface samples were taken from locations near hydrocarbon reservoirs, fermentative microorganisms were also detected. Because a subset of microorganisms is favored by cultivation conditions, microbial abundance estimates obtained by these techniques can be much lower than those from microscopy-based techniques, sometimes by orders of magnitude. With the advent of high-throughput amplicon and metagenomic sequencing, it has been possible to study deep biosphere microbial communities with increased resolution, circumventing the biases of culture-based approaches. For additional information about novel techniques for studying deep subsurface environments beyond DNA sequencing–based approaches, see a recent review. A survey of existing amplicon sequencing data from global terrestrial deep subsurface environments showed a universal dominance of the phyla Proteobacteria (Pseudomonadota) and Firmicutes (Bacillota). It was proposed that the vast metabolic diversity within these phyla could account for their dominance in deep terrestrial subsurface environments, seemingly independent of underlying geology and environmental factors. Metagenomic studies of deep terrestrial subsurface environments commonly reveal microbial communities with diverse metabolisms. For example, a study of deep subsurface samples from the Horonobe Underground Research Laboratory (Japan) found a diverse microbial community consisting of 29 phyla, including 13 uncultured representatives that had never been detected at this site. The most abundant metabolic functions encoded by the metagenomes were sulfate reduction, sulfur oxidation, nitrate reduction, iron reduction, methane oxidation, and methanogenesis. Almost all reconstructed genomes showed the potential for fermentation, several had genes for nitrogen fixation, many encoded the Calvin–Benson–Bassham or reductive acetyl-CoA pathways for carbon fixation, and more than half had genes involved in hydrogen oxidation. The functions detected in these metagenomes are common for deep terrestrial subsurface microorganisms; however, not all deep subsurface environments host such diversity.
Some deep terrestrial subsurface microbial communities have very low diversity. For example, water circulating within igneous rocks ~200 m belowground in Idaho was sampled and shown to be dominated by methanogens, at >90% of all detected taxa. A similarly low-diversity community was discovered in groundwater from 2.8 km belowground in the Mponeng gold mine in South Africa, which had a microbial community dominated by a single SRB population belonging to the Firmicutes (Bacillota) phylum. Metagenomic sequencing of fracture fluid recovered from this same environment revealed a metagenome with >99% of reads belonging to this same population’s genome. Additional reads in the metagenome were considered to be laboratory or drilling contaminants. Named Candidatus Desulforudis audaxviator, which in Latin means “bold traveler in search of sulfur,” the assembled genome suggested complete self-sufficiency for this subsurface bacterium. In addition to the ability to couple sulfate reduction with H2 (e.g. derived from radioactive decay of uranium) or formate oxidation for energy metabolism, the genome of Ca. D. audaxviator contains all genes necessary for carbon and nitrogen fixation and encodes all necessary amino acid biosynthesis pathways. Metabolically flexible, Ca. D. audaxviator can switch from heterotrophy to autotrophy as conditions change. Adaptations such as this could help to explain its ability to thrive in such a harsh environment independently. Since its discovery, Ca. D. audaxviator has been reported in other global subsurface samples. A similarly low-diversity microbial community was later discovered in porous sandstone near an oil deposit, dominated (>98%) by Halomonas sulfidaeris, a heterotroph capable of using aromatic organic compounds.
Most research exploring microorganisms in deep terrestrial subsurface environments has focused on bacteria and archaea, but microeukaryotes have been detected as well. In bedrock fracture water from Finland, fungi were detected at all tested depths (300–800 m), with the phylum Ascomycota being the most prevalent. This study demonstrated a depth-independent distribution of fungal community diversity and several reads associated with potentially novel fungal species. Despite low abundance overall, several fungal species (“mold” and yeast) were detected in groundwater from the Äspö Hard Rock Laboratory. Heat-tolerant taxa from the phylum Nematoda have also been detected in subsurface fracture water at depths approaching 3.6 km within the Beatrix gold mine, South Africa, where they were suggested to be feeding on prokaryotes. Their heat tolerance may be linked to heat-shock proteins that are transcriptionally induced when these subsurface nematodes grow under heat-stress conditions. Additional eukaryotes from the phyla Platyhelminthes, Rotifera, Annelida, and Arthropoda have been detected in South African mines at approximate depths of 1.5 km belowground. The presence of microeukaryotes in subsurface environments may originate from surface water recharge and, predictably, their subsurface persistence is likely governed by food availability.
The factors that affect the microbial community composition and diversity of deep terrestrial subsurface environments remain poorly understood. Although the least diverse microbial communities discovered have been in some of the deepest sampled environments, other deep subsurface environments host relatively diverse microbial communities. Decreasing diversity with depth is likely a combination of related factors that influence microbial community composition, such as water recharge and origin, water activity (e.g. salinity), organic matter availability, and electron donor and acceptor diversity. Several 1–5-km-deep samples taken from boreholes in South Africa had microbial communities dominated by either the Firmicutes (Bacillota) or Proteobacteria (Pseudomonadota) phyla. In general, Proteobacteria (Pseudomonadota) taxa tend to dominate fracture fluids that have more recently mixed with meteoric (i.e. associated with precipitation) waters, which are relatively shallow subsurface fluids. In contrast, representatives of the Firmicutes (Bacillota) dominate deeper subsurface communities, which tend to be fed from deep groundwaters with little or no meteoric water input. This trend could be explained by the selection for microorganisms, often Firmicutes (Bacillota) members, capable of using the reductive acetyl-CoA pathway for carbon fixation in lower-energy deep environments with less fluid input from meteoric sources. Indeed, a metagenomics study observed a higher relative abundance of Firmicutes (Bacillota) members in fracture fluids with little mixing of meteoric waters, which was associated with a higher abundance of protein-encoding genes associated with the reductive acetyl-CoA pathway. A correlation between water origin and microbial community composition has been reported for other environments, including the Fennoscandian Shield and serpentinite springs in Canada. Water recharge, as well as organic matter availability, is also reported to be positively correlated with subsurface microbial community diversity. Although not addressed by these experiments, additional factors could favor the persistence of certain microorganisms at greater depths compared to others, such as the ability of some microorganisms, including members of the Bacillota, to form spores and withstand unfavorable conditions. In addition to carbon fixation pathways, other adaptations to the nutrient-poor conditions of the deep subsurface could help explain the persistence of certain microorganisms in these environments. For example, H. sulfidaeris, which was found to dominate (>98%) a microbial community in sandstone, is well adapted to use the various aromatic organic compounds available nearby due to oil deposit proximity. It also has adaptations for survival in the hypersaline subsurface, including transmembrane transporters for ions, heavy metal and ion efflux pumps, and various other osmotic regulators. As a facultative anaerobe, it can also adapt to changes in oxygen availability and is tolerant to high temperature and pressure. The microorganisms detected at the deepest depth sampled in a borehole in Finland had similar adaptations to the high salt and metal concentrations. Some obligate fermenting microorganisms can use the osmoprotectant compounds produced by other organisms as a carbon and energy source.
It was observed that the microbial community composition in 2.5-km-deep shale wells in Pennsylvania shifted in response to the increasing salt concentrations associated with hydraulic fracturing of shale, favoring halotolerant bacterial and archaeal species: Candidatus Frackibacter, which was discovered at the site, Halanaerobium, Halomonadaceae, Marinobacter, Methanohalophilus, and Methanolobus. All genomes had evidence of an osmoprotectant strategy, including use of the molecule glycine betaine, proposed to be produced by other microorganisms present and used to fuel their fermentative metabolisms. Another proposed adaptation to oligotrophic deep subsurface conditions is small cell size. Approximately 50% of the cells in microbial communities of groundwater collected from the Äspö Hard Rock Laboratory passed through a 0.22-μm filter. These small cells often had genomes that were assigned to the phylum Proteobacteria (Pseudomonadota), and all had matches to known representative species reported to have cell sizes larger than 0.3 μm. Another factor that has been shown to influence microbial community composition is the underlying geology of deep terrestrial subsurface environments. Microorganisms often make use of the molecules and ions available in the rocks they inhabit, either as electron sources or as sources of limiting minerals. This includes metal sulfides like pyrite, metals such as iron and manganese and their oxides, silicate rocks like feldspar that provide a source of phosphorus, and gypsum-derived sulfate, which are not evenly distributed in all rocks. Profiles of available electron donors in subsurface ecosystems correlate with microbial community composition, but host rock lithology has rarely been directly linked to the microorganisms living within that rock. Nonetheless, one study compared the lithology and microbial community compositions of 15 types of host rock taken from many different locations and showed that host rock lithology was a primary driver of microbial community structure. A study out of the Deep Mine Microbial Observatory (South Dakota) looking at biofilms in fluid-filled fractures supports these results and suggests that the types of minerals present could be an important factor in determining which microorganisms colonize rock surfaces. Similarly, microbial communities within granite were dependent on mineral inclusions, especially those containing aluminum, silica, and calcium. Another study showed that aquifer fluid type (e.g. gabbro, hyperalkaline peridotite, and alkaline peridotite) was correlated with microbial community composition. Although no single geochemical parameter accounted for the correlation, differences in pH, Eh, and the availability of carbon and electron acceptors among rock types were predicted to be key factors. As microorganisms use the minerals present in the rock, they chemically transform them. While this process has been studied in surface environments, such as for clay minerals in soil, it is an important consideration in deep subsurface environments, especially where they will be modified and potentially amended with non-native materials (e.g. clay, concrete) through the construction of underground repositories, such as for long-term storage of used nuclear fuel, carbon capture, and hydrogen storage. A recent study showed that stochastic geological activity may play a role in microbial community structure and succession, with a stronger influence than environmental selection in deep hard-rock aquifer systems.
The findings suggest that geological activity that creates or alters fractures, leading to the isolation or mixing of fracture fluids and of the nutrients and microbial communities within them, plays a significant role in microbial community turnover and the establishment of new microbial communities, even when the environmental conditions and underlying geology of the rock formation remain unchanged. Further understanding of the factors that determine microbial community composition and drive succession in deep terrestrial subsurface environments will be critical for the planning of deep subsurface activities that could be impacted by microbial activity, such as the construction of underground repositories for used nuclear fuel storage.
Ecological interactions within the subsurface

Biofilms

As is the case in most environments, many deep terrestrial subsurface microorganisms exist in biofilms. The proximity of different groups of biofilm microorganisms makes many of the interactions discussed below possible. In the deep subsurface, biofilms can form on rock fractures and in pore spaces, which are very poorly studied compared to deep subsurface fluids like groundwater due to the difficulty of obtaining such samples. Biofilms have been shown to be naturally present on rock fractures, and their microbial community composition differs from that of the surrounding groundwater. Initial studies on deep subsurface biofilms have shown that the mineral composition of the rock plays a role in biofilm formation, size, and composition. Deep subsurface biofilms could be an important environment for continued study to build our understanding of microbial interactions in the deep subsurface.

Interconnectedness of microbial metabolisms

Most studied deep terrestrial subsurface environments have microbial community members capable of metabolic processes that are often interdependent. Metabolic end products from one population can be used as electron sources for another. For example, interspecies hydrogen transfer is a key interaction that has been observed or suggested for various anoxic environments. This process can reduce the partial pressure of hydrogen in the immediate environment sufficiently for H2-producing metabolic reactions such as acetogenesis to become thermodynamically favorable. Within the subsurface environment context, a simple community consisting of Pseudomonas and an SRB belonging to the family Peptococcaceae was discovered in Opalinus Clay borehole water via metagenomic sequencing. It was proposed that Pseudomonas ferments organic macromolecules, potentially leached from the clay, which releases organic acids and H2 gas. In turn, the SRB population couples organic acid oxidation to sulfate reduction. In fermentative communities, sequential fermentation steps performed by multiple different syntrophs can prevent the build-up of fermentation products. Although the roles of anaerobic fungi in deep subsurface environments are poorly understood, the discovery of fossilized fungi in deep anoxic fracture water in crystalline rock suggests that they may also be involved in interspecies hydrogen transfer in deep terrestrial subsurface systems, similar to their well-studied rumen counterparts. In some cases, the close association of microorganisms with “complementary” metabolisms results in cryptic cycles that can make it challenging to detect metabolic activity, because the concentrations of electron acceptors and donors remain low despite active cycling. With sulfur in particular, this can have the added advantage of preventing the accumulation of toxic end products; sulfide produced by SRB does not reach toxic concentrations when it is rapidly depleted by sulfide oxidizers. Evidence for such cryptic sulfur cycling in the subsurface includes metagenomic sequencing of deep subsurface sediments from the Horonobe Underground Research Laboratory, which revealed a high relative abundance of microorganisms capable of sulfur cycling despite consistently low concentrations of sulfate and sulfide in the associated groundwater. A similar observation was made for groundwater from ~300 m belowground in Sweden, where sulfide was undetectable in the water but sulfate-reducing and sulfide-oxidizing bacteria were both abundant in the metagenomes, further suggesting that cryptic sulfur cycling could be occurring. Results such as these highlight the importance of combining multiple experimental techniques to study these poorly understood ecosystems. Another less well-understood form of syntrophy in deep terrestrial subsurface environments is the sharing of electrons between anaerobic methanotrophic (ANME) archaea and other groups of microorganisms, such as sulfate-reducing bacteria, which has been suggested to occur directly via a nanowire structure rather than through the exchange of electron donors. In subsurface environments where both ANME archaea and methanogens are present, a cryptic carbon cycle can exist in which methane is produced by the methanogen and used by the methanotroph, which, in turn, produces carbon dioxide that can be used by the methanogen. Microorganisms with interconnected metabolisms may be even more prevalent in subsurface environments than currently recognized. A recent metagenomics study suggested that most microorganisms within subsurface groundwater communities were incapable of performing multiple sequential redox transformations, including complete sulfide oxidation to sulfate and complete denitrification to N2 gas, and that, instead, these pathways were performed by multiple different species living in close association with one another. Although metagenomics can provide predictions about potential interactions, future studies will need to couple metagenomics with techniques such as enrichment cultivation, microscopy, and isotope labeling to demonstrate such syntrophic relationships.
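To put the interspecies hydrogen transfer argument above on a quantitative footing, the standard bioenergetics relation can be applied; the numbers below are illustrative, not values from the studies cited in this section. For a fermentation reaction that releases n molecules of H2, the in situ free-energy change depends on the ambient hydrogen partial pressure through the reaction quotient Q:

$$\Delta G = \Delta G^{\circ\prime} + RT \ln Q, \qquad Q \propto p_{\mathrm{H_2}}^{\,n}$$

At 25 °C, RT ≈ 2.48 kJ mol⁻¹, so each 1000-fold reduction in hydrogen partial pressure imposed by a hydrogen-consuming partner lowers ΔG by roughly n × 17 kJ mol⁻¹ (RT ln 10⁻³ ≈ −17.1 kJ mol⁻¹), which can be enough to turn an otherwise endergonic fermentation exergonic.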
Episymbiosis

The recent discovery of the candidate phyla radiation (CPR) of bacteria, as well as of DPANN (an acronym of the first five phyla included in the superphylum: Diapherotrites, Parvarchaeota, Aenigmarchaeota, Nanohaloarchaeota, Nanoarchaeota) archaea, in deep terrestrial subsurface environments suggests an important role for episymbiosis in deep subsurface environments. Both CPR bacteria and DPANN archaea are relatively abundant in groundwater and are generally episymbiotic, attaching to host cells. Studies also show that CPR bacteria can be detected in some of the deepest sampled environments. For the DPANN archaea, metagenomes obtained from several years of samples from a deep aquifer system demonstrate consistent co-occurrence patterns for a DPANN symbiont, Candidatus Huberiarchaeum crystalense, and its host, Candidatus Altiarchaeum hamiconexum, with several characteristics similar to the well-studied relationship between Nanoarchaeum equitans (also DPANN) and its host Ignicoccus hospitalis. Although the presence of Ca. H. crystalense and its host has not been reported for many deep subsurface environments, likely due to their recent discovery, it may well be that they are difficult to detect using traditional sampling methods due to their small size (i.e. passing through sample filters), unusual ribosome structure, and missing ribosomal proteins. The metabolic and ecological roles of CPR and DPANN are not yet well known, but many members possess genes for fermenting carbon compounds to produce acetate, lactate, formate, and ethanol, possibly using polysulfides as terminal electron acceptors.
Other studies suggest that some episymbiotic taxa could play supporting metabolic roles in nitrite reduction to ammonia and in sulfate reduction. Both CPR and DPANN representatives likely benefit from their hosts by scavenging vitamins, sugars, nucleotides, and reduced redox equivalents, as well as membrane lipids. Others have speculated that S-layer production by several of these episymbionts could play a protective role against viruses for host cells. Additional metagenomics studies of deep subsurface environments are necessary to develop an improved understanding of the impact of DPANN and CPR members on microbial community ecology and biogeochemical cycling within the deep subsurface.

Viruses

It has long been known that viruses play an important role in driving microbial diversification and controlling the balance of microbial communities in well-studied environments. Until recently, little was known about the role of viruses in deep terrestrial subsurface environments. To first determine whether viruses were present in the deep subsurface, granitic groundwater samples from 69 to 450 m deep in the Äspö Hard Rock Laboratory (Sweden) were analyzed. Overall, cell abundances and viral counts indicated that viruses from seven different families, including several known lytic viruses, were present and were about 10-fold more abundant than bacterial and archaeal cells. This suggests that viruses have a similarly important role in controlling the abundance of subsurface microbial populations as they do in better-characterized aquatic, terrestrial, and host-associated environments. A single-cell genomics approach showed evidence for viral infection of a Firmicutes (Bacillota)-dominated community in fracture water from 3 km deep in South Africa, and a recent study discovered two new bacteriophages native to groundwater, together suggesting that subsurface environments host diverse and yet-to-be-discovered populations of bacteriophages.
Deep terrestrial subsurface microbiology is still a relatively new field, with immense opportunity for further exploration and discovery. The widespread availability of metagenomic techniques has allowed researchers to explore subsurface microbial communities at a resolution not previously possible and has offered insight into the metabolisms and adaptations that these microorganisms use to survive relatively harsh conditions deep below the Earth’s surface. Although metagenomics can generate hypotheses about metabolic roles and symbiotic interactions, future research involving enrichment cultivation and microcosm experiments should ideally be coupled to cultivation-independent techniques. Together, these approaches can demonstrate how subsurface microorganisms interact with one another and confirm that taxa detected in situ represent living and viable microorganisms rather than, for example, relic DNA. For future microbial ecology studies of the subsurface, an important goal will continue to be elucidating the factors that govern microbial distributions, as well as the factors that influence deep subsurface microbial community diversity. Research is still leading to the discovery of new types of microorganisms, such as CPR bacteria and DPANN archaea, that evaded detection using traditional characterization methods. These recent findings suggest that we are just scratching the surface of belowground microbial diversity. Sampling of deep subsurface environments remains challenging and has largely been limited to mines and boreholes that are constructed for reasons other than microbiology. Our understanding of the deep terrestrial subsurface is limited to these “windows” of sampling opportunity, and there remain vast expanses of the deep subsurface that are completely unexplored. The various studies of deep subsurface microbiology to date have given us a perspective on what is happening, but it remains challenging to make broad generalizations about subsurface life because it is unclear how generalizable observations from individual sites might be on a global scale. An increased understanding of the microorganisms capable of living in deep terrestrial subsurface environments, and of the factors that influence their growth, will help with modelling global biogeochemical cycling and with making predictions about future subsurface activities in relation to human endeavors such as mining and nuclear waste storage.
Comparison of the effects of lidocaine and articaine used for buccal infiltration and supplemental palatinal infiltration anesthesia in maxillary molars with irreversible pulpitis: a prospective randomized study | a8bac1a1-cf91-40cf-96fd-519cb964c2a9 | 11807300 | Dentistry[mh] | Pain is one of the most common reasons that patients visit a dentist, with irreversible pulpitis (IP) often associated with severe pain. Providing pain-free and comfortable treatment is crucial in modern dentistry. Local anesthetic solutions are the most commonly used drugs for pain control in dental procedures because they numb the areas requiring treatment, ensuring that patients do not experience pain during the procedure. Anesthetizing mandibular molars with IP is significantly more challenging than anesthetizing maxillary molars. Consequently, most studies have focused on anesthetic success in mandibular molars. However, several studies have shown that 12–46% of maxillary molars with IP do not achieve complete anesthesia after a buccal infiltration injection with 2% lidocaine. Maxillary buccal infiltration anesthesia is widely used to provide pulpal anesthesia in maxillary teeth. However, achieving anesthesia in maxillary molars can be particularly challenging in the palatal root canal because of anatomical variations, root length, and excessive bone thickness in the area. Additionally, the inflamed state of the tissue in teeth with IP can complicate the depth of pulpal anesthesia. Askari et al. found that when buccal infiltration anesthesia was administered to maxillary first molars using 2% lidocaine with 1:80,000 epinephrine, teeth with longer palatal and distobuccal roots exhibited significantly more anesthetic failures. This finding is consistent with other studies showing that a single buccal infiltration may not effectively anesthetize the palatal roots of maxillary molars. Hosseini et al. compared the anesthetic success of 4% articaine with 1:100,000 epinephrine and 2% lidocaine with 1:80,000 epinephrine for buccal infiltration in maxillary first molars with IP and found no statistically significant difference between the two. In the same study, the lengths of the mesiobuccal and distobuccal roots did not significantly affect anesthetic success, whereas the palatal root length was associated with anesthetic failure. The chemical composition of articaine contains a unique thiophene ring instead of the benzene ring found in lidocaine and other amide local anesthetics. This difference increases lipid solubility, thereby increasing diffusion through the lipid membrane of the epineurium. Therefore, the penetration ability of articaine solution is greater than that of 2% lidocaine, as reported previously. The present study was performed to compare the anesthetic efficacy of 4% articaine with 1:100,000 epinephrine and 2% lidocaine with 1:80,000 epinephrine using buccal or palatal infiltration anesthesia in maxillary molars with IP. The two null hypotheses of the study were as follows. First, there is no statistical difference in anesthetic efficacy between 4% articaine with 1:100,000 epinephrine and 2% lidocaine with 1:80,000 epinephrine, regardless of the anesthesia method. Second, palatal infiltration anesthesia administered in addition to buccal infiltration anesthesia does not differ statistically in effectiveness from buccal infiltration anesthesia alone, regardless of the type of anesthetic used.

Ethics and consent to participate

This study was designed as a prospective, randomized, single-blind clinical trial.
Approval was obtained from the Akdeniz University Faculty of Medicine Clinical Research Ethics Committee on 05.05.2021 (decision number 690), and the study was registered at ClinicalTrials.gov (NCT06342869) on 2024-04-02. Each patient participating in the study received an informed consent form detailing the treatment and possible complications, and their written consent was obtained. The study design was based on the CONSORT (Consolidated Standards of Reporting Trials) 2010 statement. The details are presented in the Consolidated Standards of Reporting Trials flow diagram (Fig. ).

Inclusion and exclusion criteria

The inclusion criteria were maxillary molars with IP as confirmed by pulp tests, no radiolucency at the root tip on the preoperative periapical radiograph, and the ability to understand the pain scale used in the study. The exclusion criteria were an age of < 18 or > 65 years, allergy to any local anesthetics, history of a systemic disease, pregnancy, the inability to obtain a response from the cold test and electric pulp test performed on the relevant tooth, no vital tissue encountered upon opening the pulp chamber, and treatment with analgesics or other drugs interfering with pain perception within the past 12 h. The sample consisted of 80 eligible patients with symptomatic irreversible pulpitis of maxillary first molars.

Sample size calculation

The sample size calculation was performed using data from a previous study with the multiple means comparison option of Minitab software, considering α = 0.05, β = 0.2, a minimum significant difference of 20, and a mean standard deviation of 16.5. The process and application of block randomization were handled by a secretary. An online tool ( www.randomization.com ) was used to randomly allocate the participants into 4 groups. The treatment codes, in closed envelopes, were given to the clinician administering the injection by an individual not involved in the study.
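As a rough cross-check of the sample size calculation described above, the standard two-group formula for comparing means can be evaluated with the stated parameters. This is a minimal sketch assuming a two-sided α of 0.05 and 80% power; it is not a reproduction of the exact Minitab multiple-means procedure the authors used, so the per-group number it yields is a lower bound before any multiple-group or dropout adjustments.

from statistics import NormalDist

# Stated design parameters (from the text above)
alpha = 0.05          # two-sided type I error
beta = 0.2            # type II error (power = 80%)
delta = 20            # minimum significant difference (HPVAS units)
sigma = 16.5          # assumed common standard deviation

z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # ~1.96
z_beta = NormalDist().inv_cdf(1 - beta)         # ~0.84

# Classic per-group n for a two-sample comparison of means
n_per_group = 2 * (z_alpha + z_beta) ** 2 * sigma ** 2 / delta ** 2
print(round(n_per_group))  # ~11 per group before any adjustment

The four-arm design, the multiple-comparison handling built into the Minitab option, and allowance for dropouts plausibly explain the larger enrolled size of 20 patients per group (80 in total).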
The envelope was opened immediately after patient preparation and just before starting the treatment. The local anesthesia method was selected according to the code inside the envelope.

Trial design

The research was planned as a prospective, randomized, single-blind clinical trial. Eighty volunteer patients who met the inclusion criteria were randomly assigned to one of the four groups specified in the protocols:

Group 1: Buccal infiltration with 1.2 mL of 4% articaine (Maxicaine Fort Ampoule, VEM İlaç San. ve Tic. Aş, İstanbul) containing 1:100,000 epinephrine.

Group 2: Buccal infiltration with 1.2 mL of 2% lidocaine (Jetocaine Ampoule, Adeka İlaç ve Kimyasal Ürünler Tic. Aş, Samsun) containing 1:80,000 epinephrine.

Group 3: Buccal infiltration with 1.2 mL of 4% articaine (Maxicaine Fort Ampoule) containing 1:100,000 epinephrine plus palatal infiltration with 0.5 mL of the same solution.

Group 4: Buccal infiltration with 1.2 mL of 2% lidocaine (Jetocaine Ampoule) containing 1:80,000 epinephrine plus palatal infiltration with 0.5 mL of the same solution.

Procedure

The diagnosis of symptomatic irreversible pulpitis was made using the cold test Green Endo-Ice (1,1,1,2-tetrafluoroethane; Hygenic Corp.) and a digital electric pulp tester (Parkell Inc.). Pulp sensitivity was confirmed by a positive response to the electric pulp test and a prolonged symptomatic response to the cold test. To eliminate bias in sample selection, the diagnosis of irreversible pulpitis was made by a clinician not involved in the study. Pain intensity assessments were performed by another clinician blinded to the anesthesia injection technique. The pain scale and procedures were explained to the patients. They were asked to select their pain score before anesthesia using the Heft–Parker Visual Analog Scale (HPVAS), recorded as the pain score before treatment (HPVAS-1). The score measured with an electric pulp testing device before anesthesia was recorded as the response to the vitality test before anesthesia (EPT-1). All injections and treatments were performed by the same endodontist. For infiltration anesthesia, the needle was inserted deep into the mucobuccal fold of the relevant tooth between the mesiobuccal and distobuccal roots. After negative aspiration, the anesthetic solution was injected at a speed of 1 mL/min. After waiting 10 min for the anesthesia to take effect, the relevant tooth was measured again with the vitalometer, and the score was recorded as the response to the vitality test after anesthesia (EPT-2). After isolation under a rubber dam, the patients were instructed to raise their hands if they felt any pain during preparation of the access cavity. If the patient raised their hand, the procedure was stopped, and the pain score they selected on the HPVAS was recorded as the pain score during entrance cavity preparation (HPVAS-2). If the pain score was between 54 and 170 on the HPVAS, the anesthesia was considered unsuccessful and supplemental anesthetic techniques were used before continuing. If no pain was felt (HPVAS = 0) or the score was ≤ 54, the anesthesia was considered successful and the preparation of the access cavity continued without supplemental anesthesia. Once the entrance cavity preparation was completed and the canal orifices were reached, a 10 K-type file (Kerr Dental, Orange, CA) was used to enter the palatal canal.
If the patient felt pain and raised their hand, the procedure was stopped, and the pain score was recorded as the pain score during palatal canal entry (HPVAS-3). Anesthesia was considered unsuccessful if the pain score during canal entry was between 54 and 170 on the HPVAS, and the procedure was continued with supplemental intrapulpal anesthesia. If no pain was felt (HPVAS = 0) or the score was ≤ 54, the anesthesia was considered successful. Root canal treatment was then completed.

Outcomes

The main objective of this study was to compare the effects of anesthetic solutions containing articaine or lidocaine used for palatal infiltration anesthesia, in addition to buccal infiltration anesthesia, in patients with symptomatic irreversible pulpitis of the maxillary molars. Anesthetic efficacy was measured using a combination of the electric pulp test (EPT) and the visual analog scale (VAS). EPT values ranging from 0 mA to 70 mA, taken before and after anesthesia, were recorded. The VAS was answered by the patients before anesthesia, 10 min after anesthesia, and during entry into the palatal canal. If the perceived pain score was 54 < HPVAS < 170, the anesthesia was considered “unsuccessful.” The procedure was considered “successful” if the patient did not feel any pain (HPVAS = 0) or if HPVAS ≤ 54.

Statistical analysis

Statistical analysis was performed using SPSS 23.0 (IBM Corp., Armonk, NY, USA).
Table presents the distribution of sex, mean age, tooth number, HPVAS-1, and EPT-1 across the groups of patients participating in the study. There were no significant differences between the groups in terms of sex, mean age, tooth number, initial pain scores, or initial EPT responses. The pain scores measured during endodontic access cavity preparation (HPVAS-2) and palatal canal entry (HPVAS-3) after anesthesia are provided in Table. The lowest pain scores (HPVAS-2) during access cavity preparation were observed in Group 3, in which palatal infiltration anesthesia was administered in addition to buccal infiltration anesthesia with 4% articaine. The highest scores were recorded in Group 2, in which only buccal infiltration anesthesia was performed with 2% lidocaine. However, there was no statistically significant difference between the groups. When anesthesia was deemed successful, the access cavity was completed without additional anesthesia, and the canal orifices were reached. According to HPVAS-2 scores, anesthesia was considered unsuccessful in 4 patients in Group 1, 6 in Group 2, 1 in Group 3, and 5 in Group 4. The lowest pain scores during palatal canal entry (HPVAS-3) were observed in Group 4, in which palatal infiltration anesthesia was administered in addition to buccal infiltration anesthesia with 2% lidocaine, followed by Group 3, which received a combination of palatal and buccal infiltration with 4% articaine. Again, no statistically significant difference was found between the groups (p > 0.05). Table shows the HPVAS-2 and HPVAS-3 scores according to the anesthetic technique. To determine whether the anesthetic technique affected anesthetic success, the groups were divided into two categories: buccal infiltration anesthesia only (Groups 1 and 2) and palatal infiltration anesthesia in addition to buccal infiltration (Groups 3 and 4). When palatal infiltration anesthesia was added to buccal infiltration, both the pain scores during access cavity preparation (HPVAS-2) and palatal canal entry (HPVAS-3) were lower. Although the difference in HPVAS-2 scores was not statistically significant (p > 0.05), the difference in HPVAS-3 scores was statistically significant (p < 0.05).
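As a complementary view of the technique-level comparison, the HPVAS-2 failure counts reported above (4 + 6 of 40 for buccal infiltration only vs. 1 + 5 of 40 for buccal plus palatal) can be pooled into a 2 × 2 table and tested with Fisher's exact test. Note that the study itself compared score distributions with the Kruskal–Wallis test, so this is only an illustrative cross-check.

```python
# Fisher's exact test on the pooled HPVAS-2 failure counts reported above.
from scipy.stats import fisher_exact

#            failures  successes
table = [[10, 30],   # Groups 1 + 2: buccal infiltration only
         [6, 34]]    # Groups 3 + 4: buccal + palatal infiltration
odds_ratio, p_value = fisher_exact(table)
print(f"OR = {odds_ratio:.2f}, p = {p_value:.3f}")  # not significant, consistent with the HPVAS-2 result
```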
Regardless of the type of anesthetic, when success rates were considered by anesthesia technique, the groups in which palatal infiltration anesthesia was performed in addition to buccal infiltration anesthesia were more successful than the groups in which only buccal infiltration anesthesia was performed (Tables and ). Table shows the differences between the pain scores and EPT values before and after anesthesia. The greatest differences in pain scores and EPT values were observed in Group 3. The differences in HPVAS scores were not statistically significant (p > 0.05), whereas the differences in EPT values were statistically significant (p < 0.05).

This study compared the anesthetic effects of two anesthetic solutions and two injection techniques (buccal infiltration alone and buccal plus palatal infiltration) in maxillary molars with irreversible pulpitis (IP). There were no significant differences among the four patient groups in terms of age, sex, tooth number, HPVAS-1, or EPT-1 scores, indicating that age, sex, and preoperative pain had minimal impact on the results. All patients included in the study reported moderate to severe pain (moderate: VAS score of 54–114 mm; severe: VAS score of 114–170 mm), consistent with the diagnosis of symptomatic IP in maxillary first and second molars. While mandibular teeth with IP are generally more difficult to anesthetize than maxillary posterior teeth, several studies have shown that 12–46% of maxillary molars with IP fail to achieve complete anesthesia after a buccal infiltration injection with 2% lidocaine. In our study, anesthetic success was evaluated using the HPVAS. Previous studies have indicated that the VAS is methodologically sound, simple, easy to administer, and adaptable for pain assessment in individual patients. The HPVAS is a standardized method for data collection in clinical research of this type. We used 2% lidocaine and 4% articaine, the most commonly used anesthetic solutions in dentistry. Numerous studies have compared these two solutions for both upper jaw infiltration and lower jaw inferior alveolar block anesthesia. Certosimo et al. used EPT values and VAS scores to evaluate the ability of EPT to assess local anesthesia. Their results showed that EPT can be a valuable tool for predicting potential anesthetic problems. In our study, the largest pre-anesthesia to post-anesthesia HPVAS differences were found in Group 3, in which buccal and palatal infiltration anesthesia with articaine was performed. Consistently, the EPT differences were also greatest in Group 3, and this difference was statistically significant. The unique chemical structure of articaine contains a thiophene ring instead of the benzene ring found in lidocaine and other amide local anesthetics. This difference increases lipid solubility, thereby enhancing diffusion through the lipid membrane of the epineurium, which may explain the significantly higher success rate in Group 3 compared with the lidocaine groups in our study. Malamed stated that approximately 0.6 mL (one-third of the ampoule) is sufficient for buccal infiltration anesthesia in the upper jaw, and 0.2–0.3 mL of the solution is sufficient for palatal infiltration anesthesia.
Sreekumar et al. compared the effects of three different doses (0.6 mL, 0.9 mL, and 1.2 mL) of 4% articaine with 1:100,000 epinephrine for buccal infiltration anesthesia and concluded that the 1.2 mL dose provided a faster onset of pulpal anesthesia, a higher success rate, and a longer duration of soft tissue and pulpal anesthesia than the 0.6 mL dose. Based on these data, we applied a 1.2 mL volume of local anesthetic solution for buccal infiltration anesthesia in our study. In healthy maxillary molars, the success rate of pulpal anesthesia with 2% lidocaine ranges from 72 to 100%. However, the success rate drops to 54–80% in maxillary molars with IP, highlighting the difficulty of achieving anesthesia in inflamed tissues such as those seen in IP. Kanaa et al. compared 2 mL buccal infiltration anesthesia with 4% articaine containing 1:100,000 epinephrine and 2% lidocaine containing 1:80,000 epinephrine in healthy maxillary teeth, finding no significant difference in anesthetic efficacy between the two. Sood et al. compared 4% articaine with 1:100,000 epinephrine to 2% lidocaine with 1:80,000 epinephrine in mandibular molars with IP, finding a higher pulpal anesthetic success rate for articaine (76%) than for lidocaine (58%) as measured by EPT. However, during root canal treatment, the success rates were 88% for articaine and 82% for lidocaine, with no statistically significant difference. The local anesthetic solutions used in our study contained 4% and 2% active substance for articaine and lidocaine, respectively, meaning that twice as much active substance was injected in the articaine groups relative to the lidocaine groups. Nusstein et al. found no significant difference in pulpal anesthesia success when comparing 1.8 mL and 3.6 mL of lidocaine for inferior alveolar nerve blocks. Syed et al. compared equal-milligram doses of 0.8 mL of 4% articaine containing 1:100,000 epinephrine and 1.6 mL of 2% lidocaine containing 1:80,000 epinephrine, finding no significant difference in anesthetic success. Therefore, doubling the volume of lidocaine in our study would likely not have affected the outcome. A meta-analysis showed that articaine had a significant advantage over lidocaine for additional infiltration after mandibular block anesthesia in teeth with symptomatic IP, but no advantage when used alone for mandibular block or maxillary infiltration. In our study, there was no significant difference in mean pain scores during access cavity preparation and palatal canal entry between groups receiving buccal or both buccal and palatal injections with 2% lidocaine containing 1:80,000 epinephrine or 4% articaine containing 1:100,000 epinephrine (p > 0.05). Ulusoy et al. found that single buccal infiltration did not provide adequate pulpal anesthesia in the palatal root canal of maxillary first molars with IP. In a study comparing buccal infiltration of lidocaine and articaine in the orthodontic extraction of bilateral premolars, significantly more pain was noted in the palatal area with lidocaine, necessitating additional palatal anesthesia. Guglielmo et al. compared 0.5 mL palatal infiltration plus 1.8 mL buccal infiltration to 1.8 mL buccal infiltration alone using 2% lidocaine containing 1:100,000 epinephrine, reporting success rates of 95% and 88%, respectively. These high success rates may be attributed to the study being performed on vital, asymptomatic, non-inflamed healthy teeth.
In a study by Askari et al., maxillary first molars with longer palatal and distobuccal roots showed significantly more anesthetic failures after buccal infiltration with 2% lidocaine containing 1:80,000 epinephrine. This finding aligns with other studies suggesting that single buccal infiltration may not be effective for numbing the palatal roots of maxillary molars. Parirokh et al. also reported higher anesthesia failure rates in maxillary molars with IP and longer distances from the palatal root apex to the buccal cortical plate. The main reason for these failures may be the distance between the buccal injection site and the apex of the palatal root; thus, anesthetic difficulties can be expected in maxillary molars with long or distant palatal roots. In our study, 0.5 mL palatal infiltration anesthesia was applied in addition to 1.2 mL buccal infiltration in Groups 3 and 4. Regardless of the type of anesthetic used, adding palatal infiltration to buccal infiltration resulted in lower pain scores during access cavity preparation (HPVAS-2) and palatal canal entry (HPVAS-3) than buccal infiltration alone. Although this difference was not statistically significant for HPVAS-2 (p > 0.05), it was statistically significant for HPVAS-3 (p < 0.05). The buccal–palatal distance of the alveolar crest and the thickness of the alveolar bone in the maxillary molar region may hinder the spread of local anesthetics to the palatal tissues after buccal injection. However, Aggarwal et al. found no significant difference between buccal infiltration alone and buccal plus palatal infiltration in maxillary molars with IP. Individual differences in bone density, tooth morphology, and anesthetic technique may explain the variability in anesthetic success across studies. A limitation of this study is that the degree of pain recorded using the HPVAS is a subjective measure that varies from person to person depending on the individual pain threshold. Further studies with different methodologies are needed to evaluate the success rates of local anesthetics in achieving pulpal anesthesia in maxillary molars with IP.

The hypothesis that there is no statistical difference in anesthetic efficacy between 4% articaine containing 1:100,000 epinephrine and 2% lidocaine containing 1:80,000 epinephrine was accepted. The hypothesis that palatal infiltration anesthesia in addition to buccal infiltration anesthesia does not differ in effectiveness from buccal infiltration anesthesia alone was rejected. Palatal infiltration significantly reduced pain during access to the palatal canals compared with buccal infiltration alone. The differences in pain scores before and after anesthesia were consistent with the differences in EPT, indicating that EPT may be a valuable tool for predicting potential anesthetic problems.
Molecular Epidemiology and In-Depth Characterization of Klebsiella pneumoniae

The emergence and rise of antimicrobial resistance (AMR) in pathogenic bacteria have become a significant public health threat over the last several decades, and the problem continues to escalate. The first comprehensive assessment of the global burden of AMR highlighted the excessive mortality rates associated with AMR infections. In response to this alarming trend, the World Health Organisation (WHO) recently published an updated list of priority AMR pathogens for which new therapeutic alternatives are urgently needed. Among other priority pathogens, the list includes carbapenem-resistant and cephalosporin-resistant Enterobacterales. The increasing resistance to carbapenems and cephalosporins in these bacteria compromises the efficacy of therapy and limits treatment options for patients facing challenging infections. Among Enterobacterales, Klebsiella pneumoniae has been estimated to be second only to Escherichia coli in mortality directly attributable to AMR, with approximately 193,000 and 219,000 deaths globally, respectively. K. pneumoniae is considered one of the most widespread opportunistic bacteria, causing pneumonia, urinary tract infections, meningitis, sepsis, and other life-threatening diseases. Its pathogenicity mechanisms, however, are still poorly understood. Another problem with this bacterium is its multidrug resistance (MDR). It belongs to the ESKAPE bacteria (Enterococcus faecium, Staphylococcus aureus, K. pneumoniae, Acinetobacter baumannii, Pseudomonas aeruginosa, and Enterobacter species), for which therapeutic options are limited because of widespread MDR phenotypes. Attention to MDR K. pneumoniae is therefore warranted because of its clinical significance as one of the leading pathogens associated with nosocomial infections that are particularly difficult to treat, especially in at-risk patients such as the elderly, neonates, and patients with chronic diseases or compromised immunity. Another important pathotype comprises hypervirulent K. pneumoniae (hvKp) strains, which are susceptible to antimicrobials but can affect healthy individuals and may cause severe community-acquired infections such as pyogenic liver abscess with metastatic infections, pneumonia, urinary tract infections, and other diseases. The prevalence of this pathotype remains underestimated, since reliable biomarkers for hvKp are still in development and detection of hypervirulence is not commonly performed in a standard clinical microbiology laboratory. Furthermore, the emergence of convergent K. pneumoniae strains that combine the MDR and hvKp phenotypes has been increasingly reported. Dissemination of this emerging pathotype may pose a global public health threat. Of special concern are the K. pneumoniae lineages exhibiting both hvKp and carbapenem resistance phenotypes. This drastically limits treatment options and necessitates the use of last-resort antimicrobials (AMs) with known side effects, such as colistin, because alternative antimicrobial therapies are not always available. The global dissemination of epidemically important high-risk clones of K. pneumoniae poses a significant threat that requires extensive monitoring and appropriate control measures. Genomic surveillance is especially important, since there is growing evidence of remarkable genetic diversity among K. pneumoniae strains.
This diversity is mainly driven by horizontal gene exchange, and it results in the emergence of a variety of clones at different times and in different regions. This clonal diversity changes the landscape of difficult-to-treat and/or hvKp infections and challenges effective infection prevention and control. To deal with this threat, many countries have implemented monitoring programs that help elucidate the epidemiology and drug resistance of this pathogen and inform the necessary treatment and control measures. This information is especially useful for health professionals who treat severely infected patients and must promptly choose the empirical AM therapy most appropriate to the current regional circumstances. In some countries and regions, however, there is a paucity of information regarding the genomic structure of local K. pneumoniae pathotypes. This is particularly evident for Armenia, where such information is very scarce. There is only a single published report concerning the genomic study of eight MDR K. pneumoniae clinical isolates from Armenia. The aim of this study, therefore, was to explore the molecular epidemiology, resistome, and virulome of K. pneumoniae clinical isolates in Armenia. We attempted to characterise these isolates in depth, covering various phenotypic and genomic characteristics, to provide combined data for appropriate public health measures at the local and regional levels.
2.1. Antimicrobial Susceptibility Among Clinical Isolates of K. pneumoniae

Among our 48 clinical isolates of K. pneumoniae, the highest rates of susceptibility were detected towards colistin (93.75%, 45/48, minimum inhibitory concentration (MIC) ≤ 1 µg/mL) and tigecycline (93.75%, 45/48, MIC ≤ 0.5 µg/mL), followed by meropenem (89.58%, 43/48) and imipenem (89.58%, 43/48). Among cephems, the most effective was cefoxitin, with a 79.17% (38/48) susceptibility rate, while 6.25% (3/48) and 14.58% (7/48) of isolates displayed intermediate and full resistance, respectively. Susceptibility to other AMs of this class was 56.25% (27/48) for cefepime, 50% (24/48) for ceftazidime, and 47.92% (23/48) for ceftriaxone. These findings indicate a high prevalence of resistance to the 3rd and 4th generation cephalosporins among our isolates. Similar levels of susceptibility were found towards all beta-lactam combination agents tested in our study; the most efficient was ticarcillin-clavulanate, with a 50% (24/48) susceptibility rate. Susceptibility to another beta-lactam, aztreonam, was in the same range (52.08%, 25/48). As for azithromycin, 66.67% (32/48) of isolates had MICs of ≤ 16 µg/mL, whereas the remaining 33.33% (16/48) displayed MICs higher than 64 µg/mL. The rates of susceptibility to other AMs, in descending order, were as follows: 77.08% (37/48) for amikacin, 64.58% (31/48) for chloramphenicol, 56.25% (27/48) for gentamicin, 50% (24/48) for ciprofloxacin, 43.75% (21/48) for tetracycline, 35.42% (17/48) for tobramycin, 35.42% (17/48) for trimethoprim-sulfamethoxazole, and 2.08% (1/48) for ampicillin.

Resistance to one or two classes of AMs was identified in 35.42% (17/48) of the clinical isolates. Among them, the most common were isolates recovered from stool samples collected from children (41.18%, 7/17). The most prevalent AMR profile in these non-MDR isolates was ampicillin resistance (23.53%, 4/17). We identified an extensively drug-resistant (XDR) phenotype in four clinical K. pneumoniae isolates (8.33%, 4/48), which exhibited an identical profile of full resistance to all AMs tested except for colistin (MIC ≤ 1 µg/mL) and tigecycline (MIC < 0.5 µg/mL). All XDR K. pneumoniae isolates were recovered in 2022 from the urine (3) and stool (1) samples of paediatric patients. Notably, resistance to amikacin was detected only in the XDR isolates and was absent in all other K. pneumoniae clinical isolates.

Our results indicated that a substantial proportion of the human K. pneumoniae isolates in this study were MDR (56.25%, 27/48). The highest susceptibility rates among MDR isolates were found towards meropenem (96.3%, 26/27) and imipenem (96.3%, 26/27), followed by colistin (88.89%, 24/27, MIC ≤ 1 µg/mL), tigecycline (88.89%, 24/27, MIC ≤ 0.5 µg/mL), cefoxitin (85.19%, 23/27), and amikacin (74.07%, 20/27). The MDR isolates displayed lower susceptibility rates towards azithromycin (55.56%, 15/27, MIC ≤ 16 µg/mL), chloramphenicol (51.85%, 14/27), gentamicin (40.74%, 11/27), cefepime (37.04%, 10/27), tobramycin (33.33%, 9/27), and aztreonam (29.63%, 8/27). Susceptibility rates to other AMs were even lower. We identified five MDR isolates resistant to nine classes of AMs, but their resistance profiles were not identical. The most common pattern among MDR isolates was resistance to eight AM classes (33.33%, 9/27). Three isolates (KpA44, KpA46, and KpA373) had identical AMR profiles.
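Because these susceptibility rates come from only 48 isolates, the point estimates carry non-trivial sampling uncertainty. The following sketch computes a Wilson 95% confidence interval from the n/N counts quoted above; the helper is our own illustration and not part of the original analysis.

```python
# Wilson score interval for a binomial proportion k/n, applied to one of
# the susceptibility rates reported above.
from math import sqrt

def wilson_ci(k: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson confidence interval for the proportion k/n."""
    p = k / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

# Colistin susceptibility: 45 of 48 isolates (93.75%).
lo, hi = wilson_ci(45, 48)
print(f"93.75% (45/48), 95% CI: {lo:.1%}-{hi:.1%}")  # roughly 83%-98%
```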
The extended-spectrum beta-lactamase (ESBL)-producer phenotype was identified in 45.83% (22/48) of our K. pneumoniae clinical isolates. ESBL production, however, was not detected in the XDR isolates. The rate of ESBL production was 77.78% in MDR isolates (21/27) and 5.88% in non-MDR isolates (1/17). Among the ESBL-producing isolates, the highest susceptibility was towards carbapenems (95.45%, 21/22). Carbapenem resistance was detected in only one ESBL-positive isolate, with intermediate resistance to imipenem and full resistance to meropenem. Resistance to colistin was detected in two ESBL-producing isolates, with an MIC of 2 µg/mL. Three ESBL-producers were resistant to tigecycline, with one isolate having an MIC of 1 µg/mL, while the other two had 2 µg/mL. The ESBL-positive MDR K. pneumoniae isolates also demonstrated high levels of resistance towards the 3rd and 4th generation cephalosporins: all of them were resistant to ceftriaxone, and only 4.76% (1/21) and 23.81% (5/21) were susceptible to ceftazidime and cefepime, respectively. However, 85.71% (18/21) of the ESBL-producing MDR isolates were susceptible to cefoxitin, suggesting it could be a therapeutic option for the treatment of ESBL-producing K. pneumoniae to limit the use of carbapenems. Susceptibility to other AMs in this group was as follows: amikacin (66.67%, 14/21), azithromycin (57.14%, 12/21, MIC ≤ 16 µg/mL), chloramphenicol (47.62%, 10/21), gentamicin (28.57%, 6/21), tobramycin (23.81%, 5/21), and beta-lactam combination agents such as ampicillin-sulbactam (19.05%, 4/21). All MDR ESBL-producing K. pneumoniae isolates were resistant to five or more classes of AMs.

Among the six ESBL-negative MDR isolates, the susceptibility rates to all cephalosporins tested were significantly higher than in ESBL-producers: 100% to ceftriaxone and ceftazidime (p < 0.0001) and 83.33% (5/6) to cefepime (p < 0.05). The ESBL-negative isolates were also significantly more susceptible to gentamicin and ciprofloxacin than the ESBL-producers: 83.33% vs. 28.57% and 66.67% vs. 14.29%, respectively (p < 0.05). In addition, one isolate with a colistin MIC of 2 µg/mL was identified among the MDR ESBL-negative isolates. An MDR phenotype with resistance to five or more classes of AMs was significantly less frequent in ESBL-negative MDR isolates than in ESBL-producers (50% vs. 100%, p < 0.01). Thus, we detected an alarming presence of carbapenem resistance in XDR K. pneumoniae clinical isolates, as well as high rates of complete resistance to the 3rd and 4th generation cephalosporins. All ESBL-producers were resistant to five or more classes of AMs, seriously limiting therapeutic options.
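The ESBL-positive versus ESBL-negative comparisons above are essentially 2 × 2 proportion tests. The sketch below reproduces the ceftazidime comparison (6/6 vs. 1/21 susceptible) with Fisher's exact test; the article does not state which test produced its p-values, so this choice is an assumption for illustration only.

```python
# Fisher's exact test on the ceftazidime susceptibility counts reported above.
from scipy.stats import fisher_exact

#                 susceptible  non-susceptible
table = [[6, 0],    # ESBL-negative MDR isolates (n = 6)
         [1, 20]]   # ESBL-producing MDR isolates (n = 21)
odds_ratio, p_value = fisher_exact(table)
print(f"p = {p_value:.2e}")  # well below 0.0001, matching the reported significance
```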
2.2. In Vitro Activity of Bacteriophage Preparations Against Human K. pneumoniae Isolates

One of the alternative options for the treatment of XDR and MDR infections is phage therapy. We therefore tested our K. pneumoniae isolates for susceptibility to the commercial bacteriophage preparations "Bacteriophage Klebsiella pneumoniae Purified" (BKpP) and "Bacteriophage Klebsiella Polyvalent Purified" (BKPP) (manufacturer: SPA "Microgen", Moscow, Russia), which include phage cocktails active against K. pneumoniae and Klebsiella spp., respectively. Complete resistance to both phage cocktails was detected in 27.08% (13/48) of our isolates. Susceptible or intermediately susceptible phenotypes were found in 35 isolates (72.92%) towards BKpP and in 32 (66.67%) towards BKPP. Notably, all XDR K. pneumoniae isolates were highly susceptible to BKpP and intermediately susceptible to BKPP. Both phage preparations demonstrated higher activity against MDR isolates than against non-MDR isolates: 81.48% (22/27) vs. 47.06% (8/17) for BKpP and 74.07% (20/27) vs. 52.94% (9/17) for BKPP. Thus, the combination of both phage cocktails provided 100% coverage against the XDR K. pneumoniae isolates, while the efficiency against the MDR strains was in the range of 74.07–81.48%. At the same time, the three K. pneumoniae clinical isolates with the hypermucoviscous (HMV) phenotype (see below), which were highly susceptible to antibiotics, were mostly resistant to the phage cocktails. Thus, the commercial phage formulations BKpP and BKPP displayed significant in vitro activity against MDR and XDR K. pneumoniae clinical isolates and may therefore serve as alternative or adjunct therapies to control these infections.

2.3. Hypermucoviscous K. pneumoniae Clinical Isolates

We detected three clinical K. pneumoniae isolates (6.25%, 3/48) with a hypermucoviscous (HMV) phenotype using the string test. All these isolates were recovered from urine samples of adults. The HMV isolates showed distinctive AMR profiles of resistance to two classes of AMs and were classified as non-MDR isolates. In particular, all HMV isolates showed resistance to tobramycin (intermediate) and ampicillin (intermediate or resistant), whereas two of them exhibited intermediate resistance to piperacillin-tazobactam (KpA687, KpA704), and one isolate was intermediately resistant to gentamicin (KpA828). In addition, one HMV K. pneumoniae clinical isolate, KpA828, was an ESBL-producer; it was the only ESBL-positive isolate among the non-MDR K. pneumoniae isolates. No HMV phenotypes were present among the MDR or XDR strains.

2.4. Enterobacterial Repetitive Intergenic Consensus (ERIC-PCR) Typing of K. pneumoniae Clinical Isolates

To explore the genetic relatedness among our 48 clinical isolates of K. pneumoniae, we used ERIC-PCR. This analysis yielded differential patterns consisting of 9–18 bands. The dendrogram generated from the ERIC-PCR data demonstrated that all isolates in our collection shared at least 63.6% band-pattern similarity and were grouped into two main clades, A and B. The larger clade, A, comprised 83.33% (40/48) of the isolates, including all four XDR isolates and the majority of the MDR isolates (25 out of 27). The non-MDR isolates were dispersed across the tree; however, their prevalence varied from 27.5% (11/40) in clade A to 75% (6/8) in clade B. In total, 42 different ERIC types were identified, 37 of them unique, suggesting a significant genetic diversity among our K. pneumoniae clinical isolates. Three out of four XDR isolates displayed 100% band-pattern similarity (cluster I), suggesting that they may belong to a single clone, while the fourth XDR isolate showed 97.2% similarity to the XDR cluster. In addition, five MDR isolates (including two isolates in cluster II) showed 91.4% similarity to the cluster of XDR isolates. Among other clusters, cluster III grouped together two MDR isolates displaying resistance to nine classes of AMs, while another isolate resistant to nine classes showed 97.2% similarity to the cluster isolates. Also of note are two MDR isolates in cluster IV recovered from endotracheal tubes of paediatric patients. Remarkably, all isolates that clustered together had been isolated in the same hospital within a period of up to three weeks, suggesting possible nosocomial infection. Thus, ERIC-PCR typing revealed a high genetic diversity among our K. pneumoniae clinical isolates, indicating a predominantly polyclonal distribution of K. pneumoniae strains. At the same time, there are indications of clonal spread of the XDR and some of the MDR isolates.
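Dendrograms of this kind are commonly built from binary band-presence matrices using the Dice coefficient and UPGMA (average-linkage) clustering. The article does not specify its settings, so the profiles and parameters below are illustrative assumptions, not the study data.

```python
# Generic sketch of an ERIC-PCR similarity dendrogram from binary band profiles.
import numpy as np
from scipy.cluster.hierarchy import dendrogram, linkage
from scipy.spatial.distance import pdist

# Rows: isolates; columns: presence/absence of each scored band (9-18 bands).
profiles = np.array([
    [1, 1, 0, 1, 0, 1, 1, 0, 1, 1],   # made-up isolate 1
    [1, 1, 0, 1, 0, 1, 1, 0, 1, 1],   # identical pattern -> 100% similarity
    [1, 0, 1, 1, 0, 1, 0, 1, 1, 0],
    [0, 1, 1, 0, 1, 0, 1, 1, 0, 1],
])

dice_dist = pdist(profiles, metric="dice")      # 1 - Dice similarity
tree = linkage(dice_dist, method="average")     # UPGMA
dendrogram(tree, no_plot=True)                  # set no_plot=False to draw
print(1 - dice_dist)                            # pairwise Dice similarities
```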
2.5. Whole Genome Sequencing (WGS) of K. pneumoniae Clinical Isolates

A total of 21 K. pneumoniae isolates from Armenia were subjected to WGS. This analysis included all four XDR isolates, 13 MDR isolates resistant to seven or more classes of AMs, and one MDR isolate resistant to five classes of AMs. The MDR isolates were selected for WGS based on AMR profile, specimen origin, and year of isolation. In addition, two non-MDR isolates exhibiting the HMV phenotype and one isolate susceptible to all AMs tested (except for tobramycin) were also sequenced. ERIC-PCR results were used to avoid redundant sequencing of clonal isolates. Genome sequences are available in the NCBI database under BioProject PRJNA1141898. Individual accession numbers are listed in . The general genomic information on our K. pneumoniae isolates was generated using the BIGSdb-Pasteur database (https://bigsdb.pasteur.fr/klebsiella/, accessed on 20 September 2024) and the Pathogenwatch resource (https://pathogen.watch, accessed on 20 September 2024). This information is summarised in . The genome sizes of the K. pneumoniae isolates were in the range of 4.97–5.94 Mb, with GC content in the range of 56.53–57.76%, which is consistent with the accepted criteria for Klebsiella spp. in the BIGSdb-Pasteur database.

2.6. Molecular Epidemiology of K. pneumoniae Clinical Isolates

The WGS results indicated that our Klebsiella isolates belong to phylogroup Kp1, K. pneumoniae sensu stricto. A total of 12 different sequence types (STs) were identified among the 21 sequenced K. pneumoniae clinical isolates. The most common was ST395, detected in 7 isolates (33.33%, 7/21) that were recovered in 2022 (6) and 2024 (1) from paediatric patients. Among the ST395 K. pneumoniae isolates were all four XDR isolates with complete resistance to carbapenems and three MDR isolates with resistance to eight classes of AMs. The ST395 isolates were recovered from various sources: urine (3), stool (2), endotracheal tube (1), and wound fluid (1). ST395 is an international high-risk clonal lineage associated with MDR phenotypes of clinical relevance, including the production of carbapenemases and ESBLs, as well as resistance to other classes of AMs. The carbapenem-resistant KpA699 isolate, which was recovered from the urine sample of an adult patient in 2022 and exhibited resistance to nine classes of AMs, was assigned to ST15. Two other isolates, KpA13 and KpA230, which were isolated from throat infections of paediatric patients and were resistant to nine classes of AMs, were assigned to ST39. Two strains resistant to eight classes of AMs were assigned to ST307: KpA500, isolated from the stool sample of an adult in 2018, and KpA204, isolated from a throat infection of a paediatric patient in 2024. In addition, two isolates, KpA250 and KpA314, which were isolated from endotracheal tubes of children in 2022, belonged to ST29. All other STs were represented by a single isolate, suggesting a significant level of genetic diversity among the clinical isolates of K. pneumoniae and confirming our earlier observations with ERIC-PCR.
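The ST distribution described in this section (the remaining single-isolate STs are enumerated just below) can be tallied directly from the typing output; a quick check that the reported counts add up:

```python
# Tally of the ST assignments reported in this section, as one would derive
# them from Kleborate/Pathogenwatch output.
from collections import Counter

sts = (["ST395"] * 7 + ["ST39"] * 2 + ["ST307"] * 2 + ["ST29"] * 2
       + ["ST15", "ST219", "ST5275", "ST449", "ST873", "ST107", "ST25", "ST1480"])
counts = Counter(sts)
assert len(sts) == 21 and len(counts) == 12   # 21 isolates, 12 distinct STs
for st, n in counts.most_common():
    print(f"{st}: {n}/21 ({n / 21:.1%})")
```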
The colistin-resistant KpA511 isolate, with resistance to nine classes of AMs, was assigned to ST219. The KpA6101 isolate, resistant to eight classes of AMs (including polymyxins) but susceptible to all cephems, was assigned to ST5275. Another colistin non-susceptible isolate, KpA324, with resistance to eight classes of AMs, was a representative of ST449. The remaining MDR isolate subjected to WGS, KpA7002, was assigned to ST873. Among the three non-MDR isolates, the two HMV isolates were assigned to ST107 (KpA704) and ST25 (KpA828, ESBL-producer), while the isolate susceptible to all but one AM was assigned to ST1480 (KpA857). Thus, despite the limitations of this work due to the non-consecutive collection of K. pneumoniae clinical isolates and the small number of genomes sequenced, our results indicate the circulation of international high-risk K. pneumoniae clones in Armenia.

Notably, all our ST395 isolates of K. pneumoniae belonged to the same sublineage and clonal group (SL395 and CG395, respectively) and also shared other important characteristics. An identical core genome sequence type (cgST) was identified only among the K. pneumoniae ST395 isolates, whereas all other isolates were assigned to individual cgSTs. In particular, three XDR K. pneumoniae ST395 isolates were assigned to cgST-*a23d. These isolates (KpA278, KpA285, and KpA542) were recovered from paediatric patients at the same facility within a three-week period. In addition, one XDR isolate, KpA481, isolated in 2022, was assigned to cgST-*72e4, together with the MDR KpA44 strain isolated in 2024. The limited genetic diversity among the ST395 isolates is possibly due to sampling bias, with three isolates obtained from the same facility within a short period of time. Isolates in the other K. pneumoniae STs were also resolved to the sublineage and clonal group levels. Isolates within the same ST were assigned to the same sublineages and clonal groups. Furthermore, the ST1480 (KpA857) and ST5275 (MDR KpA6101) isolates were assigned to the same sublineage, SL37, whereas the ST107 (KpA704, HMV) and ST219 (MDR KpA511) isolates were assigned to sublineage SL107.

2.7. Capsule (K) and Lipopolysaccharide (O) Types Deduced from WGS Data

The most common capsular serotype was K2 (38.1%, 8/21), identified in all seven ST395 isolates and in one ST25 isolate, KpA828, with the HMV phenotype ( , sourced from Kaptive). The second most common serotype was K19 (14.29%, 3/21), detected in two ST29 isolates (KpA250, KpA314) and one ST15 isolate (KpA699). Other capsular serotypes were K62 (9.52%, 2/21), identified in the two ST39 isolates (KpA13 and KpA230), and KL102 (9.52%, 2/21), detected in the ST307 isolates (KpA204 and KpA500). All other capsular serotypes were represented by a single isolate: K10 was detected in the ST107 isolate (KpA704, HMV phenotype), K22 in the ST449 isolate (KpA324), K23 in the ST5275 isolate (KpA6101), K39 in the ST1480 isolate (KpA857), K52 in the ST873 isolate (KpA7002), and KL114 in the ST219 isolate (KpA511). The most common O serotype was O2 (47.62%, 10/21), identified in all ST395 (subtype O2a) and ST307 (subtypes O2a and O2afg) isolates, as well as in the ST5275 isolate (O2afg). The second most common serotype was O1, detected in eight isolates (38.1%) assigned to the following STs: ST15, ST29, ST39, ST107, ST219, and ST449. Collectively, these two O types accounted for 85.71% (18/21) of all sequenced isolates.
In addition, the O3 serotype (subtype O3b) was identified in the ST1480 isolate KpA857. In the remaining two isolates, the OL101 locus (in the ST873 isolate) and an unknown (O3/O3a) locus (in the ST25 isolate) were identified, with no recognised O serotype information.

2.8. Resistome Analysis

The genetic background of the observed AMR phenotypes was explored using the WGS data. In this analysis, we focused mainly on determining the genetic basis of AMR in the two previously defined resistance phenotypes, that is, the XDR and MDR isolates of K. pneumoniae. Emphasis is also placed on resistance mechanisms towards AMs of clinical relevance, such as beta-lactams, aminoglycosides, and macrolides, although others are also described where appropriate.

2.8.1. Resistome Analysis of XDR Isolates of K. pneumoniae

The four XDR ST395 isolates with complete resistance to all beta-lactams carried the metallo-beta-lactamase-encoding blaNDM-1 gene (conferring resistance to carbapenems) in association with the bleMBL gene (encoding a bleomycin resistance protein). The blaNDM-1 gene was not detected in any other clinical isolates. In addition, an identical combination of five other genes associated with resistance to beta-lactams (blaCTX-M-15, blaOXA-1, blaSHV-11, blaTEM-1, and ftsI (D350N, S357N)) was found in three XDR ST395 isolates, whereas one isolate (KpA542) lacked the blaCTX-M-15 gene from this combination. Notably, the ESBL-producer phenotype was not detected in the XDR isolates, which can be explained by the production of the NDM-type carbapenemase that masks this phenotype and confers complete resistance to nearly all beta-lactams, including carbapenems. Complete resistance to amikacin in the XDR K. pneumoniae ST395 isolates was associated with the armA gene, which encodes a 16S rRNA methyltransferase, and with the presence of other genes conferring resistance to aminoglycosides, such as aac(3)-IIe, aac(6′)-Ib-cr6, and aph(3′)-VIa. Notably, the armA gene was detected in the XDR isolates only and not in the other isolates. The high level of resistance to azithromycin in all XDR isolates could be explained by the carriage of a four-gene complex: mphA, mphE, mrx, and msrE. The genetic basis of other AMR mechanisms among the XDR isolates was also explored. Fluoroquinolone resistance in these isolates involved an identical combination of aac(6′)-Ib-cr6, gyrA (S83I), parC (S80I), and qnrS1, except for the KpA542 isolate, which lacked the aac(6′)-Ib-cr6 gene. Resistance to folate pathway antagonists was associated with the combination of the dfrA5 and sul1 genes, whereas additional dfrA1 and sul2 genes were found in two and three XDR isolates, respectively. Furthermore, the tet(A) gene encoding resistance to tetracyclines and the combination of the catA1 and catB3 genes conferring resistance to phenicols were detected in the XDR isolates. To the best of our knowledge, this is the first report of NDM-1 carbapenemase-producing, pan-aminoglycoside-resistant human K. pneumoniae isolates exhibiting the XDR phenotype/genotype from Armenia. Thus, the extensive bioinformatic analysis of the WGS data revealed a highly similar genetic background of the resistome of the XDR strains, which accounts for the phenotypic resistance to 10 classes of AMs.
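Conceptually, the resistome-to-phenotype comparison performed here maps each acquired gene to the AM class it is expected to compromise. Below is a minimal sketch with a tiny rule table, a hand-picked subset of the genes named above rather than the full ResFinder/Kleborate catalogue used for the actual analysis.

```python
# Minimal genotype-to-phenotype concordance sketch for the XDR resistome.
GENE_TO_CLASS = {
    "blaNDM-1": "carbapenems",
    "blaCTX-M-15": "3rd/4th generation cephalosporins",
    "armA": "aminoglycosides (including amikacin)",
    "mphA": "macrolides (azithromycin)",
    "qnrS1": "fluoroquinolones",
    "tet(A)": "tetracyclines",
    "catA1": "phenicols",
    "sul1": "folate pathway antagonists",
}

def predicted_resistances(detected_genes: set[str]) -> set[str]:
    """Return the AM classes expected to be compromised by the detected genes."""
    return {GENE_TO_CLASS[g] for g in detected_genes if g in GENE_TO_CLASS}

# Subset of the gene content reported for the XDR ST395 isolates.
xdr_genes = {"blaNDM-1", "blaCTX-M-15", "armA", "mphA", "qnrS1", "tet(A)", "catA1", "sul1"}
for resistance in sorted(predicted_resistances(xdr_genes)):
    print(resistance)
```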
2.8.2. Resistome Analysis of MDR Isolates of K. pneumoniae

Similarly to the XDR strains, the combination of five genes encoding resistance to beta-lactams (blaCTX-M-15, blaOXA-1, blaSHV-11, blaTEM-1, and ftsI (D350N, S357N)) was also found in the three MDR K. pneumoniae ST395 isolates with resistance to eight classes of AMs. These MDR isolates were ESBL-producers and resistant to all beta-lactams tested, excluding carbapenems and cefoxitin. A similar combination of five genes, except for an MLST-associated variation in the blaSHV gene, was found in isolates belonging to ST29 (2 strains) and ST307 (2 strains). The ST29 isolates were resistant to beta-lactams, whereas one of the ST307 isolates, KpA500, had intermediate resistance to amoxicillin-clavulanic acid, and the other, KpA204, had intermediate resistance to cefoxitin but was susceptible to cefepime. Interestingly, the following combination of four genes was identified in the KpA699 isolate (ST15), which showed complete resistance to all beta-lactams, including meropenem and cefoxitin, except for intermediate resistance to imipenem: blaCTX-M-15, blaSHV-28, blaTEM-1, and ftsI (D350N, S357N). KpA699 had no carbapenemase or AmpC-type beta-lactamase genes, suggesting that other mechanisms must be responsible for this clinically significant phenotype. In all other MDR isolates with resistance to beta-lactams, a combination of two or three beta-lactamase-encoding genes and the ftsI (D350N, S357N) gene was identified. All but one MDR isolate (KpA6101, ST5275) possessed blaCTX-M. KpA6101 carried a combination of the blaLAP-2, blaSHV-11, blaTEM-1, and ftsI (D350N, S357N) genes, was ESBL-negative, and was susceptible to all cephems. In the KpA324 isolate, the combination of the blaCTX-M-14, blaSHV-33, and ftsI (D350N, S357N) genes was associated with susceptibility to ampicillin-sulbactam and intermediate resistance to ceftazidime and amoxicillin-clavulanic acid. The MDR isolates also possessed blaSHV, which was absent in only one isolate (KpA230, ST39). KpA230 carried the blaCTX-M-15, blaTEM-1, and ftsI (D350N, S357N) genes in combination and displayed complete resistance to all beta-lactam combination agents and cephems, except for cefepime.

Regarding aminoglycoside resistance determinants, all but one of the 14 MDR K. pneumoniae isolates (13/14) carried combinations of aminoglycoside-modifying enzyme genes. The most common genes were aph(3″)-Ib (71.43%, 10/14), aph(6)-Id (64.29%, 9/14), aac(3)-IIe (64.29%, 9/14), and aac(6′)-Ib-cr6 (50%, 7/14). In all isolates showing full resistance to gentamicin, the aac(3)-IIe (9 isolates) or aac(3)-IId (2 isolates) genes were identified. These genes encode aminoglycoside 3-N-acetyltransferases, which inactivate gentamicin and tobramycin. In gentamicin-susceptible isolates, these genes were not detected (p < 0.01). The genetic basis of amikacin resistance in five MDR isolates was more complex. The aph(3′)-VIa gene conferring resistance to amikacin, in combination with the aac(6′)-Ib-cr6 and aac(3)-IIe genes, was identified in one isolate, KpA44. There were two further gene profiles encoding aminoglycoside-modifying enzymes. The first, represented by the aac(3)-IIe and aac(6′)-Ib-cr6 genes, was detected in KpA7001. These two genes, in combination with the additional aph(3″)-Ib and aph(6)-Id genes, were detected in KpA250 and KpA314. These gene profiles, however, were also detected in the amikacin-susceptible isolates KpA769 and KpA204.
Finally, the combination of the aac(3)-IIe, aph(3″)-Ib, and aph(6)-Id genes was identified in one isolate, KpA7002, with intermediate resistance to amikacin. Among the MDR isolates with azithromycin resistance (5 isolates, MIC ≥ 64 µg/mL), three carried the combination of the mphA and mrx genes. In another isolate with an MIC of ≥ 64 µg/mL, only one gene, mphA, was detected. Macrolide resistance genes were not identified in one isolate (KpA500, MIC ≥ 64 µg/mL), suggesting that other mechanisms are involved in its azithromycin resistance. In addition, a single mphE gene was detected in one isolate, KpA699, which had an MIC < 16 µg/mL for azithromycin.

The mechanisms of resistance to other classes of antimicrobials in the clinical MDR K. pneumoniae isolates were also explored. Resistance to phenicols in these isolates was associated with the cat2 gene, as well as with combinations of the catA1 gene with the floR or catB3 genes. A single catB3 gene was detected in one chloramphenicol-resistant isolate, KpA204; this gene, however, was also detected in three chloramphenicol-susceptible isolates (KpA250, KpA314, and KpA500). In one isolate with full resistance to chloramphenicol (KpA324), no known acquired phenicol resistance gene could be found. Resistance to ciprofloxacin in these isolates was commonly associated with the combination of mutations in both the gyrA (S83I or S83F) and parC (S83I) genes with the qnr genes (qnrS1, qnrB1, and qnrB20). In one isolate, KpA699, an additional aac(6′)-Ib-cr6 gene, as well as two substitutions in the gyrA gene (S83F, D87A), were detected. In three other ciprofloxacin-resistant isolates, the qnrB1 and aac(6′)-Ib-cr6 genes in combination (two ST29 isolates) or a single qnrB1 gene (KpA511) were identified. Furthermore, the combination of the qnrB20 and qnrS1 genes was detected in the ciprofloxacin-resistant isolate KpA230, whereas a single qnrS1 gene was associated with the intermediate resistance phenotype of KpA6101. The tet(A) and tetR(A) genes were detected in all isolates exhibiting resistance to tetracycline, except for one isolate (KpA511), which carried the tet(B) and tetR(B) genes. In the only isolate showing an intermediate phenotype to tetracycline (KpA699), no tet genes were detected. Notably, the three tigecycline non-susceptible isolates (KpA13, KpA204, and KpA7001) harboured the same tet(A) gene variant as the tigecycline-susceptible isolates; however, no acquired genetic determinants associated with resistance to this antibiotic were detected. The high level of resistance to folate pathway antagonists in the MDR isolates (100%) was in agreement with the presence of dfrA gene variants (dfrA1, dfrA5, dfrA14, dfrA17, dfrA12, and dfrA27) in combination with the sul1 and/or sul2 genes. The fosA6 and uhpT (E350Q) genes encoding resistance to fosfomycin were detected in all MDR isolates, as well as the arnT, eptB, and ompA genes conferring resistance to peptide antibiotics. In addition, the aar-3 gene encoding resistance to ansamycins was present in one isolate, KpA699. Of note, no acquired resistance determinants associated with resistance to colistin were identified in our MDR isolates.

2.8.3. Resistome Analysis of Non-MDR Isolates of K. pneumoniae

The presence of AMR determinants in the three non-MDR isolates was also examined. The only AMR mechanisms encountered were beta-lactam resistance determinants.
The KpA857 isolate, susceptible to all beta-lactams, possessed only ftsI (S357N, D350N). The two non-MDR HMV isolates carried an identical combination of two genes, blaSHV-11 and ftsI (D350N, S357N), but displayed different phenotypes: KpA704 was ESBL-negative and had intermediate resistance to ampicillin and piperacillin-tazobactam, while KpA828 was an ESBL-producer with full resistance to ampicillin.

2.9. Efflux Systems in K. pneumoniae Clinical Isolates

The main multidrug efflux systems (ES) of Klebsiella spp., AcrAB and OqxAB, were identified in 100% and 95.24% of the K. pneumoniae isolates, respectively. The oqxA and oqxB genes were not detected in only one isolate, the MDR KpA13 isolate (ST39). In addition, the following ES were most prevalent: AcrEF (100%), EefAB (100%), EmrAB (100%), KpnEF (100%), KpnGH (95.24%, 20/21), LptD (100%), MacAB (100%), and MsbA (100%). The prevalence of other efflux pumps was lower: Tet(A) (71.43%, 15/21), Tet(B) (4.76%, 1/21), QacEdelta1 (52.38%, 11/21), FloR (9.52%, 2/21), and CrcB (4.76%, 1/21). The acrR gene encoding the repressor of the AcrAB-TolC pump was present in all genomes. However, in five MDR isolates (KpA13, KpA230, KpA44, KpA6101, and KpA699), an identical profile of substitutions in this gene was identified: P161R, G164A, F172S, R173G, L195V, F197I, and K201M (sourced from the ResFinder database, http://genepi.food.dtu.dk/resfinder, accessed on 20 September 2024). Mutations in the acrR gene contribute to the overexpression of the AcrAB-TolC complex, leading to higher levels of resistance to multiple antibiotics. In addition, the marA gene, which encodes the global activator MarA mediating overexpression of the AcrAB pump, was identified in all our isolates. Mutations in the marR gene encoding the MarR repressor of marA, which also lead to overexpression of the AcrAB pump and reduced susceptibility to multiple antibiotics, were detected as well. The prevalence of the ramA and ramR genes, which also encode regulators of the AcrAB pump (an activator of acrAB and a repressor of ramA, respectively), was lower (66.67%, 14/21). It should be noted that the ramAR genes were not detected only in the ST395 isolates. In addition, in the isolate KpA699, a substitution in the ramR (A19V) gene was detected. This mutation is known to contribute to reduced susceptibility to tigecycline; the isolate, however, was susceptible to this antibiotic. Regarding other genes involved in efflux pump regulation, the following were identified in all our isolates, irrespective of the AMR phenotype: baeR, crp, leuO, h-ns, and rsmA. Of note, the rarA gene encoding a transcriptional activator of the efflux pump OqxAB was not detected only in KpA13, which was also negative for the oqxAB genes. Furthermore, the emrR gene, a negative regulator of the EmrAB-TolC efflux system associated with resistance to nalidixic acid and thiolactomycin, was not detected only in the non-MDR HMV isolate KpA828, implying potential overexpression of the EmrAB pump. In addition, the tet(R) repressor gene (76.19%) was present in all isolates carrying Tet pumps. These results indicate a common set of predominant Klebsiella pumps in most isolates, regardless of their AMR phenotype. More differences were identified in the repertoire of genes regulating the efflux pumps. In particular, the absence of the ramAR genes in all our K. pneumoniae isolates belonging to ST395 is of note.
Additionally, the co-occurrence of mutations in the acrR and ramR genes was detected in only one isolate, KpA699, for which the resistance mechanisms to three classes/subclasses of AMs (carbapenems, cephamycins, and tetracyclines) cannot be explained by the presence of any acquired resistance genes. Potentially, overexpression of the AcrAB efflux pump due to mutations in the acrR and ramR genes may contribute to the aforementioned resistances in this isolate.

2.10. Analysis of ompK Genes

Loss of, or mutations in, the major porins of K. pneumoniae can result in AMR, and this possibility was explored among our isolates. Intact ompK35, ompK36, and ompK37 genes were identified in 42.86% (9/21) of the sequenced K. pneumoniae clinical isolates, including all non-MDR isolates (3) and 33.33% of the MDR isolates (6/18). The ompK35 gene (GenBank: AJ011501.1) was identified in 12 isolates (57.14%) with 98.43–100% identity, whereas in the remaining nine isolates (42.86%), a truncated form of the OmpK35 porin was predicted. It should be noted that an identical deletion in the ompK35 gene leading to a truncated porin protein was predicted in all ST395 isolates (four XDR and three MDR). In addition, in the KpA500 (ST307) isolate, an insertion of 26 bp in the ompK35 gene after nucleotide 226 resulted in a premature stop codon, which also produced a truncated protein form. In the KpA699 isolate, a premature TAA stop codon in the ompK35 gene arose from a point mutation, likewise leading to a truncated form of the protein. In this isolate and in one ST395 isolate (KpA44), OmpK35 porin deficiency was coupled with mutations in the ompK36 and ompK37 genes (see below). In all other isolates with a truncated form of OmpK35, intact ompK36 and ompK37 genes were detected. The ompK36 (GenBank: Z33506.1) and ompK37 (GenBank: AJ011502.1) genes were identified in all isolates, and in five of them (23.81%) mutations in both genes were observed. These five isolates (KpA13, KpA44, KpA230, KpA6101, and KpA699) had an identical combination of nine mutations in the ompK36 gene (N49S, L59V, G189T, F198Y, F207Y, A217S, T222L, D223G, and N304E; sourced from the ResFinder database). The A217S substitution is associated with resistance to carbapenems, while the other mutations are associated with cephalosporin resistance. In KpA699, these mutations were combined with the Gly134Asp135 duplication in loop 3 of the OmpK36 protein (OmpK36GD, sourced from Kleborate), which leads to attenuated diffusion of carbapenems. An identical combination of four mutations in the ompK37 gene (I70M, I128M, N230G, and E244D; sourced from the ResFinder database) was found in three isolates: both ST39 isolates (KpA13 and KpA230) and one ST395 isolate (KpA44). All these mutations are known to be associated with resistance to carbapenems. In addition, the combination of two mutations (I70M and I128M) was found in KpA6101 (ST5275) and KpA699 (ST15). Notably, among these five isolates with mutations in the ompK36 and ompK37 genes associated with resistance to carbapenems, only one isolate, KpA699, was non-susceptible to carbapenems (intermediate to imipenem and resistant to meropenem), whereas the other isolates were sensitive. Thus, mutations in the genes encoding OmpK porins may contribute to reduced permeability to AMs. The combined mutations in the ompK36 and ompK37 genes (23.81%, 5/21) and/or a truncated OmpK35 porin (42.86%, 9/21) may play a part in the increased resistance towards clinically important AMs in 57.14% (12/21) of the isolates. In particular, these mechanisms may be responsible for resistance to carbapenems in carbapenemase-negative strains such as KpA699, which carries mutations in the ompK36 (OmpK36GD, A217S) and ompK37 (I70M, I128M) genes that are associated with resistance to carbapenems, as well as a truncated form of the OmpK35 porin.
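Flagging truncated porins of the kind described above amounts to translating the coding sequence and checking where the first stop codon falls. Below is a minimal Biopython sketch on a toy sequence, not the actual ompK35 allele.

```python
# Detect a candidate truncated porin by translating a CDS and comparing the
# open reading frame length with the expected full-length protein.
from Bio.Seq import Seq

def is_truncated(cds: str, full_length_aa: int, threshold: float = 0.9) -> bool:
    """Flag a CDS whose ORF stops before `threshold` of the expected protein length."""
    protein = Seq(cds).translate(to_stop=True)   # stops at the first stop codon
    return len(protein) < threshold * full_length_aa

# Toy CDS with a premature TAA stop early in the ORF (illustrative only).
toy_cds = "ATGAAAGCT" + "GGT" * 20 + "TAA" + "GGC" * 100 + "TAA"
print(is_truncated(toy_cds, full_length_aa=120))  # True -> candidate truncated porin
```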
2.11. Virulence-Related Genes

The most prevalent virulence score was 1 (13/21, 61.9%; sourced from Kleborate), comprising isolates carrying the ybt locus that encodes the iron-scavenging siderophore yersiniabactin. Among these isolates, the non-MDR HMV KpA828 (ST25) strain is of note. In all but one of these isolates, the ybt loci (ybt lineages 1, 8, 9, 14, 15, and 16) were located within various structural variants of integrative conjugative elements (ICEKp4, ICEKp9, ICEKp3, ICEKp5, and ICEKp12) and were mainly distributed according to STs. In one AM-susceptible isolate, KpA857 (ST1480), however, ybt (ybt 4) was located on the pCAV1099-114 plasmid (IncFIB(K) incompatibility group). A virulence score of 4 was assigned by Kleborate to five isolates (23.81%, 5/21), all belonging to ST395. Four of them were carbapenem-resistant XDR strains, and one was an MDR strain (KpA7001). In all of these isolates, the virulence determinants were associated with ICEKp12 and included the ybt locus (lineage 16, sequence type 53-2LV), the iucABCD and iutA genes encoding the aerobactin siderophore (iuc 1; AbST: 63), and the rmpA2 gene (allele 28; sourced from the BIGSdb-Kp database) encoding a regulator of the mucoid phenotype. In addition, in one isolate (KpA481), the additional rmpADC (rmp 1/KpVP-1 lineage) and peg344 (a metabolic transporter of unknown function) genes were identified. A frameshift mutation in the rmpA2 gene due to an insertion within a poly-G tract, resulting in a premature stop codon (TAA) and a truncated protein (47%), was identified in all rmpA2-positive isolates. This mutation may explain the HMV-negative phenotypes of these isolates found earlier by the string test. Despite the HMV-negative phenotypes, however, the presence of the other virulence markers indicates a high virulence potential. In all five isolates, the rmp and aerobactin-encoding loci were co-localised on the same contigs. In addition, these isolates shared an identical plasmid replicon profile with the characteristic presence of the IncFIB(K)/IncFIB(pNDM-Mar)/IncHI1B(pNDM-MAR) replicons, except for the KpA481 isolate, which had only the IncFIB(pNDM-Mar) replicon. A virulence score of 0 was assigned to three K. pneumoniae clinical isolates (14.29%), one of which was the non-MDR KpA704 (ST107) isolate with the HMV phenotype, while the other two were MDR isolates, KpA204 (ST307) and KpA6101 (ST5275).
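For reference, the Kleborate virulence scores used above follow a simple lattice over the three key loci. The function below is a simplified re-implementation of the published scheme (colibactin, which yields scores of 2 and 5, was not detected in this collection); it is a sketch, not Kleborate's actual code.

```python
# Simplified Kleborate-style virulence scoring from locus presence/absence.
def virulence_score(ybt: bool, clb: bool, iuc: bool) -> int:
    """0 = none; 1 = yersiniabactin only; 2 = colibactin (+/- ybt);
    3 = aerobactin alone; 4 = aerobactin + yersiniabactin; 5 = all three."""
    if iuc:
        return 5 if clb else (4 if ybt else 3)
    if clb:
        return 2
    return 1 if ybt else 0

print(virulence_score(ybt=True, clb=False, iuc=False))  # 1, e.g. KpA828
print(virulence_score(ybt=True, clb=False, iuc=True))   # 4, the ST395 isolates
```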
Other genes identified in all of the isolates were the ent locus encoding the siderophore enterobactin, the K locus determining the polysaccharide capsule type, the csrAB genes involved in the regulation of capsule synthesis, the chromosomally located iro genes encoding a siderophore esterase (iroE) and the salmochelin receptor (iroN), and the chromosomal iutA gene for the ferric aerobactin receptor. Three nearly complete clusters of the Type VI secretion system (T6SS-I, T6SS-II, and T6SS-III), lacking only the genes encoding Tli1, LysM, and two hypothetical proteins, were identified in the two ST307 isolates. The T6SS-I and T6SS-III clusters were detectable in almost all our isolates (95.24% and 100%, respectively), whereas the T6SS-II cluster was not identified in 61.9% (13/21) of isolates. An intact T6SS-I cluster similar to that of the MDR K. pneumoniae HS11286 strain (NC_016845) was identified in seven isolates (33.33%); six of them belonged to ST395 and one, KpA324, to ST449. The remaining 14 isolates lacked the tle1 and/or tli1 genes (an effector-immunity pair of proteins participating in intra- and interspecies antagonism). This pair of toxin-antitoxin genes was absent in seven isolates (33.33%) belonging to ST15, ST25, ST107, ST219, ST873, ST1480, and ST5275. A single tle1 gene was detected in two ST29 isolates (9.52%), and a single tli1 gene was found in five isolates (23.81%) belonging to ST39, ST307, and ST395. Notably, the tli1 gene copy number ranged from one in the ST39 isolates to seven in the ST307 isolates. In one isolate, KpA828 (ST25), a reduced T6SS-I cluster limited to only two genes (clpV/tssH and hcp/tssD) was found. Other virulence-related genes were detected at lower frequencies. However, the presence of the following fimbrial adherence determinants is of note: the stbABCDE genes identified in two ST29 isolates (9.52%) and the steB and stfD genes detected in two ST307 isolates (9.52%).

2.12. In Silico Identified Plasmid Replicons

Plasmid replicons were identified in almost all sequenced isolates, except for one, KpA828 (non-MDR, HMV), which also had the smallest genome size among our isolates. The number of plasmid replicons per isolate ranged from 1 to 7. Notably, a high number of plasmid replicons was characteristic of the isolates belonging to ST395 (from 4 to 7 replicons). The highest number of plasmid replicons was detected in four ST395 isolates, three of which were XDR strains carrying the blaNDM-1 gene (KpA278, KpA285, and KpA542) and one of which was the MDR isolate KpA7001. In these isolates, an identical profile of the following seven plasmid replicons was identified: Col(pHAD28) (KU674895), ColRNAI (DQ298019), IncFIB(K) (JN233704), IncFIB(pNDM-Mar) (JN420336), IncFII(K) (JN233704), IncHI1B(pNDM-MAR) (JN420336), and IncR (DQ449578). In another XDR isolate carrying the blaNDM-1 gene, KpA481, the plasmid profile was represented by only four replicons: Col(pHAD28), ColRNAI, IncFIB(pNDM-Mar), and IncR. This strain also had the smallest genome size among the XDR isolates. The most common plasmid replicons were IncR and IncFIB(K), each detected with a prevalence of 57.14% (12/21). Among the IncR-positive isolates, the most common AMR gene identified on the replicon contigs was the blaCTX-M-15 gene, which was detected in seven isolates. Notably, the IncR replicon was identified in all seven ST395 isolates. In three of them, it was co-located with the blaCTX-M-15 gene, whereas in two other isolates, the replicon was located on the same contigs as the MDR regions.
In particular, the following genes were identified in KpA769: blaCTX-M-15, aac(3)-IIe, catB3, blaOXA-1, aac(6′)-Ib-cr6, sul1, qacEdelta1, dfrA1, tetR(A), and tet(A). Of these, five genes were also detected in the KpA7001 contig with the IncR replicon: blaCTX-M-15, aac(3)-IIe, catB3, blaOXA-1, and aac(6′)-Ib-cr6. In addition, tet genes were detected in the IncR contigs of two isolates belonging to other STs: tet(B) and tetR(B) in KpA511 (ST219) and tet(A) and tetR(A) in KpA6101 (ST5275). The IncFIB(K) replicon was detected in 12 isolates, but AMR genes co-located with the replicon on the same contigs were found in only two isolates, KpA7002 (ST873) and KpA500 (ST307). In both of these isolates, the aph(6)-Id, aph(3″)-Ib, and sul2 genes were co-located with the IncFIB(K) replicon on the same contigs, while in KpA500 the co-location of four additional AMR genes (blaTEM-1, catB3, blaOXA-1, and aac(6′)-Ib-cr6) was detected. Among the other plasmid replicons, the prevalence of the Col(pHAD28) and ColRNAI replicons was 42.86% (9/21) and 33.33% (7/21), respectively. The combination of these replicons was identified in all seven ST395 isolates, but no linkage with any AMR genes was detectable in the replicon contigs. In KpA324 (ST449), however, the Col(pHAD28) replicon was co-located with the blaCTX-M-14 gene on the same contig. Notably, the IncFII(K) replicon was identified in eight isolates (38.1%), always in combination with the IncFIB(K) replicon, while the IncFIB(K) replicon on its own was present in four other isolates. The prevalence of the IncFIB(K)(pCAV1099-114) (CP011596) plasmid replicon was lower, at 19.05% (4/21); however, in three out of four isolates, AMR genes were co-located on the replicon contigs. In particular, in the KpA511 (ST219) isolate, a 6588 bp resistance region carrying the following 10 genes was identified: qnrS1, aph(6)-Id, aph(3″)-Ib, sul2, dfrA12, aadA2, qacEdelta1, sul1, mrx, and mphA. In two isolates belonging to ST39, KpA13 and KpA230, the number of AMR genes on the replicon contig was restricted to four (qnrS1, aph(6)-Id, aph(3″)-Ib, and sul2) and three (qnrS1, mphA, and mrx), respectively. Notably, the IncFIB(K)(pCAV1099-114) replicon was also detected in the AM-susceptible isolate KpA857 (ST1480). The IncFIA(pBK30683) (KF954760) plasmid replicon was found only in two ST29 isolates (KpA250 and KpA314) and was not associated with AMR genes. The IncQ (M28829) plasmid replicon was identified in only one isolate, KpA44 (ST395), and the aph(3′)-IV gene (amikacin resistance) was co-located with this replicon on the same contig.

2.13. Prophage Regions and CRISPR Arrays in K. pneumoniae Clinical Isolates

The genomic sequences were analysed for the presence of prophage regions using the PHASTEST web server ( https://phaster.ca , accessed on 20 September 2024). Prophage regions were identified in all our isolates. A total of 24 prophage regions were identified, and their main characteristics are summarised in . The most prevalent phage was Klebsi_phiKO2 (NC_005857), detected in 38.1% of isolates. The number of prophages per isolate ranged from 1 to 6. Notably, the highest number of prophages was detected in isolates belonging to ST395.
All of these ST395 isolates, except KpA769, shared an identical profile of the following six prophages: Edward_GF_2 (NC_026611), Escher_HK639 (NC_016158), Klebsi_3LV2017 (NC_047817), Klebsi_ST147_VIM1phi7.1 (NC_049451), Klebsi_ST512_KPC3phi13.2 (NC_049452), and Salmon_SEN34 (NC_028699). Interestingly, all of these prophages, except for one (Escher_HK639), were detected in ST395 isolates only. The KpA769 isolate was missing two prophages from this profile (Escher_HK639 and Salmon_SEN34) but carried two additional prophages, Escher_RCS47 (NC_042128) and Salmon_Fels_1 (NC_010391). Among the other isolates, an identical prophage profile was detected only in two ST29 isolates, KpA250 and KpA314, which harboured the following three prophages: Escher_RCS47, Klebsi_phiKO2 (NC_005857), and Klebsi_ST15_OXA48phi14.1 (NC_049454).

The prophage regions were analysed for the presence of AMR and virulence-related genes using the CARD and VFanalyzer tools. No virulence-associated genes were detected in the prophage regions, consistent with previously published results demonstrating the low prevalence of virulence factor genes in K. pneumoniae prophages. In contrast, the carriage of AMR genes by prophages was substantial: such genes were detected in nine prophages within the genomes of six K. pneumoniae isolates. Notably, the prophages carried six beta-lactam resistance genes, with one prophage, Escher_RCS47, harbouring two of them in the KpA769 (ST395) isolate. Interestingly, Escher_RCS47 was also identified in two other isolates belonging to ST29 and ST39. This phage is one of the major phage types harbouring AMR genes and has been found to be predominantly located on plasmids. It should be noted here that in KpA769 (ST395), the Escher_RCS47 region was detected on the contig carrying the IncR plasmid replicon and the MDR region including the ten AMR genes mentioned above (see ). Of these ten genes, eight, encoding resistance to five classes of AMs and to quaternary ammonium compounds, were located in the prophage region.

The role of prophage-located AMR genes in the MDR phenotype is also noticeable for the other isolates listed in . In particular, in the KpA699 isolate (ST15), two Klebsi_phiKO2 phage regions, probably the result of superinfection, carried a total of nine AMR genes conferring resistance to five classes of AMs. Another phage carrying multiple AMR genes is the Staphy_SPbeta_like phage, which carries genes conferring resistance to six classes of AMs in the KpA204 (ST307) isolate. Importantly, there are indications that this phage can cross genus barriers and has been detected in several species of Gram-positive and Gram-negative bacteria. We performed a more detailed analysis of this phage region with MobileElementFinder ( https://cge.food.dtu.dk/services/MobileElementFinder , accessed on 20 September 2024) and identified two composite transposons: (i) cn_3826_IS26 (isfinder db, accession X00011), harbouring the catB3, blaOXA-1, and aac(6′)-Ib-cr genes, and (ii) cn_15047_IS26 (isfinder db, accession X00011), carrying the tet(A), tetR, and qnrB1 genes. This combination of two mechanisms of horizontal gene transfer may explain the phage's mobility across genus barriers. Additionally, the AMR genes located in two phage regions of the KpA13 (ST39) isolate encoded resistance to four classes of AMs. These results indicate an important contribution of both intact and questionable prophages to the MDR phenotype in a subset of our clinical K. pneumoniae isolates.
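To illustrate how such prophage-borne AMR carriage can be screened from annotation coordinates, the sketch below checks which AMR genes fall within predicted prophage intervals on the same contig, loosely mirroring the KpA769 contig described above. All contig names, coordinates, and feature labels are invented placeholders.

```python
# Minimal sketch: report AMR genes whose coordinates fall inside predicted
# prophage regions on the same contig (e.g., combining PHASTEST regions with
# AMR gene annotations). All names and coordinates here are invented.
from dataclasses import dataclass

@dataclass
class Feature:
    contig: str
    start: int
    end: int
    name: str
    kind: str  # "prophage", "replicon", or "amr"

def amr_genes_inside_prophages(features: list[Feature]) -> dict[str, list[str]]:
    """Map each prophage name to the AMR genes it spans."""
    prophages = [f for f in features if f.kind == "prophage"]
    amr_genes = [f for f in features if f.kind == "amr"]
    hits: dict[str, list[str]] = {}
    for phage in prophages:
        inside = [g.name for g in amr_genes
                  if g.contig == phage.contig
                  and phage.start <= g.start and g.end <= phage.end]
        if inside:
            hits[phage.name] = inside
    return hits

# Toy reconstruction of a KpA769-like contig (coordinates invented):
features = [
    Feature("contig_7", 1, 60000, "Escher_RCS47", "prophage"),
    Feature("contig_7", 100, 900, "IncR", "replicon"),
    Feature("contig_7", 12000, 12900, "blaCTX-M-15", "amr"),
    Feature("contig_7", 70000, 70800, "tet(A)", "amr"),  # outside the phage
]
print(amr_genes_inside_prophages(features))  # {'Escher_RCS47': ['blaCTX-M-15']}
```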
Of note, we did not detect any AMR or virulence-related genes in the incomplete phage regions identified in our isolates.

Clustered regularly interspaced short palindromic repeats (CRISPR) arrays were identified in five isolates (23.81%) belonging to the following STs: ST15 (KpA699, 2 arrays), ST39 (KpA13 and KpA230, 1 array each), ST449 (KpA324, 2 arrays), and ST873 (KpA7002, 1 array). Of these, the KpA699 isolate carrying two CRISPR arrays had four plasmids and five prophage regions, whereas the other isolate carrying two CRISPR arrays, KpA324, had low numbers of prophages and plasmid replicons (2 and 1, respectively). The majority of K. pneumoniae strains, however, had no detectable CRISPR arrays and therefore lack this immunity barrier against phage infection. Consequently, the strains within the problematic ST395 and ST307 lineages are not immune to phages and can readily participate in phage-mediated horizontal gene exchange, which contributes to their evolution through, for example, the acquisition of AMR genes. At the same time, the absence of phage immunity in the XDR and MDR strains belonging to ST395 makes phage therapy a viable option to treat and control this infection.

2.14. Phylogenomic Analyses

To estimate the genetic relatedness among our K. pneumoniae clinical isolates based on genomic sequences, we performed whole genome-based phylogenetic analysis using the type strain genome server (TYGS) ( https://tygs.dsmz.de , accessed on 20 September 2024). The results demonstrated an ST-based distribution of the genomic sequences across the phylogenetic tree. All isolates belonging to the same ST were grouped together with a high similarity score, and the average nucleotide identity (ANI, https://www.ezbiocloud.net/tools/ani , accessed on 20 September 2024) values were in the range of 99.97–99.99%. In particular, the K. pneumoniae ST395 isolates with XDR or MDR phenotypes and genotypes clustered together, displaying low genetic diversity, with an ANI value of 99.99%. In addition, the isolates representing the same clonal lineage were also grouped together, although with a lower degree of similarity: KpA6101 (ST5275) and KpA857 (ST1480) were assigned to sublineage SL37, while KpA511 (ST219) and KpA704 (ST107) were assigned to sublineage SL107. All other branches were represented by single isolates.

To place our K. pneumoniae clinical isolates within the international context, we performed whole genome-based phylogenetic analysis involving our strains and the most closely related genomic sequences from other countries, obtained from the Pathogenwatch global resource ( https://pathogen.watch , accessed on 20 September 2024). The most closely related strains of K. pneumoniae were identified based on core genome analysis, and the combined phylogenetic analysis revealed two main clades, A and B ( A, ). The whole-genome comparisons were then made with the combined datasets, including our and the international strains, using the TYGS resource ( B). The noticeable difference between clades A and B is that the former includes a more diverse range of countries and continents, while the latter is mainly confined to Europe, with minor inclusions from other geographical locations ( B). The clade A strains were also isolated earlier than the clade B strains (2004–2022 vs. 2008–2024).
Our seven XDR and MDR K. pneumoniae ST395 isolates demonstrated low genetic diversity and were located within clade B ( B). The most genetically close strains were from Russia and Germany, with ANI values of 99.96–99.98%. Another international high-risk clone, ST307, was also located within clade B ( B). These MDR K. pneumoniae ST307 strains were isolated in the region from 2018 to 2024, including our isolates and the isolates from another study . The closest genomic matches were the strains from the USA. In particular, four K. pneumoniae ST307 clinical isolates from Armenia collected in 2018 and 2019 were close to a strain from the USA collected in 2014, while the KpA204 isolate from 2024 was close to the strains collected in the USA in 2019 (ANI values of 99.97–99.98%). Low genetic diversity was also identified among the ST39 strains within clade B ( B). An ANI value of 99.98% was obtained for the isolates collected in Armenia in 2024 and the strains isolated in Ethiopia in 2020. The carbapenem-resistant strain KpA699, belonging to the international high-risk clone ST15 and isolated in 2022, was located within clade A ( B). The closest genomic matches to this isolate were the strains from Turkey, collected in 2013 and 2014, and a strain from Australia, isolated in 2014 ( B). Low genetic diversity was also observed among the ST449 strains in clade A, which included the KpA324 isolate from Armenia and strains from Germany, Spain, Madagascar, and Japan (ANI values of 99.97–99.99%). Another MDR isolate from Armenia, KpA511 (ST219), had its closest genomic matches with strains from India and Turkey (ANI values of 99.98% and 99.99%, respectively). The remaining isolates from Armenia showed less relatedness to international clones and clustered within groups demonstrating a higher level of genomic diversity ( B). It should be noted here that the KpA6101 isolate in our collection was the only ST5275 isolate in the Pathogenwatch database; we were therefore unable to perform comparative genomic analysis within this ST. In the phylogenetic tree, this single ST5275 genome formed a sister group with the ST1480 genomes ( B).
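Finally, for readers reproducing the relatedness summaries above, the sketch below groups genomes by single-linkage clustering on pairwise ANI at a chosen threshold. The pairwise ANI values shown are invented for illustration.

```python
# Minimal sketch: single-linkage grouping of isolates whose pairwise average
# nucleotide identity (ANI) meets a threshold, mirroring the within-ST
# summaries above. The pairwise ANI values below are invented.

def ani_clusters(pairs: dict[tuple[str, str], float], threshold: float = 99.9):
    """Union-find over isolate pairs whose ANI is at or above `threshold`."""
    parent: dict[str, str] = {}

    def find(x: str) -> str:
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for (a, b), ani in pairs.items():
        find(a)
        find(b)                        # register both isolates
        if ani >= threshold:
            parent[find(a)] = find(b)  # merge their clusters

    clusters: dict[str, list[str]] = {}
    for isolate in parent:
        clusters.setdefault(find(isolate), []).append(isolate)
    return sorted(clusters.values(), key=len, reverse=True)

# Toy matrix: three near-identical ST395-like genomes and one outlier.
pairs = {
    ("KpA278", "KpA285"): 99.99, ("KpA278", "KpA542"): 99.99,
    ("KpA285", "KpA542"): 99.98, ("KpA278", "KpA699"): 98.40,
}
print(ani_clusters(pairs))  # [['KpA278', 'KpA285', 'KpA542'], ['KpA699']]
```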
K. pneumoniae Four XDR ST395 isolates with complete resistance to all beta-lactams demonstrated the presence of the metallo-β-lactamase-encoding bla NDM-1 gene (conferring resistance to carbapenems ) and were associated with the ble MBL gene (encoding bleomycin resistance protein ). The bla NDM-1 gene, however, was not detected in other clinical isolates . In addition, an identical combination of 5 other genes associated with resistance to beta-lactams ( bla CTX-M-15 , bla OXA-1 , bla SHV-11 , bla TEM-1 , and ftsI (D350N, S357N)) was found in three XDR ST395 isolates, whereas one isolate (KpA542) lacked the bla CTX-M-15 gene in this combination. Notably, the ESBL-producer phenotype was not detected in the XDR isolates, which can be explained by the production of the NDM-type carbapenemase that masks this phenotype and confers complete resistance to nearly all beta-lactams, including carbapenems . Complete resistance to amikacin in the XDR K. pneumoniae ST395 isolates was associated with the armA gene, which encodes 16S rRNA methyltransferase , and by the presence of other genes conferring resistance to aminoglycosides such as aac ( 3 )- IIe , aac ( 6 ′)- Ib - cr6 , and aph ( 3 ′)-VIa . Notably, the armA gene was detected in the XDR isolates only but not in other isolates. The high level of resistance to azithromycin in all XDR isolates could be explained by the carriage of the four-gene complex: mphA , mphE , mrx , and msrE . The genetic basis for other mechanisms of AMR among XDR isolates was also explored. Fluoroquinolone resistance in these isolates included the identical combination of aac ( 6 ′)- Ib - cr6 , gyrA (S83I), parC (S80I), and qnrS1 , except for the KpA542 isolate, which missed the aac ( 6 ′)- Ib - cr6 gene . Resistance to folate pathway antagonists was associated with the combination of two genes, dfrA5 and sul1 , whereas the additional dfrA1 and sul2 genes were found in two and three XDR isolates, respectively. Furthermore, the tet (A) gene encoding resistance to tetracyclines and the combination of the catA1 and catB3 genes conferring resistance to phenicols were detected in the XDR isolates. To the best of our knowledge, this is the first report of NDM-1 carbapenemase-producing and pan-aminoglycoside-resistant human K. pneumoniae isolates exhibiting the XDR phenotype/genotype from Armenia. Thus, the extensive bioinformatic analysis of WGS data revealed a highly similar genetic background of the resistome of XDR strains, which encodes the phenotypic resistance to 10 classes of AMs.
K. pneumoniae Similarly to XDR strains, in three MDR K. pneumoniae ST395 isolates with resistance to 8 classes of AMs, the combination of five genes encoding resistance to beta-lactams ( bla CTX-M-15 , bla OXA-1 , bla SHV-11 , bla TEM-1 , and ftsI (D350N, S357N)) was also found. These MDR isolates were ESBL-producers and resistant to all beta-lactams tested, excluding carbapenems and cefoxitin. A similar combination of five genes, except for MLST-associated variation in the bla SHV gene, was found in isolates belonging to ST29 (2 strains) and ST307 (2 strains). The ST29 isolates were resistant to beta-lactams, whereas one of the ST307 isolates, KpA500, had intermediate resistance to amoxicillin-clavulanic acid, and KpA204 had intermediate resistance to cefoxitin but was susceptible to cefepime. Interestingly, the following combination of four genes was identified in the KpA699 isolate (ST15) with complete resistance to all beta-lactams, including meropenem and cefoxitin, except for intermediate resistance to imipenem: bla CTX-M-15 , bla SHV-28 , bla TEM-1 , and ftsI (D350N, S357N). KpA699 had no carbapenemase or AmpC-type beta-lactamase genes, suggesting that other mechanisms must be responsible for this clinically significant phenotype. In all other MDR isolates with resistance to beta-lactams, a combination of two or three beta-lactamase-encoding genes and the ftsI (D350N, S357N) gene was identified . MDR isolates possessed bla CTX-M , which was not detected only in one isolate (KpA6101, ST5275). KpA6101 had a combination of the bla LAP-2 , bla SHV-11 , bla TEM-1, and ftsI (D350N, S357N) genes, was ESBL-negative, and was susceptible to all cephems. In the KpA324 isolate, the combination of the bla CTX-M-14 , bla SHV-33 , and ftsI (D350N, S357N) genes was associated with susceptibility to ampicillin-sulbactam and intermediate resistance to ceftazidime and amoxicillin-clavulanic acid. The MDR isolates also possessed bla SHV , which was absent in one isolate only (KpA230, ST39). KpA230 had the bla CTX-M-15 , bla TEM-1 , and ftsI (D350N, S357N) genes in combination and displayed complete resistance to all beta-lactam combination agents and cephems, except for cefepime. Regarding aminoglycoside resistance determinants, all but one MDR K. pneumoniae isolate (13 out of 14) carried aminoglycoside-modifying enzyme genes in combination . The most common genes were aph ( 3 ′′)- Ib (71.43%, 10/14), aph ( 6 )- Id (64.29%, 9/14), aac ( 3 )- IIe (64.29%, 9/14), and aac ( 6 ′)- Ib - cr6 (50%, 7/14). In all isolates showing full resistance to gentamicin, the aac ( 3 )- IIe (9 isolates) or aac ( 3 )- IId (2 isolates) genes were identified. These genes encode aminoglycoside 3-N-acetyltransferase enzymes, which inactivate gentamicin and tobramycin. In gentamicin-susceptible isolates, these genes were not detected ( p < 0.01). The genetic basis for amikacin resistance in five MDR isolates was more complex. The aph ( 3 ′)-VIa gene conferring resistance to amikacin , in combination with the aac ( 6 ′)- Ib - cr6 and aac ( 3 )- IIe genes, was identified in one isolate, KpA44 . There were two gene profiles encoding for aminoglycoside-modifying enzymes. The first was represented by the aac ( 3 )- IIe and aac ( 6 ′)- Ib - cr6 genes and was detected in KpA7001. These two genes, in combination with the additional aph ( 3 ′′)- Ib and aph ( 6 )- Id genes, were detected in KpA250 and KpA314. These gene profiles were also detected in amikacin-susceptible isolates (KpA769 and KpA204). 
Finally, the combination of aac ( 3 )- IIe , aph ( 3 ′′)- Ib , and aph ( 6 )- Id genes was identified in one isolate, KpA7002, with intermediate resistance to amikacin. Among the MDR isolates with azithromycin resistance (5 isolates, MIC ≥ 64 µg/mL), three isolates had the combination of the mphA and mrx genes . In another isolate with an MIC of ≥ 64 µg/mL, only one gene, mphA , was detected. Macrolide resistance genes were not identified in one isolate (KpA500, MIC ≥ 64 µg/mL), suggesting that other mechanisms are involved in azithromycin resistance. In addition, the single mphE gene was detected in one isolate, KpA699, that had an MIC < 16 µg/mL to azithromycin. The mechanisms of resistance to other classes of antimicrobials in the clinical MDR K. pneumoniae isolates were also explored. Resistance to phenicols in these isolates was associated with the cat2 gene, as well as with the combinations of the catA1 gene with the floR or catB3 genes . The presence of a single catB3 gene was detected in one chloramphenicol-resistant isolate, KpA204. This gene, however, was also detected in three chloramphenicol susceptible isolates (KpA250, KpA314, and KpA500). In one isolate with full resistance to chloramphenicol (KpA324), no known acquired phenicol resistance gene can be found. Resistance to ciprofloxacin in these isolates was commonly associated with the combination of mutations in both the gyrA (S83I or S83F) and parC (S83I) genes with the qnr genes ( qnrS1 , qnrB1 , qnrS1 , and qnrB20 ). In one isolate, KpA699, the additional aac ( 6 ′)- Ib - cr6 gene, as well as the presence of the two substitutions in the gyrA (S83F, D87A) gene, were detected. In the other three ciprofloxacin-resistant isolates, the qnrB1 and aac ( 6 ′)- Ib - cr6 genes in combination (two ST29 isolates) or the single qnrB1 gene (KpA511) were identified. Furthermore, the combination of qnrB20 and qnrS1 genes was detected in the ciprofloxacin-resistant isolate KpA230, whereas a single qnrS1 gene was associated with the intermediate resistance phenotype of KpA6101. The presence of the tet (A) and tetR (A) genes was detected in all isolates exhibiting resistance to tetracycline, except for one isolate (KpA511), which carried the tet (B) and tetR (B) genes. In the only isolate showing an intermediate phenotype to tetracycline (KpA699), the tet genes were not detected. Notably, the three tigecycline non-susceptible isolates (KpA13, KpA204, and KpA7001) harboured the same tet (A) gene variant as the tigecycline-susceptible isolates. However, no acquired genetic determinants associated with resistance to this antibiotic were detected. The high level of resistance to folate pathway antagonists in MDR isolates (100%) was in agreement with the presence of the dfrA gene variants ( dfrA1 , dfrA5 , dfrA14 , dfrA17 , dfrA12 , and dfrA27 ), in combination with the sul1 or/and sul2 genes. The fosA6 and uhpT (E350Q) genes encoding resistance to fosfomycin were detected in all MDR isolates, as well as the arnT , eptB , and ompA genes conferring resistance to peptide antibiotics. In addition, the aar - 3 gene encoding resistance to ansamycins was present in one isolate, KpA699. Of note, any acquired resistance determinants associated with resistance to colistin were not identified in our MDR isolates.
K. pneumoniae The presence of AMR determinants in the three non-MDR isolates was also examined. The only AMR mechanisms encountered were beta-lactam resistance determinants . KpA857 isolate, susceptible to all beta-lactams, possessed ftsI (S357N, D350N). Two non-MDR HMV isolates carried the identical combination of two genes, bla SHV-11 and ftsI (D350N, S357N), but with different phenotypes. KpA704 was ESBL-negative and had intermediate resistance to ampicillin and piperacillin-tazobactam, while KpA828 was an ESBL-producer with full resistance to ampicillin.
The main multidrug efflux systems (ES) in Klebsiella spp., AcrAB and OqxAB , were identified in 100% and 95.24% of K. pneumoniae isolates, respectively . The oqxA and oqxB genes were not detected in one isolate only, the MDR KpA13 isolate (ST39). In addition, the following most prevalent ES were identified: AcrEF (100%), EefAB (100%), EmrAB (100%), KpnEF (100%), KpnGH (95.24%, 20/21), LptD (100%), MacAB (100%), and MsbA (100%). The prevalence of other efflux pumps was lower: Tet(A) (71.43%, 15/21), Tet(B) (4.76%, 1/21), QacEdelta1 (52.38%, 11/21), FloR (9.52%, 2/21), and CrcB (4.76%, 1/21). The acrR gene encoding for the repressor of the AcrAB-TolC pump was present in all genomes. However, in five MDR isolates (KpA13, KpA230, KpA44, KpA6101, and KpA699), an identical profile of substitutions in this gene was identified: P161R, G164A, F172S, R173G, L195V, F197I, K201M (sourced from the ResFinder database , http://genepi.food.dtu.dk/resfinder , accessed on 20 September 2024). Mutations in the acrR gene contribute to the overexpression of the AcrAB-TolC complex, leading to the higher level of resistance to multiple antibiotics . In addition, the marA gene that encodes the global activator MarA mediating the overexpression of the AcrAB pump was identified in all our isolates. Also, mutations in the marR gene encoding MarR repressor of marA , which also lead to overexpression of the AcrAB pump and a reduced susceptibility to multiple antibiotics , were detected. The prevalence of ramA and ramR genes that also encode the regulators of the AcrAB pump (activator of acrAB and repressor of ramA , correspondingly ) was lower, 66.67% (14/21). It should be noted here that the ramAR genes were not detected only in the ST395 isolates. In addition, in isolate KpA699, a substitution in the ramR (A19V) gene was detected. This mutation is known as contributing to a reduced susceptibility to tigecycline ; the isolate, however, was susceptible to this antibiotic. Regarding other genes involved in efflux pump regulation, the following genes were identified in all our isolates, irrespective of AM resistance phenotype: baeR , crp , leuO , h - ns , and rsmA . Of note, the rarA gene encoding a transcriptional activator of the efflux pump OqxAB was not detected only in KpA13, which was also negative for oqxAB genes. Futhermore, the emrR gene, a negative regulator of the EmrAB-TolC efflux system associated with resistance to nalidixic acid and thiolactomycin , was not detected only in non-MDR HMV KpA828, with potential overexpression of the EmrAB pump. In addition, the tet (R) repressor gene (76.19%) was present in all isolates carrying Tet pumps. The results indicated a common set of predominant Klebsiella pumps in most isolates, regardless of their AMR phenotype. More differences were identified in the repertoire of genes regulating efflux pumps. In particular, the absence of ramAR genes in all our K. pneumoniae isolates belonging to the ST395 is of note. Additionally, the co-occurrence of mutations in the acrR and ramR genes was detected in one isolate only, KpA699, for which the resistance mechanisms to three classes/subclasses of AMs (carbapenems, cephamycins, and tetracyclines) cannot be explained by the presence of any acquired resistance genes. Potentially, the overexpression of the AcrAB efflux pump due to mutations in the acrR and ramR genes may contribute to the aforementioned resistances in this isolate.
Loss of, or mutations in, the major porins of K. pneumoniae result in AMR, and this possibility was explored among our isolates. Intact ompK35 , ompK36 , and ompK37 genes were identified in 42.86% (9/21) of the sequenced K. pneumoniae clinical isolates, including all non-MDR isolates (3) and 33.33% of the MDR isolates (6/18). The ompK35 gene (GenBank: AJ011501.1) was identified in 12 isolates (57.14%) with 98.43–100% identity, whereas in the remaining nine isolates (42.86%), a truncated form of the OmpK35 porin was predicted. Notably, an identical deletion in the ompK35 gene leading to a truncated porin protein was predicted in all ST395 isolates (four XDR and three MDR). In addition, in the KpA500 (ST307) isolate, an insertion of 26 bp in the ompK35 gene after nucleotide 226 resulted in a premature stop codon and, consequently, a truncated form of the protein. In the KpA699 isolate, a premature TAA stop codon in the ompK35 gene arose as the result of a point mutation, also leading to a truncated form of the protein. In this isolate and in one ST395 isolate (KpA44), OmpK35 porin deficiency was coupled with mutations in the ompK36 and ompK37 genes (see below). In all other isolates with the truncated form of OmpK35, intact ompK36 and ompK37 genes were detected. The ompK36 (GenBank: Z33506.1) and ompK37 (GenBank: AJ011502.1) genes were identified in all isolates; in five of them (23.81%), mutations in both genes were observed. These five isolates (KpA13, KpA44, KpA230, KpA6101, and KpA699) had an identical combination of nine mutations in the ompK36 gene (N49S, L59V, G189T, F198Y, F207Y, A217S, T222L, D223G, and N304E; sourced from the ResFinder database ). The A217S substitution is associated with resistance to carbapenems, while the other mutations are associated with cephalosporin resistance. In KpA699, these mutations were combined with the Gly134Asp135 duplication in loop 3 of the OmpK36 protein (OmpK36GD, sourced from Kleborate ), which attenuates the diffusion of carbapenems . An identical combination of four mutations in the ompK37 gene (I70M, I128M, N230G, and E244D; sourced from the ResFinder database ) was found in three isolates: both ST39 isolates (KpA13 and KpA230) and one ST395 isolate (KpA44). All these mutations are known to be associated with resistance to carbapenems . In addition, the combination of two mutations (I70M and I128M) was found in KpA6101 (ST5275) and KpA699 (ST15). Notably, among these five isolates with carbapenem resistance-associated mutations in the ompK36 and ompK37 genes, only one isolate, KpA699, was non-susceptible to carbapenems (intermediate to imipenem and resistant to meropenem), whereas the other isolates were susceptible. Thus, mutations in genes encoding OmpK porins may contribute to reduced permeability to AMs. The combined mutations in the ompK36 and ompK37 genes (23.81%, 5/21) and/or a truncated OmpK35 porin (42.86%, 9/21) may play a part in the increased resistance towards clinically important AMs in 57.14% (12/21) of the isolates. In particular, these mechanisms may be responsible for resistance to carbapenems in carbapenemase-negative strains such as KpA699, which carries mutations in the ompK36 (OmpK36GD, A217S) and ompK37 (I70M, I128M) genes associated with carbapenem resistance, as well as a truncated form of the OmpK35 porin.
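In practice, such truncated porins can be flagged directly from assembly data by translating the porin coding sequence up to the first stop codon and comparing the product with the full-length reference protein. The following minimal sketch illustrates this logic with Biopython; the input file names are hypothetical placeholders rather than files produced in this study:

```python
from Bio import SeqIO

def porin_truncation(gene_fasta: str, ref_protein_fasta: str, cutoff: float = 0.9):
    """Translate a porin gene to the first stop codon and report the fraction
    of the full-length reference protein (file names are hypothetical)."""
    gene = next(SeqIO.parse(gene_fasta, "fasta")).seq
    ref = next(SeqIO.parse(ref_protein_fasta, "fasta")).seq
    protein = gene.translate(to_stop=True)  # halts at a premature TAA/TAG/TGA
    fraction = len(protein) / len(ref)
    return fraction, ("truncated" if fraction < cutoff else "intact")

frac, status = porin_truncation("ompK35_KpA699.fasta", "OmpK35_reference.faa")
print(f"OmpK35: {frac:.0%} of full length -> {status}")
# A premature stop at ~49% of the reference length, as in KpA699,
# would be reported as truncated.
```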
The most prevalent virulence score was 1 (13/21, 61.9%; sourced from Kleborate ), assigned to isolates carrying the ybt locus encoding the iron-scavenging siderophore yersiniabactin . Among these isolates, the non-MDR HMV KpA828 (ST25) strain is of note. In all but one of these isolates, the ybt loci ( ybt lineages 1, 8, 9, 14, 15, and 16) were located within various structural variants of integrative conjugative elements (ICE Kp4 , ICE Kp9 , ICE Kp3 , ICE Kp5 , and ICE Kp12 ) and were mainly distributed according to STs . In one AM-susceptible isolate, KpA857 (ST1480), however, ybt ( ybt 4) was located on the pCAV1099-114 plasmid (IncFIB(K) incompatibility group). The virulence score of 4 was assigned by Kleborate to five isolates (23.81%, 5/21) belonging to ST395 . Four of them were carbapenem-resistant XDR strains, and one was an MDR strain (KpA7001). In all of these isolates, the virulence determinants were associated with ICE Kp12 and included the ybt locus (lineage 16, sequence type 53-2LV), the iucABCD and iutA genes encoding the aerobactin siderophore ( iuc 1; AbST: 63), and the rmpA2 gene (allele 28; sourced from the BIGSdb-Kp database) encoding the regulator of the mucoid phenotype . In addition, in one isolate (KpA481), the additional rmpADC ( rmp 1/KpVP-1 lineage) and peg344 (metabolic transporter of unknown function) genes were identified. A frameshift mutation in the rmpA2 gene due to an insertion within a poly-G tract, which results in a premature stop codon (TAA) and a truncated protein (47%), was identified in all rmpA2 -positive isolates. This mutation may explain the HMV-negative phenotypes of these isolates found earlier by the string test. Despite the HMV-negative phenotypes, however, the presence of other virulence markers indicates a high virulence potential. In all five isolates, the rmp and aerobactin-encoding loci were co-localised on the same contigs. In addition, these isolates shared an identical plasmid replicon profile with the characteristic presence of IncFIB(K)/IncFIB(pNDM-Mar)/IncHI1B(pNDM-MAR) replicons, except for the KpA481 isolate, which had only the IncFIB(pNDM-Mar) replicon. A virulence score of 0 was assigned to three K. pneumoniae clinical isolates (14.29%), one of which was the non-MDR KpA704 (ST107) isolate with the HMV phenotype, while the other two were MDR, KpA204 (ST307) and KpA6101 (ST5275). In addition to Kleborate, the presence of virulence factors in the WGS data was further analysed with VFanalyzer and the BIGSdb-Kp database at the Pasteur Institute. According to VFanalyzer, the genes encoding Type 1 and Type 3 fimbriae were detectable in all our isolates, except for the mrkH gene (a transcriptional activator regulating biofilm formation ), which was missing in five isolates belonging to ST395 . Other genes identified in all of the isolates were the ent locus encoding the siderophore enterobactin, the K locus that determines the polysaccharide capsule type, the csrAB genes involved in capsule synthesis regulation, the chromosomally located iro genes encoding a siderophore esterase ( iroE ) and the salmochelin receptor ( iroN ), and the chromosomal iutA gene for the ferric aerobactin receptor. Three almost complete clusters of the Type VI Secretion System (T6SS-I, T6SS-II, and T6SS-III), lacking only the genes encoding Tli1, LysM, and two hypothetical proteins, were identified in two ST307 isolates only .
The T6SS-I and T6SS-III clusters were detectable in almost all our isolates (95.24% and 100%, respectively), whereas the T6SS-II cluster was not identified in 61.9% (13/21) of isolates. An intact T6SS-I cluster similar to that of the MDR K. pneumoniae HS11286 strain (NC_016845) was identified in seven isolates (33.33%): six belonged to ST395 and one, KpA324, to ST449. The remaining 14 isolates lacked the tle1 and/or tli1 genes, which encode an effector-immunity pair of proteins participating in intra- and interspecies antagonism . This effector-immunity gene pair was absent in seven isolates (33.33%) belonging to ST15, ST25, ST107, ST219, ST873, ST1480, and ST5275. The tle1 gene alone was detected in two ST29 isolates (9.52%), and the tli1 gene alone was found in five isolates (23.81%) belonging to ST39, ST307, and ST395. Notably, the tli1 gene copy number ranged from one in the ST39 isolates to seven in the ST307 isolates. In one isolate, KpA828 (ST25), a reduced T6SS-I cluster limited to only two genes ( clpV / tssH and hcp / tssD ) was found. Other virulence-related genes were detected at lower frequencies. However, the presence of the following fimbrial adherence determinants is of note: the stbABCDE genes identified in two ST29 isolates (9.52%) and the steB and stfD genes detected in two ST307 isolates (9.52%).
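The Kleborate virulence scores cited above are derived from the presence of three acquired loci: yersiniabactin ( ybt ), colibactin ( clb ), and aerobactin ( iuc ). The following minimal sketch approximates the published scoring scheme; it illustrates the logic only and is not Kleborate's actual implementation:

```python
def virulence_score(ybt: bool, clb: bool, iuc: bool) -> int:
    """Approximate Kleborate-style virulence score from locus presence."""
    if iuc:                        # aerobactin present
        if ybt and clb:
            return 5               # all three loci
        return 4 if ybt else 3     # aerobactin with/without yersiniabactin
    if clb:
        return 2                   # colibactin (with or without yersiniabactin)
    return 1 if ybt else 0         # yersiniabactin only, or none

# ST395 isolates in this study (ybt lineage 16 plus iuc 1) score 4,
# while most other ybt-positive isolates score 1.
print(virulence_score(ybt=True, clb=False, iuc=True))   # -> 4
print(virulence_score(ybt=True, clb=False, iuc=False))  # -> 1
```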
Plasmid replicons were identified in all but one of the sequenced isolates; the exception, KpA828 (non-MDR, HMV), also had the smallest genome size among our isolates . The number of plasmid replicons per isolate ranged from 1 to 7 . Notably, a high number of plasmid replicons (from four to seven) was characteristic of the isolates belonging to ST395. The highest number of plasmid replicons was detected in four ST395 isolates, three of which were XDR strains carrying the bla NDM-1 gene (KpA278, KpA285, and KpA542), and one was the MDR isolate KpA7001. In these isolates, an identical profile of the following seven plasmid replicons was identified: Col(pHAD28) (KU674895), ColRNAI (DQ298019), IncFIB(K) (JN233704), IncFIB(pNDM-Mar) (JN420336), IncFII(K) (JN233704), IncHI1B(pNDM-MAR) (JN420336), and IncR (DQ449578). In another XDR isolate carrying the bla NDM-1 gene, KpA481, the plasmid profile was represented by only four replicons: Col(pHAD28), ColRNAI, IncFIB(pNDM-Mar), and IncR. This strain also had the smallest genome size among the XDR isolates . The most common plasmid replicons were IncR and IncFIB(K), each detected with a prevalence of 57.14% (12/21). Among the IncR-positive isolates, the most common AMR gene identified on the replicon contigs was bla CTX-M-15 , detected in seven isolates. Notably, the IncR replicon was identified in all seven ST395 isolates. In three of them, it was co-located with the bla CTX-M-15 gene, whereas in two other isolates, the replicon was located on the same contigs carrying the MDR regions. In particular, the following genes were identified in KpA769: bla CTX-M-15 , aac(3)-IIe , catB3 , bla OXA-1 , aac(6′)-Ib-cr6 , sul1 , qacEdelta1 , dfrA1 , tetR(A), and tet(A). Of these, five genes were also detected in the KpA7001 contig with the IncR replicon: bla CTX-M-15 , aac(3)-IIe , catB3 , bla OXA-1 , and aac(6′)-Ib-cr6 . In addition, tet genes were detected in the IncR contigs of two isolates belonging to other STs: tet(B) and tetR(B) in KpA511 (ST219), and tet(A) and tetR(A) in KpA6101 (ST5275). The IncFIB(K) replicon was detected in 12 isolates, but AMR genes co-located with the replicon on the same contigs were found in only two isolates, KpA7002 (ST873) and KpA500 (ST307). In both of these isolates, the aph(6)-Id , aph(3″)-Ib , and sul2 genes were co-located with the IncFIB(K) replicon on the same contigs, while in KpA500, the co-location of four additional AMR genes ( bla TEM-1 , catB3 , bla OXA-1 , and aac(6′)-Ib-cr6 ) was detected. Among the other plasmid replicons, the prevalence of the Col(pHAD28) and ColRNAI replicons was 42.86% (9/21) and 33.33% (7/21), respectively. The combination of these replicons was identified in all seven ST395 isolates, but no linkage with any AMR genes was detectable in the replicon contigs. In KpA324 (ST449), however, the Col(pHAD28) replicon was co-located with the bla CTX-M-14 gene on the same contig. Notably, the IncFII(K) replicon was identified in eight isolates (38.1%), always in combination with the IncFIB(K) replicon, while the IncFIB(K) replicon on its own was present in four other isolates. The prevalence of the IncFIB(K)(pCAV1099-114) (CP011596) plasmid replicon was lower, 19.05% (4/21); however, in three out of four isolates, AMR genes were co-located on the replicon contigs.
In particular, in the KpA511 (ST219) isolate, a 6588 bp resistance region carrying the following 10 genes was identified: qnrS1 , aph(6)-Id , aph(3″)-Ib , sul2 , dfrA12 , aadA2 , qacEdelta1 , sul1 , mrx , and mphA . In two isolates belonging to ST39, KpA13 and KpA230, the number of AMR genes on the replicon contig was restricted to four ( qnrS1 , aph(6)-Id , aph(3″)-Ib , and sul2 ) and three ( qnrS1 , mphA , and mrx ), respectively. Notably, the IncFIB(K)(pCAV1099-114) replicon was also detected in the AM-susceptible isolate KpA857 (ST1480). The IncFIA(pBK30683) (KF954760) plasmid replicon was found only in two ST29 isolates (KpA250 and KpA314) and was not associated with AMR genes. The IncQ (M28829) plasmid replicon was identified in one isolate only, KpA44 (ST395), and the aph(3′)-IV gene (amikacin resistance) was co-located with this replicon on the same contig.
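The replicon-AMR co-location reported above amounts to joining the PlasmidFinder and ResFinder hit tables on shared contig identifiers. A minimal sketch of this join is shown below; the TSV layout and the 'contig' and 'gene' column names are assumptions for illustration and do not reflect the exact output formats of either tool:

```python
import csv
from collections import defaultdict

def colocated_amr(replicon_tsv: str, amr_tsv: str) -> dict:
    """Group AMR genes by the plasmid replicon(s) found on the same contig."""
    replicons = defaultdict(list)
    with open(replicon_tsv) as fh:
        for row in csv.DictReader(fh, delimiter="\t"):
            replicons[row["contig"]].append(row["gene"])
    hits = defaultdict(list)
    with open(amr_tsv) as fh:
        for row in csv.DictReader(fh, delimiter="\t"):
            for replicon in replicons.get(row["contig"], []):
                hits[replicon].append(row["gene"])
    return hits

# An IncR contig carrying blaCTX-M-15 would yield, e.g.:
# IncR -> blaCTX-M-15, aac(3)-IIe, catB3, ...
for replicon, genes in colocated_amr("plasmidfinder.tsv", "resfinder.tsv").items():
    print(replicon, "->", ", ".join(sorted(set(genes))))
```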
The genomic sequences were analysed for the presence of prophage regions using the PHASTEST web server ( https://phastest.ca , accessed on 20 September 2024). Prophage regions were identified in all our isolates . A total of 24 distinct prophages were identified, and their main characteristics are summarised in . The most prevalent phage was Klebsi_phiKO2 (NC_005857), detected in 38.1% of isolates. The number of prophages per isolate ranged from 1 to 6. Notably, the highest number of prophages was detected in isolates belonging to ST395 . All these ST395 isolates, except KpA769, shared an identical profile of the following six prophages: Edward_GF_2 (NC_026611), Escher_HK639 (NC_016158), Klebsi_3LV2017 (NC_047817), Klebsi_ST147_VIM1phi7.1 (NC_049451), Klebsi_ST512_KPC3phi13.2 (NC_049452), and Salmon_SEN34 (NC_028699). Interestingly, all of these prophages, except one (Escher_HK639), were detected in ST395 isolates only. The KpA769 isolate was missing two prophages from this profile (Escher_HK639 and Salmon_SEN34) but harboured two additional prophages, Escher_RCS47 (NC_042128) and Salmon_Fels_1 (NC_010391). Among the other isolates, an identical prophage profile was detected only in two ST29 isolates, KpA250 and KpA314, which harboured the following three prophages: Escher_RCS47, Klebsi_phiKO2 (NC_005857), and Klebsi_ST15_OXA48phi14.1 (NC_049454). The prophage regions were analysed for the presence of AMR and virulence-related genes using the CARD and VFanalyzer tools. No virulence-associated genes were detected in the prophage regions, consistent with previously published results demonstrating the low prevalence of virulence factor genes in K. pneumoniae prophages . On the contrary, the carriage of AMR genes on prophages was significant: these genes were detected in nine prophages within the genomes of six K. pneumoniae isolates . Notably, the prophages carried six beta-lactam resistance genes, with one prophage, Escher_RCS47, harbouring two of them in the KpA769 (ST395) isolate. Interestingly, Escher_RCS47 was also identified in two other isolates, belonging to ST29 and ST39 . This phage is one of the major phage types harbouring AMR genes and has been found to be predominantly located on plasmids . It should be noted that in KpA769 (ST395), the Escher_RCS47 region was detected on the contig carrying the IncR plasmid replicon and the MDR region including the ten AMR genes mentioned above (see ). Of these ten genes, eight, encoding resistance to five classes of AMs and to quaternary ammonium compounds, were located in the prophage region . The role of prophage-located AMR genes in the MDR phenotype is also noticeable for other isolates listed in . In particular, in the KpA699 isolate (ST15), two Klebsi_phiKO2 phage regions, probably the result of superinfection, carried in total nine AMR genes conferring resistance to five classes of AMs. Another phage carrying multiple AMR genes is the Staphy_SPbeta_like phage, which confers resistance to six classes of AMs in the KpA204 (ST307) isolate. Importantly, there are indications that this phage can move beyond genus barriers and can be detected in several species of Gram-positive and Gram-negative bacteria .
We performed a more detailed analysis of this phage region with MobileElementFinder ( https://cge.food.dtu.dk/services/MobileElementFinder , accessed on 20 September 2024) and found the presence of two composite transposons: (i) cn_3826_IS26 (isfinder db, accession X00011), harbouring the catB3 , bla OXA-1 , and aac(6′)-Ib-cr genes, and (ii) cn_15047_IS26 (isfinder db, accession X00011), carrying the tet(A), tetR , and qnrB1 genes. This combination of two mechanisms of horizontal gene transfer may explain its mobility beyond genus barriers. Additionally, the AMR genes located in two phage regions in the KpA13 (ST39) isolate encoded resistance to four classes of AMs. These results indicate an important contribution of both intact and questionable prophages to the MDR phenotype in a subset of our clinical K. pneumoniae isolates. Of note, we did not detect any AMR or virulence-related genes in the incomplete phage regions identified in our isolates. Clustered regularly interspaced short palindromic repeat (CRISPR) arrays were identified in five isolates (23.81%) belonging to the following STs: ST15 (KpA699, 2 arrays), ST39 (KpA13 and KpA230, 1 array each), ST449 (KpA324, 2 arrays), and ST873 (KpA7002, 1 array). Of these, the KpA699 isolate carrying two CRISPR arrays had four plasmids and five prophage regions, whereas low numbers of phages and plasmid replicons were detected in the other isolate carrying two CRISPR arrays, KpA324 (2 and 1, respectively). The majority of K. pneumoniae strains, however, had no detectable CRISPR arrays and thus lacked phage immunity traits, presenting no barriers against phage infections. Consequently, the strains within the problematic ST395 and ST307 lineages are not immune to phages and can readily participate in phage-mediated horizontal gene exchange, which contributes to their evolution through, for example, the acquisition of AMR genes. The absence of phage immunity in the XDR and MDR strains belonging to ST395 also makes phage therapy a viable option to treat and control these infections.
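Assigning AMR genes to prophage regions, as described above, reduces to an interval-containment test between the PHASTEST region coordinates and the gene coordinates on the same contig. A minimal sketch follows; the coordinates are invented for illustration and are not the actual positions in these genomes:

```python
from typing import List, Tuple

Region = Tuple[str, int, int, str]  # (contig, start, end, label)

def genes_in_prophages(prophages: List[Region], genes: List[Region]) -> List[str]:
    """Return AMR genes located entirely within a prophage region
    on the same contig."""
    found = []
    for p_contig, p_start, p_end, phage in prophages:
        for g_contig, g_start, g_end, gene in genes:
            if g_contig == p_contig and p_start <= g_start and g_end <= p_end:
                found.append(f"{gene} within {phage}")
    return found

prophages = [("contig_7", 10_000, 55_000, "Escher_RCS47")]
genes = [("contig_7", 22_400, 23_300, "blaCTX-M-15"),
         ("contig_7", 90_100, 90_900, "sul2")]
print(genes_in_prophages(prophages, genes))
# -> ['blaCTX-M-15 within Escher_RCS47']; sul2 lies outside the prophage
```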
To estimate genetic relatedness among our K. pneumoniae clinical isolates based on genomic sequences, we performed whole genome-based phylogenetic analysis using the type strain genome server (TYGS) ( https://tygs.dsmz.de , accessed on 20 September 2024) . The results demonstrated an ST-based distribution of the genomic sequences across the phylogenetic tree . All isolates belonging to the same ST were grouped together with a high similarity score, and the average nucleotide identity (ANI, https://www.ezbiocloud.net/tools/ani , accessed on 20 September 2024) values were in the range of 99.97–99.99%. In particular, the K. pneumoniae ST395 isolates with the XDR or MDR phenotypes and genotypes were clustered together, displaying low genetic diversity, with an ANI value of 99.99%. In addition, the isolates representing the same clonal lineage were also grouped together, although with a lower degree of similarity: KpA6101 (ST5275) and KpA857 (ST1480) were assigned to sublineage SL37, while KpA511 (ST219) and KpA704 (ST107) were assigned to sublineage SL107. All other branches were represented by single isolates. To place our K. pneumoniae clinical isolates within the international context, we performed whole genome-based phylogenetic analysis involving our strains and the most closely related genomic sequences from other countries, obtained from the Pathogenwatch global resource ( https://pathogen.watch , accessed on 20 September 2024). The most closely related strains of K. pneumoniae were identified based on core genome analysis, and the combined phylogenetic analysis revealed two main clades, A and B ( A, ). Whole-genome comparisons were then made with the combined datasets, including our and international strains, using the TYGS resource ( B). A noticeable difference between clades A and B is that the former includes a more diverse range of countries and continents, while the latter is mainly confined to Europe, with minor inclusions from other geographical locations ( B). In addition, the clade A strains were isolated earlier than the clade B strains (2004–2022 vs. 2008–2024). Our seven XDR and MDR K. pneumoniae ST395 isolates demonstrated low genetic diversity and were located within clade B ( B). The genetically closest strains were from Russia and Germany, with ANI values of 99.96–99.98%. Another international high-risk clone, ST307, was also located within clade B ( B). These MDR K. pneumoniae ST307 strains were isolated in the region from 2018 to 2024 and included our isolates as well as those reported in another study . The closest genomic matches were strains from the USA. In particular, four K. pneumoniae ST307 clinical isolates from Armenia collected in 2018 and 2019 were close to a strain from the USA collected in 2014, while the KpA204 isolate from 2024 was close to strains collected in the USA in 2019 (ANI values of 99.97–99.98%). Low genetic diversity was also identified among the ST39 strains within clade B ( B): an ANI value of 99.98% was obtained for the isolates collected in Armenia in 2024 and the strains isolated in Ethiopia in 2020. The carbapenem-resistant strain KpA699, belonging to the international high-risk clone ST15 and isolated in 2022, was located within clade A ( B). The closest genomic matches to this isolate were strains from Turkey, collected in 2013 and 2014, and a strain from Australia, isolated in 2014 ( B).
Low genetic diversity was also observed among the ST449 strains in clade A, which included the KpA324 isolate from Armenia and strains from Germany, Spain, Madagascar, and Japan (ANI values of 99.97–99.99%). Another MDR isolate from Armenia, KpA511 (ST219), had its closest genomic matches with strains from India and Turkey (ANI values of 99.98% and 99.99%, respectively). The remaining isolates from Armenia were less related to international clones and clustered within groups demonstrating a higher level of genomic diversity ( B). It should be noted that KpA6101 was the only isolate belonging to ST5275 in the Pathogenwatch database; we were therefore unable to perform comparative genomic analysis within this ST. In the phylogenetic tree, this single ST5275 genome formed a sister group with the ST1480 genomes ( B).
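The distance-based clustering underlying such trees can be illustrated on a small scale by converting pairwise ANI values into distances (1 - ANI) and applying neighbour-joining, analogous to the approach used by Pathogenwatch with its own scaled pairwise scores. The Biopython sketch below uses illustrative ANI values rather than the full matrix from this study:

```python
import sys
from Bio import Phylo
from Bio.Phylo.TreeConstruction import DistanceMatrix, DistanceTreeConstructor

# Illustrative pairwise ANI values (as fractions); 1 - ANI serves as distance.
names = ["KpA278", "KpA285", "KpA542", "KpA699"]
ani = {("KpA278", "KpA285"): 0.9999, ("KpA278", "KpA542"): 0.9999,
       ("KpA285", "KpA542"): 0.9999, ("KpA278", "KpA699"): 0.9850,
       ("KpA285", "KpA699"): 0.9851, ("KpA542", "KpA699"): 0.9849}

# Biopython expects a lower-triangular matrix including the zero diagonal.
matrix = [[1.0 - ani[tuple(sorted((a, b)))] for b in names[:i]] + [0.0]
          for i, a in enumerate(names)]

tree = DistanceTreeConstructor().nj(DistanceMatrix(names, matrix))
Phylo.draw_ascii(tree, file=sys.stdout)  # the three ST395 isolates pair tightly
```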
The global rise of K. pneumoniae pathotypes with hypervirulent and MDR traits poses a significant threat to public health. To address this threat, many countries have implemented monitoring programs, which help to understand the epidemiology and drug resistance of this pathogen and to take the necessary measures for its treatment and control. This information is especially useful for clinicians who manage severely infected patients and must make prompt decisions regarding the most appropriate empirical AM therapy. In certain regions of the world, however, there is a paucity of information regarding the local K. pneumoniae pathotypes, which compromises timely and efficient therapy and contributes to excessive morbidity and mortality. This is the case for Armenia, and the main objective of this work was to collect epidemiological and drug resistance data in the interest of public health. To this end, we performed a comprehensive, in-depth analysis of K. pneumoniae pathotypes circulating in this region. We analysed a collection of 48 K. pneumoniae isolates obtained from hospital patients during the period from 2018 to 2024. Extensive analysis of AMR with a panel of 22 antibiotics, covering 12 different classes, revealed that the majority of the isolates were XDR (resistant to at least 10 classes of AMs) or MDR (resistant to at least five classes of AMs), together comprising 64.58% of the isolates. Resistance to fewer than five classes of AMs was encountered in 35.42% of the isolates. Thus, therapeutic options for the majority of K. pneumoniae infections are limited. In particular, the XDR isolates (8.33%, 4/48) demonstrated an identical profile of complete resistance to 10 classes of AMs, including carbapenems, but they remained susceptible to colistin and tigecycline. The MDR isolates of K. pneumoniae (56.25%, 27/48) included a high proportion of ESBL-producers, 77.78% (21/27), with resistance to the 3rd and 4th generation cephalosporins. Interestingly, co-resistance to a larger number of AM classes was mainly associated with ESBL production, in contrast to the non-producers. In the MDR group, intermediate resistance to colistin and tigecycline was found in 11.11% (3/27) of the isolates. The highest susceptibility among the MDR isolates was to carbapenems and cefoxitin. The latter suggests that cefoxitin, a cephamycin resistant to ESBL hydrolysis, could be a viable therapeutic option for the treatment of infections caused by MDR K. pneumoniae, limiting the use of carbapenems, as has been suggested earlier . The non-MDR isolates demonstrated susceptibility to all first-line drugs used to treat K. pneumoniae infections. Interestingly, the HMV phenotype, considered one of the markers of hypervirulence , was identified in three non-MDR isolates (14.29%); this phenotype was not detectable among our XDR and MDR isolates. One of the alternatives to AMs is phage therapy, and for this purpose, we tested two commercial phage preparations, BKpP and BKPP (SPA “Microgen”, Moscow, Russia), which include phage lines active against K. pneumoniae and Klebsiella spp., respectively. Both phage cocktails displayed significant in vitro activity against the K. pneumoniae clinical isolates, especially the XDR (100%) and MDR (75–82.14%) strains.
Given the limited options of AM therapy against these XDR and MDR isolates, the high efficiency of the commercial bacteriophage preparations could be of interest, suggesting that they may serve as an alternative or adjunct therapy to control these hard-to-treat infections. The initial genetic characterisation of our isolates was performed with ERIC-PCR fingerprinting, which allows the estimation of genetic relatedness among isolates of enteric bacteria and vibrios . This analysis demonstrated a high genetic diversity among our K. pneumoniae strains, indicating a predominantly polyclonal structure. However, the XDR and some of the MDR isolates produced identical or very similar ERIC-PCR fingerprints, suggesting the possibility of clonal spread. The ERIC-PCR results were also used as one of the criteria for the selection of strains for WGS analysis to avoid redundant sequencing. The selected 21 human K. pneumoniae isolates of clinical and epidemiological significance were subjected to WGS. All these isolates belonged to the phylogroup Kp1, K. pneumoniae sensu stricto . The following 12 STs were identified: ST15 (1), ST25 (1), ST29 (2), ST39 (2), ST107 (1), ST219 (1), ST307 (2), ST395 (7), ST449 (1), ST873 (1), ST1480 (1), and ST5275 (1). This analysis indicated the circulation of international high-risk MDR clones in Armenia. In particular, 33.33% of the sequenced strains belonged to ST395, which was first detected in France in 2010 and is now emerging as an international high-risk clonal lineage of K. pneumoniae . All four of our XDR isolates with carbapenem resistance and three ESBL-producing MDR isolates with resistance to eight classes of AMs belonged to this ST. It is also concerning that all our ST395 isolates were recovered from a vulnerable cohort, that is, paediatric patients. This ST is associated with clinically important MDR phenotypes, such as the production of carbapenemases and ESBL, as well as resistance to other classes of AMs . We also detected representatives of other international high-risk MDR clones, ST15 and ST307 . The carbapenem-resistant MDR isolate KpA699 belonged to ST15, and two isolates with resistance to eight AM classes (KpA204 and KpA500) were assigned to ST307. Taken together with the ST395 data, our results indicate that 10 out of 18 XDR and MDR isolates belong to recognised international high-risk MDR clones. Among the other MDR isolates with resistance to nine classes of AMs, two were assigned to ST39, which is also considered an emerging pathogen , and one to ST219. In addition, two MDR strains, KpA250 and KpA314, belonged to ST29. It should be noted that the strains for sequencing were selected based on the ERIC-PCR data, with only one representative of each cluster with identical fingerprints chosen for WGS; thus, the prevalence of international high-risk MDR clones in our collection may be higher. Because of our focus on hard-to-treat XDR and MDR infections, the WGS coverage of the non-MDR K. pneumoniae strains, which accounted for 35.42% of all isolates, remained insufficient. Nevertheless, two non-MDR HMV isolates were assigned to STs associated with the hvKp phenotype: the ESBL-producer KpA828 belonged to ST25 , and KpA704 to ST107, while the AM-susceptible isolate was assigned to ST1480. Thus, all our isolates belong to well-known K. pneumoniae lineages disseminated in many countries.
In the only previous report from the region, the genomic characterisation of eight clinical isolates of K. pneumoniae, isolated in 2019, was published (ENA Project: PRJEB51925). The presence of MDR ST307 isolates (N = 4) was detected as well, along with other STs that were not present in our analysis: ST37, ST147, ST807, and ST967. These findings indicated the presence of another international high-risk MDR clone of K. pneumoniae , ST147 , in the region. The capsule polysaccharide of K. pneumoniae is one of its important virulence factors, protecting against phagocytosis and the bactericidal activity of serum . There are indications that the K1 and K2 serotypes may be involved in more clinically serious cases of bacteremia . The lipopolysaccharide (O antigen) also plays a role as a virulence factor . The most prevalent ST395 isolates shared an identical profile of capsular and LPS serotypes, K2:O2 (subtype O2a). The second most common profile, K19:O1, was found in the ST15 and ST29 isolates (14.29%, 3/21). An identical serotype, K62:O1, was also detected in the two ST39 isolates. In all other isolates, the serotypes correlated with the corresponding STs. Further, based on the WGS data, we analysed the genetic background of the phenotypically observed AMR profiles of our K. pneumoniae isolates. The resistomes of the four XDR ST395 isolates were highly similar. In particular, resistance to carbapenems was associated with the production of the NDM-1 carbapenemase, while resistance to amikacin was associated with the presence of the armA gene. The ESBL-producer phenotype was not detected in the XDR isolates, but the combination of the bla CTX-M-15 , bla OXA-1 , bla SHV-11 , and bla TEM-1 genes was present, except for one isolate lacking the bla CTX-M-15 gene. The same gene combination was also detected in all MDR ST395 isolates with the ESBL-producer phenotype. There was a good concordance between the phenotypic and genotypic data: aac(3)-IIe , aac(6′)-Ib-cr6 , and aph(3′)-VIa (aminoglycoside resistance); mphA , mphE , mrx , and msrE (macrolide resistance); aac(6′)-Ib-cr6 , gyrA (S80I), parC (S83I), and qnrS1 (fluoroquinolone resistance); dfrA1 , dfrA5 , sul1 , and sul2 (resistance to folate pathway antagonists); tet(A)/ tetR(A) (tetracycline resistance); catA1 and catB3 (phenicol resistance); and fosA6 and uhpT (E350Q) (fosfomycin resistance). In addition, a truncated form of the OmpK35 porin (72% of full length) and the lack of the ramAR genes, which encode regulators of the AcrAB pump, were characteristic features of all our XDR and MDR K. pneumoniae isolates belonging to ST395. Among the other MDR isolates, the ESBL-producing phenotype was associated with combinations of genes encoding CTX-M and SHV beta-lactamases, while the production of SHV-11 was detected only in a non-MDR isolate with the HMV phenotype. The resistance determinants to other classes of AMs were mostly identified as specific AMR gene combinations and were largely in concordance with the phenotypic data. It must be emphasised, however, that no acquired colistin or tigecycline resistance genes could be identified in the three isolates with intermediate resistance to these AMs. Although tigecycline resistance can result from mutations in the tet(A) gene , our three tigecycline non-susceptible isolates (KpA13, KpA204, and KpA7001) carried the same tet(A) gene variant as the susceptible isolates, ruling out this possibility.
Potential mechanisms include the overexpression of the acrB gene and its regulator RamA due to mutations in their regulators, as well as reduced permeability to AMs due to mutations in the ompK genes. The genes encoding the OmpK35, OmpK36, and OmpK37 porins were present in all genomic sequences. However, combined mutations in the ompK36 and ompK37 genes were identified in five (23.81%) MDR strains, and/or a truncated form of the OmpK35 porin was predicted in nine (42.86%) MDR strains, suggesting that reduced permeability to AMs may contribute to the increased MICs of clinically important AMs in 57.14% (12/21) of the K. pneumoniae clinical isolates. The main MDR efflux systems in Klebsiella spp., AcrAB and OqxAB, as well as other efflux systems (AcrEF, EefAB, EmrAB, KpnGH, KpnEF, LptD, MacAB, and MsbA), were present with a high prevalence in our clinical isolates, including the non-MDR strains. In addition, the five strains with combined mutations in ompK36 and ompK37 also demonstrated an identical profile of multiple substitutions in the acrR gene (repressor of the AcrAB pump). These highly similar profiles of mutations in ompK36 / ompK37 and acrR were detected in clinical isolates belonging to different STs: ST15 (1), ST39 (2), ST395 (1), and ST5275 (1). In these isolates, the reduced permeability to AMs and the overexpression of the AcrAB pump may contribute to a higher level of resistance to multiple AMs, including carbapenems. For example, the carbapenem-resistant isolate KpA699 (ST15) harboured no known carbapenemase genes but had mutations in the ompK36 (OmpK36GD, A217S) and ompK37 (I70M, I128M) genes associated with resistance to carbapenems. Additionally, this strain carried a truncated OmpK35 porin comprising 49% of the full length, and the co-occurrence of mutations in the local repressor genes of the AcrAB efflux pump, acrR and ramR , was observed. Thus, the reduced permeability of porins and the overexpression of the AcrAB efflux pump may be the mechanisms of carbapenem resistance in KpA699. These non-specific resistance mechanisms may also be responsible for its resistance to cefoxitin and tetracycline, for which no acquired genetic determinants of resistance could be found. Similar mutations contributing to the reduced permeability of porins and the overexpression of the AcrAB efflux pump were, however, also found in K. pneumoniae strains in which specific AMR genes were detected. Thus, these non-specific mechanisms of AMR are widespread among clinical K. pneumoniae strains and may contribute to elevated MIC values or interfere with therapy even when appropriate AMs are used. The virulome analysis revealed a high virulence potential in five out of seven ST395 isolates, including all XDR strains and one MDR strain (KpA7001). These five isolates carried the ybt locus, which encodes the siderophore yersiniabactin (lineage 16) and is located within ICE Kp12 ; the iucABCD and iutA genes, which encode the aerobactin siderophore; and the rmpA2 gene, which encodes the regulator of the mucoid phenotype. Additional virulence factors, rmpADC (mucoid phenotype) and peg344 (metabolic transporter), were detected in one of the XDR isolates, KpA481. Notably, all these isolates were assigned a virulence score of 4 by Kleborate , the highest in our collection. However, an identical truncated form of the RmpA2 protein (47%) was predicted in all ST395 isolates, and they displayed an HMV-negative phenotype.
This suggests that our K. pneumoniae ST395 isolates with a virulence score of 4 cannot be classified as hypervirulent based on the in silico prediction of virulence biomarkers . Still, these carbapenem-resistant and ESBL-producing strains display a high virulence potential. Such strains are not detectable by conventional laboratory tests and have to be monitored using genomic approaches. To our knowledge, this is the first report and comprehensive characterisation of carbapenem-resistant XDR and ESBL-producing MDR K. pneumoniae ST395 clinical isolates from Armenia carrying important virulence determinants. Regarding the virulence potential of the other MDR and non-MDR K. pneumoniae clinical isolates, the majority of them had the ybt locus encoding yersiniabactin located on integrative conjugative elements (ICE Kp ) and were scored 1 by Kleborate. In addition, a virulence score of 0 was assigned to 14.29% (3/21) of the isolates. Notably, the two non-MDR isolates with the HMV phenotype were scored 0 and 1 by Kleborate and thus cannot be classified as hypervirulent. Given the low number of non-MDR isolates subjected to WGS, further research is needed to evaluate the prevalence of hypervirulent strains among the non-MDR K. pneumoniae circulating in the region. Mobile genetic elements (MGEs) play an important role in the evolution of K. pneumoniae lineages towards MDR and hypervirulence. The virulence loci, such as ybt , were located within the integrative conjugative elements ICE Kp4 , ICE Kp9 , ICE Kp3 , ICE Kp5 , and ICE Kp12 in different isolates of K. pneumoniae. As mentioned above, in the five ST395 isolates, several virulence determinants were located within ICE Kp12 . Regarding another class of MGEs, plasmids, replicons were detected in all isolates except one (KpA828), ranging from one to seven per isolate. The highest number of replicons was identified among the carbapenem-resistant and highly virulent isolates belonging to ST395, which had an identical profile of seven replicons belonging to the following incompatibility groups: Col(pHAD28), ColRNAI, IncFIB(K), IncFIB(pNDM-Mar), IncFII(K), IncHI1B(pNDM-MAR), and IncR. Importantly, some plasmids harbour a large array of AMR genes, which, if transferred, may instantly confer resistance to a multitude of AMs. In the KpA511 strain, for example, the IncFIB(K) plasmid pCAV1099-114 carried a 6588 bp resistance region encompassing the following 10 AMR genes: qnrS1 , aph(6)-Id , aph(3″)-Ib , sul2 , dfrA12 , aadA2 , qacEdelta1 , sul1 , mrx , and mphA . Prophage sequences were detected in all the sequenced isolates. The isolates belonging to ST395 contained no CRISPR-Cas sequences and were characterised not only by a high number of plasmid replicons but also by a high number of prophages. All these isolates, except one (KpA769), shared an identical profile of the following six prophages: Edward_GF_2, Escher_HK639, Klebsi_ST147_VIM1phi7.1, Klebsi_ST512_KPC3phi13.2, Klebsi_3LV2017, and Salmon_SEN34. Among the isolates in our collection, all of these prophages, except one (Escher_HK639), were detected only in the ST395 isolates, suggesting the clonal dissemination of these prophages with their hosts, without horizontal transfer events to representatives of other STs. In nine prophage sequences, we detected AMR genes, including six genes conferring resistance to first-line drugs such as beta-lactams. One ST395 isolate, KpA769, contained a prophage harbouring two of these beta-lactam resistance genes among a total of eight AMR genes located in its prophage region.
This suggests that prophages in K. pneumoniae may serve as significant vehicles for the horizontal dissemination of AMR genes. Interestingly, the same prophages may carry different AMR gene loads in hosts belonging to different STs. The Escher_RCS47 (NC_042128) prophage, for example, carried qacEdelta1 , dfrA7 , aph(3′)-Ia , and catA1 in KpA13 (ST39) but dfrA14 and bla CTX-M-15 in KpA250 (ST29). This again suggests a scenario of clonal dissemination of prophage sequences with their hosts belonging to different STs. Finally, although large-scale database analyses may suggest the presence of virulence genes in K. pneumoniae prophages , we did not detect any virulence genes in the prophage sequences of our strains. Other MGEs, such as ICEs, may be largely responsible for the horizontal dissemination of virulence genes. Thus, our results revealed the circulation of international high-risk MDR clones of K. pneumoniae in Armenia. They are associated with an elevated risk of treatment failure due to difficult-to-treat AMR phenotypes and pose a considerable threat to patients, especially in vulnerable cohorts such as children. Particularly problematic are the clonally related NDM-1 carbapenemase-producing XDR and ESBL-producing MDR K. pneumoniae ST395 strains. They display a high virulence potential, harbour multiple plasmid replicons and prophages, and are associated with an increased risk of nosocomial infections. Especially concerning is the finding of carbapenem-resistant XDR strains in infected children. While some non-carbapenem antimicrobials, such as colistin, are used to treat carbapenem-resistant infections, their use is typically restricted to adult patients because of side effects. A further restriction is that colistin and tigecycline are not licensed for use in Armenia. Regrettably, resistance to these antibiotics is on the rise, further complicating treatment options for carbapenem-resistant infections ; indeed, we found intermediate resistance to these antibiotics in three of our K. pneumoniae isolates. Despite these challenges, recent research efforts have resulted in the development of several new drugs and combination therapies that demonstrate good efficacy against carbapenem-resistant bacteria . These promising agents include ceftazidime-avibactam, cefiderocol, ceftolozane-tazobactam, imipenem-cilastatin-relebactam, meropenem-vaborbactam, plazomicin, and eravacycline. Unfortunately, these drugs are currently not available for clinical use in Armenia. The prevalence of international high-risk MDR clones among the clinical isolates of K. pneumoniae in Armenia requires further monitoring and control measures. One limitation of the current analysis, however, is the lack of sufficient genomic data from neighbouring countries with which active travel interaction is maintained, such as Georgia and Iran. Nevertheless, our findings underscore the pressing need for genomic surveillance of K. pneumoniae infections of epidemiological significance in the country and beyond. Such surveillance efforts are crucial for improving AM therapy strategies and for identifying intervention points to limit the dissemination of MDR K. pneumoniae .
4. Materials and Methods
4.1. Human Isolates of K. pneumoniae
This study was performed with the use of a collection of K. pneumoniae strains isolated from patients in three hospitals in Armenia between 2018 and 2024. A total of 48 non-duplicate K. pneumoniae isolates were recovered from stool (17), urine (16), throat (10), endotracheal tube (3), eye (1), and wound fluid (1) samples. The majority of these isolates, 38 (79.17%), were obtained from paediatric patients, and the remaining 10 (20.83%) from adults. Clinical and microbiological data of these patients were collected and anonymised. The gender distribution was as follows: 25 males (52.08%) and 23 females (47.92%). A total of 37 (77.08%) isolates were collected from patients who had not taken any medications, including antibiotics, before hospital admission. The remaining 11 (22.92%) strains were isolated from patients receiving treatment with third-generation cephalosporins; among these, three isolates were from the endotracheal tubes of paediatric patients, and eight were from the urine of adult patients. Of the 48 patients in this study, 31 (64.58%) were from Yerevan and 17 (35.42%) from the regions. Additional information regarding the patients and K. pneumoniae isolates is presented in . Ethical Statement. The study protocol was approved by the Ethics Committee of the Institute of Molecular Biology NAS RA (IORG number 0003427, IRB/IEC: 00004079); protocol code: Approval 01/2017, date of approval: 14 June 2017; protocol code: Approval 05/23, date of approval: 31 October 2023.
4.2. Antimicrobial Susceptibility Testing
All 48 K. pneumoniae isolates were tested for susceptibility to 22 individual antibiotics covering 12 different classes. The SOPs were strictly followed, in accordance with the guidelines of the Clinical and Laboratory Standards Institute (CLSI) for standard disc diffusion assays . Mueller–Hinton agar (Liofilchem ® s.r.l., Roseto degli Abruzzi, Italy) was used for this assay, and the bacterial inoculum was adjusted to the equivalent of a 0.5 McFarland standard. The following AM discs (Liofilchem ® s.r.l., Roseto degli Abruzzi, Italy) were used: amikacin (30 µg), amoxicillin-clavulanic acid (20 µg/10 µg), ampicillin (10 µg), ampicillin-sulbactam (10 µg/10 µg), aztreonam (30 µg), cefepime (30 µg), cefoxitin (30 µg), ceftazidime (30 µg), ceftriaxone (30 µg), chloramphenicol (30 µg), ciprofloxacin (5 µg), gentamicin (10 µg), imipenem (10 µg), meropenem (10 µg), piperacillin-tazobactam (100 µg/10 µg), tetracycline (30 µg), ticarcillin-clavulanate (75 µg/10 µg), tobramycin (10 µg), and trimethoprim-sulfamethoxazole (1.25 µg/23.75 µg). The results of susceptibility testing were interpreted based on the CLSI criteria . The MIC for azithromycin was determined by the agar dilution method according to the CLSI standards . For the determination of the MIC for colistin, the broth disc elution method was used, according to the CLSI recommendations . The MIC for tigecycline (Sigma–Aldrich, St. Louis, MO, USA) was determined using the broth dilution method, and the values were assessed in accordance with the European Committee on Antimicrobial Susceptibility Testing (EUCAST) criteria: ≤0.5 mg/L was considered susceptible and >0.5 mg/L resistant . The ESBL-producer phenotype was identified by the disc diffusion test using cefotaxime and ceftazidime with and without clavulanic acid, according to the CLSI guidelines . Escherichia coli strains ATCC 25922 and ATCC 35218 were used for quality control. Isolates resistant to representatives of at least three classes of AMs were considered MDR, and isolates non-susceptible to ≥1 AM agent in all but ≤2 classes of AMs were considered XDR .
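The MDR and XDR definitions above translate into a simple rule over per-class resistance calls, sketched below with an illustrative isolate profile (the class names and calls are examples, not data from this study):

```python
def classify(resistance: dict, total_classes: int = 12) -> str:
    """MDR: resistant to >= 3 AM classes; XDR: susceptible to <= 2 classes."""
    resistant = sum(resistance.values())
    susceptible = total_classes - resistant
    if resistant >= 3 and susceptible <= 2:
        return "XDR"
    return "MDR" if resistant >= 3 else "non-MDR"

# Resistance to 10 of the 12 tested classes leaves only 2 susceptible -> XDR.
calls = {c: True for c in [
    "penicillins", "cephalosporins", "carbapenems", "monobactams",
    "aminoglycosides", "fluoroquinolones", "tetracyclines", "phenicols",
    "folate pathway antagonists", "macrolides"]}
calls.update({"polymyxins": False, "glycylcyclines": False})
print(classify(calls))  # -> XDR
```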
Bacteriophage susceptibility of the K. pneumoniae clinical isolates was assessed with the “streak assay” . The commercial bacteriophage preparations “Bacteriophage Klebsiella pneumoniae Purified” (BKpP) and “Bacteriophage Klebsiella Polyvalent Purified” (BKPP) (SPA “Microgen”, Moscow, Russia) were used.
The “string test” to determine the HMV phenotype of K. pneumoniae isolates was performed as described previously . The isolates were inoculated on agar plates and incubated at 37 °C overnight. HMV isolates were identified by the formation of a viscous string of at least 5 mm from the colonies on the agar surface to an inoculating loop.
Total bacterial DNA samples for ERIC-PCR analysis were isolated by the boiling lysate method and frozen at −20 °C until further analysis. For whole genome sequencing (WGS), bacterial DNA samples were extracted using the UltraClean ® Microbial DNA Isolation Kit (MO BIO Laboratories Inc., San Diego, CA, USA) according to the manufacturer's recommendations. DNA samples were stored in 10 mM Tris buffer, without EDTA, at −20 °C.
The primers used for ERIC-PCR were ERIC-1R (5′-ATGTAAGCTCCTGGGGATTCAC-3′) and ERIC-2 (5′-AAGTAAGTGACTGGGGTGAGCG-3′) (Integrated DNA Technologies BVBA, Leuven, Belgium) . PCR was performed as described previously , with some modifications. The PCR conditions were as follows: initial denaturation at 94 °C for 4 min, followed by 35 cycles of denaturation at 94 °C for 1 min, primer annealing at 52 °C for 1 min, and extension at 72 °C for 4 min, with a final extension at 74 °C for 10 min. The amplified products were separated by gel electrophoresis in 1.5% agarose. HyperLadder™ 1 kb (Bioline, Memphis, TN, USA) was used as a molecular weight marker. The amplicon patterns generated by ERIC-PCR were analysed with the gel analysis software GelAnalyzer 19.1 ( www.gelanalyzer.com ). After normalisation and pattern alignment, a dendrogram showing the amplicon similarity among isolates was generated using the Dice coefficient and the unweighted pair group method with arithmetic average (UPGMA) algorithm for cluster analysis ( http://insilico.ehu.eus/dice_upgma , accessed on 20 September 2024).
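For readers who want a scriptable equivalent of the Dice/UPGMA step, the sketch below reproduces the same clustering logic with SciPy; the band-presence matrix is a made-up toy example, not the study's gel data.

```python
# Toy re-implementation of the Dice/UPGMA clustering of ERIC-PCR patterns.
# Rows = isolates, columns = band positions (True = band present).
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage

bands = np.array([
    [1, 0, 1, 1, 0, 1],   # isolate A
    [1, 0, 1, 1, 0, 0],   # isolate B
    [0, 1, 0, 1, 1, 0],   # isolate C
], dtype=bool)

# SciPy's 'dice' metric is a dissimilarity: 1 - Dice similarity coefficient.
distances = pdist(bands, metric="dice")
# UPGMA corresponds to average linkage on the condensed distance matrix.
tree = linkage(distances, method="average")
print(tree)  # scipy.cluster.hierarchy.dendrogram(tree) plots it (needs matplotlib)
```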
WGS of 21 K. pneumoniae isolates in this study was provided by MicrobesNG ( https://microbesng.com ). Two isolates from 2018 and 2019 (KpA500 and KpA6101) were sequenced on the Illumina HiSeq 2500, and the other 19 isolates were sequenced on the Illumina NovaSeq 6000 platform. Sequencing was performed with 2 × 250 bp paired-end reads at 30× coverage. Reads were adapter-trimmed using Trimmomatic 0.30 with a sliding-window quality cutoff of Q15. Contigs were annotated using Prokka 1.11 . Whole genome sequences of the K. pneumoniae isolates are available in the NCBI database under BioProject PRJNA1141898. Accession numbers for individual isolates are listed in .
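The trimming and annotation steps could be driven from a small script such as the hedged sketch below. The file names, the adapter file and the 4-base window size are assumptions, since the text specifies only Trimmomatic 0.30 with a Q15 sliding-window cutoff and annotation with Prokka 1.11.

```python
# Hedged sketch of the read-trimming and annotation steps described above.
# Input/output names and the ILLUMINACLIP adapter file are placeholders.
import subprocess

subprocess.run([
    "java", "-jar", "trimmomatic-0.30.jar", "PE",
    "reads_R1.fastq.gz", "reads_R2.fastq.gz",   # paired-end input
    "R1_paired.fq.gz", "R1_unpaired.fq.gz",     # forward output
    "R2_paired.fq.gz", "R2_unpaired.fq.gz",     # reverse output
    "ILLUMINACLIP:adapters.fa:2:30:10",         # adapter trimming
    "SLIDINGWINDOW:4:15",                       # assumed 4-bp window, mean quality >= Q15
], check=True)

subprocess.run([
    "prokka", "--outdir", "annotation", "--prefix", "isolate",
    "contigs.fasta",                            # assembled contigs
], check=True)
```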
The general information on the genomes of our K. pneumoniae isolates was obtained using the Pathogenwatch resource ( https://pathogen.watch , accessed on 20 September 2024) and the BIGSdb-Pasteur database ( https://bigsdb.pasteur.fr/klebsiella/ , accessed on 20 September 2024). Assignment of the isolates to STs and cgMLST was performed using the BIGSdb-Pasteur database ( https://bigsdb.pasteur.fr/klebsiella/ , accessed on 20 September 2024). The capsule (K) type and O serotype of the K. pneumoniae isolates were identified using the Kaptive tool ( https://kaptive-web.erc.monash.edu , accessed on 20 September 2024). Virulence scores by Kleborate were obtained using the Pathogenwatch resource ( https://pathogen.watch , accessed on 20 September 2024). In silico prediction of known or potential virulence factors was performed using the Virulence Factor Database (VFDB) ( http://www.mgc.ac.cn/cgi-bin/VFs/v5/main.cgi , accessed on 20 September 2024). Antibiotic resistance genes were identified using the Resistance Gene Identifier (RGI) tool in the Comprehensive Antibiotic Resistance Database (CARD) ( https://card.mcmaster.ca/analyze/rgi , accessed on 20 September 2024), the ResFinder v.4.6.0 tool ( http://genepi.food.dtu.dk/resfinder , accessed on 20 September 2024), and the BIGSdb-Pasteur database ( https://bigsdb.pasteur.fr/klebsiella/ , accessed on 20 September 2024). Detection of plasmid replicons and determination of incompatibility groups were performed using the PlasmidFinder 2.1 tool ( https://cge.food.dtu.dk/services/PlasmidFinder/ , accessed on 20 September 2024). Identification of mobile genetic elements and their linkage with AMR genes was performed with MobileElementFinder (v1.0.2) ( https://cge.food.dtu.dk/services/MobileElementFinder , accessed on 20 September 2024). Identification and annotation of prophage sequences within the bacterial genomes were performed using the PHASTEST web server ( https://phastest.ca/ , accessed on 20 September 2024). CRISPR sequences were identified using the Pathosystems Resource Integration Center (PATRIC) ( https://www.patricbrc.org , accessed on 20 September 2024). Other genes in contigs were analysed using the BLAST server ( http://blast.ncbi.nlm.nih.gov/Blast.cgi , accessed on 20 September 2024). The Average Nucleotide Identity (ANI) value was determined using the ANI Calculator tool ( https://www.ezbiocloud.net/tools/ani , accessed on 20 September 2024).
Whole-genome-based phylogenetic trees of the K. pneumoniae isolates were obtained using the Type Strain Genome Server (TYGS) platform ( https://tygs.dsmz.de , accessed on 20 September 2024) . Pairwise comparison of genomes was performed using the Genome BLAST Distance Phylogeny (GBDP) method, and intergenomic distances were inferred as described earlier . Phylogenetic trees were constructed with FastME 2.1.6.1 . The Pathogenwatch resource ( https://pathogen.watch , accessed on 20 September 2024) was used to generate phylogenetic trees of K. pneumoniae strains based on core genome distances. The dendrograms were constructed based on scaled pairwise scores for assemblies using the neighbour-joining method (APE package ). The phangorn package was used to obtain the midpoint-rooted tree. Tree annotation and visualisation were performed using iTOL v.6 ( https://itol.embl.de/ , accessed on 20 September 2024).
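As an offline counterpart to the distance-based tree building described above, a neighbour-joining tree with midpoint rooting can be produced with Biopython; the pairwise distances below (and the third isolate name) are invented for illustration.

```python
# Neighbour-joining tree with midpoint rooting, mirroring the approach
# described above; the pairwise distances are toy values.
from Bio import Phylo
from Bio.Phylo.TreeConstruction import DistanceMatrix, DistanceTreeConstructor

# Lower-triangular matrix (diagonal included) of pairwise genome distances.
dm = DistanceMatrix(
    names=["KpA500", "KpA6101", "KpX"],
    matrix=[[0], [0.12, 0], [0.30, 0.28, 0]],
)

tree = DistanceTreeConstructor().nj(dm)  # neighbour-joining
tree.root_at_midpoint()                  # midpoint rooting
Phylo.draw_ascii(tree)
```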
Two-tailed p-values from Fisher's exact test were calculated using the online GraphPad QuickCalcs resource ( http://www.graphpad.com/quickcalcs/contingency1.cfm , accessed on 20 September 2024) to evaluate statistical differences between the compared groups. p-values ≤ 0.05 were considered significant.
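The same two-tailed test can be reproduced offline with SciPy, as in the sketch below; the 2x2 counts are placeholders for any pair of compared groups.

```python
# Two-tailed Fisher's exact test on a 2x2 contingency table, equivalent to
# the GraphPad QuickCalcs computation; the counts are invented placeholders.
from scipy.stats import fisher_exact

table = [[12, 5],    # group 1: trait present / absent
         [3, 14]]    # group 2: trait present / absent
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"OR = {odds_ratio:.2f}, two-tailed p = {p_value:.4f}")
# p-values <= 0.05 are considered significant, as in the study.
```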
We performed a comprehensive analysis of K. pneumoniae pathotypes in Armenia, and the main conclusions are:

- The majority (64.58%) of clinical K. pneumoniae isolates are represented by XDR and MDR strains, with resistance to five to ten AM classes; only 35.42% of the isolates are resistant to fewer than five AM classes.
- Phage therapy could be a viable option as an alternative/adjunct therapy for XDR and MDR isolates of K. pneumoniae.
- Epidemiologically, the most problematic K. pneumoniae lineages are represented by international high-risk MDR clones belonging to ST395, ST15, and ST307.
- The XDR and MDR strains demonstrate a high virulence potential, with virulence determinants ranging from capsule polysaccharides to siderophores to regulators of the mucoid phenotype.
- In part, AMR mechanisms in K. pneumoniae are non-specific and driven by mutations in the porin genes, which reduce permeability to AMs, and by mutations in the regulators of efflux pumps, which allow overexpression of drug efflux pumps such as AcrAB. These mechanisms are responsible for AMR in strains with an apparent absence of specific AMR genes.
- K. pneumoniae isolates possess an extensive range of MGEs, ranging from ICEs to plasmids to prophages, especially in ST395 strains. Many AMR and virulence genes are located on MGEs, which may allow rapid evolution towards MDR and hypervirulent traits in these bacteria.
- The overall situation with K. pneumoniae pathotypes in Armenia dictates the urgent need for genomic surveillance of this infection, especially in light of the emergence of global hypervirulent STs such as hvKp ST23 ( https://www.who.int/emergencies/disease-outbreak-news/item/2024-DON527 , accessed on 20 September 2024).
Use of Urodynamics by Gynecologists and Urologists in Brazil

Urodynamic studies (UDSs) are a set of tests that evaluate the storage and emptying of urine, and they are widely used by gynecologists and urologists in the management of urinary incontinence (UI) and to assess the function of the lower urinary tract. The objective of UDSs is to reproduce the patient's symptoms and make the pathophysiological correlation, identifying the factors that contribute to urinary tract dysfunction. The International Continence Society (ICS) recommends performing at least three stages of this exam, which are flowmetry, cystometry, and the pressure-flow study. The approach to female UI is divided into initial and specialized. The initial approach should include: anamnesis, physical examination with the stress test, urinalysis, urinary diary, and assessment of residual urinary volume. Recent guidelines suggest that, when conservative treatment fails or when UI is defined as complicated, additional tests are needed, UDSs being the main one. Patients classified as having complicated UI are those with urine leakage associated with prolapse or urgency, patients with bladder-emptying symptoms, those undergoing radical pelvic surgery or radiotherapy, those who have recurrences, and patients in whom the initial approach did not define the clinical diagnosis. Despite their importance as functional tests, the role of UDSs in evaluating female patients with UI remains under debate regarding the situations in which they should be indicated. In order to know the indications for UDSs made by gynecologists and urologists in Brazil, where there is no field of expertise in urogynecology, we performed a survey. The objectives of the present study were to verify whether UDSs are routinely used in the conservative and surgical approaches to female UI and in what other clinical situations they are requested by the participants, and to compare the responses of gynecologists and urologists.

The present study is an opinion survey aimed at Brazilian gynecologists and urologists and applied through a semistructured questionnaire. The study was approved by the Ethics in Research Committee of Universidade Federal de Minas Gerais (under CAAE: 34191120.5.0000.5149), and was carried out between August 2020 and January 2021. The questionnaire was sent by email, by the Federação Brasileira das Associações de Ginecologia e Obstetrícia (Brazilian Federation of Gynecology and Obstetrics Associations, Febrasgo, in Portuguese) and the Sociedade Brasileira de Urologia (Brazilian Society of Urology, SBU, in Portuguese), to 30 thousand gynecologists and urologists; before answering, those who were willing to participate marked the consent form and were not identified after filling out the questionnaire. The questionnaire consisted of questions about clinical practice and requests for UDSs in the approach to female UI, and was developed by two specialists in gynecology and urology. The main objective was to verify the percentage of participants who routinely requested UDSs before starting the conservative or surgical treatments of female UI.
The other objectives were: to confirm whether UDSs are requested before the surgical treatment of female UI; to assess the main clinical conditions for which the participants request UDSs; to assess the availability of UDSs in the participants' location; to identify whether the surgical treatment for UI was based on the pressure of urine leakage; and to assess whether there was a difference in UDS indications between gynecologists and urologists.

The sample calculation was not performed because this is an opinion poll. The numerical variables are expressed in terms of their values of central tendency and variability, considering the nature of their distribution. The categorical variables are expressed in terms of absolute and relative frequencies. For the descriptive analysis of the variables with normal distribution, the results were expressed as means ± standard deviations. To compare the responses of gynecologists and urologists, the Student t-test was used after the Levene test to verify the homogeneity of variances by group. For the categorical variables, the Pearson chi-squared test (χ2) and the Fisher exact test were also used. In cases of a significant association between two variables of interest, the odds ratio (OR) was evaluated. In all statistical calculations, the confidence level was set at 0.95. The statistical analysis was performed using the Statistical Package for the Social Sciences (IBM SPSS Statistics for Windows, IBM Corp., Armonk, NY, United States) software, version 21.0.
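The comparison logic above (Levene's test for homogeneity of variances, then the appropriate form of the t-test) can be sketched in Python, although the survey itself used SPSS; the example arrays are invented.

```python
# Sketch of the variance check followed by the independent-samples t-test.
# Levene's test decides between the Student (equal variances) and Welch
# forms; the years-of-experience values below are invented examples.
from scipy.stats import levene, ttest_ind

years_gyn = [21, 18, 25, 30, 12, 22]   # gynecologists
years_uro = [17, 15, 20, 19, 14, 18]   # urologists

equal_var = levene(years_gyn, years_uro).pvalue > 0.05
t_stat, p_value = ttest_ind(years_gyn, years_uro, equal_var=equal_var)
print(f"t = {t_stat:.2f}, p = {p_value:.3f} (equal variances: {equal_var})")
```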
Of the 30 thousand questionnaires sent, only 329 (1.1%) were filled out. Of those 329 participants, 238 (72.3%) were gynecologists and 91 (27.7%) were urologists. Regarding the years of experience in the specialty, the average was 21.2 years among the gynecologists, most of whom were female (60.9%), and 17.5 years among the urologists (93.4% of whom were male), with statistically significant differences (p = 0.023 for years of professional experience, and p = 0.001 for gender). There was no statistically significant difference regarding professional qualification (postgraduate courses and specialization). As for the location where they work, most gynecologists worked in the capital city of their states (55.5%), but only 39.6% of urologists worked in the capital city (p = 0.023) . Urodynamic studies were available to the vast majority of participants (98.7% of gynecologists and 100% of urologists); 73% of gynecologists and 88% of urologists indicate UDSs in the preoperative period of anti-incontinence surgeries, with no statistical difference between the two groups; 53.4% of gynecologists and 62.6% of urologists do not indicate UDSs in the preoperative period of surgeries for genital prolapse, with no statistical difference between groups; and most gynecologists (73.5%) and urologists (86.6%) do not request UDSs before starting the conservative treatment of UI . When asked about UDSs in cases of mixed incontinence, 54.2% of gynecologists and 52.7% of urologists indicated them. There was a statistically significant difference regarding the indication of UDSs in the approach to idiopathic overactive bladder (OAB): urologists omitted this indication less frequently than gynecologists (3.3% vs. 11.8%) . Most urologists perform UDSs themselves (71.4%), as opposed to gynecologists (27.7%), a statistically significant difference (p = 0.001). Among the participants who perform UDSs, most use two urethral catheters, use a device made in Brazil, and perform the three main exams that are part of UDSs (uroflowmetry, cystometry, and the pressure-flow study). When we evaluated the protocols for the performance of UDSs, we only observed a difference regarding the use of prophylactic antibiotics, which was more frequent among urologists. The main piece of data from the UDSs used to indicate anti-incontinence surgery was the pressure of urinary loss, both for gynecologists and urologists .

Most Brazilian gynecologists and urologists participating in the present study do not request UDSs before starting the conservative treatment of UI; this clinical approach is in accordance with the main national and international protocols and guidelines, which show there is no evidence that performing UDSs before conservative treatment results in lower rates of subsequent UI. However, gynecologists indicate UDSs more frequently in this situation than urologists (OR = 2.4). There is a consensus that these tests should not be indicated in the initial assessment of uncomplicated female UI. On the other hand, most participants request UDSs before the surgical treatment of female UI, with no statistical difference between gynecologists and urologists (73% and 88%, respectively). Although the indications for UDSs are controversial, their performance can be waived preoperatively in cases of uncomplicated UI, as shown in the study by Nager et al. (2012), who did not observe significant differences in the surgical outcomes of patients who did or did not undergo the exams. Routine UDSs in the preoperative period of uncomplicated stress UI (SUI) are not recommended by Febrasgo, the European Association of Urology (EAU), England's National Institute for Health and Care Excellence (NICE), or the American College of Obstetricians and Gynecologists (ACOG). Other authors point out that there are situations in which UDSs provide additional information to the clinical assessment, even in cases of uncomplicated SUI, and that these exams should be requested mainly in cases of suspected bladder-emptying dysfunction. When listing the main reasons to request UDSs preoperatively, the participants responded: in order to obtain authorization to use the synthetic sling (both in the private and public health care systems), because it is part of their institution's protocol, in order to share decisions with the patient, and due to legal concerns.

The other clinical indications for UDSs evaluated were genital prolapse, OAB, and mixed incontinence. Regarding genital prolapse, 53% of gynecologists and 62% of urologists request UDSs preoperatively, with no statistical difference between groups. It was not possible to identify the main reason for this request, but it may be related to the investigation of occult UI and to the indication of anti-incontinence surgery in the same surgical act. In these cases, UDSs would be indicated for patients complaining of urine loss concomitant with prolapse or for the diagnosis of occult UI. Most gynecologists and urologists indicate UDSs in the initial approach to OAB (88.2% and 96.7%, respectively), and there was a statistically significant difference between the groups (p = 0.001). There is a greater chance that urologists will request UDSs in this situation compared to gynecologists (OR = 3.9).
There is no indication for UDSs in the initial approach to idiopathic OAB. This finding suggests the need to review the participants' care protocols for idiopathic OAB. Although most gynecologists (54.2%) and urologists (52.7%) indicate UDSs in the management of mixed UI, we observed that more than 40% of the participants do not request them for this clinical condition. The fact that there was no specific question about mixed urinary incontinence in the questionnaire may have contributed to this finding, and there may be a correlation with the fact that most participants request UDSs for OAB. Overall, UDSs are available to most participants. Comparing the two groups, there are more women in gynecology (60.9%) than in urology (6.6%). Most participants were gynecologists. Despite the voluntary random sample, gynecologists probably treat more women with UI than urologists and therefore had greater participation in the questionnaire. However, urologists (71%) perform more UDSs than gynecologists (27%). Ideally, cystometry and the pressure-flow study should be performed with a double-lumen catheter, but only a minority of participants use this catheter (21.4% of gynecologists and 24.6% of urologists), probably because it is 15 times more expensive than the two urethral catheters. The recording of leak pressure during cystometry was the main piece of data in the UDSs used to indicate anti-incontinence surgery by most participants (73.1% of gynecologists and 74.7% of urologists). Urinary tract infection is the most common complication after UDSs, estimated at 8.4% of cases, and the main risk factors are advanced age, diabetes mellitus, genital prolapse, previous anti-incontinence surgery, and recent urinary tract infection. The ACOG does not recommend antibiotic prophylaxis for UDSs, and a recent systematic review concluded that there are insufficient studies to recommend its routine use. However, Cameron et al. recommend a single oral dose of antibiotics before UDSs for women with neurogenic dysfunction, high postvoiding residual volume, asymptomatic bacteriuria, immunosuppression, age over 70 years, and those using an indwelling urinary catheter or intermittent catheterization. The use of prophylactic antibiotics before UDSs was indicated by 36.4% of gynecologists and 56.9% of urologists in the present study.

The main limitations of the present study were not classifying UI as complicated or uncomplicated for each question in the questionnaire, and not correlating the request for UDSs in cases of genital prolapse with the investigation of occult UI. Another important limitation of our study was the participation of less than 10% of the gynecologists and urologists registered in Brazil. These participants are probably more interested in female UI, especially the gynecologists, who accounted for the majority of the sample. This limits the extension of our conclusions to all gynecologists and urologists in Brazil. The relevance of the present study lies in the characterization of the main indications for UDSs in this sample of Brazilian gynecologists and urologists.

In conclusion, most Brazilian gynecologists and urologists participating in the present study do not request UDSs before the conservative treatment of UI, in accordance with national and international guidelines, and often request these exams before the surgical treatment of female UI. The indication for these exams in the initial approach to idiopathic OAB should be reviewed by the participants.
Comparison of culture, microscopic smear and molecular methods in diagnosis of tuberculosis

Tuberculosis (TB) is a chronic disease caused by the bacterium Mycobacterium tuberculosis (MTB). TB is spread from person to person through the air and is the most common cause of death from infectious disease. In 2016, 6.3 million new cases of TB were reported (up from 6.1 million in 2015), equivalent to 61% of the estimated incidence of 10.4 million; the latest treatment outcome data show a global treatment success rate of 83%, similar to recent years . In 2016, a total of 12,417 TB cases were reported in Turkey, with an incidence rate of 14 per 100,000 . Clinicians evaluate patients with suspected TB by medical history, physical examination, chest radiography and assessment of the patients' symptoms . TB is diagnosed by detecting MTB bacteria in a clinical specimen. Culture remains the gold standard for laboratory confirmation of TB disease, and grown bacteria are required to perform drug-susceptibility testing. The GeneXpert MTB/RIF (GX) assay (Cepheid, Sunnyvale, California, USA) is a new molecular test for TB which diagnoses MTB by detecting the presence of MTB bacteria, as well as testing for resistance to the drug rifampin . In this study, we retrospectively evaluated the performance of solid and liquid culture media, acid-fast bacilli (AFB) smear testing and the GeneXpert method for respiratory and non-respiratory specimens for the diagnosis of TB.
Clinical specimens. A retrospective study was conducted from January 2016 to June 2017 at the Ataturk Research and Training Hospital, Department of Medical Microbiology, Izmir, Turkey. Respiratory and non-respiratory clinical specimens were collected from patients with suspected MTB or nontuberculous mycobacterial (NTM) infection. A total of 790 specimens were assessed by solid (Löwenstein–Jensen) and liquid (Bactec MGIT 960) culture media and the GX assay. Of the 790 specimens, 483 were respiratory (sputum, bronchoalveolar lavage, tracheal aspirate) and 307 were non-respiratory (urine, pleural fluid, ascites, tissue biopsy, abscess, bile fluid, cerebrospinal fluid) specimens.

Laboratory methods. Clinical specimens were decontaminated using the N-acetyl-L-cysteine sodium hydroxide (NALC-NaOH) method. After the centrifugation step, the sediment was resuspended in 1 to 1.5 ml of sterile phosphate buffer (pH 6.8). This suspension was used for inoculation of the culture media. A smear of the processed sediment was prepared and examined for the presence of AFB. Liquid culture was based on fluorometric detection of growth: Mycobacteria Growth Indicator Tube (MGIT) tubes were inoculated with 0.5 ml of the processed specimen, and the tubes were incubated in the MGIT 960 instrument at 37 °C. For solid culture, Löwenstein–Jensen (LJ) medium (Salubris, Turkey) was inoculated with 0.25 ml of the processed suspension for each specimen and incubated at 37 °C. For tubes identified as positive, a smear of a sample from the tube was prepared for examination for AFB. All smears were stained by the Kinyoun method and examined with a light microscope. MTB strains isolated from culture were identified using the MGIT TBc ID method (MPT 64: Becton Dickinson, Sparks, Maryland, USA). After identification of MTB complex strains, drug susceptibility testing (DST) was performed using MGIT SIRE (Becton Dickinson, Sparks, Maryland, USA) according to the manufacturer's recommendations. Tests were performed using the final concentrations of 83 μg/ml streptomycin (STR), 8.3 μg/ml isoniazid (INH), 83 μg/ml rifampin (RIF), and 415 μg/ml ethambutol (EMB). For the GX MTB/RIF assay, the procedure was performed following the manufacturer's recommendations: decontaminated samples were mixed with a sample reagent containing sodium hydroxide and isopropanol alcohol (GX reagent), 2 ml of each sample was transferred to a test cartridge and inserted into the GX platform, and results were available 1 hour and 55 minutes later.
A total of 790 specimens with suspected TB infection were assayed by liquid and solid culture, smear microscopy, the GX method and conventional drug susceptibility testing. The results of culture, smear microscopy, and GX for all specimens are presented in . Of the 790 specimens, 32 (4.05%) were culture-positive for MTB. Of the 32 culture-positive specimens, 24 (3.03%) were respiratory and 8 (1.01%) were non-respiratory. Two specimens were culture-positive for nontuberculous mycobacteria (NTM). These two isolates were not detected by the molecular method, because only MTBC strains can be detected with the GX assay; they were therefore excluded from the evaluation. According to the culture results, the overall sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV) of GX and smear microscopy are shown in . Thirty-two MTB isolates were tested for RMP resistance by conventional drug susceptibility testing. Twenty-nine (90.6%) were found to be susceptible to RMP, while three (9.4%) were resistant to RMP. All three samples identified as resistant by conventional methods were also found to be resistant by the GX method.
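The accuracy figures above follow from a 2x2 table against culture as the gold standard; the helper below makes the arithmetic explicit. The true-negative count is reconstructed from the totals for illustration only.

```python
# Sensitivity/specificity/PPV/NPV against culture as the gold standard.
def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    return {
        "sensitivity": tp / (tp + fn),  # detected among culture-positives
        "specificity": tn / (tn + fp),  # excluded among culture-negatives
        "PPV": tp / (tp + fp),
        "NPV": tn / (tn + fn),
    }

# Smear microscopy in this study: 17 of 32 culture-positive samples were
# AFB-positive and no false positives were reported (tn assumed = 756).
print(diagnostic_metrics(tp=17, fp=0, fn=15, tn=756))
```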
Classic laboratory techniques for the diagnosis of tuberculosis, such as direct microscopy, are far from sensitive. Furthermore, cultures are time-consuming, require biosafety precautions and need trained laboratory personnel . Molecular techniques have substantially changed the field of tuberculosis diagnosis and have been proven to yield rapid results as well as being highly sensitive. Culture continues to be the gold standard for the diagnosis of TB, but isolation can take up to 6 weeks due to the slow growth rate of the organism . Smear microscopy to detect acid-fast bacilli in clinical specimens is a rapid and inexpensive test, although our study showed that the sensitivity of microscopic detection was 54% in respiratory samples and 50% in non-respiratory samples. GX, which proved to be a sensitive and rapid method compared to the other methods evaluated in this study, was more sensitive than smear testing in both respiratory (100 vs. 54%) and non-respiratory (87 vs. 50%) specimens. Ioannidis et al. analyzed 80 respiratory and 41 non-respiratory samples, and reported the sensitivity, specificity, PPV and NPV of the GX system for respiratory and non-respiratory samples as 90%, 94%, 93%, 91% and 100%, 91%, 50%, 100%, respectively. The GX system was found to be an advantageous technique for the identification of MTB, especially in smear-negative samples . Bunsow et al. performed a study including 290 respiratory and 305 non-respiratory specimens. They reported the sensitivity, specificity, PPV and NPV values of GX for respiratory specimens as 97%, 98%, 95% and 99%, respectively, and for non-respiratory specimens as 33%, 99%, 80% and 97%, respectively. The values for respiratory samples were higher than those of our study. The GX system was reported to be rapid and to give accurate results in identifying MTB, particularly in smear-positive respiratory specimens . Zeka et al. performed a study including 253 respiratory and 176 non-respiratory specimens. They found the sensitivity, specificity, PPV and NPV values of GX for respiratory and non-respiratory specimens to be 86%, 99%, 96%, 98% and 67%, 96%, 93%, 80%, respectively. They reported that the GX assay was a rapid and useful technique for the identification of MTB . Bilgin et al. performed a study including 243 respiratory and 684 non-respiratory specimens. The sensitivity, specificity, PPV and NPV values of GX for respiratory and non-respiratory samples were 100%, 98%, 87%, 100% and 71%, 98%, 71%, 98%, respectively. The GX method was reported to be a practical technique because it has high sensitivity and gives rapid results for the identification of MTB . Vadwai et al. performed a study of 547 non-respiratory specimens and found the sensitivity and specificity of GX to be 77% and 75%, respectively . In another study, Tortoli et al. evaluated 1476 non-respiratory specimens and reported the sensitivity and specificity of GX as 81% and 99%, respectively . Both studies concluded that NALC-NaOH decontamination could affect the quality of the specimens, reducing the sensitivity of GX for MTB detection. The main purpose of this study was to assess the effectiveness of the GX assay in testing AFB-negative specimens collected from patients with clinical signs highly suggestive of active TB.
The results of culture, smear microscopy and the GX assay in our study correlate with those reported by other studies when the effectiveness of the GX assay in detecting MTB bacilli in AFB-negative specimens is considered. In our study, with culture accepted as the standard, a total of five false-positive samples were detected: four from respiratory specimens and one from a non-respiratory specimen. Contamination in molecular methods is a consideration. In addition, specimens from treated patients may no longer contain live bacteria. Since live and dead bacilli cannot be discriminated by PCR methods, it is known that false positivity can be seen in patients with a history of MTB . In our study, seventeen AFB-positive samples were detected among the 32 culture-positive samples. The sensitivity and specificity of AFB were found to be 53% and 100%, respectively. Similar results have been found in other studies: in a study in Thailand, the sensitivity and specificity of the sputum AFB smear and the GeneXpert MTB/RIF assay were 48% and 84%, and 94% and 92%, respectively . Although AFB is effective in ruling out tuberculosis-negative patients, it is less effective for detection than GeneXpert. Thirty-two patients were diagnosed with TB in our hospital during this study period. We think that tuberculosis cases will increase due to immigration from the Middle East (especially Syria) and the frequent use of immunosuppressant therapy. In conclusion, early diagnosis is of great importance for the treatment of tuberculosis, and the GX system is an easy and helpful tool for rapid and reliable results with high specificity and sensitivity.
Adiposity is associated with a higher number of thyroid nodules and worse fine-needle aspiration outcomes

Over the past few decades, the prevalence of thyroid nodules worldwide has been increasing significantly. This increased rate is primarily attributed to the high availability and use of thyroid ultrasonography . Nevertheless, although thyroid nodules are benign in their majority, there is a 7–15% risk of malignancy, highlighting the importance of early and accurate detection and proper investigation . The global incidence of obesity has also risen dramatically over the past few years, affecting people of all ages. BMI, the most cited method of assessing obesity, has been linked to an increased risk of various cancers, including thyroid cancer . Multiple studies suggest that a higher body mass index (BMI) may be correlated with the prevalence of thyroid nodules. However, the association between adiposity and the number of thyroid nodules or the risk of thyroid malignancy remains elusive . This study aims to assess the impact of adiposity on thyroid nodules by examining the relationship between BMI and the ultrasonographic (US) and cytological characteristics of a cohort of 310 patients with thyroid nodules. We hypothesized that adiposity increases the number of thyroid nodules and the malignancy risk. Ultrasound and fine-needle aspiration (FNA) cytology results were evaluated to explore potential correlations between BMI and nodule features, including malignancy risk.

Study design and participants

This is a retrospective cohort study of 310 consecutive patients diagnosed with thyroid nodules and evaluated at the Thyroid & Endocrinology Center between 2020 and 2021. The Thyroid & Endocrinology Center is a referral clinic and teaching affiliate of the European University Cyprus School of Medicine. The data recorded were gender, age, weight, height, thyroid-stimulating hormone (TSH) levels, the number of nodules and the maximal diameter of the largest nodule. In cases where FNA was performed, cytology results were recorded. The sole inclusion criterion was the diagnosis of thyroid nodules, solitary or multiple. Patients with no nodules, patients with autoimmune thyroiditis and pregnant patients were excluded. All patients included were adults. Patients were divided into two groups according to their BMI: group 1, normal BMI < 25 kg/m²; and group 2, overweight and obese patients with BMI ≥ 25 kg/m². BMI was calculated using weight and height measurements at the first appointment, with the formula weight (kg) divided by height (m) squared . All patients underwent an initial comprehensive thyroid and neck ultrasound examination by an experienced endocrinologist. A GE Logiq E9 ultrasound system was used with an ML6-15 probe. Data regarding the size and the number of nodules in each patient were collected. A thyroid nodule was considered any discrete lesion, compared to the surrounding normal thyroid gland parenchyma, measuring a minimum of 0.2 cm. The sonographic patterns were classified based on the US features of the thyroid nodules, estimating the risk of malignancy according to the 2015 ATA guidelines . One hundred seventy-one nodules underwent US-guided FNA. The cytological diagnosis was categorized based on the Royal College of Pathologists' reporting . This study was approved by the Cyprus National Bioethics Committee (ΕΕΒΚ ΕΠ 2022.01.89). Data collection and analysis were anonymous, using codes for patients as a reference.
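The BMI computation and group assignment described above reduce to a couple of lines of code. Only the <25 / ≥25 kg/m² cutoff comes from the study; the helper names and example values below are ours.

```python
# BMI = weight (kg) / height (m) squared; group 1: BMI < 25, group 2: >= 25.
def bmi(weight_kg: float, height_m: float) -> float:
    return weight_kg / height_m ** 2

def bmi_group(weight_kg: float, height_m: float) -> int:
    return 1 if bmi(weight_kg, height_m) < 25 else 2

print(round(bmi(82, 1.70), 1))   # 28.4 kg/m^2
print(bmi_group(82, 1.70))       # 2 (overweight/obese group)
```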
Statistical analysis

The following parameters were examined regarding the impact and association between BMI and thyroid nodules: the number of nodules, the size of nodules and the FNA cytology results. The data collected are presented as mean and SD for numerical variables. Categorical variables are presented as absolute values and percentages. To compare the values of a continuous variable between two independent groups, we performed an independent-samples t-test. The Pearson chi-square test was used to compare categorical variables with multiple possible outcomes: gender, FNA results and the number of patients with solitary nodules. Differences in FNA cytology between group 1 and group 2 were further evaluated using multivariable analysis with multiple ordinal logistic regression, adjusting for gender and age. A two-sided P value <0.05 was considered statistically significant. The data were entered into an Excel worksheet, and the statistical analysis was done with the R software package ( https://www.r-project.org/ ).
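As an illustration of the multivariable step, an ordinal logistic regression of FNA category on BMI group, gender and age might be set up as below. The study itself used R; all values here are invented, and the toy sample is far too small for real inference, so the sketch only demonstrates the model setup.

```python
# Ordinal logistic regression of FNA cytology on BMI group, gender and age,
# mirroring the multivariable analysis described above (the study used R).
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

df = pd.DataFrame({
    "fna": pd.Categorical(
        ["Thy2", "Thy2", "Thy3", "Thy5", "Thy2", "Thy3", "Thy2", "Thy5"],
        categories=["Thy2", "Thy3", "Thy5"], ordered=True),
    "bmi_ge_25": [0, 1, 1, 1, 0, 0, 1, 1],
    "male":      [0, 1, 0, 1, 0, 1, 1, 0],
    "age":       [44, 51, 49, 56, 40, 47, 53, 62],
})

model = OrderedModel(df["fna"], df[["bmi_ge_25", "male", "age"]], distr="logit")
result = model.fit(method="bfgs", disp=False)
print(result.summary())
```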
The demographic and clinicopathological characteristics of the 310 patients with thyroid nodules according to BMI are presented in . The mean age in group 1 (BMI < 25) was 41.9 ± 14.3 years, ranging from 18 to 86 years, whereas in group 2 (BMI ≥ 25) the mean age was 51.2 ± 12.6 years, ranging from 20 to 80 (P < 0.01). There were more male patients in group 2 than in group 1. The two groups had no statistically significant difference in TSH levels. In group 1, 37 patients (29.37%) had a solitary nodule and 89 (70.63%) had multiple nodules. In group 2, 144 patients (78.26%) had multiple nodules and 40 (21.74%) had a solitary nodule. The mean number of nodules was 3.66 ± 1.93 in group 1 and 4.25 ± 2.42 in group 2 (P = 0.05). There was no statistically significant difference in the maximal nodule diameter between the two groups. In group 1, 64 patients (50.79%) had very low to low suspicion nodules, 39 (30.95%) had intermediate suspicion nodules, and 23 (18.25%) had nodules with high suspicion of malignancy. In group 2, 78 patients (42.39%) had low suspicion nodules, 66 (35.87%) had intermediate suspicion nodules and 40 (21.74%) had high suspicion sonographic patterns.

The clinicopathological and cytological characteristics of the 171 nodules that underwent US-guided FNA are presented in . The mean age in group 1 was 43.82 ± 12.28, whereas in group 2 it was 49.93 ± 12.20 (P < 0.01). There were more male patients in group 2 than in group 1. There were no statistically significant differences in TSH levels, the number of nodules, maximal nodule diameter or 2015 ATA sonographic patterns. In group 1, 55 nodules (90.16%) had Thy2 cytology, five nodules (8.20%) had Thy3 cytology, and one nodule (1.63%) had Thy5 cytology. In group 2, 86 nodules (78.18%) had Thy2 cytology, nine nodules (8.18%) had Thy3 cytology, and 15 nodules (13.64%) had Thy4–Thy5 cytology. A statistically significant difference was observed between the two groups in the FNA category Thy4–Thy5 (P = 0.04, Pearson's chi-square test). This difference remained statistically significant after adjusting for gender and age using an ordinal logistic regression model (P = 0.029).

In this study, we evaluated data from 310 patients to investigate the possible association between obesity and thyroid nodules. Our results showed that overweight and obese individuals (BMI ≥ 25 kg/m²) had a trend for more thyroid nodules compared to individuals with normal BMI. In addition, patients with BMI ≥ 25 kg/m² had worse FNA outcomes than patients with BMI < 25 kg/m², suggesting a positive correlation between obesity and the risk of developing thyroid malignancy. Multiple studies suggest a relationship between a higher BMI and thyroid nodules . A recent large-scale study by Xu et al.
found that BMI was correlated with a higher risk of thyroid nodules and that overweight individuals and those with higher central obesity had a significantly higher prevalence of multiple nodules compared with solitary thyroid nodules . Hu et al. associated thyroid nodules with higher BMI and other components of the metabolic syndrome, such as insulin resistance; the prevalence of thyroid nodules also increased with age and was significantly higher in women . Moon et al. unveiled a link between BMI and the occurrence of thyroid nodules, specifically in women . Song et al. showed that women with a BMI of 25 or more had an elevated risk of thyroid nodules . Kim et al. demonstrated the relationship between BMI and thyroid nodules in Korean women . In our study, there were more patients with intermediate and high suspicion sonographic pattern nodules in the overweight and obese group compared to patients with normal weight; however, this difference was not statistically significant. Lai et al. found that a higher BMI was associated with an augmented risk of thyroid nodules with highly suspicious sonographic patterns . An association between obesity and a taller-than-wide nodule shape was also suggested in women . In another study, severely obese individuals presented with increased hypoechogenicity and a higher frequency of thyroid nodules during ultrasound evaluation, but no significant difference was seen in the 2015 ATA and TI-RADS criteria . In addition, Zhao et al. found that overweight and obese individuals were at greater risk of multifocality than non-overweight individuals . In our study, overweight and obese patients had more suspicious and malignant cytology. Our results agree with those of Zhao et al., who found a correlation between obesity and thyroid cancer, the prevalence of obese patients being higher in the malignant population examined. However, Rotondi et al. and Ahmadi et al. found no association between obesity and differentiated thyroid carcinoma .

There are several mechanisms that can explain the association between obesity, thyroid nodules and cancer . Obesity is a chronic, low-grade inflammatory disease characterized by increased systemic inflammatory markers with a nonspecific immune response. These inflammatory factors act as signal mediators in peritumoral tissue and in the progression of tumor growth. The increase in adipose tissue leads to a rise in leptin synthesis, and this state of chronic inflammation augments the secretion of TNF and the cytokine IL-6, which contribute to cancer development, progression and metastasis by decreasing tumor suppressor gene expression and increasing oncogene expression . Obesity and the metabolic syndrome trigger the development of thyroid nodules by stimulating thyroid proliferation and angiogenesis through hyperinsulinemia, hyperglycemia and dyslipidemia. Liu and coworkers suggested that insulin resistance is related to the distribution and structure of thyroid blood vessels, which may promote thyroid nodule generation. Furthermore, insulin and insulin-like growth factor-1 (IGF-1) regulate the expression of thyroid genes and the proliferation and differentiation of thyroid cells. Thyroid cells may synthesize IGF-1 and express the IGF-1 receptor, with the expression levels being higher in the thyroid cells of nodules than in non-nodular thyroid cells. It was also observed that thyroid nodules decreased in volume after metformin administration . Our study's main limitation is its single-center retrospective design.
Another limitation is that BMI was used to evaluate obesity; BMI is a good measurement of body fat but a weak indicator of adiposity distribution. In addition, the study did not include other parameters of the metabolic syndrome or examine other confounding factors, such as the duration of obesity and other environmental differences. Moreover, in our study, there were more males in the overweight and obese group, and they were also older compared to the normal-weight group. The prevalence of thyroid nodules appears to increase with age , and gender can affect both the prevalence and the suspicious features of thyroid nodules .

In conclusion, our study demonstrated that patients with a higher BMI have a trend for more thyroid nodules and worse FNA outcomes. Further studies are required to clarify the mechanisms behind the observed association between obesity and thyroid nodules. There is also a need for further investigation of this association to uncover the potential ties between obesity, thyroid nodules and thyroid cancer. Future research should focus on the impact of weight loss on thyroid cancer among obese and overweight individuals, deepen our understanding of the disease and aim to create more efficient preventative and therapeutic strategies. Obesity is a modifiable risk factor and can become a crucial focus for public health initiatives aimed at reducing the occurrence of thyroid cancer and thyroid nodule development .

The authors declare that they have no competing interests regarding the publication of this work. This research did not receive any specific grant from any funding agency in the public, commercial or not-for-profit sector. The data supporting the findings of this study are available on request from the corresponding author.

ED, AE and PE were involved in the conception, design and writing of the study, in the collection and interpretation of the data and in drafting the manuscript. MF was involved in the collection, analysis and interpretation of the data. DL was involved in the conception and design of the study, in the statistical analysis and in drafting the manuscript. SP was involved in the conception and design of the study and in the interpretation of the data. PP was involved in the conception of the study, in the analysis of the data and in drafting the paper. All authors have critically reviewed and approved the final version of the manuscript.
SOS – save our seaside! The microbiological risks to human health of raw sewage in our coastal waters

Despite having spent the last 18 years living in the land-locked midlands, I remain a card-carrying thalassophile (to save you from looking it up, a thalassophile is someone who loves the sea). I grew up in Plymouth, Devon, and spent much of my childhood in, on or around the sea before moving to Birmingham for university in 2007. Although the decision to relocate to the midlands has taken me on an incredible scientific voyage, I've always missed the sea – canals just don't cut it! Fortunately, having family in the southwest has always given me just cause to regularly return to the coast to satisfy my cravings for the sea. Despite rapidly approaching my fortieth anniversary of birth, I still sail, sea swim, paddleboard, kayak and surf (badly) at every given opportunity. However, my addiction to the sea is not without consequences. In spring 2024, whilst sea swimming, I contracted a bilateral atypical bacterial pneumonia that completely knocked me off my feet. I was lucky, and the bacteria causing my infection were sensitive to clarithromycin. Following my illness, I discovered Surfers Against Sewage and their report of raw sewage being dumped close to where I had been swimming hours before I took to the water. The experience got me thinking about the bacterial pathogens in our beautiful coastal waters and the risks they pose to those, like me, mad enough to take to the sea all year round. My investigation has led me to understand that the pollution of coastal seawater by sewage has become an increasingly critical issue for public health, particularly in the UK. Pathogenic bacteria found in sewage are of significant concern as they can directly affect the quality of recreational waters and overall public health . In this article, I investigate the major pathogenic bacteria associated with sewage contamination in UK coastal waters, their potential risks and the measures taken to mitigate their impact. As with most things these days, money seems to be the limiting factor, and the investment required to improve our national infrastructure to decontaminate our waste before releasing it into our coastal waters seems, for the time being at least, to be dead in the water.

Sewage contamination in coastal waters can introduce a variety of harmful micro-organisms, including pathogenic bacteria that can cause gastrointestinal, respiratory and skin infections. Among the bacterial pathogens found in sewage-contaminated seawater are Brucella spp., Chlamydia spp., Escherichia coli (enteropathogenic antibiotic-resistant strains), Leptospira spp., Rickettsia spp., Salmonella spp., Treponema hyodysenteriae, Bacillus anthracis, Erysipelothrix rhusiopathiae, Mycobacterium spp. and faecal Streptococci spp. The list goes on. These bacteria can be found in the intestinal tracts of humans and animals and can be transmitted through ingestion, inhalation or contact with contaminated water . E. coli is the most commonly studied bacterium in sewage-contaminated waters and is widely used as an indicator of faecal contamination . High concentrations of E. coli (>1000 c.f.u./100 ml) in seawater often indicate the presence of other pathogenic micro-organisms, which may include Salmonella, Campylobacter and Vibrio species. These organisms, which are common causes of foodborne illness, are detected in sewage-contaminated waters .
The risks posed by pathogenic bacteria from sewage contamination are significant for both public health and marine ecosystems. The ingestion of contaminated water has led to outbreaks of gastroenteritis, with symptoms including diarrhoea, vomiting and fever . For example, Vibrio infections, particularly those caused by Vibrio parahaemolyticus and Vibrio vulnificus, can cause severe shellfish-derived food poisoning and, in some cases, death. Concerningly, there is an increasing prevalence of these Vibrio spp. in UK coastal waters, as sewage has been found to promote the growth of these pathogens . Vulnerable populations, such as the elderly, immunocompromised individuals and pregnant women, are especially at risk. Inhalation of sewage aerosols has resulted in outbreaks of bacterial pneumonia, and a study found that 58% of seawater drowning-associated pneumonia is caused by aerobic Gram-negative bacilli, some displaying antibiotic resistance . Whilst the direct causative data are sparse, on the balance of probabilities, inhaling sewage-contaminated seawater is likely to cause bacterial pneumonia. Infections caused by exposure to sewage-contaminated seawater are not limited to gastrointestinal or respiratory disease: many other types of bacterial infection have also been linked to faecal pathogens in our coastal waters, including skin and soft tissue infections, ear and eye infections and tonsillitis.

In addition to the direct human health risks, pathogenic bacteria can disrupt marine ecosystems. Sewage pollution can lead to algal blooms in coastal waters, which can harm marine life and reduce biodiversity. Pathogens from sewage also impact shellfish populations, which are filter feeders and, as a result, can accumulate harmful bacteria that may re-enter the human food chain, perpetuating antimicrobial resistance.

To address the risks posed by sewage contamination, several regulatory and technological measures are in place. In the UK, the Environment Agency and DEFRA are responsible for regulating water quality through the Bathing Water Directive and the Shellfish Hygiene Directive, which set standards for microbial water quality at recreational beaches and in areas where shellfish are harvested. Monitoring programmes are in place to test for pathogens such as E. coli and Enterococci, and sewage treatment plants are required to meet strict discharge standards to minimize the release of harmful bacteria into the environment. Storm overflows were intended to release surplus sewage into the sea on rare occasions, but despite this intention, some water companies are responsible for up to 200 discharges of raw untreated sewage into our coastal waters each year . Whilst new sewage treatment technologies, such as tertiary filtration and UV disinfection, have been developed to reduce bacterial concentrations in effluent before it is discharged into coastal waters, they are expensive to implement and are limited by the volume of sewage. During heavy rainfall events, water companies will continue to discharge untreated sewage into the sea, posing a direct threat to water quality and public health.

The presence of pathogenic bacteria in sewage-contaminated seawater in the UK is a significant public health concern. Pathogens like E. coli, Salmonella, Vibrio and Campylobacter, to name but a few, can lead to severe infections and perpetuate the spread of antibiotic resistance.
Whilst regulatory measures and advanced sewage treatment technologies are promised, ongoing vigilance and investment in infrastructure are essential to mitigate the risks posed by sewage pollution and to protect both public health and our coastal waters. This makes for pretty sobering reading, and it is clear that, when taking to the sea, we should be less concerned about what's lurking beneath the surface and more concerned about what lies within. Am I going to stop enjoying the water? No. However, armed with this new knowledge and the Safer Seas and Rivers Service App run by Surfers Against Sewage, I will always check to see if sewage has been discharged in the area before taking to the water, especially with my children. Professor Whitty, of 'next slide please' fame, said himself in a report from the Department of Health and Social Care regarding sewage in water, 'Nobody wants a child to ingest human faeces'. You're not wrong there, Chris. Whilst we're waiting for the improved management, innovation and investment that is required to solve the issue and save our seaside, don't bury your head in the sand regarding water quality. Check before you swim. After all, prevention is always better than cure.
Expression of fibroblast growth factor receptor 2 (FGFR2) in combined hepatocellular-cholangiocarcinoma and intrahepatic cholangiocarcinoma: clinicopathological study

Combined hepatocellular-cholangiocarcinoma (cHCC-CCA), which generally has a poor prognosis, comprises hepatocellular carcinoma (HCC), cholangiocarcinoma (CCA), and diverse components with intermediate features between HCC and CCA . The histological diagnosis of cHCC-CCA is sometimes difficult and controversial because of intratumoral heterogeneity with diverse intermediate components . A consensus paper has provided simplified terminology and refined the diagnostic criteria for cHCC-CCA , and the current WHO classification 2019 adopted this consensus . The histopathological diagnosis of cHCC-CCA needs to be standardized for the appropriate clinical treatment of patients . Previous studies disclosed that some cHCC-CCAs have genetic alterations similar to those of HCCs, whereas other cHCC-CCAs have genetic alterations similar to those of CCAs . The genetic alterations and other molecular features of cHCC-CCAs may be therapeutic targets, as in HCCs and CCAs . Accumulating data suggest that about half of iCCAs have targetable genetic alterations . Fibroblast growth factor receptor 2 (FGFR2), one of the four FGFR family members that encode transmembrane receptor tyrosine kinases, has attracted much attention . FGFR2 fusions or rearrangements are found as genetic abnormalities in 10–20% of iCCAs, especially in small duct-type iCCAs . Over 150 fusion partners have been detected in FGFR2 fusion genes , and a recent study revealed that truncation of exon 18 (E18) of FGFR2 is a potent driver mutation and could be a therapeutic target . Immunohistochemical FGFR2 expression may be a candidate surrogate marker for detecting FGFR2 genetic alterations with high specificity, and a prognostic marker in iCCA . FGFR2 inhibitors, such as pemigatinib and futibatinib, inhibit tumor cell growth in FGFR-driven cancers by blocking receptor autophosphorylation and the subsequent activation of FGF/FGFR signaling . Favorable therapeutic effects of these FGFR inhibitors have been observed in several clinical trials in iCCAs . cHCC-CCA shares various features, such as the histological findings of its iCCA components, etiologies, and possible cell of origin, with small duct-type iCCA ; however, there have so far been only a few studies on FGFR2 genetic alterations in cHCC-CCA . In previous studies, FGFR2 fusions were detected in 0–6.5% of cHCC-CCAs, and the prevalence was higher in CCA-like cHCC-CCAs compared to HCC-like ones . In this study, we examined the prevalence of FGFR2 genetic alterations and their clinicopathological significance in cHCC-CCA. We took advantage of immunostaining for FGFR2 as a surrogate marker and then performed fusion-specific PCR with subsequent direct sequencing, as well as 5′/3′ imbalance PCR, for the detection of exon 18 (E18)-truncated FGFR2, including FGFR2 fusions . To our knowledge, there has been no study on the immunohistochemical expression of FGFR2 in cHCC-CCAs.
Patients and preparation of tissue specimens

One hundred and seventy-nine patients with primary liver carcinoma were retrieved from our pathological files (1996–2022). The Ethics Committee of Kanazawa University approved the present study (approval number 2012–021 [160]; approval date June 11, 2013). Primary liver carcinomas were re-evaluated according to the WHO classification of digestive system tumors 2019 and classified into 75 cHCC-CCAs, 35 small duct-type iCCAs, 30 large duct-type iCCAs, and 35 hepatocellular carcinomas (HCCs). A diagnosis of cHCC-CCA was made regardless of the percentage of each component in the present study . Cholangiolocarcinoma/cholangiolocellular carcinoma (CLC) was classified as a subtype of small duct-type iCCA in the present study, according to the WHO classification of digestive system tumors 2019 . Clinical and pathological features of each group of primary liver carcinomas are summarized in Table . All specimens were surgically resected, fixed in 10% buffered formalin, and embedded in paraffin. Three-micrometer-thick sections were cut from each paraffin block. Several sections were routinely processed for histological studies, including hematoxylin and eosin, reticulin, AZAN, and mucin staining, and the remainder were processed for the following immunohistochemistry.

Histological grading and the ductal plate malformation (DPM) pattern

The histological grading of cHCC-CCA was classified into low and high grades based on tumor differentiation . The DPM pattern was evaluated as previously described . The DPM pattern was characterized by neoplastic glands of carcinoma showing an irregularly shaped and dilated lumen, with some of these glands showing microcystic dilatation resembling DPM. The degree of the DPM pattern was divided into absent (< 5% of the tumor), focal (5–50%), and extensive (> 50%). Among the 75 cHCC-CCAs, 47, 21, and 7 showed the absent, focal, and extensive patterns, respectively.

Immunohistochemistry

The expression of FGFR2, ARID1A, p53, PBRM1, BAP1, MTAP, and nestin was examined by immunostaining, as previously described . The primary antibodies used are shown in Supplementary Table . Positive and negative controls were routinely included.

Evaluation of immunostaining for FGFR2

The expression of FGFR2 in the cell membrane was evaluated as described previously: score 3, strong, complete membrane staining in more than 10% of the malignant cells; score 2, weak to moderate, complete membrane staining in more than 10% of the malignant cells; and score 0/1, less intense staining or staining in less than 10% of cells . A score of 2 or 3 was considered positive, and a score of 0 or 1 was considered negative (two-grade system).
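As a concrete restatement of the two-grade scoring rule above, the following minimal sketch encodes the decision logic in Python. It is illustrative only; the function names and inputs are hypothetical and were not part of the study's workflow.

```python
# Illustrative encoding of the two-grade FGFR2 IHC scoring rule described above.
# Names and thresholds mirror the text; nothing here comes from the study's code.
def fgfr2_ihc_score(intensity: str, pct_positive_cells: float) -> int:
    """Return the IHC score for complete membranous FGFR2 staining.

    intensity: "strong", "moderate", "weak", or "absent"
    pct_positive_cells: percentage of malignant cells with complete membrane staining
    """
    if pct_positive_cells <= 10:
        return 0  # scores 0 and 1 are collapsed: staining in too few cells
    if intensity == "strong":
        return 3
    if intensity in ("weak", "moderate"):
        return 2
    return 0  # less intense staining

def fgfr2_ihc_positive(score: int) -> bool:
    # Two-grade system: scores 2-3 are positive, scores 0/1 negative.
    return score >= 2

assert fgfr2_ihc_positive(fgfr2_ihc_score("strong", 40))     # score 3 -> positive
assert not fgfr2_ihc_positive(fgfr2_ihc_score("strong", 5))  # <=10% of cells -> negative
```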
Evaluation of immunostaining for p53

Strong and diffuse nuclear expression was regarded as indicating a p53 mutation, as previously described . Three patterns of aberrant or mutation-type p53 staining that are indicative of an underlying p53 mutation, including overexpression (strong nuclear staining in at least 75% of tumor cells), the null pattern (loss of staining in 100% of tumor cells), and the cytoplasmic pattern, have been demonstrated in previous studies . Accordingly, the null and cytoplasmic patterns were also examined, but were not observed in any specimens in the present study.

Evaluation of immunostaining for ARID1A, PBRM1, and BAP1

Total or focal loss of nuclear expression was regarded as indicating a genetic alteration. Both total and focal loss of expression were observed. When expression was lost throughout the tumor, the specimen was regarded as showing "total loss," whereas when expression was lost in only part of the tumor, the specimen was regarded as showing "focal clonal loss." This has been reported as a reliable marker for inactivating genetic alterations in ARID1A, PBRM1, and BAP1; however, the immunostaining is not affected by some missense mutations .

Evaluation of immunostaining for MTAP

MTAP loss in immunohistochemistry is reportedly a reliable surrogate for CDKN2A homozygous deletion . Loss of cytoplasmic expression of MTAP was regarded as indicating CDKN2A homozygous deletion. Both total and focal loss of expression were observed .

Evaluation of immunostaining for nestin

The expression of nestin (diffuse cytoplasmic) was evaluated according to the percentage of positive cells in each lesion: score 0, less than 5%; score 1, 5–10%; score 2, 10–80%; score 3, more than 80%. Scores 1–3 were regarded as positive. Inter-observer agreement was almost perfect in the present study.

Extraction of RNA samples and assessment of FGFR2 genetic alterations

Twenty-four cHCC-CCAs and 9 small duct-type iCCAs were examined for FGFR2 genetic alterations using PCR and direct sequencing. Representative whole sections including both HCC and iCCA components to various degrees were used for RNA extraction in each case. RNA samples were extracted from FFPE sections using the RNeasy FFPE kit (QIAGEN, Hilden, Germany), and cDNA samples were then prepared using the Quant Accuracy RT-RamDA cDNA Synthesis Kit (TOYOBO, Osaka, Japan) according to the manufacturers' protocols.

Detection of FGFR2 fusions

PCR was performed using FGFR2-fusion-specific primers (Supplementary Table ). Direct sequencing of PCR products was performed as described previously .

5′/3′ imbalance strategy for the detection of exon 18 (E18)-truncated FGFR2

E18-truncated FGFR2, including FGFR2 fusion genes, was detected by measuring the ratio of the expression level of the 5′ portion (exon 5, E5) to that of the 3′ portion (E18) of FGFR2, using the Thunderbird qPCR Master Mix (Toyobo, Tokyo, Japan) and the QuantStudio 6 Pro real-time PCR system (Thermo Fisher, Waltham, USA) according to the manufacturer's protocol. The PCR primers used are shown in Table . This 5′/3′ imbalance strategy was originally developed, with high specificity and sensitivity, for detection of the ALK fusion gene . In the presence of an E18 truncation in the FGFR2 gene, including FGFR2 fusion genes, the 3′ portion of the gene (E18) is lost while the 5′ portion (E5) remains. This strategy can therefore detect the E18-truncated FGFR2 gene regardless of which partner gene lies at the 3′ portion of the fusion.
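The paper reports the E5/E18 expression ratio and a cutoff of 2 (see Results) but does not spell out the arithmetic. The sketch below is a minimal illustration assuming the standard 2^(-Ct) relative-quantification model with comparable primer efficiencies; the function names and example Ct values are hypothetical, not study data.

```python
# Illustrative 5'/3' (E5/E18) imbalance calculation, assuming the standard
# 2^-Ct relative-quantification model with equal amplification efficiencies.
def e5_e18_ratio(ct_e5: float, ct_e18: float) -> float:
    """Relative expression of FGFR2 exon 5 over exon 18 from qPCR Ct values."""
    # Lower Ct means higher expression, so the E5/E18 ratio is 2^(Ct_E18 - Ct_E5).
    return 2.0 ** (ct_e18 - ct_e5)

def e18_truncation_suspected(ct_e5: float, ct_e18: float, cutoff: float = 2.0) -> bool:
    # A ratio above the cutoff means the 3' portion (E18) is under-represented,
    # consistent with an E18-truncated (e.g., fused) FGFR2 transcript.
    return e5_e18_ratio(ct_e5, ct_e18) > cutoff

# Example: E18 amplifies ~4.5 cycles later than E5 -> ratio ~22.6, flagged.
print(e5_e18_ratio(24.0, 28.5), e18_truncation_suspected(24.0, 28.5))
```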
Extraction of DNA samples and mutation analysis of KRAS, IDH1, IDH2, and the TERT promoter

The extraction of DNA samples, PCR, and sequencing were performed as previously described . The primer sets for PCR are shown in Table .

Statistical analysis

The Kruskal–Wallis test was used for continuous variables without a normal distribution. If a significant difference was observed in an analysis of variance, pairwise comparisons were performed using Dunn's post hoc test with corrections for multiple comparisons. A p value of less than 0.05 was considered significant. All analyses were performed using GraphPad Prism software (GraphPad Software, San Diego, CA, USA).
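The analyses were run in GraphPad Prism; for readers who prefer open-source tooling, a minimal equivalent sketch of the same procedure (Kruskal–Wallis followed by Dunn's post hoc test with multiplicity correction) using SciPy and the scikit-posthocs package is shown below. The group labels and values are invented for illustration and are not study data.

```python
# Open-source equivalent of the Prism workflow described above: Kruskal-Wallis
# omnibus test, then Dunn's pairwise test with Bonferroni correction.
from scipy.stats import kruskal
import scikit_posthocs as sp

# Hypothetical E5/E18 ratios per tumor group (illustrative values only)
groups = {
    "cHCC-CCA":        [3.1, 7.5, 22.7, 0.9, 12.4],
    "small duct iCCA": [5.2, 20.6, 8.9, 1.4],
    "large duct iCCA": [0.5, 1.1, 0.8],
}

h_stat, p_value = kruskal(*groups.values())
print(f"Kruskal-Wallis H = {h_stat:.2f}, p = {p_value:.3f}")

if p_value < 0.05:
    # Pairwise Dunn's comparisons, corrected for multiple testing
    pairwise_p = sp.posthoc_dunn(list(groups.values()), p_adjust="bonferroni")
    print(pairwise_p)
```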
FGFR2 expression in primary liver carcinoma and the background liver

Figure shows examples of FGFR2 expression in cHCC-CCAs, other types of primary liver carcinoma, and the background livers. Supplementary Fig. shows examples of the histology of cHCC-CCAs and small duct-type iCCAs showing FGFR2 expression. When present, FGFR2 was expressed in the cell membrane of carcinoma cells. FGFR2 was not expressed in non-neoplastic bile ducts or hepatocytes (Fig. ). The expression of FGFR2 was observed in a subset of cHCC-CCAs and small duct-type iCCAs (Fig. ). FGFR2 expression was detected in one large duct-type iCCA and in none of the HCCs. FGFR2 expression was detected in significantly more patients with cHCC-CCA (21.3%) and small duct-type iCCA (25.7%) than in those with large duct-type iCCA (3.3%) and HCC (0%) ( p < 0.05) (Table ).

Relationships between FGFR2 expression and clinicopathological features in cHCC-CCA

Table summarizes the association of FGFR2 expression with clinicopathological features and genetic alterations in 75 patients with cHCC-CCA. FGFR2-positive cHCC-CCAs were significantly smaller in size ( p < 0.05), with a more predominant cholangiolocarcinoma component ( p < 0.05) and less nestin expression ( p < 0.05), compared to FGFR2-negative cHCC-CCAs. Genetic alterations of ARID1A and BAP1 and multiple genetic alterations were significantly more frequent in FGFR2-positive cHCC-CCAs than in FGFR2-negative cHCC-CCAs ( p < 0.05).

Detection of FGFR2 fusions

An FGFR2::BICC1 fusion was detected in one case of cHCC-CCA (a 44-year-old female with glycogen storage disease type I, a tumor size of 7 cm, F4; the same case as shown in Fig. A) (Fig. A). FGFR2 fusions with other partners ( AHCYL1 , PPHLN1 , TACC2 , CCDC6 , MGEA5 , G3BP2 , OPTN , AFF3 , CASP7 , OFD1 , KIAA1598 ) were not detected in cHCC-CCAs or small duct-type iCCAs.

Detection of E18-truncated FGFR2 in cHCC-CCAs and CCAs

Twenty-four cHCC-CCAs (17 FGFR2-immunohistochemistry (IHC)-positive and 7 FGFR2-IHC-negative cases) and 9 small duct-type iCCAs (8 FGFR2-IHC-positive and one FGFR2-IHC-negative case) were examined for E18-truncated FGFR2 by measuring the ratio of the expression of the 5′ portion (E5) to that of the 3′ portion (E18) of the FGFR2 gene. The E5/E18 expression ratio ranged from 0.42 to 32.00 (mean, 8.10) in FGFR2-IHC-positive cHCC-CCAs and small duct-type iCCAs, whereas it ranged from 0.06 to 8.94 (mean, 2.63) in FGFR2-IHC-negative cases (Fig. B). The E5/E18 ratio was more than 2 in 19 of 25 FGFR2-positive cHCC-CCAs and small duct-type iCCAs (76%) and in 2 of 8 FGFR2-negative cases (25%). The 5′/3′ (E5/E18) imbalance in the FGFR2 gene (E5/E18 ratio > 2), indicating E18-truncated FGFR2, was detected significantly more frequently in FGFR2-positive cHCC-CCAs and small duct-type iCCAs than in FGFR2-negative cases ( p < 0.05) (Fig. B). The E5/E18 ratios in the FGFR2-IHC-positive cases shown in Figs. A–C were 7.48, 22.7, and 20.6, respectively.
The data obtained in this study are summarized as follows: (1) FGFR2 expression was detected in significantly more patients with cHCC-CCA (21.3%) and small duct-type iCCA (25.7%) than in those with large duct-type iCCA (3.3%) and HCC (0%) ( p < 0.05); (2) FGFR2-positive cHCC-CCAs were significantly smaller in size ( p < 0.05), with a more predominant cholangiolocarcinoma component ( p < 0.01) and less nestin expression ( p < 0.05); (3) genetic alterations of ARID1A and BAP1, and multiple genetic alterations, were significantly more frequent in FGFR2-positive cHCC-CCAs ( p < 0.05); (4) an FGFR2::BICC1 fusion was detected in a case of cHCC-CCA with FGFR2 expression; and (5) E18-truncated FGFR2 was detected significantly more frequently in FGFR2-positive cHCC-CCAs and small duct-type iCCAs than in FGFR2-negative ones ( p < 0.05). In the present study, we examined the immunohistochemical expression of FGFR2 as a surrogate marker for FGFR2 genetic alterations in cHCC-CCAs and other types of primary liver carcinoma. FGFR2 immunohistochemistry reportedly correlates with FGFR2 genetic alterations and can serve as a surrogate marker with high specificity . FGFR2 genetic alterations, especially FGFR2 fusions, were detected in 10–20% of iCCAs, mainly small duct-type iCCAs, in previous studies . In the present study, FGFR2 expression was detected in 25.7% of small duct-type iCCAs, whereas it was rarely detected in large duct-type iCCAs (3.3%). The prevalence rate and the selective detection of FGFR2 expression in small duct-type iCCAs are consistent with previous studies . These findings also support immunohistochemical FGFR2 expression as a good surrogate marker for FGFR2 genetic alterations. In the present study, FGFR2 expression was detected in 21.3% of cHCC-CCAs, a rate similar to that in small duct-type iCCAs. This finding clearly suggests that cHCC-CCAs with FGFR2 genetic alterations may be targets of therapy with FGFR2 inhibitors, as are small duct-type iCCAs. FGFR2 genetic alterations, especially FGFR2 fusions, were detected in 0–6.5% of cHCC-CCAs in previous studies ; the frequency observed here may therefore be higher than previously reported. FGFR2 genetic alterations were more frequently detected in CCA-like cHCC-CCAs than in HCC-like cHCC-CCAs . In the present study, FGFR2 positivity was significantly more frequent in cHCC-CCAs with a predominant CLC component. Taken together, a higher proportion of CLC-component/CCA-like cHCC-CCAs may be related to the higher frequency of FGFR2 expression in the present study. cHCC-CCA and small duct-type iCCA share various features, and a possible common cell origin and carcinogenesis pathway have been discussed . FGFR2 genetic alterations may represent one such common feature of cHCC-CCA and small duct-type iCCA. There are several issues with the sensitivity of assays for FGFR2 genetic alterations using next-generation sequencing (NGS) or FISH , since over 150 genes have been identified as fusion partners of FGFR2 . We attempted to detect several common FGFR2 fusions using FGFR2-fusion-specific primers. As a result, an FGFR2::BICC1 fusion was detected in only one case of cHCC-CCA with FGFR2 expression in the present study. It is known that there are discrepancies between detected FGFR2 fusions and the effect of FGFR2 inhibitors , which may be due to difficulties in detecting diverse FGFR2 genetic alterations. More reliable assays may be mandatory for the detection of FGFR2 genetic alterations. Zingg et al.
recently reported that the E18-truncated variant of FGFR2 is a potent driver mutation and that any FGFR2 variant with a truncated E18 should be considered for FGFR-targeted therapies . In the present study, we applied 5′/3′ imbalance RT-PCR for the detection of E18-truncated FGFR2, including FGFR2 fusion genes. E18-truncated FGFR2 was detected significantly more frequently in FGFR2-positive cHCC-CCAs and small duct-type iCCAs than in FGFR2-negative ones. These findings suggest that cHCC-CCAs and small duct-type iCCAs with FGFR2 expression may harbor other types of FGFR2 fusions that were not examined in this study. Taken together, immunostaining and PCR-based detection of FGFR2 genetic alterations may be useful surrogate markers for screening for the application of FGFR2 inhibitors. Interestingly, nestin expression was significantly lower in FGFR2-positive cHCC-CCAs than in FGFR2-negative cHCC-CCAs. Nestin, an embryonic type VI intermediate filament (IF) protein, was originally identified as a marker for neural stem cells in early development . Recent studies revealed that cHCC-CCAs and small duct-type iCCAs show significantly higher expression of nestin than HCCs . In our previous study , nestin-positive cHCC-CCA was characterized by a smaller tumor size, the more frequent presence of CLC components, a higher rate of p53 mutations, and a higher rate of multiple genetic alterations. In the present study, FGFR2-positive cHCC-CCAs were significantly smaller in size and showed predominant CLC components and multiple genetic alterations compared to FGFR2-negative cHCC-CCAs. Therefore, FGFR2-positive cHCC-CCAs and nestin-positive cHCC-CCAs share features such as smaller tumor size, the more frequent presence of CLC components, and multiple genetic alterations. There may be, however, some distinct differences between nestin-positive cHCC-CCAs and FGFR2-positive cHCC-CCAs. The primary limitations of this study are the small cohort size and the limited information on the association of immunohistochemical FGFR2 expression with FGFR2 genetic alterations and clinical outcomes. Analysis using NGS, especially RNA-based NGS such as hybrid-capture RNA NGS, is mandatory to further validate whether immunohistochemical detection of FGFR2 expression is an effective surrogate marker for the detection of E18-truncated FGFR2, including FGFR2 fusion genes. If immunohistochemical detection of FGFR2 expression is validated, immunohistochemical assays could be used for screening for the application of FGFR2 inhibitors. When FGFR2 immunohistochemistry is negative, further analysis using NGS would be applied. This strategy would be effective for shortening the turn-around time of NGS analysis and for prompt application of FGFR2 inhibitors. In conclusion, FGFR2 expression was detected in cHCC-CCAs as frequently as in small duct-type iCCAs. This finding suggests a possible therapeutic indication for FGFR2 inhibitors in patients with cHCC-CCA.
Below is the link to the electronic supplementary material. Supplementary file1 (DOCX 30 KB)
|
The Pursuit of Scientific Rigor and Definitional Clarity in Health Literacy Research | 33587435-1327-4876-acf9-09747e657169 | 11230643 | Health Literacy[mh] | |
Access to dental care among individuals with intellectual and developmental disabilities in India: A scoping review | 30d4befb-893d-4069-a593-75ea24c58660 | 11628667 | Dentistry[mh] | INTRODUCTION There are 26 million individuals with disability in India, and 179 out of every 100 000 are estimated to have intellectual and developmental disability (IDD). Due to inherent and external barriers, access to dental services is limited and as a result, outcomes of dental diseases are worse in individuals with IDD compared to those without. , , , Cognitive and adaptive impairments limit their capability to participate in care. In addition, physical inaccessibility of dental clinics, lack of dentists’ skills, communication difficulties, , , , and difficulties identifying dental diseases , add further restrictions. Evidence from India suggests that dental attendance among individuals with IDD is minimal because of lack of awareness, poor patient cooperation, transportation issues, and cost of treatment. Dental care was sought only during emergencies and primarily for dental extractions, indicating issues with reach of dental services. Dental diseases can lead to severe morbidities and affect quality of life. Therefore, reducing the burden of oral disease among individuals with IDD requires a better understanding of issues they face in accessing dental care. However, evidence in this regard is lacking in India. Thus, this scoping review explores barriers and facilitators with access to dental care among individuals with IDD in India.
METHODOLOGY

The study utilized the Arksey and O'Malley framework for scoping reviews and adhered to the PRISMA guidelines for reporting studies. , The research question focused on the factors impacting access to dental care for children and adolescents with IDD in India. To conduct the search, five databases (Table ) were selected and searched using specific keywords and Boolean search operators. Additionally, snowballing of cited and citing references was included in the search strategy. We included studies focusing on carer or dentist perspectives on accessing dental care for children with IDD. Access was defined as "the opportunity to reach and obtain appropriate health care services in situations of perceived need for care". Other inclusion criteria were studies published between 2000 and 2021, conducted in India, and examining barriers and facilitators as primary or secondary outcomes. Both quantitative and qualitative studies were included. Thematic analysis was undertaken utilizing the a priori domains described in Levesque's framework for access. The framework explores both the health-provider (supply-side) and caregiver (demand-side) perspectives across its various domains. We also explored the perceived need for dental care based on dental visit patterns, as we felt this was significant. The first author (P.P.) conducted searches and selected articles; M.L. later validated the articles selected.
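To illustrate the charting step, the sketch below (not part of the review's reported workflow) tallies coded findings by access domain and by supply/demand side. Domain labels follow Levesque's framework; the study identifiers and pairings are hypothetical.

```python
# Minimal sketch of tallying charted findings against the a priori Levesque
# access domains used for thematic analysis. Study IDs are hypothetical.
from collections import Counter

SUPPLY = {"approachability", "acceptability", "availability/accommodation",
          "affordability", "appropriateness"}
DEMAND = {"ability to perceive", "ability to seek", "ability to reach",
          "ability to pay", "ability to engage"}

# (study_id, domain) pairs produced while charting the included articles
coded_findings = [
    ("study_01", "ability to pay"),
    ("study_01", "appropriateness"),
    ("study_02", "ability to reach"),
    ("study_03", "ability to pay"),
]

per_domain = Counter(domain for _, domain in coded_findings)
for domain, n in per_domain.most_common():
    side = "supply" if domain in SUPPLY else "demand"
    print(f"{domain} ({side}): reported in {n} finding(s)")
```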
RESULTS

The search terms used are highlighted in Table . Identified articles were transferred into Zotero, and duplicates were removed. Abstracts were then reviewed against the inclusion criteria by two authors (P.P. and B.A.). The remaining articles were evaluated independently by three researchers (P.P., B.A., and M.L.). The PRISMA flow chart depicts the search and selection process (Figure ).

3.1 Description of population and study design

All 17 studies were cross-sectional in design (Table ). The sample composition consisted of dentists and caregivers, , caregivers only, , , , , , , dentists only, , , , children and parents, and teachers and caregivers in others. Dentists were sampled from dental colleges and government hospitals. A few studies exclusively focused on caregivers of children with Down syndrome (DS) and others on ASD , ; the majority included a mix of various disabilities, including IDD. Caregivers were selected from children studying in special schools or visiting tertiary care facilities. Studies exploring dentist perspectives did not concentrate on IDD specifically and included other disabilities. Ten studies used convenience sampling and five used random sampling, with cluster randomization of schools in another. Most studies used questionnaires to evaluate access, and all were validated and tested (Table ).

3.2 Perceived need and health care utilization

Studies reported about 60%–80% of participants avoiding dental visits. , , Although one study found more dental visits among children with special needs than among those without, most visits were for restorations, and recent attendance was low, with most having visited more than 2 years previously. Puthipuriyil reported a mean interval between dental visits of 6.3 months, primarily for dental emergencies or on a doctor's prescription. Perceived needs, recorded as parents' awareness of their child's dental needs, were low across studies. , ,

3.3 Demand-based themes

3.3.1 Ability to perceive

A few studies reported better knowledge scores among parents of children with IDD , , , , compared to parents of children without special needs. However, knowledge scores did not match practice. Two studies reported participants prioritizing good dental health , and 90% of participants in one study agreed that dental visits are necessary. However, caregivers prioritized medical concerns over dental issues. Reported perceptions of the child's oral health ranged from satisfactory to poor, with dental visits being issue based. , , Participants in two studies indicated preferring a general dentist over a specialist, and vice versa in another. , Others considered consulting a general practitioner in case of dental emergencies.

3.3.2 Ability to seek

One study evaluated dental anxiety and found it to be reportedly lower among children with ASD, but cited dentists' inability to manage the child's anxiety prior to dental treatment as a deterrent. Findings differed among children with DS: despite a higher proportion being fearful of the dentist, they were more likely to visit one. Dental instruments and injections were reported as reasons for fear. Parents' commitment and children's inability to communicate were also cited as obstacles to dental care.

3.3.3 Ability to reach

Five studies reported transportation being a barrier to dental visits. , , , , The proportion reporting transportation as an issue ranged from 15% to 20% across studies. Distance from the clinic was reported by 77% of the sample in one study and 33% in another.
Those living near a dental clinic were more likely to visit, and only 3.3% in Mumbai city did not have a clinic nearby. Other issues cited were the non-availability of transport and lack of time for dental care and visits. , ,

3.3.4 Ability to pay

Nine papers reported direct and indirect costs as barriers to dental care. , , , , , , , , The proportion reporting financial constraints ranged from 11.2% to 68%. Two studies reported travel as being expensive, , while treatment cost hindered another.

3.3.5 Ability to engage

Patient behavior during treatment deterred the ability to engage. Five studies reported the child's behavior as a deterrent to care. , , , , Three studies reported communication as a barrier. , , Other factors reported were difficulty managing the child in the reception area and waiting time. ,

3.4 Supply-based themes

3.4.1 Approachability

Very few studies explored the approachability of dental services. Tele-dentistry was suggested as a better alternative to improve care for children with disabilities.

3.4.2 Acceptability

Very few dentists were unwilling to treat children with disabilities. One study reported unwillingness in 2% of the sample. Although dentists presented a favorable attitude and considered treating individuals with a disability highly rewarding, a survey showed higher stress levels while treating children with disabilities. Attitude scores varied with years of experience and qualification. In a survey of 45 dentists in Chennai, 65% were interested in reducing disparities in access among children with disabilities. However, dentists felt they may not achieve the same level of oral hygiene as in those without IDD.

3.4.3 Availability and Accommodation

Infrastructural limitations were the most common barrier to access. Two studies reported dentists responding positively to having disability-friendly facilities. , In contrast, 86% of dentists sampled in Kerala responded negatively regarding the presence of such amenities. Bose et al. corroborated these findings. Other studies cited the lack of specialized equipment required to treat children with special needs as a barrier. Bose pointed out that 60% of dentists were concerned about the insufficient number of dentists available to cater to the needs of children with disabilities. One study reported that 13.3% of parents sampled perceived a lack of trained dentists, while another reported 81% of parents thinking so. , Other barriers found were waiting time at clinics, visit frequency, lack of priority, and lack of proper scheduling.

3.4.4 Affordability

One study reported that 90% of dentists felt dental insurance was needed to finance care, while in another study, 78% of dental practitioners reported making considerations regarding financial aspects. Similarly, 60% of dental practitioners disagreed that dentists are entitled to higher pay for treating children with learning disabilities.

3.4.5 Appropriateness

Most dentists were keen on treating children at the clinic, while a few thought general anesthesia and conscious sedation would be appropriate. The most common treatments provided were emergency procedures, oral hygiene instructions, and preventive care. , More than 50% of dentists in two studies believed that their training needed improvement , and 82% in another study felt this could be corrected by including special care dentistry in the dental curriculum. The lack of training was reflected in one study, as only 28.3% of dentists sampled reported being comfortable delivering simple dental procedures.
However, studies also highlighted dentists' incapacity and lack of confidence in managing children with special needs. , , Communication issues were commonly highlighted. , , Due to lack of training, general dentists would rather refer to a pediatric dentist for treatment, and a few felt that specialist clinics were more suitable for this cohort. However, some did report the unavailability of a specialist as an issue. Other reasons for referrals were the lack of equipment and instruments to accommodate children with IDD.

3.4.6 Awareness regarding rights

Studies found that 82% of dentists were unaware of the various Indian laws for people with disability, while 65% conveyed a need to decrease health disparities and improve access to oral health. Additionally, 83.8% of respondents agreed that laws should be introduced to prevent dentists from discriminating against people with a learning disability.
DISCUSSION

The review of 17 articles maps the various barriers and facilitators faced by children and adolescents with IDD in accessing dental care in India. These findings were collated under the categories developed by Levesque. By using the framework, we were able to highlight both patient and provider perspectives regarding access to dental care. Most studies were cross-sectional in design, and a majority used convenience sampling; hence, the quality of evidence must be treated appropriately. Studies were conducted in major cities or towns, but none covered rural India, where most of those with disabilities live. Access varies with the severity of IDD ; this factor was not measured in any of the studies. Considerable variations in findings could be due to differences in the measuring instruments used and satisficing. Hence, our findings may not be generalizable, and the rigor of studies and the reliability of the tools used need improvement. Health literacy drives the demand for health care by improving the ability to perceive dental needs. However, this was not evident here: despite good dental literacy, low perceived need resulted in infrequent dental visits. The proportion of children with IDD visiting a dental clinic (20%–30%) was not dissimilar to dental visits among the general public (24%), which varied across states and literacy levels. Nonetheless, reasons for dental visits differed and were mainly due to aggravated circumstances among individuals with IDD. Approachability, the ability to identify, reach, and use health services, was not measured by any study. Not knowing where to report dental problems and a lack of appropriate information regarding the oral health of individuals with IDD have been documented to affect dental visits. This may explain why caregivers visit non-dental professionals for dental needs. Although caregivers may find it challenging to identify services, evidence suggests their ability to identify dental issues may also be limited. Caregivers tend to underestimate the severity and extent of dental disease in those under their care. They may draw from previous experiences to identify and recognize dental symptoms, resulting in delayed care seeking. Secondly, due to inherent physiologic limitations, the ability of the individual with IDD to perceive pain or discomfort is altered. As a result, delayed or atypical responses to pain may affect identification. Raising awareness among caregivers regarding early identification and improving preventive habits may reduce the risk of undetected disease. The availability of dentists cannot be considered an issue in India: the dentist-to-population ratio ranges from 1:1000 to 1:20 000 across states. Urban areas tend to have a higher density of dentists, while in rural areas non-availability may impede access. Nonetheless, our findings suggest that most dentists are unprepared to cater to the needs of individuals with IDD. Barriers faced by dentists in treating children with IDD include infrastructure limitations, lack of training, and availability of specialists. To cater to children with IDD, Glassman suggests that a dentist should possess not only the skills but also the necessary attitude. While most dentists reported a willing attitude, they require support to overcome these challenges. These facts emphasize the need to introduce special care dentistry training at the undergraduate level and to develop other training programs to assist dentists in the field.
Dental bodies, like the Indian Dental Association and the International Association of Disability and Oral Health, should consider developing guidelines for the prevention and care of people with IDD in India. Additionally, the Indian Association of Pediatric and Preventive Dentistry recently started a special care dentistry certificate course, which is an important step forward. India could also follow the model of high‐income countries by recognizing special care dentistry as a specialty.
CONCLUSION Despite India guaranteeing universal health access to those with disabilities, numerous barriers to accessing dental health have been highlighted here. This calls for improved training of dentists and for policy- and program‐based interventions to facilitate better access. The need for improved preventive and primary dental care cannot be overemphasized, as most dental visits are issue based. Factors affecting dental health‐seeking behavior among individuals with IDD need further exploration. Along with improving dentists' capacity to treat, collaborative approaches are needed to improve access to dental services from both carers' and providers' perspectives.
The authors declare no conflict of interest.
|
Expression of epithelial growth factor receptor as a protein marker in oral reticular and erosive lichen planus | e4b0df8c-1d1a-45f9-bb2f-7b59af895032 | 11197364 | Anatomy[mh] | Oral lichen planus (OLP) is a chronic inflammatory mucosal disease of autoimmune nature affecting the buccal mucosa, tongue, and gingiva, with an incidence of 0.5–2% in the general population. Histological sections show epithelial thinning and hyperkeratosis with serrated rete ridges, as well as hydropic degeneration of basal epithelial cells with an infiltrating band-like population of lymphocytes (predominantly T-cells). The disease is typically characterized by the presence of white lace-like lesions, with or without atrophic or erosive areas. OLP is classified as a potentially premalignant condition with a 0.44–1.2% malignant transformation rate. The most dangerous consequence of this lesion is the development of oral squamous cell carcinoma. OLP can be divided into six clinical subtypes: reticular, plaque-like, atrophic, erosive/ulcerative, papular, and bullous. Reticular, erosive, and plaque-like are the most common subtypes. The reticular form is the most common type of OLP and is readily recognizable by white, slightly raised lines with erythematous borders that extend in different directions and create a network-like appearance (Wickham's striae). The erosive form is the second most common form of OLP and causes ulcers; white radial lines can often be seen at the periphery of these ulcers. Patients with erosive OLP experience a wide range of discomfort, from burning to severe pain that may even interfere with eating. The erosive, atrophic, and plaque-like types have a higher probability of malignant transformation than the other types, and lesions of the tongue and buccal mucosa are more likely to progress to malignancy. Atrophied oral mucosa in severe erosive OLP lesions carries the highest risk of malignant progression. Epithelial growth factor receptor (EGFR) is a transmembrane receptor encoded by a gene on the short arm of chromosome 7 that belongs to the human epidermal receptor (HER) family of tyrosine kinase receptors; its ligand, epidermal growth factor, is a 53-amino-acid polypeptide. Abnormalities of EGFR are widely associated with tumorigenesis and tumor progression. Increased expression of EGFR is associated with the occurrence of many cancers, including breast cancer, prostate cancer, and oral squamous cell carcinoma (OSCC). Because of the important role of this receptor in signaling for the proliferation, differentiation, and migration of all cell types, it acts as a mitogen in maintaining the integrity of epithelial cells and, on the other hand, contributes to carcinogenesis. Despite its great importance, the etiology of OLP has not yet been fully identified. Several studies have investigated the role of EGFR in the pathogenesis of oral carcinoma. EGFR overexpression, which promotes the proliferation and differentiation of keratinocytes, is present in approximately 80% of OSCC. Ma et al. in 2022 described EGFR as one of the most important targets in the development of OLP. Some studies showed overexpression of EGFR in OSCC. Another study showed a progressive increase in EGFR expression, proportional to the severity of premalignant lesions. Despite the association of EGFR overexpression with carcinogenesis in oral potentially malignant lesions, few studies have analyzed its expression in OLP, and these have shown controversial results.
One of these studies described low EGFR expression in OLP samples, but another study observed high expression in all of its OLP samples. In the study by Agha-Hosseini et al., there was no significant difference in the level of EGFR between the saliva and serum of patients with OLP and patients with OSCC. Boccellino et al. in 2023 developed a diagnostic test kit to predict the development of oral cancer based on the expression of EGFR and steroid receptors. They reported that this test is non-invasive, particularly reliable, very fast, and economical. Studies in this field can therefore form the basis for developing effective methods that improve the prognosis of these lesions through early detection. This study aimed to compare the expression of EGFR as a protein marker in reticular and erosive OLP. Ethical approval and study design This descriptive-analytical cross-sectional study was approved by the Research Ethics Committee of Isfahan University of Medical Sciences (IR.MUI.REC.1396.3.401). Participants The study was conducted on 20 paraffin blocks of reticular OLP lesions, 20 paraffin blocks of erosive OLP lesions (samples without dysplasia), and 10 paraffin blocks of inflammatory fibrous hyperplasia lesions (a benign inflammatory lesion, used as the control group) from patients referred to the pathology department of the Faculty of Dentistry of Isfahan University of Medical Sciences in 2006–2016 (50 in total). The lesions were diagnosed by two maxillofacial pathologists simultaneously, based on clinicopathological criteria following the American Academy of Oral and Maxillofacial Pathology approach published in 2016. Histopathological criteria included a band-like or patchy, predominantly lymphocytic infiltrate in the lamina propria confined to the epithelium–lamina propria interface, basal cell liquefactive (hydropic) degeneration, lymphocytic exocytosis, absence of epithelial dysplasia, and absence of verrucous epithelial architectural change. Distorted paraffin blocks without enough tissue, and blocks on which immunohistochemical staining was not possible for any reason (such as samples in which the antigens of interest were masked or destroyed during the fixation process, or in which the antibodies used did not recognize the target antigen), were excluded from the study. Among the samples, five from the reticular group, four from the erosive group, and two from the control group were excluded. Finally, the analyses were performed on data from 39 samples. Setting First, all specimens, stained with hematoxylin and eosin, were examined by two oral and maxillofacial pathologists simultaneously. After confirming the diagnosis of the samples, immunohistochemical staining for EGFR was carried out by the streptavidin–biotin method with appropriate positive, negative, and reagent controls. The tissue sections were kept at 37 °C and fixed overnight at 60 °C before immunohistochemistry. Dewaxing was carried out in xylene, and rehydration was carried out in graded alcohol (absolute alcohol, 70%, and 50%) and finally in distilled water, for 5 min each. Blocking was carried out using 3% H2O2 in methanol for 30 min. Antigen retrieval was carried out using the citrate buffer (pH = 6.0) method to optimize staining, for 120 min at 98 °C. The sections were immunostained with a primary polyclonal antibody against EGFR (Scytek, USA).
Sections were incubated overnight at 4 °C with the primary antibody in a humid chamber. The following day, the sections were stained using a labeled streptavidin–biotin BioGenex kit (DAKO LSAB+ System, K0679) with modified timings: the sections were incubated for 2 h in the corresponding biotinylated secondary antibody solution, followed by conjugated streptavidin–horseradish peroxidase complex for 1 h. Bound peroxidase was revealed using 0.05% 3,3′-diaminobenzidine tetrahydrochloride (DAB) in TBS. The sections were then dehydrated, cleared, and mounted. The samples were subsequently examined by two oral and maxillofacial pathologists simultaneously with an optical microscope (Olympus, Tokyo) at ×400 magnification in five non-overlapping fields. The slides were evaluated for the percentage of stained cells, intensity of staining, pattern of staining, and location of stained cells. Data measurement For the percentage of stained cells, two oral and maxillofacial pathologists simultaneously counted the stained cells and calculated the mean percentage of stained cells. The pattern of staining was categorized as membranous, cytoplasmic, or membranous-cytoplasmic. The intensity of staining was evaluated as follows: very weak, hardly visible at ×400 magnification; weak, easily seen at ×400 magnification; weak to moderate, hardly seen at ×100 magnification; moderate to severe, easily seen at ×100 magnification; severe, seen at ×40 magnification. The location of stained cells was categorized as basal-parabasal, basal-parabasal-intermediate, upper intermediate, intermediate, or all layers. Study size Samples were selected by convenience sampling. The following formula was used to determine the sample size in each group, assuming an equal number of samples per group (α = 0.05, power 1 − β = 0.80, d = 15): $$n=\frac{\left(Z_{1-\alpha/2}+Z_{1-\beta}\right)^{2}\left(\sigma_{1}^{2}+\sigma_{2}^{2}\right)}{d^{2}}$$ In this study, 50 samples were used (20 for each group and 10 for the control group). Statistical methods Data were analyzed in SPSS software, version 20, using descriptive statistical methods and the Kruskal–Wallis, Mann–Whitney U, and Fisher's exact tests. The significance level was set at α = 0.05.
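As an illustration of the formula above, the following is a minimal sketch in R of the sample-size computation. The group standard deviations are not reported in the article, so the values below are illustrative assumptions only.

```r
# Minimal sketch (R) of the two-group sample-size formula above.
# sd1 and sd2 are illustrative assumptions; the paper does not
# report the standard deviations it used.
sample_size <- function(sd1, sd2, d, alpha = 0.05, power = 0.80) {
  z_alpha <- qnorm(1 - alpha / 2)  # 1.96 for a two-sided alpha of 0.05
  z_beta  <- qnorm(power)          # 0.84 for 80% power
  ceiling((z_alpha + z_beta)^2 * (sd1^2 + sd2^2) / d^2)
}

# With assumed standard deviations of 15 in both groups and the
# stated minimum detectable difference d = 15:
sample_size(sd1 = 15, sd2 = 15, d = 15)  # 16 per group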
Distorted paraffin blocks that do not have enough tissue and blocks on which immunohistochemical staining was not possible for any reason such as samples in which antigens of interest were masked or destroyed during the fixation process or the antibodies used do not recognize the target antigen were excluded from the study. Among the samples, five of them from the reticular group, four of them from the erosive group, and two from the control group were excluded from the study process. Finally, the analyzes were performed based on data from 39 samples. First of all, all specimens, which were stained by hematoxylin and eosin, were examined by two oral and maxillofacial pathologists, simulataneously. After confirming the diagnosis of samples, immunohistochemical staining for EGFR was carried out by streptavidin-biotin method with appropriate positive, negative, and reagent controls. The tissue sections were kept at 37 °C and fixed overnight at 600 °C before immunohistochemistry. Dewaxing was carried out in xylene and rehydration was carried out in gradient alcohol (absolute alcohol of 70% and 50%) and finally in distilled water for 5 min each. Blocking was carried out by using 3% H2O2 in methanol for 30 min. Antigen retrieval was carried out using citrate buffer (pH = 6.0) method to optimize staining for 120 min at 98 °C. The sections were immunostained with primary polyclonal antibody for EGFR (Scytek, USA). Sections were incubated overnight at 4 °C with primary antibody in a humid chamber. The following day, the sections were stained using labeled streptavidinbiotin biogenex kit (DAKO LSAB + system, K0679) with modified timings, and the sections were incubated for 2 h in the corresponding biotinylated secondary antibody solution, followed by conjugated streptavidin horseradish peroxidase complex for 1 h. Bound peroxidase was revealed using 0.05% 3- diaminobenzedinetetrahydro (DAB) in TBS. The sections were dehydrated, cleared and mounted . Then the samples were simultaneously observed by two oral and maxillofacial pathologists with an optical microscope (Olympus/Tokyo) in a magnification of 400 in five non-overlapping fields. The slides were examined in terms of the percentage of stained cells, intensity of staining, the pattern of staining, and the location of stained cells. For the percentage of stained cells two oral and maxillofacial pathologists, simultaneously counted the stained cells and calculated the mean percentage of stained cells. The pattern of staining was categorized into membranous, cytoplasmic, and membranous-cytoplasmic groups . The intensity of staining was also evaluated as follows: very weak: hardly visible with 400 magnification, weak: easily seen with 400 magnification, weak to moderate: hardly seen at 100 magnification, moderate to severe: easily seen at 100 magnification, severe: seen at 40 magnification . The location of stained cells was categorized into basal-parabasal, basal-parabasal-intermediate, upper intermediate, intermediate, and all layers groups. Samples were selected by easy sampling method. The following formula was used to determine the sample size in each group, assuming the number of samples in each group was equal (α = 0.05, (power of test) 1- β = 0.80, d = 15). [12pt]{minimal} $$n=_{1-}+{Z}_{1- })}^{2}({ }_{1}^{2}+{ }_{2}^{2})}{{d}^{2}}$$ In this study, 50 samples were used (20 for each group and 10 for the control group). Data were analyzed in SPSS software version 20. 
The data were analyzed by descriptive statistical methods and Kruskal-Wallis, Man-Whitney-U, and Fisher’s exact tests. The significance level was considered α = 0.05. The sample consisted of 39 paraffin blocks which were in three groups of reticular OLP, erosive OLP, and control with the number of 15, 16, and eight, respectively. Percentage of stained cells The mean percentage of stained cells in the first group (reticular) was 12.72 ± 7.30, in the second group (erosive) was 19.07 ± 13.58, in the third group (control) was 8.30 ± 4.77 (Fig. ). The Mann-Whitney-U test showed that there was no significant difference in the mean percentage of stained cells between erosive OLP and reticular OLP ( P -value = 0.213) and between reticular OLP and control group ( P -value = 0.137), but there was a significant difference between erosive OLP and control group ( P -value = 0.035). Pattern of staining The number and percentage of samples in different groups with various staining patterns showed in Table . Fisher’s exact test showed that there was no significant difference between the frequency distribution of staining patterns in 3 types of lesions ( P -value = 0.90) (Fig. ). Intensity of staining The number and percentage of samples in different groups of staining Intensity showed in Table . Kruskal-Wallis test showed that there was no significant difference between the intensity of staining in 3 groups ( P -value = 0.19) (Fig. ). Location of stained cells The number and percentage of samples in different groups of stained cells location showed in Table . Kruskal-Wallis test showed that there was no significant difference between the location of stained cells in different layers of the epithelium in the 3 groups ( P -value = 0.90) (Fig. ). The mean percentage of stained cells in the first group (reticular) was 12.72 ± 7.30, in the second group (erosive) was 19.07 ± 13.58, in the third group (control) was 8.30 ± 4.77 (Fig. ). The Mann-Whitney-U test showed that there was no significant difference in the mean percentage of stained cells between erosive OLP and reticular OLP ( P -value = 0.213) and between reticular OLP and control group ( P -value = 0.137), but there was a significant difference between erosive OLP and control group ( P -value = 0.035). The number and percentage of samples in different groups with various staining patterns showed in Table . Fisher’s exact test showed that there was no significant difference between the frequency distribution of staining patterns in 3 types of lesions ( P -value = 0.90) (Fig. ). The number and percentage of samples in different groups of staining Intensity showed in Table . Kruskal-Wallis test showed that there was no significant difference between the intensity of staining in 3 groups ( P -value = 0.19) (Fig. ). The number and percentage of samples in different groups of stained cells location showed in Table . Kruskal-Wallis test showed that there was no significant difference between the location of stained cells in different layers of the epithelium in the 3 groups ( P -value = 0.90) (Fig. ). The results of this study showed that, there was a significant difference only between the percentage of stained cells in erosive OLP and the control group. However, there was no significant difference in the mean percentage of stained cells between erosive OLP and reticular OLP and between reticular OLP and control. From the control group to the erosive lichen planus, there is an ascending trend in EGFR staining. 
So, it probably shows that the malignant changes might increase in erosive type. According to the researchers’ opinion, clinician should emphasize on erosive type and consider the malignant change for it. In 2012, Zhao et al. described that there were significant differences in the expression of EGFR between the OLP with erosive and ulcerative lesions and without erosive and ulcerative lesions. Strongly positive rates of EGFR were seen in erosive and ulcerative OLP. On the other hand, Cortés-Ramírez et al. in 2014 reported that the EGFR is not related to any of the specific clinical and histopathological aspects of the OLP, and they suggested that more complex and different molecular mechanisms are involved in the process. In 2015, Kouhsoltani et al. showed that the lack of Her-2/neu (as a protein of EGFR family) overexpression indicates that molecular targeting of Her-2/neu protein is not recommended as adjuvant therapy in OSCC and OLP. The present study showed that there was no significant difference between the frequency distribution of staining patterns in 3 types of lesions. In Cortés-Ramírez et al. study in 2014, which was conducted on different types of OLP, most of the samples had membrane-cytoplasmic staining. In contrast, in present study, the samples which have the typical characteristics of OLP showed a more cytoplasmic staining pattern. In the study by Kumagai et al. in 2010, the occurrence of protein marker in the control group (normal mucosa) was observed more in the basal layer, while in the samples of OLP, all cases showed EGFR expression in basal and parabasal epithelial cells. Thirty-nine cases (88.6%) showed EGFR expression in the spinous layer and only in 5 (11.4%) cases reached the superficial layers. They also reported an increase in the expression of EGFR in keratinocyte cells of OLP lesions. In the present study, the occurrence of EGFR protein marker was seen in all layers and was not limited to basal and parabasal layers but there was no significant difference between the location of stained cells in different layers of the epithelium. In a recent study by Ma et al. in 2022, among 52 possible targets, TNF, IL-6, CD4, EGFR, IL1B, IL10, AKT1, VEGFA, TP53, and IL2 had the highest degree values, indicating that these targets are important in the development of OLP and are expected to be targeted for clinical treatment of OLP. They also recommended that Cordyceps sinensis as a traditional Chinese medicine could be a beneficial choice in the OLP treatment. The present study concluded that EGFR might probably utilized as a marker for the treatment for erosive type of lichen planus. Whereas there was no significant difference between EGFR expression in reticular OLP and erosive OLP and control group; therefore, EGFR is not applicable for the reticular type. Since reticular OLP is asymptomatic in most patients in comparison to erosive OLP which shows severe signs and symptoms, EGFR as an important treatment target for erosive type can be possibly noticed in this study. González-Moles et al. presented a hypothesis about the potential for the malignant transformation of OLP. In this scoping review, 20 systematic reviews and meta-analyses published until October 2022 were critically appraised. 
They recommended that OLP the potential for the malignant transformation hypothetically derives from the aggressions of the inflammatory infiltrate and a particular type of epithelial response based on increased epithelial proliferation, evasion of growth-suppressive signals, and lack of apoptosis. Currently, the treatment of OLP is palliative. Patients with this disease commonly use adrenocorticosteroids and immunosuppressants to reduce inflammation and promote healing. However, OLP is prone to recurrence, and long-term hormone therapy has important side effects, such as mucosal atrophy, secondary candidiasis, and dryness . Therefore, finding medications without major adverse reactions is very crucial. Various studies about EGFR expression in OLP lesions showed controversial results, however, it seems that this protein marker is associated with OLP. Therefore, further studies are recommended to clearly show this association and find efficient treatments for OLP. It is also suggested to carry out studies in which periodic follow-up of patients is done to check the incidence of oral cancer and malignancy. In conclusion, the results of this study showed that in comparison of reticular OLP, erosive OLP, and the control group there was a significant difference just between erosive OLP and the control group in the percentage of stained cells. There were no differences between these groups in pattern, intensity, and location of staining. So, the reticular lichen planus is as notable clinically as erosive type in terms of having the malignancy potential. |
FIGO staging in ovarian carcinoma and histological subtypes | d4eeaa5a-fdab-4d31-9fb1-01f69f785fa3 | 7286752 | Gynaecology[mh] | |
Added diagnostic value of routinely measured hematology variables in diagnosing immune checkpoint inhibitor mediated toxicity in the emergency department | 99108535-3383-40ae-bb00-1daf96c7f1ed | 10278460 | Internal Medicine[mh] | INTRODUCTION Within the immunotherapeutic field of cancer treatment, multiple new and promising treatment options have emerged over the past years. Among these, immune checkpoint inhibitors (ICI) are increasingly being used as an oncologic treatment strategy for multiple types of cancer and have drastically improved survival of responding patients. For example, patients with advanced melanoma treated with combined nivolumab and ipilimumab therapy have shown to result in a median overall survival of over 60 months, whereas the median survival of patients with metastatic melanoma used to be less than 1 year before the introduction of checkpoint inhibitors. The proportion of cancer patients benefiting from ICI is increasing rapidly, with now over 40% of cancer patients qualifying for ICI treatment. However, their use is associated with a wide variety of immune‐related adverse events (irAE), such as auto‐immune colitis and pneumonitis. Because of overlap in clinical presentation, it can be difficult to differentiate these irAE from progressive disease or other inflammatory conditions, such as infections. Especially in the emergency department (ED) where time and resources are limited, this may lead to diagnostic delay, inappropriate treatment, and a considerable amount of (unnecessary) diagnostic testing. , Accurate and early diagnosis of patients presenting in the ED with irAE is therefore key to start adequate treatment as soon as possible. , Currently, there are only a few biomarkers available that can aid in diagnosing irAE. , A solution to this problem might be found in routinely measured hematological variables. Bacterial infection and viral infections are commonly characterized by high neutrophil and lymphocyte counts respectively, whereas auto‐immune diseases and allergies typically show high eosinophil counts. Previous research has found associations between irAE and increased counts of standard hematology measurements (e.g., absolute lymphocyte count and eosinophil count). In addition, changes in B‐ and T‐cell receptor repertoire show associations with irAE onset and prognosis. However, none of these biomarkers have been extensively validated or are used in clinical practice. Most modern hematology analyzers not only provide blood cell counts, but also measure morphologic characteristics, such as cell size, intrinsic properties and cell viability that carry diagnostic and prognostic value. This raises the question whether they may also be of use in the setting of immunological toxicity. , , To answer these types of questions, scrutinizing complex datasets with conventional statistical methods, such as logistic regression, do not provide stable estimates of the variable's coefficients as models contain too many variables and a low number of samples. New advanced statistical and machine learning (ML) methods are able to remove irrelevant variables thereby reducing the number of variables. In addition, variables of high importance, also known as predictors, can be identified by evaluating the trained coefficient of the trained model. This way, ML allows for the possible identification of new biomarkers and exploration of new horizons in research to aid irAE diagnosis. 
The aim of this study was therefore to determine the added value of routinely measured hematology characteristics, modeled through ML, as compared to the standard diagnostic practice. This may aid in the diagnosis of irAE in the ED and understanding of the pathophysiology.
METHODS 2.1 Study population This retrospective observational study included all visits to the ED of the University Medical Center Utrecht (UMC Utrecht) between 2013 and 2020 of patients who were being treated with any type of ICI for any type of cancer, until 3 months after cessation of treatment. Because irAE can occur even after cessation of treatment, we chose to include ED visits up to 3 months after treatment with ICI ended. The cutoff of 3 months was chosen after discussion between the authors. If patients had more than one disease episode (defined as a consecutive period with infection‐like symptoms), all patient's ED visits were included separately, whereas for patients with multiple ED visits during one disease episode, only the first visit was included. If patients visited the ED multiple times for the same condition (e.g., due to worsening of symptoms), only the first visit was included. 2.2 Data collection For all ED visits, demographic (age and sex), medication, and hematology data were extracted from the Utrecht Patient Orientated Database (UPOD). In brief, UPOD is a relational database combining clinical characteristics, medication, and laboratory measurements of patients in the UMC Utrecht since 2004. We used hematological variables measured by the CELL‐DYN Sapphire hematology analyzer ( Abbott diagnostics ). The CELL‐DYN Sapphire is a cell counter equipped with a 488‐nm blue diode laser and uses multiple techniques, such as electrical impedance, spectrophotometry, and laser light scattering, to measure morphological characteristics of leukocytes (incl. 5‐part differential), red blood cells (RBCs), and platelets for both classification and enumeration. Each time a component of a complete blood cell count (CBC) is requested, all data generated by the hematology analyzer are automatically stored in UPOD, including a substantial number of raw and research‐only values and background data on cell characteristics which are made available for research purposes. Only visits with available Sapphire data within the first 4 h after ED presentation were included in this study to ensure we only used data from patients with infection‐like symptoms during the ED visit. UPOD data acquisition and management is in accordance with current regulations concerning privacy and ethics. 2.3 irAE label definition A manual chart review was done for all ED visits within our study population by two of the authors (TVtH and BV). Visits for evidently unrelated conditions were excluded. We recorded both the preliminary and definite diagnosis. The preliminary diagnosis was defined as the diagnosis made by the treating physician in the ED and was characterized as either suspected irAE or other . The definitive diagnosis was defined as the diagnosis made by the treating physician at discharge from the hospital or at the end of treatment and was characterized as irAE or other . Ambiguous cases were resolved through consensus. 2.4 Model development Two models were trained to evaluate the added diagnostic performance of the hematology variables for irAE diagnosis. The first model (base) assessed the preliminary diagnosis, sex, and age with logistic regression thereby imitating clinical practice at the ED, whereas the second model (extended) also included the 77 additional hematology variables. 
A quality control protocol was performed to remove variables with no additional predictive value during model development: hematology variables with a Pearson correlation of >0.80 or with low number of unique ( n = 5) values were removed. The extended model was trained using lasso machine learning that can automatically reduce the number of variables, thereby reducing the risk of overfitting and aiding the interpretability of the model. Means and standard deviations are shown for normally distributed variables whereas medians and inter‐quartile ranges (IQR) are shown for non‐normally distributed variables. Model performance was assessed using cross validation (CV). With CV, the data are split in K number of partitions (folds), of which K‐1 folds are used for training and 1 for testing. This exercise is repeated K times resulting in K models with K performance estimates. Contrary to the conventional train‐and‐test split, multiple models are trained on multiple data splits, thereby using all data to assess the model's performance. The lasso algorithm performs shrinkage of coefficients that can get as small as 0, thereby removing variables. The lambda hyper‐parameter of lasso determines the degree of shrinkage and was optimized in a double loop cross‐validation (DLCV) scheme, also known as nested cross validation (Figure ). A K of 10 was used for both the CV and DLCV schemes. 2.5 Model evaluation The discrimination of models was assessed by plotting receiver operator characteristic (ROC) curves. The area under the ROC (AUROC) is a measure of discrimination, an AUROC of 1 indicates a perfect model, whereas an AUROC 0.5 indicates a random model. The 95% confidence interval (CI) of the AUROC was computed with the R cvAUC package by evaluating the test performances of the two model configurations trained in both CV schemes. Variable coefficients of the ten models trained in the DLCV were evaluated as variable importance (predictors). The clinical application and value of the trained models was evaluated with both calibration plots and net benefit curves. Calibration plots portray the agreement between predicted probabilities and the observed frequency of irAE. A calibration with an intercept 0 and slope of 1 shows perfect calibration, whereas a slope of >1 shows a model that overestimates outcome and a slope of <1 underestimates diagnosis. 80% and 95% CI intervals of the calibration plots were generated with the R givitR package. Net benefit is a measure to evaluate the clinical benefit of a prediction model by comparing the benefit [treating diseased, true positives (TP)] and cost [treating non‐diseased, false‐positive (FP)]. Net benefit is assessed by subtracting the cost from the benefit for the complete range of predictions values ( p t ). Formula 1 shows that the net benefit increases by the number of TP and is penalized by the number of non‐diseased (FP), especially when the prediction threshold value increases p t 1 − p t . Besides the net benefit, the number needed to treat (NNT) is shown as a comparison to how healthcare professionals consider whether the patient has a specific illness or that treatment is required. All analyses were performed in R version 4.1.2. (1) net benefit p t = TP n − FP n p t 1 − p t 2.6 Post hoc subgroup analysis To assess the independence of the identified biomarkers we adjusted for the baseline clinical variables, we performed a multivariate analysis including the identified biomarkers, age, sex, cancer type, and ICI medication. 
To reduce the number of coefficients and to remove groups with low prevalence, various cancer types, and ICI medications were grouped. A second post hoc analysis was performed to check whether the identified biomarkers were associated with disease severity as measured by CTCAE grade.
Study population This retrospective observational study included all visits to the ED of the University Medical Center Utrecht (UMC Utrecht) between 2013 and 2020 of patients who were being treated with any type of ICI for any type of cancer, until 3 months after cessation of treatment. Because irAE can occur even after cessation of treatment, we chose to include ED visits up to 3 months after treatment with ICI ended. The cutoff of 3 months was chosen after discussion between the authors. If patients had more than one disease episode (defined as a consecutive period with infection‐like symptoms), all patient's ED visits were included separately, whereas for patients with multiple ED visits during one disease episode, only the first visit was included. If patients visited the ED multiple times for the same condition (e.g., due to worsening of symptoms), only the first visit was included.
Data collection For all ED visits, demographic (age and sex), medication, and hematology data were extracted from the Utrecht Patient Orientated Database (UPOD). In brief, UPOD is a relational database combining clinical characteristics, medication, and laboratory measurements of patients in the UMC Utrecht since 2004. We used hematological variables measured by the CELL‐DYN Sapphire hematology analyzer ( Abbott diagnostics ). The CELL‐DYN Sapphire is a cell counter equipped with a 488‐nm blue diode laser and uses multiple techniques, such as electrical impedance, spectrophotometry, and laser light scattering, to measure morphological characteristics of leukocytes (incl. 5‐part differential), red blood cells (RBCs), and platelets for both classification and enumeration. Each time a component of a complete blood cell count (CBC) is requested, all data generated by the hematology analyzer are automatically stored in UPOD, including a substantial number of raw and research‐only values and background data on cell characteristics which are made available for research purposes. Only visits with available Sapphire data within the first 4 h after ED presentation were included in this study to ensure we only used data from patients with infection‐like symptoms during the ED visit. UPOD data acquisition and management is in accordance with current regulations concerning privacy and ethics.
irAE label definition A manual chart review was done for all ED visits within our study population by two of the authors (TVtH and BV). Visits for evidently unrelated conditions were excluded. We recorded both the preliminary and definite diagnosis. The preliminary diagnosis was defined as the diagnosis made by the treating physician in the ED and was characterized as either suspected irAE or other . The definitive diagnosis was defined as the diagnosis made by the treating physician at discharge from the hospital or at the end of treatment and was characterized as irAE or other . Ambiguous cases were resolved through consensus.
Model development Two models were trained to evaluate the added diagnostic performance of the hematology variables for irAE diagnosis. The first model (base) assessed the preliminary diagnosis, sex, and age with logistic regression thereby imitating clinical practice at the ED, whereas the second model (extended) also included the 77 additional hematology variables. A quality control protocol was performed to remove variables with no additional predictive value during model development: hematology variables with a Pearson correlation of >0.80 or with low number of unique ( n = 5) values were removed. The extended model was trained using lasso machine learning that can automatically reduce the number of variables, thereby reducing the risk of overfitting and aiding the interpretability of the model. Means and standard deviations are shown for normally distributed variables whereas medians and inter‐quartile ranges (IQR) are shown for non‐normally distributed variables. Model performance was assessed using cross validation (CV). With CV, the data are split in K number of partitions (folds), of which K‐1 folds are used for training and 1 for testing. This exercise is repeated K times resulting in K models with K performance estimates. Contrary to the conventional train‐and‐test split, multiple models are trained on multiple data splits, thereby using all data to assess the model's performance. The lasso algorithm performs shrinkage of coefficients that can get as small as 0, thereby removing variables. The lambda hyper‐parameter of lasso determines the degree of shrinkage and was optimized in a double loop cross‐validation (DLCV) scheme, also known as nested cross validation (Figure ). A K of 10 was used for both the CV and DLCV schemes.
Model evaluation The discrimination of models was assessed by plotting receiver operator characteristic (ROC) curves. The area under the ROC (AUROC) is a measure of discrimination, an AUROC of 1 indicates a perfect model, whereas an AUROC 0.5 indicates a random model. The 95% confidence interval (CI) of the AUROC was computed with the R cvAUC package by evaluating the test performances of the two model configurations trained in both CV schemes. Variable coefficients of the ten models trained in the DLCV were evaluated as variable importance (predictors). The clinical application and value of the trained models was evaluated with both calibration plots and net benefit curves. Calibration plots portray the agreement between predicted probabilities and the observed frequency of irAE. A calibration with an intercept 0 and slope of 1 shows perfect calibration, whereas a slope of >1 shows a model that overestimates outcome and a slope of <1 underestimates diagnosis. 80% and 95% CI intervals of the calibration plots were generated with the R givitR package. Net benefit is a measure to evaluate the clinical benefit of a prediction model by comparing the benefit [treating diseased, true positives (TP)] and cost [treating non‐diseased, false‐positive (FP)]. Net benefit is assessed by subtracting the cost from the benefit for the complete range of predictions values ( p t ). Formula 1 shows that the net benefit increases by the number of TP and is penalized by the number of non‐diseased (FP), especially when the prediction threshold value increases p t 1 − p t . Besides the net benefit, the number needed to treat (NNT) is shown as a comparison to how healthcare professionals consider whether the patient has a specific illness or that treatment is required. All analyses were performed in R version 4.1.2. (1) net benefit p t = TP n − FP n p t 1 − p t
Post hoc subgroup analysis To assess the independence of the identified biomarkers we adjusted for the baseline clinical variables, we performed a multivariate analysis including the identified biomarkers, age, sex, cancer type, and ICI medication. To reduce the number of coefficients and to remove groups with low prevalence, various cancer types, and ICI medications were grouped. A second post hoc analysis was performed to check whether the identified biomarkers were associated with disease severity as measured by CTCAE grade.
RESULTS 3.1 Patient characteristics Between 2013 and 2020, 409 ED visits of 257 patients who were treated with ICI and had available blood counts were included in this study (mean ED visits per patient 1.6). The irAE diagnosis of 91 visits were inconclusive from the medical records, of which the diagnosis was later adjusted in 24 cases. In both the other ( n = 268) and irAE ( n = 141) sub‐groups there were more males, 63.1% and 64.5%, respectively (Table ). Mean age did not differ between the other (62.2) and irAE group (61.7). The use of both ipilimumab and nivolumab were significantly higher in the irAE group ( p < 0.01), whereas the use of nivolumab and pembrolizumab were significantly lower in the irAE group ( p < 0.01). An overview of the irAE diagnoses is shown in Table . 3.2 Model performance After removing variables that did not meet our quality control criteria, 53 of the 77 Sapphire variables were used in the extended model (Table and Figure ). The base model had an AUROC of 0.67 (0.60–0.79 95% CI) and the extended model had an AUROC of 0.79 (0.75–0.84 95% CI), a difference in 0.13. The training performance was marginally higher for both the base and extended model as compared to the test performance, 0.74 (0.72–0.76 95% CI) and 0.86 (0.84–0.87 95% CI), respectively, providing evidence there was no overfitting. In line with the AUROC metrics, the extended model trained on all data shows the best ROC and PRC curves (Figure ). 3.3 Discriminative metrics To assess the potential value in clinical practice of the extended model, predictions of the base and extended models were evaluated with both calibration and net benefit plots. The extended model showed better calibration than the base model (Figure ). The 95% CI of the base model are very wide compared to the extended model and the predictions of the extended model are more equally distributed. In addition, decision curve analysis showed improved net benefit of the extended model as compared to the base model over the complete threshold probability range (Figure ). 3.4 Variable importance Variables' coefficients, as well as the number of times a variable was selected by the extended model, were documented during training, and are shown in Figure and Table . The preliminary diagnosis was highly predictive for irAE diagnosis in both the base and extended model with a coefficient of 3.53 ± 0.14 and 2.88 ± 0.18, respectively. The extended model also identified the following Sapphire variables as predictors for irAE diagnosis: number of eosinophils (eos), red blood cell count measured with impedance (rbci), coefficient of variance neutrophil depolarization (ndcv), and red blood cell distribution width (rdw), of which the latter was negatively associated with irAE. Eos was highly correlated with percentage of eosinophils (peos) and rbci with other red blood cell measurements variables (rbco, hgb, and hct) (Table ). The sex and age variables were not selected by lasso in any of the ten iterations in the DLCV scheme. 3.5 Post hoc subgroup analysis After adjusting for age, sex, cancer type (grouped as skin, lung, urological or other) and ICI medication (grouped as ipilimumab, nivolumab, pembrolizumab, ipilimumab, and nivolumab, or other) we found that three of the four identified variables were still significantly associated with irAE, namely: eos ( p ‐value 0.0144), rbci ( p ‐value 0.0035), and rdw ( p ‐value 0.0003). In this model we did not find a significant association for ndcv ( p ‐value 0.0781). 
Furthermore, we did not find an association between the values of the identified variables and the irAE severity as measured by CTCAE grade (Supplementary Figure).
Patient characteristics Between 2013 and 2020, 409 ED visits of 257 patients who were treated with ICI and had available blood counts were included in this study (mean ED visits per patient 1.6). The irAE diagnosis of 91 visits were inconclusive from the medical records, of which the diagnosis was later adjusted in 24 cases. In both the other ( n = 268) and irAE ( n = 141) sub‐groups there were more males, 63.1% and 64.5%, respectively (Table ). Mean age did not differ between the other (62.2) and irAE group (61.7). The use of both ipilimumab and nivolumab were significantly higher in the irAE group ( p < 0.01), whereas the use of nivolumab and pembrolizumab were significantly lower in the irAE group ( p < 0.01). An overview of the irAE diagnoses is shown in Table .
Model performance After removing variables that did not meet our quality control criteria, 53 of the 77 Sapphire variables were used in the extended model (Table and Figure ). The base model had an AUROC of 0.67 (0.60–0.79 95% CI) and the extended model had an AUROC of 0.79 (0.75–0.84 95% CI), a difference in 0.13. The training performance was marginally higher for both the base and extended model as compared to the test performance, 0.74 (0.72–0.76 95% CI) and 0.86 (0.84–0.87 95% CI), respectively, providing evidence there was no overfitting. In line with the AUROC metrics, the extended model trained on all data shows the best ROC and PRC curves (Figure ).
Discriminative metrics To assess the potential value in clinical practice of the extended model, predictions of the base and extended models were evaluated with both calibration and net benefit plots. The extended model showed better calibration than the base model (Figure ). The 95% CI of the base model are very wide compared to the extended model and the predictions of the extended model are more equally distributed. In addition, decision curve analysis showed improved net benefit of the extended model as compared to the base model over the complete threshold probability range (Figure ).
Variable importance Variables' coefficients, as well as the number of times a variable was selected by the extended model, were documented during training, and are shown in Figure and Table . The preliminary diagnosis was highly predictive for irAE diagnosis in both the base and extended model with a coefficient of 3.53 ± 0.14 and 2.88 ± 0.18, respectively. The extended model also identified the following Sapphire variables as predictors for irAE diagnosis: number of eosinophils (eos), red blood cell count measured with impedance (rbci), coefficient of variance neutrophil depolarization (ndcv), and red blood cell distribution width (rdw), of which the latter was negatively associated with irAE. Eos was highly correlated with percentage of eosinophils (peos) and rbci with other red blood cell measurements variables (rbco, hgb, and hct) (Table ). The sex and age variables were not selected by lasso in any of the ten iterations in the DLCV scheme.
Post hoc subgroup analysis After adjusting for age, sex, cancer type (grouped as skin, lung, urological or other) and ICI medication (grouped as ipilimumab, nivolumab, pembrolizumab, ipilimumab, and nivolumab, or other) we found that three of the four identified variables were still significantly associated with irAE, namely: eos ( p ‐value 0.0144), rbci ( p ‐value 0.0035), and rdw ( p ‐value 0.0003). In this model we did not find a significant association for ndcv ( p ‐value 0.0781). Furthermore, we did not find an association between the values of the identified variables and the irAE severity as measured by CTCAE grade (Supplementary Figure).
DISCUSSION Accurate identification of irAE in patients using ICI in the ED is of vital importance to guide treatment decisions. With new statistical methods and ML, we explored the possible added diagnostic value of 77 hematological variables measured by the CELL‐DYN Sapphire in diagnosing irAE in patients using ICI as compared to standard clinical practice. The extended model showed improvement in discrimination, calibration, and net benefit as compared to the base model, indicating that the hematological variables indeed have added value in the diagnostic process of identifying irAE in patients using ICI in the emergency department setting. Our extended model showed better performance as well as calibration over the base model. However, due to the low number of values of the base model and the good predictive performance of the preliminary diagnosis, the predictions of the base model were not equally distributed. The net benefit of the extended model was better than the base model, especially in the therapeutic range around 25%. The exact threshold for the number needed to treat will vary depending on the characteristics of the individual patient and the severity of the symptoms. A false‐positive diagnosis of irAE will lead to cessation of the checkpoint inhibitor, which would possibly withhold a life‐saving therapy from the patient. On the other hand, a false‐negative diagnosis will lead to a delayed treatment for irAE, which is potentially fatal. Of all variables, the preliminary diagnosis was deemed highly important by both the base and extended models indicating that the first diagnosis of the physician is a very good proxy for irAE diagnosis. Both age and sex showed low importance in the base model and were not selected by the lasso algorithm in any of the 10 DLCV iterations, which is in line with existing evidence. Interestingly, only a few of the 77 hematological variables were selected by the lasso algorithm in each iteration. This diagnostic study cannot not determine causality. However, a causal relationship can be postulated based on the literature. Eosinophiles are thought to play a pathogenic role in auto‐immune disorders and are known to be associated with irAE. Neutrophil depolarization is a feature of neutrophil activation, which has also been associated with auto‐immunity, but this has not been studied extensively. We found the red blood cell distribution width (rdw) to be negatively associated with irAE. Increased rdw is known to be associated with infections, which are arguably the most likely alternative diagnosis when considering irAE. Our study has some limitations. The population is highly heterogeneous, with multiple types of tumors and treatments. This may have hampered the identification of a specific predictor for a particular subset of patients. Unfortunately, we did not have enough data to stratify patients based on either cancer type or medication. Even though the post hoc group analysis showed significant results for 3 of the 4 identified variables after adjusting for the baseline characteristics, future research is needed to validate these results. Moreover, the diagnoses were retrospectively defined or changed as our data was collected on routine basis. To our knowledge, this study is one of the first of its kind in exploring the diagnostic potential of these raw and research‐only hematological variables using ML in the emergency department setting. 
Since the raw data from this type of hematology analyzer are not ubiquitously available, we were not able to externally validate our results. As a result, this study has to be viewed as exploratory and more research is required before these hematological variables, either individually or in a model, can be used in clinical practice. The diagnostic performance of such a model might be improved by combining hematological variables with other new sets of biomarkers, as well as the preliminary diagnosis. This study raises the question if the hematological variables might also have diagnostic value in the setting of other diseases and treatments. , , As they are inexpensive and relatively easily and rapidly obtained in general blood counts, they could be an interesting new tool in future diagnostic research. As shown here, a clinical diagnostic model may aid the clinical decision‐making process of a physician by providing a continuous prediction score that can be combined with the professional interpretation by a clinical chemist to accommodate integral diagnostics of a patient's clinical state. Instead of looking at differences between patients using cross‐sectional data, within‐patient differences may be a better approximation of a patient's health trajectory potentially allowing for predicting the incidence of irAE at the start of ICI treatment. Overall, we show that hematological variables show diagnostic performance in the identification of irAE in patients using ICI at the ED and that they have added value compared to standard diagnostic practice. Our results suggest new directions for further research using (advanced) hematological variables for irAE diagnosis in the emergency setting.
Michael S. A. Niemantsverdriet: Conceptualization (equal); formal analysis (equal); methodology (equal); writing – original draft (equal). Bram E. L. Vrijsen: Conceptualization (equal); data curation (equal); formal analysis (equal); writing – original draft (equal). Thérèse Visser 't Hooft: Data curation (equal). Karijn P. M. Suijkerbuijk: Data curation (equal). Wouter W. van Solinge: Supervision (equal). Maarten J. ten Berg: Conceptualization (equal); formal analysis (equal). Saskia Haitjema: Conceptualization (equal); data curation (equal); formal analysis (equal).
None.
MN is employed by SkylineDx, Rotterdam and receives a PhD fellowship from SkylineDx, Rotterdam. KS: Consulting/advisory relationship: Bristol Myers Squibb, Merck Sharp and Dome, Abbvie, Pierre Fabre, Novartis. Honoraria received: Novartis, Roche, Merck Sharp and Dome. Research funding, TigaTx, Bristol Myers Squibb, Philips, unrelated to this project. All paid to institution and outside the submitted work. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
This study was performed in accordance with the Declaration of Helsinki and the ethical guidelines of our institution. The institutional review board of the UMC Utrecht approved this study (reference number 20–591/C) and waived the need for informed consent as only pseudonymized data were used for this study. Data collection and handling was conducted in accordance with European privacy legislation (GDPR).
Data S1: Click here for additional data file.
|
Competencia comunicativa de médicos familiares en una unidad de medicina familiar | 533431bf-ce24-44b4-a6c3-31d11a123a7f | 10395916 | Family Medicine[mh] | La comunicación es un fenómeno inherente a toda interrelación humana. En el ámbito de la Medicina, la comunicación clínica puede suceder entre profesionales de la salud o entre los profesionales y sus pacientes, familiares, o personas cercanas. En el área de la salud, la comunicación es una herramienta indispensable para la relación médico-paciente, con la que se logra así una atención más detallada, adherencia al tratamiento y mejores resultados clínicos de control de enfermedades crónico-degenerativas. A partir de lo dicho por el paciente, el médico contará con información completa y precisa para realizar el cumplimiento de sus funciones y determinar acertadamente el diagnóstico y el tratamiento a seguir. Por lo tanto, es fundamental tomar en cuenta las expectativas de los pacientes, sean de índole médica o administrativa, sin ignorar la evidencia clínica. , , Cuando un paciente a acude a consulta, lo primero que percibe es el aspecto físico del profesional y del entorno, su expresión facial, postura, la distancia que mantiene, su mirada, etcétera; es decir, lo que puede condicionar su interacción. Algunos autores estiman que el individuo recibe su información en un 83% de la vista, 1% del gusto, 11% del oído, 3% del olfato y 2% del tacto. Se considera que la comunicación no verbal puede ejercer cinco veces más efecto sobre la comprensión del mensaje que la comunicación verbal. El personal médico no percibe las expectativas de los pacientes durante la consulta, más si se toma en cuenta que la atención centrada en el paciente respeta estas mismas sin ignorar la evidencia clínica. Diferentes factores pueden influir en la percepción de los pacientes y se clasifican en dos áreas: las condiciones relacionadas con la persona y las condiciones externas. Por lo tanto uno de los principales retos para establecer un adecuado intercambio de información es conocer la audiencia, público o interlocutor al cual se dirige el mensaje para así determinar de manera adecuada las características de la comunicación y entonces el mensaje sea recibido lo más fielmente posible. Por otra parte, es trascendente la vinculación del paciente con el personal médico, lo cual en muchos casos constituye una acción casi terapéutica, en la que el paciente se siente escuchado, atendido y con conocimiento del significado de su enfermedad. En caso de presentar una mala comunicación, esto se refleja en la incomprensión de su enfermedad, falta de adherencia terapéutica y pérdida de confianza y respeto a los profesionales de la salud. Los pacientes y familiares esperan que además de ser experto técnicamente en sus habilidades clínicas, sea capaz de mostrar empatía a partir de sus acciones, gestos y palabras. Si bien para la práctica de cualquier médico es importante contar con habilidades de comunicación, para el médico familiar lo es aún más, ya que su práctica diaria consiste en considerar no solo el aspecto biológico de los pacientes, sino también los aspectos psicológicos y sociales. La demanda en Medicina Familiar en la Unidad de Medicina Familiar No. 
27 amounts to 1600 outpatient consultations per day, so knowing patients' perception of physician performance is essential for identifying the interpersonal and communication skills evaluated by the Communication Assessment Tool (CAT), an instrument validated in Spanish in Chile in 2020.
To determine the perception of patients over 18 years of age regarding the communication competence of family physicians, a descriptive, cross-sectional study was designed and carried out at Family Medicine Unit No. 27 of the Instituto Mexicano del Seguro Social (IMSS) in Tijuana, Baja California, which in 2021 had 263,829 adult beneficiaries distributed across 80 family medicine offices. The study was registered with Local Health Research Committee 204 and Research Ethics Committee 204-8 under registration number R-2021-204-038. The sample size was calculated with the finite-population formula, using a 95% confidence interval (95% CI) and a maximum acceptable standard error of 5%; non-probabilistic quota sampling covering both shifts of the unit yielded a sample of 197 patients. Beneficiaries aged 18 years or older who were assigned to the unit, attended a family medicine consultation, and had a chronic degenerative disease (such as diabetes mellitus, systemic arterial hypertension, asthma, rheumatoid arthritis, or chronic obstructive pulmonary disease) were invited to participate. Patients unable to read and write were excluded, as the instrument is self-administered. Pregnant patients were eliminated. Data were collected with a questionnaire on sociodemographic variables (age, sex, education, and occupation), together with the CAT instrument, which comprises 15 Likert-scale items: 14 questions about the physician and one about the medical staff. Responses were compiled in a database and analyzed with IBM SPSS, version 26. Quantitative variables were expressed as means, medians, and standard deviations (SD); qualitative variables as frequencies with percentages, presented in figures and tables. For the bivariate analysis, the chi-squared test was used, with p < 0.05 considered significant.
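The Methods invoke the finite-population formula without writing it out. For readers who wish to retrace the calculation, a standard form is given below; note that the expected proportion p assumed by the authors is not reported (the stated n = 197 would be consistent with a p of roughly 0.15, but that is our back-calculation, not a figure from the study).

```latex
% Finite-population sample size (standard form).
% N, z, and e come from the Methods; p is NOT reported by the authors
% and must be assumed in order to reproduce n = 197.
n = \frac{N\, z^{2}\, p(1-p)}{e^{2}(N-1) + z^{2}\, p(1-p)},
\qquad N = 263{,}829,\quad z = 1.96\ (95\%\ \mathrm{CI}),\quad e = 0.05 .
```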
To ensure homogeneity of the study population, 200 patients were included: 100 from the morning shift and 100 from the evening shift. Participants' ages ranged from 18 to 81 years, with a mean of 44.17 years, a median of 43.5 years, a mode of 29 years, and an SD of 15.043. Regarding shift and sex, 59 women and 41 men participated in the morning shift versus 64 women and 36 men in the evening shift, for a total of 123 women and 77 men; the corresponding percentages are shown in the figure. As for marital status, 23% (46) were single, 38% (76) married, 9% (18) divorced, 25% (50) in a common-law union, and 5% (10) widowed. Regarding education, 1% of participants had no formal schooling but could read and write, 18.5% had completed primary school, 48.5% secondary school, 24.5% high school, 7% a bachelor's degree, and 0.5% a postgraduate degree. Regarding occupation, 0.5% were students, 0.5% unemployed, 16.5% homemakers, 56% manual workers, 2.5% merchants, 5.5% technicians, 6% professionals, and 12.5% retired or pensioned. The figure shows the distribution of the participants' responses. Questions 2 and 4 were rated highest, corresponding to the items "Treated me with respect" and "Understood my main health concerns," respectively. By contrast, questions 10 and 11 received the lowest ratings ("Encouraged me to ask questions" and "Involved me in decisions as much as I wanted"). Overall, 54.6% of the study population rated the communication competence shown during family medicine consultations at Family Medicine Unit No. 27 as excellent, as shown in the figure. As an incidental finding, the percentage rated excellent was higher in the morning shift than in the evening shift (59.6% vs. 49.7%), with no significant difference between shifts, as shown in the table. Notably, 49 participants (24.5%) were seen through the Unifila program, which has been running for four years and in which the treating physician is not the one who usually manages the patient's care; this finding bears on the quality of care provided to users and is essential for identifying opportunities for improvement. Comparing patients seen through the Unifila program with those seen in their usual office, only 35.9% of Unifila responses were rated excellent on the instrument, versus 60.7% among patients not in the program, a significant difference in the perceived communication of the medical staff (p < 0.01).
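The paper reports headline percentages (54.6%, 35.9%, 60.7%) without describing the scoring step. A minimal sketch of the usual CAT convention, reporting the proportion of item responses given the top Likert rating of 5 ("excellent"), is shown below; the function name and data layout are illustrative assumptions, not taken from the study.

```python
# Illustrative CAT scoring sketch (assumed convention: percentage of
# item responses rated 5 = "excellent" on the 1-5 Likert scale).
from typing import List

def percent_excellent(responses: List[List[int]]) -> float:
    """responses holds one list of 15 item ratings (1-5) per questionnaire."""
    items = [rating for patient in responses for rating in patient]
    if not items:
        raise ValueError("no responses to score")
    return 100.0 * sum(1 for rating in items if rating == 5) / len(items)

# Two toy questionnaires: the first rates every item "excellent".
print(round(percent_excellent([[5] * 15, [4] * 10 + [5] * 5]), 1))  # 66.7
```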
The physician–patient relationship is not merely a professional relationship: it is first and foremost a human relationship in which the physician takes an interest in the patient and in his or her health and well-being, and it should therefore be based on values such as respect, empathy, responsibility, equality, honesty, and transparency. From this relationship derives the communication between physician and patient, which affects not only the two of them inside the consulting room but also the patient's environment and family sphere and, in the long run, the health system itself, since communication is a key factor in patient satisfaction, the quality of care provided, and the optimal use of resources. Care programs such as the Unifila program examined here can erode the perceived communication competence of health personnel. In addition, patients' sociocultural characteristics significantly influence the rapport established during medical care. Hernández-Torres et al. note in their research the family physician's duty to build and maintain adequate communication with the patient, since the physician must address not only the biological dimension of the individual but also the psychological and social dimensions; this requires a humanistic spirit, openness to the patient's needs, a willingness to work in a team, and the ability to pass on experience and knowledge to the patient, the family, and the health care team. This confirms that medical care should be patient-centered, and achieving and maintaining communication competence demands assertive communication, a fundamental skill for living together that is also called a "life skill." When the perception of communication among patients seen by their own family physician is compared with that of patients seen by a different physician, the perception of excellent communication drops sharply. Vega-Hurtado adds that "we must understand that there is great diversity among patients (for example, brave, direct, manipulative, anxious, demanding, kind patients, etcetera) and we must acquire the knowledge to recognize and relate to all of them," linking empathy, as a universal value for social interactions, closely to assertive communication, a transversal competence that the family physician must likewise acquire during training. Rapport should be achieved regardless of the patient's characteristics, and the family physician must promote this equity in everyday consultations. Compared with first-world countries such as the United States, our study found a lower percentage of "excellent" responses on the CAT instrument, at 54.6%. By contrast, in other Latin American countries, such as Peru and Chile, perceived communication was rated below our local result. Lifestyle and each country's health system may both contribute to the perceived quality of care, of which communication forms a part.
The quality of physicians' communication skills is increasingly recognized as a critical determinant of patient behavior change and of better disease outcomes. Beyond behavior change, effective communication skills are also recognized as important for engaging patients in their care, addressing knowledge gaps, helping patients overcome fears about treatment, and improving the overall quality and effectiveness of the therapeutic relationship. Among the findings of this study, a noticeable (albeit not statistically significant) difference was observed between the unit's morning and evening shifts; the factors driving this result should be examined, since physicians on both shifts perform the same functions and have the same specialist training. To obtain a comprehensive and objective picture of communication, however, the viewpoints of all participants need to be known, including that of the medical staff, and a limitation of this study is that only the patients' perception was investigated. A strength of the study lies in its ethical contribution to improving family physicians' competences in areas such as assertive communication, emotional intelligence, stress management, problem solving, and oral communication. These competences should be cultivated from the residency stage of the specialty, in this case family medicine, as transversal skills that form the basis of lifelong learning.
In sum, the physician–patient relationship is above all a human relationship, grounded in respect, empathy, responsibility, equality, honesty, and transparency, and the communication that flows from it affects not only the encounter in the consulting room but also the patient's family environment and, in the long run, the health system, since communication is a key factor in patient satisfaction, the quality of care, and the optimal use of resources. The communication competence of the family physicians in the unit studied, although rated better than in several other countries, still has room for improvement during the consultation. The family physician must have the tools to let patients clarify all their doubts and to present all available options, so that patients take part, in a guided way, in the agreements reached during the consultation, with the aim of achieving better health outcomes. One way to achieve this is through adequate training that leads to assertive communication in any situation. It can therefore be concluded that being a good physician requires not only medical knowledge but also comprehensive knowledge embracing the psychosocial sphere, including the art of communication and interpersonal relationships.
Radioligand therapy in the therapeutic strategy for patients with gastro-entero-pancreatic neuroendocrine tumors: a consensus statement from the Italian Association for Neuroendocrine Tumors (Itanet), Italian Association of Nuclear Medicine (AIMN), Italian Society of Endocrinology (SIE), Italian Association of Medical Oncology (AIOM)

Neuroendocrine neoplasms (NENs) comprise a heterogeneous group of malignancies arising from the diffuse neuroendocrine cell system. Gastroenteropancreatic (GEP) NENs represent the most common subtype, with an increasing worldwide incidence over the past decades. According to their histopathological features, mitotic count, and Ki-67 index, GEP-NENs are classified as neuroendocrine tumors (NETs) or neuroendocrine carcinomas (NECs). GEP-NETs are well-differentiated neoplasms, defined as grade G1 (Ki-67 < 3%, mitotic count < 2/2 mm²), G2 (Ki-67 3–20%, mitotic count 2–20/2 mm²), or G3 (Ki-67 > 20%, mitotic count > 20/2 mm²). In contrast, GEP-NECs are aggressive, poorly differentiated G3 neoplasms (Ki-67 > 20%, mitotic count > 20/2 mm²). The majority of GEP-NETs are sporadic and non-functional. Therapy goals encompass tumor excision with curative intent and/or the halting of disease progression, together with the control of clinical symptoms in functional NETs. Surgery, if feasible, represents the primary and only curative approach for localized G1 or G2 GEP-NETs but may also be considered in advanced NETs for palliative resection, debulking surgery, or hepatic metastasectomy. At diagnosis, up to 80% of GEP-NETs are locally advanced or metastatic; therefore, non-surgical strategies such as somatostatin analogs (SSA), radioligand therapy (RLT), targeted therapies with the mTOR inhibitor everolimus or the multitargeted tyrosine kinase inhibitor sunitinib, and systemic chemotherapy should be evaluated. Specifically, RLT is an effective and relatively safe option that has been investigated for over 20 years in well-differentiated NETs expressing somatostatin receptors (SSTR). RLT involves administering radionuclide-labeled SSA, which selectively target NET cells. The role of RLT in NENs is evolving, and novel strategies are under evaluation, including the implementation of new radiopharmaceuticals, combination with other therapies, and intra-arterial administration. Currently, [177Lu]Lu-[DOTA0,Tyr3]-octreotate (177Lu-DOTATATE) is indicated for unresectable, metastatic or locally advanced, G1 or G2, SSTR-positive GEP-NETs as a second-line option after SSA. The approval by the European Medicines Agency (EMA) in 2017 and the US Food and Drug Administration (FDA) in 2018 was strongly supported by the landmark phase III NETTER-1 trial, which demonstrated a significant improvement in PFS, response rate, and quality of life (QoL) in the 177Lu-DOTATATE arm compared with high-dose octreotide (60 mg/month) in patients with advanced midgut NETs progressing on SSA. To date, the optimal therapeutic algorithm for GEP-NETs, including the place of RLT, has not been standardized. Current clinical practice considers RLT when progression occurs on previous pharmacological treatment. The European Neuroendocrine Tumor Society (ENETS) supports the role of RLT in intestinal NETs as second-line therapy after the failure of SSA or as third-line therapy after the failure of everolimus.
Regarding pancreatic NETs (panNETs), RLT is recommended in lower-grade NETs in case of progression after SSA, chemotherapy, or targeted drugs (everolimus/sunitinib). The European Society for Medical Oncology (ESMO) guidelines encourage considering RLT earlier in the treatment sequence, especially in panNETs. According to the ESMO guidelines, RLT is recommended as second-line therapy in progressive midgut NETs after SSA but may also be considered in carefully selected NET G3 cases. Both the ENETS and ESMO guidelines recognize the role of RLT in managing carcinoid syndrome or functional NETs refractory to SSA. Given that the current recommendations are not fully uniform, it is crucial to provide clinicians with clear and well-structured guidance for personalized therapeutic decisions in real-world clinical practice. Therapy should be tailored to each patient according to tumor pathological and functional status, SSTR imaging, patient choice, and comorbidities. Multidisciplinary care of patients affected by GEP-NETs at referral centers is therefore pivotal for integrating and optimizing diagnostic and therapeutic strategies.

This work was developed by representatives of each of the participating scientific societies. After an initial web meeting, 10 questions were identified, focusing on the role of RLT in GEP-NETs, as detailed in the Table. The questions were limited to sporadic, well-differentiated tumors, excluding high-grade NEC and non-sporadic tumors related to hereditary syndromes; hence, the manuscript consistently uses the term “NET” in this context. Each question was addressed by a specialized team from the societies, leveraging their expertise. They conducted a PubMed literature search using the following keywords: (“radioligand therapy” OR “peptide receptor radionuclide therapy” OR “PRRT”) AND (“gastroenteropancreatic neuroendocrine tumors” OR “GEP-NETs” OR “gastroenteropancreatic NETs” OR “gastrointestinal neuroendocrine tumors” OR “pancreatic neuroendocrine tumors”). Since 177Lu-DOTATATE is the only therapy approved by regulatory authorities for treating patients with GEP-NETs, the literature search was limited to articles covering 177Lu-DOTATATE exclusively; studies focusing on treatments with other radioligands were considered outside the scope of this work. Recommendations are based on the highest-quality evidence available and the collective expertise of the authors, and are categorized by both the level of evidence (ranging from 1 to 5) and the strength of the recommendation (graded A to D), as outlined in the supplementary Table according to the GRADE system. The manuscript was refined through email discussions and virtual meetings in October 2023, January 2024, and April 2024, leading to a consensus draft. After external review and approval by the executive boards of all societies, the final draft was endorsed.

Statements

Q1. Who is the potential candidate for treatment with RLT?

RLT with 177Lu-DOTATATE is currently approved by both the Food and Drug Administration (FDA) and the European Medicines Agency (EMA) for the treatment of unresectable or metastatic, progressive, well-differentiated, G1/G2, SSTR-positive GEP-NETs. This indication is based on the multicenter, phase III, randomized, open-label NETTER-1 trial and on large retrospective cohort studies. The NETTER-1 trial randomized 229 patients with well-differentiated, metastatic midgut NETs who had progressed on standard-dose octreotide LAR to receive either 177Lu-DOTATATE at 7.4 GBq every 8 weeks or octreotide i.m.
at 60 mg every 4 weeks. The estimated PFS rate at month 20 was 65% in the 177Lu-DOTATATE arm and 11% in the control arm (HR: 0.21, P < 0.0001), with consistent benefits across the major prespecified subgroups. Moreover, RLT with 177Lu-DOTATATE significantly improved many QoL domains compared with high-dose octreotide. While the NETTER-1 trial enrolled only patients with midgut NETs, a large body of evidence suggests that RLT with 177Lu-DOTATATE is also safe and effective in SSTR-positive pancreatic and hindgut primaries. More recently, the multicenter, phase III, randomized, open-label NETTER-2 trial investigated 177Lu-DOTATATE plus octreotide versus high-dose octreotide in patients with newly diagnosed, advanced, SSTR-positive G2/G3 GEP-NETs with a Ki-67 ranging between 10% and 55%. The median PFS was significantly prolonged in the investigational arm (22.8 months) compared with the control arm (8.5 months; stratified HR: 0.28, p < 0.0001), with a significantly higher overall response rate (ORR) in the 177Lu-DOTATATE arm (43%) than in the high-dose octreotide arm (9.3%; OR: 7.81, p < 0.0001). On this basis, regulatory authorities will likely expand the formal indications for RLT to include frontline treatment of patients with GEP-NETs harboring a Ki-67 between 10% and 55%.

At present, potential candidates for RLT with 177Lu-DOTATATE include patients with advanced SSTR-positive GEP-NETs who have progressed on prior SSA therapy. Since a high tumor burden negatively impacts the efficacy of RLT, early placement of RLT in the therapeutic algorithm is advocated. Therefore, all patients with SSTR-positive advanced GEP-NETs progressing on first-line treatment should be considered for RLT. In patients with bulky, symptomatic disease (particularly pancreatic primaries) who need rapid tumor shrinkage, chemotherapy might be preferred over RLT. In the future, potential candidates for RLT will also include patients with newly diagnosed G2/G3 GEP-NETs and a Ki-67 ranging between 10% and 55%. The progressive expansion of the patient population potentially amenable to treatment with 177Lu-DOTATATE, together with the advent of 177Lu-PSMA-617 for the treatment of prostate cancer, might pose several challenges from a production and drug-administration standpoint; timely preparation is needed to avoid bottlenecks and to allow the administration of RLT to all potential candidates without delays.

Recommendation: The candidate for RLT is a patient with advanced (unresectable or metastatic) SSTR-positive GEP-NET who has progressed on prior therapy with SSA. For these patients, early incorporation of 177Lu-DOTATATE RLT into the treatment algorithm is recommended (1b - A).

Q2. How should progressive disease be defined before planning RLT?

Assessing disease progression in GEP-NETs before planning RLT involves a thorough evaluation using clinical, imaging, and laboratory methods. The key steps and considerations are as follows.

Imaging studies: Use radiological imaging such as computed tomography (CT) and magnetic resonance imaging (MRI) to document the primary tumor and metastases and to estimate tumor burden. These investigations help quantify neoplastic infiltration, pleural or ascitic fluid volume, and the presence of carcinoid heart disease (evaluated by echocardiography).
CT and MRI also identify previously unrecognized lesions or conditions needing urgent treatment, such as pathological spinal fractures, and are essential for ruling out indications for locoregional therapies, such as embolization or chemoembolization, in patients with liver-only disease.

Functional imaging: Functional imaging, particularly 68Ga-SSTR PET (SSTR-PET), is specific for NETs. This modality identifies the presence of SSTRs on tumor cells, guiding the selection of patients suitable for RLT. For lesions with high proliferative indexes, [18F]FDG PET/CT may complement the assessment by visualizing heightened metabolic activity, thereby refining the evaluation of lesions to be targeted with alternative therapies. Recent advancements include volumetric parameters, such as SSTR-derived tumor volume and total-lesion SSTR expression, as tools to aid in predicting PFS before RLT.

Biomarkers: While specific tumor markers are assessed in functioning tumors associated with clinical syndromes, biochemical markers such as chromogranin A, alkaline phosphatase, or alterations in transaminase ratios have been proposed to predict therapy effectiveness, although without definitive evidence of their predictive significance. Elevated chromogranin A levels alone should not be considered definitive evidence of disease progression, given the marker's low specificity.

Histological evaluation: For long-term survivors with multiple secondary disease localizations and historical biopsies, a further histological evaluation should be considered before planning RLT, because tumor grade may change over time. This is especially pertinent if the historical biopsy was taken from the primary tumor and there has been a significant increase in the number and sites of metastatic lesions. An [18F]FDG PET/CT scan may help guide the selection of the most aggressive metastasis for biopsy.

Clinical symptoms: Assess the patient's symptoms, including changes in flushing, diarrhea, abdominal pain, or other related complaints. Worsening or new symptoms may indicate disease progression, warranting a CT, MRI, or PET scan to provide a comprehensive overview of the patient's clinical condition.

Multidisciplinary team consultation: Engage a multidisciplinary team experienced in managing GEP-NETs, including oncologists, endocrinologists, gastroenterologists, radiologists, nuclear medicine specialists, pathologists, and surgeons, in the assessment process. Discuss the patient's case to ensure a comprehensive understanding of the disease status and alignment with the patient's wishes and expectations. Multidisciplinary management significantly enhances the level of care of patients with GEP-NETs.

Disease progression in GEP-NETs should be assessed using all of these methods. Treatment decisions rest on a comprehensive evaluation of all available information, with plans personalized to each patient's specific situation and to factors such as tumor grade, location, and overall health status.

Recommendation: An accurate multidisciplinary assessment of patients who are candidates for RLT is mandatory before initiating treatment. This assessment should include a complete radiological evaluation using CT and/or MRI, as well as SSTR-PET.
In selected patients with a significant change in disease behavior (such as a noticeable increase in tumor lesions or an evident increase in tumor burden), performing [18F]FDG PET/CT and/or repeating the histological evaluation may be proposed (3a - A).

Q3. If and how does FDG PET influence the decision to perform RLT?

While [18F]FDG PET/CT is not typically the primary imaging modality for GEP-NETs, it can be informative in certain cases and may influence decisions regarding RLT administration. EANM and ENETS guidelines recommend including [18F]FDG PET/CT in the diagnostic pathway for higher-grade G2 (Ki-67: 10–20%), G3 NET, and NEC. The 2020 ESMO guidelines offer broader recommendations, suggesting the evaluation of both [18F]FDG PET/CT and SSTR-PET for all G2-G3 NETs. However, [18F]FDG PET/CT can also be positive in low-grade G1 NETs and retains an unfavorable prognostic significance even in these tumors, confirming that the role of this technique in low-proliferation forms still requires full clarification. Some previous studies have investigated the use of both tracers, but they rely on retrospective data from populations that are not homogeneous with respect to the primary lesion. SSTR-PET and [18F]FDG PET/CT together may be indicated in selected cases, including at initial diagnosis for tumors with intermediate proliferative activity and during follow-up when assessing treatment changes or discrepancies between radiological and clinical evaluations. The following considerations outline how [18F]FDG PET/CT might influence the decision to perform RLT.

Tumor metabolic activity: [18F]FDG PET/CT provides information about the metabolic activity of tumors. NETs are generally slow-growing and may not exhibit high glucose metabolism, making [18F]FDG PET/CT less sensitive for these tumors.
The goal is to tailor the treatment plan to the individual patient’s needs and the characteristics of their neuroendocrine lesions. Recommendation [18 F]FDG PET/CT is recommended before RLT in cases with heterogeneous uptake at SSTR-PET, and in patients with suspicion of rapidly progressive disease (3b - A). Q4. What is the evidence for choosing RLT versus targeted agents after the failure of somatostatin analogues? The phase 3 trials conducted on patients with intestinal NET reported that median PFS was not reached for RLT with 177Lu-Dotatate, while it was 11 months and 16.4 months for everolimus in non-functioning and functioning tumors, respectively . Although these studies were designed on populations that are not directly comparable, the higher anti-proliferative efficacy of RLT compared with everolimus is now well established. This constitutes the first and most significant evidence in favor of choosing RLT after the failure of SSA treatment. The ORR was significantly higher with RLT than with everolimus . In patients with advanced panNET initially considered unresectable or borderline, neoadjuvant treatment with 177Lu-Dotatate enabled successful surgery in 31% of cases . Therefore, early use of RLT can alter these tumors’ natural history. Patients with GEP-NET who are candidates to receive SSA as first-line therapy typically present with low-proliferating tumors and a long life expectancy. In this setting, the second-line therapy needs to be effective, but safety is of primary importance to avoid serious adverse events and related treatment interruptions or withdrawals. The ultimate goal is to achieve long-term tumor stabilization and a good QoL. For this purpose, RLT offers a better risk/benefit ratio than targeted therapies. By comparing different therapeutic sequences, RLT was found to be safer than either everolimus or chemotherapy as a second-line therapy . From the patient’s perspective, a French national survey indicated that RLT had the best median perceived tolerance compared to all other treatments, including everolimus, sunitinib, and chemotherapy . On the other hand, toxicity, rather than tumor progression, was the most frequent reason for discontinuation of everolimus and sunitinib . The long-term safety results of the NETTER-1 trial confirmed that 177Lu-Dotatate is safe, and no new serious adverse events were reported during the long-term follow-up . Beyond the low toxicity rate, RLT has been reported to significantly impact health-related quality of life in large randomized trials performed in gastroenteropancreatic NETs, improving both global health status and specific symptoms . The phase II non-comparative OCLURANDOM study recently randomized patients with advanced, progressive, SSTR-positive panNET to receive either 177Lu-DOTATATE or sunitinib. The 12-month PFS rate was 80.5% in the RLT arm versus 42% in the sunitinib arm , thus confirming that RLT outperforms targeted agents in patients progressive on first-line therapy with SSA. Two prospective, randomized, phase II trials (COMPETE and COMPOSE) are currently underway to compare the efficacy of RLT versus everolimus or versus the best standard of care (chemotherapy or everolimus, according to the investigator’s choice) in patients with unresectable progressive GEP-NETs (ClinicalTrials.gov NCT03049189 and NCT04919226). 
Recommendation: In patients with progressive G1-G2 GEP-NETs, RLT should be preferred as a second-line treatment over targeted agents (everolimus or sunitinib) after the failure of SSA, owing to its better expected efficacy and safety profile (2b - B).

Q5. What is the evidence for choosing RLT versus chemotherapy after the failure of somatostatin analogs?

Both retrospective and prospective evidence indicates that chemotherapy is effective in treating GEP-NETs. Specifically, alkylating agents such as streptozocin, dacarbazine, and temozolomide (alone or in combination with capecitabine) have demonstrated antitumor activity in panNETs. The prospective ECOG-ACRIN E2211 phase II trial recently compared temozolomide alone with temozolomide plus capecitabine in 144 patients with advanced progressive G1-G2 panNETs. The study showed a significant improvement in PFS in the combination arm (median PFS 22.7 vs. 14.4 months, respectively) and a trend towards improved ORR (40% vs. 34%) and median OS (58.7 vs. 53.8 months, respectively), although 45% of patients experienced G3/G4 toxicity. While most well-differentiated gastrointestinal NETs tend to be resistant to alkylating agents, fluoropyrimidine-based combinations (e.g., FOLFOX) show antitumor activity in this patient population and can produce rapid tumor shrinkage. A large, multicenter, retrospective study of 508 patients with advanced GEP-NETs recently showed that second-line therapy with RLT was associated with improved PFS compared with targeted therapies or chemotherapy (median 2.2 years [95% CI, 1.8–2.8 years] vs. 0.6 years [95% CI, 0.4–1.0 years], respectively, in the matched population; P < 0.001). This effect was consistent across primary sites and hormonal statuses, although the PFS advantage was not observed in tumors with a Ki-67 greater than 10%. According to retrospective evidence, RLT is associated with improved survival outcomes in patients who did not receive chemotherapy before RLT initiation. Several clinical trials are currently comparing RLT with chemotherapy in patients with progressive disease (NCT05247905, NCT04919226), and their results are eagerly awaited. Overall, many factors should be considered when choosing between RLT and chemotherapy in patients progressing on first-line SSA therapy, including the pace of tumor growth and the need for rapid tumor shrinkage. While the density of SSTR expression on SSTR-PET can accurately preselect the patients most likely to respond to RLT, methylguanine-DNA methyltransferase testing might be helpful in predicting response to temozolomide-based regimens.

Recommendation: In patients with progressive G1-G2 GEP-NETs, RLT should be preferred as a second-line treatment over chemotherapy after the failure of SSA. However, chemotherapy remains an option in panNET patients with a high tumor burden and/or tumor-related symptoms, or in cases of rapid progression, regardless of the primary tumor site (3b - A).

Q6. What is the evidence for choosing RLT versus high-dose somatostatin analogs after the failure of standard-dose somatostatin analogs in non-functioning NETs?

While it is well established that escalating the dose of SSA can enhance symptom control in functioning tumors when the standard SSA dosage proves ineffective, the actual impact of increased SSA dosages on tumor growth, particularly in the clinical context of non-functioning tumors, remains ambiguous.
Until recently, selecting a second-line therapy after failure of the standard SSA dose in well-differentiated G1-G2 GEP-NETs was notably challenging. Earlier retrospective studies suggested a potential improvement in PFS with increased SSA doses. However, this observation was not corroborated in prospective studies of patients with radiologically confirmed progressive disease on standard SSA doses: in such clinical scenarios, the reported median PFS values, as indicated by the CLARINET FORTE study and the control arms of the NETTER-1 trial, ranged between 5 and 8 months. A recent meta-analysis of 783 patients across 11 studies found that disease progression under high-dose SSA occurred at a rate of 62 events (95% CI, 53–70) per 100 patients treated per year. Conversely, in the same clinical scenario of progressive well-differentiated GEP-NETs, RLT demonstrated a significantly higher PFS rate, in both randomized controlled trials and real-world settings. Data from the phase III NETTER-1 trial, in which the median PFS was not reached in the initial analysis and was estimated at 25 months in the final analysis, align with findings from retrospective multicenter studies reporting a median PFS of approximately 2.5 years. A similar trend emerges when the ORR is considered as an endpoint. For high-dose SSA, although earlier small retrospective studies reported promising objective response rates of up to 31%, prospective trials indicated a significantly lower likelihood of objective tumor response, with rates between 3% and 4%. For RLT, reported ORR values vary considerably: the NETTER-1 study reported 18%, while the larger retrospective study by Brabander et al. indicated a range between 31% and 58%. On these grounds, RLT has shown greater efficacy than high-dose SSA in terms of both PFS and ORR across the clinical settings evaluated, including both RCTs and retrospective real-world studies.

Recommendation: In patients with progressive G1-G2 GEP-NETs, RLT is recommended as a second-line treatment over high-dose SSA after the failure of standard-dose SSA because of its better expected efficacy. High-dose SSA remains an option as a temporary bridge until RLT initiation or in patients unfit for other antitumor treatments because of comorbidities (1b - A).

Q7. How and when should the efficacy of RLT be monitored after initiating treatment?

3D imaging, particularly contrast-enhanced CT or MRI, is the main method for evaluating treatment response by tracking changes in lesion dimensions over time. Tumor size measurements are primarily conducted according to the Response Evaluation Criteria in Solid Tumours version 1.1 (RECIST 1.1). However, assessing treatment response solely on the basis of changes in tumor size presents several challenges, especially with GEP-NETs. These tumors may stabilize, or initially increase in size, even when responding to treatment. Additionally, the central tumor necrosis frequently reported during RLT complicates assessment with radiological criteria because of "false-positive" size increases. Furthermore, shrinkage following RLT can be a delayed occurrence.
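For reference, the RECIST 1.1 target-lesion categories discussed here come down to fixed thresholds on S, the sum of the target-lesion diameters. This is a generic summary of the published criteria, not wording taken from the consensus:

```latex
% RECIST 1.1 target-lesion response (S = sum of target-lesion diameters).
% CR: disappearance of all target lesions; SD: neither PR nor PD.
\mathrm{PR}:\ \frac{S - S_{\text{baseline}}}{S_{\text{baseline}}} \le -0.30,
\qquad
\mathrm{PD}:\ \frac{S - S_{\text{nadir}}}{S_{\text{nadir}}} \ge +0.20
\ \ (\text{with an absolute increase of at least } 5\ \mathrm{mm}).
```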
The factors above underscore the limitations of the RECIST 1.1 criteria and suggest that their use in evaluating slow-growing neoplasms such as GEP-NETs should be approached cautiously. To address these limitations, the Choi criteria were introduced, assessing both dimensional changes and the density variation of lesions on contrast-enhanced CT images. Numerous studies comparing the two criteria for NET evaluation consistently show equal or markedly superior results for Choi versus RECIST. It is important to note, however, that while the arterial phase of CT is most commonly used in assessing GEP-NETs, given their vascularity, the Choi criteria rely on images obtained during the portal venous phase. This discrepancy represents a major limitation to applying the Choi criteria in the neuroendocrine context. In light of these challenges, new methods have been proposed to assess therapy response, including the application of long-established tools used for evaluating growth rates in other neoplastic pathologies. The tumor growth rate (TGR) is one emerging tool based on the variation in the volume of target lesions, normalized for the time between two radiological assessments (CT or MRI); in one widely used formulation, TGR = 100 × [exp(3 ln(D2/D1)/t) − 1], the percentage change in tumor volume per month, where D1 and D2 are the sums of target-lesion diameters on two scans separated by t months. Recent studies have also highlighted its application in the neuroendocrine field, showing that baseline TGR captures the heterogeneity of well-differentiated GEP-NETs and predicts increases in the Ki-67 index over time. Additionally, Weber et al. evaluated the utility of hybrid techniques such as SSTR-PET/MRI in a small-sample study. Their results suggest that pre-therapeutic SSTR-PET/MRI may not be a reliable predictor of treatment response to RLT in NET patients. Conversely, patients treated with SSA exhibit variations in apparent-diffusion-coefficient maps on MRI compared with those treated with RLT. Finally, features extracted from SSTR-PET/MRI performed before RLT were not good predictors of treatment response.

Recommendation: RECIST 1.1 criteria, evaluated by contrast-enhanced CT or MRI, should be used to monitor the efficacy of RLT during follow-up. Attention should also be paid to changes in tumor lesion morphology beyond modifications in their size (3b - A).

Q8. How to manage frail patients who have to undergo RLT?

Frailty is a syndrome with a complex multifactorial pathophysiology affecting up to 17% of the geriatric population. This clinical status implies major vulnerability across multiple health domains, including weakness, decreased functional performance, unintentional weight loss, cognitive impairment, an increased risk of comorbidities, and organ dysfunction, leading to adverse health outcomes. As the prevalence of GEP-NETs and the share of the elderly population increase globally, it is reasonable to hypothesize that a progressively higher proportion of patients with GEP-NETs will be frail. Data from the Surveillance, Epidemiology, and End Results (SEER) analysis of 29,664 GEP-NET cases showed that the median age at diagnosis was 63 years, with the peak incidence observed at age 80. Another database analysis of 22,744 cases revealed the highest incidence rate of GEP-NETs in patients over 70 years old, with 16–17 cases per 100,000. The frail oncological population tends to receive delayed or incomplete diagnostic evaluations and often suboptimal therapy, given these patients' comorbidities and higher risk of toxicity or complications, leading to an unfavorable therapeutic risk/benefit ratio.
Regarding RLT, frail patients more commonly present with altered renal function or hematological disorders and thus tend to be less frequently eligible for RLT. Currently, there are no standardized recommendations in the literature on the use of RLT in frail patients. Theiler et al. conducted a retrospective matched cohort study to assess the efficacy and safety of RLT with 90Y-DOTATOC or 177Lu-DOTATATE in patients over 79 years of age with well-differentiated G1 or G2, SSTR-positive NETs compared with their younger counterparts. Exclusion criteria included an ECOG performance status ≥ 3, hematological impairment (hemoglobin < 80 g/L, platelet count < 75 × 10⁹/L), reduced eGFR (< 45 mL/min), or AST/ALT levels more than 3 times the upper limit of normal. Overall, despite a higher baseline rate of comorbidities, renal and hematological impairment, and a lower ECOG performance status in the elderly cohort, RLT proved effective, with a similar toxicity profile in both groups. Nevertheless, long-term adverse events, particularly renal dysfunction when 90Y-DOTATOC rather than 177Lu-DOTATATE is administered, cannot be completely ruled out. No statistically significant difference in OS was observed: the median OS was 3.4 years in the elderly group and 6.0 years in the younger group (p = 0.094). These results suggest that RLT may be a valid and relatively safe therapeutic option in a carefully selected cohort of frail patients. However, more robust, large-cohort studies are warranted to explore the risk/benefit ratio of RLT in this subgroup, including in the long term. Such initiatives would have a remarkable impact, considering that alternative medical options such as targeted drugs (everolimus or sunitinib) or systemic chemotherapy are generally associated with higher toxicity and deterioration of QoL. An interdisciplinary and multidimensional approach is fundamental to guide therapeutic decisions in such a vulnerable population, especially when standardized guidelines are lacking. To provide the best care for frail individuals, adequately eligible patients must be identified scrupulously. Therefore, in a multidisciplinary context, validated assessment tools should be implemented to evaluate prudently such important domains as functional, cognitive, and nutritional status, potential limitations in activities of daily living, social setting, and comorbidities.

Recommendation: RLT should also be considered in frail patients as a valid therapeutic option despite the lack of specific supporting data. It is reasonable, especially in the elderly population with comorbidities, to pay particular attention to renal function and potential marrow toxicity before initiating therapy (5 - B).

Q9. Is there room for RLT in G3 GEP-NETs?

Retrospective evidence suggests that RLT can be a relevant therapeutic option in patients with SSTR-positive G3 GEP-NETs, leading to disease control rates between 30% and 80% and median PFS between 9 and 23 months. In the recent NETTER-2 trial, 35% of the 226 enrolled patients had G3 tumors. Overall, treatment with RLT was associated with a significant improvement in PFS (median PFS: 8.5 months in the control arm versus 22.8 months in the investigational arm; stratified HR: 0.28, p < 0.0001) and ORR (9.3% in the control arm versus 43% in the investigational arm; stratified OR: 7.81, p < 0.0001).
Notably, the PFS and ORR improvements were consistent across all pre-specified subgroups, including the G3 subgroup. Based on these results, first-line treatment with RLT is likely to be approved soon by regulatory authorities, becoming the first standard treatment option supported by high-level evidence for patients with advanced, G2-G3, SSTR-positive GEP-NETs. Another prospective phase III trial, the COMPOSE trial, is currently underway to compare first- or second-line RLT versus the best standard of care (chemotherapy or everolimus, according to the investigator's choice) in patients with either G2 or G3 unresectable SSTR-positive GEP-NETs. The trial results are eagerly awaited, as they will provide much-needed information on treatment sequencing in patients with G3 GEP-NETs. No high-level evidence of antitumor activity currently exists for treatment modalities alternative to RLT in patients with metastatic G3 GEP-NETs. According to retrospective data, and in light of the recent results of the NETTER-2 trial, SSA may exert some antiproliferative activity in patients with G3 GEP-NETs, although with significantly inferior outcomes compared with RLT. On the other hand, small series have documented the activity of either sunitinib or everolimus (alone or in combination with temozolomide) in G3 GEP-NETs. Alkylating-based (e.g., CAPTEM or STZ/5-FU) and fluoropyrimidine-based (e.g., FOLFOX) chemotherapy protocols appear effective in patients with G3 GEP-NETs. According to retrospective evidence, the CAPTEM regimen is associated with a median PFS ranging between 9 and 15 months in patients with advanced G3 tumors of the digestive tract. Responses to temozolomide-based regimens appear more frequent in the first-line setting and in pancreatic primaries. The efficacy of etoposide-platinum chemotherapy appears limited in advanced G3 NETs, with a response rate in this population inferior to that observed in patients with poorly differentiated NECs. Overall, RLT may currently be considered a preferred option for the first-line treatment of patients with advanced SSTR-positive G3 GEP-NETs. Chemotherapy, particularly alkylating-based regimens, might be reserved for SSTR-negative G3 NETs or for patients progressing on RLT.

Recommendation: As soon as RLT is approved by regulatory authorities, it should be considered a valid option for patients with G2-G3 GEP-NETs expressing SSTR (1b - A).

Q10. Is there a rationale for repeating RLT treatment?

The rationale for repeating RLT in patients with GEP-NETs involves several factors, and the decision is typically individualized on the basis of clinical assessments, imaging, and biochemical evaluations. If there is evidence of disease progression or recurrence following the initial course of RLT, repeat treatment may be considered to target new or recurrent lesions. Initially, an SSTR-PET evaluation should be conducted to confirm the presence of somatostatin receptors on the NET lesions. According to the Delphi consensus, a partial response or stable disease must have been achieved for at least one year after the first RLT course. To determine accurately which patients could benefit from retreatment, implementing dosimetry in clinical practice is crucial: dosimetry correlates tumor-absorbed doses with treatment effectiveness, especially in larger tumors. Recent studies have demonstrated the safety and efficacy of an RLT rechallenge with dosimetry calculations based on healthy organs such as the kidneys and bone marrow.
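To make the organ-based dose-limit reasoning concrete before the text turns to outcomes, a schematic sketch follows. The 23 Gy kidney ceiling is the figure cited in the consensus below; the function, its name, and the example per-cycle dose are illustrative assumptions of ours, not part of the consensus.

```python
# Illustrative sketch of kidney dose-limit arithmetic for RLT retreatment.
# The 23 Gy ceiling is the commonly cited kidney tolerance; per-cycle
# doses in practice come from patient-specific dosimetry, not a constant.

KIDNEY_LIMIT_GY = 23.0

def remaining_cycles(cumulative_kidney_dose_gy: float,
                     expected_dose_per_cycle_gy: float) -> int:
    """Estimate how many further cycles fit under the kidney dose ceiling."""
    if expected_dose_per_cycle_gy <= 0:
        raise ValueError("per-cycle dose must be positive")
    headroom = KIDNEY_LIMIT_GY - cumulative_kidney_dose_gy
    return max(0, int(headroom // expected_dose_per_cycle_gy))

# Example: 14.8 Gy absorbed over four initial cycles (~3.7 Gy per cycle)
print(remaining_cycles(14.8, 3.7))  # -> 2 further cycles within the limit
```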
Such findings suggest that incorporating personalized dosimetry, aimed at identifying dose-limiting organs and determining the maximum tolerated accumulated activity, can enhance standard clinical practice by ensuring that therapeutic doses stay within safe limits for healthy organs. Notably, patients who reached the maximum tolerable absorbed dose of 23 Gy in the kidneys experienced nearly double the median PFS and OS. This highlights the significant potential benefit of a personalized approach over fixed dosing in terms of oncological outcomes. The decision to repeat RLT is complex and requires careful consideration of multiple factors. Regular follow-up assessments, imaging studies, and ongoing communication between the patient and the dedicated tumor board are crucial for determining the most appropriate course of action in managing NETs.

Recommendation: Although not yet approved by regulatory authorities, retreatment with RLT should be considered a valid therapeutic option, at the time of disease progression, for patients who had a favorable response to initial RLT. Dosimetry data, including those from the initial RLT course, should be used to tailor the personalized dose for the retreatment approach (3b - B).
Therefore, all patients with SSTR-positive advanced GEP-NETs progressive on first-line treatment should be considered for RLT. In patients with bulky, symptomatic disease (particularly in the case of pancreatic primaries) who need rapid tumor shrinkage, chemotherapy might be preferred over RLT. In the future, potential candidates for RLT will also include patients with newly diagnosed G2/G3 GEP-NETs and Ki-67 ranging between 10% and 55%. The progressive expansion of the patient population potentially amenable to treatment with 177Lu-DOTATATE, in line with the advent of 177Lu-PSMA-617 for the treatment of prostate cancer , might pose several challenges from a production and drug administration standpoint. Timely preparation is needed to avoid bottlenecks and allow the administration of RLT to all potential candidates without delays. Recommendation The candidate for RLT is a patient with advanced (unresectable or metastatic) SSTR-positive GEP-NET who has progressed on prior therapy with SSA. For these patients, early incorporation of 177Lu-DOTATATE RLT into the treatment algorithm is recommended (1b - A). Q2. How should progressive disease be defined before planning RLT? Assessing disease progression in GEP-NETs before planning RLT involves a thorough evaluation using various clinical, imaging, and laboratory methods. Here are the key steps and considerations in assessing disease progression. Imaging Studies: Utilize radiological imaging such as computed tomography (CT) and magnetic resonance imaging (MRI) scans to assess evidence of primary tumors and metastasis and estimate tumor burden . These investigations help quantify neoplastic infiltration, pleural or ascitic fluid volume, and the presence of carcinoid heart disease (evaluated by echocardiography). CT and MRI also identify previously unrecognized lesions or conditions needing urgent treatment, such as pathological spinal fractures, and are essential for ruling out indications for locoregional therapies like embolization or chemoembolization in patients with liver-only disease . Functional Imaging: Functional imaging, particularly 68-Gallium-SSTR PET scans (SSTR-PET), is specific for NETs . This imaging modality helps identify the presence of SSTRs on tumor cells, guiding the selection of patients suitable for RLT. For lesions with high proliferative indexes, [18 F]FDG PET/CT may complement the assessment by visualizing heightened metabolic activity, thus refining the evaluation of lesions targeted with alternative therapies . Recent advancements include the introduction of volumetric parameters like SSR-derived tumor volume and total lesion SSR as tools to aid in predicting PFS before RLT . Biomarkers: While specific tumor markers are assessed in functioning tumors associated with clinical syndromes, the use of biochemical markers like chromogranin A, alkaline phosphatase, or alterations in transaminase ratios, has been proposed to predict therapy effectiveness, although without definitive evidence of their predictive significance . Elevated chromogranin A levels alone should not be considered definitive evidence of disease progression due to the marker’s low specificity. Histological Evaluation: For long-term survivors with multiple secondary disease localizations and historical biopsies, it’s crucial to consider a further histological evaluation before planning RLT due to the potential change in tumor grade over time . 
This is especially pertinent if the historical biopsy was from the primary tumor and there has been a significant increase in the number and sites of metastatic lesions. Performing an [18F]FDG PET/CT scan may help guide the selection of the most aggressive metastasis for biopsy.
Clinical Symptoms: Assess the patient's symptoms, including changes in flushing, diarrhea, abdominal pain, or other related symptoms. Worsening or new symptoms may indicate disease progression, necessitating a CT, MRI, or PET scan to provide a comprehensive overview of the patient's clinical condition.
Multidisciplinary Team Consultation: Engage a multidisciplinary team experienced in managing GEP-NETs, including oncologists, endocrinologists, gastroenterologists, radiologists, nuclear medicine specialists, pathologists, and surgeons, in the assessment process. Discuss the patient's case to ensure a comprehensive understanding of the disease status and alignment with the patient's wishes and expectations. Multidisciplinary management significantly enhances the level of care in patients with GEP-NETs .
It is essential to approach disease progression assessment in GEP-NETs using these methods. Treatment decisions are often based on a comprehensive evaluation of all available information, with plans typically personalized to each patient's specific situation, considering factors like tumor grade, location, and overall health status.
Recommendation
An accurate multidisciplinary assessment of patients who are candidates for RLT is mandatory before initiating treatment. This assessment should include a complete radiological evaluation using CT and/or MRI, as well as SSTR-PET. In selected patients with a significant change in disease behavior, such as a noticeable increase in the number of tumor lesions or an evident increase in tumor burden, performing [18F]FDG PET/CT and/or repeating the histological evaluation may be proposed (3a - A).
Q3. If and how does FDG PET influence the decision to perform RLT?
While [18F]FDG PET/CT is not typically the primary imaging modality for GEP-NETs, it can be informative in certain cases and may influence decisions regarding RLT administration. EANM and ENETS guidelines recommend including [18F]FDG PET/CT in the diagnostic pathway for higher G2 (Ki-67: 10–20%), G3 NET, and NEC. The 2020 ESMO guidelines offer broader recommendations, suggesting the evaluation of both [18F]FDG PET/CT and SSTR-PET for all G2-G3 NETs . However, [18F]FDG PET/CT can also be positive in low-grade NETs of the G1 type, in which it maintains an unfavorable prognostic significance, confirming that the role of this technique in low-proliferation forms still needs full clarification . Some previous studies have investigated the use of both tracers, but they rely on retrospective data from populations that are not homogeneous regarding the primary lesion . SSTR-PET and [18F]FDG PET/CT together may be indicated in certain cases, including at initial diagnosis for tumors with intermediate proliferative activity and during follow-up when assessing treatment changes or discrepancies between radiological and clinical evaluations . The following considerations illustrate how [18F]FDG PET/CT might influence the decision to perform RLT.
Tumor Metabolic Activity: [18F]FDG PET/CT provides information about the metabolic activity of tumors. NETs are generally slow-growing and may not exhibit high glucose metabolism, making [18F]FDG PET/CT less sensitive for these tumors.
However, in poorly differentiated or more aggressive lesions with higher metabolic activity, [18F]FDG PET/CT may be used to assess the presence, number, and location of aggressive lesions, guiding treatment decisions towards alternatives to RLT, such as chemotherapy .
Tumor Intra- and Inter-lesion Heterogeneity: GEP-NETs may exhibit heterogeneity in receptor expression and metabolic activity. Combining information from both radiotracers provides a more comprehensive view of tumor characteristics. For instance, elevated [18F]FDG PET/CT activity might indicate swift progression in pancreatic NETs, even when diagnosed early or confirmed as well-differentiated. The presence of [18F]FDG PET/CT uptake could indicate undifferentiated disease foci, significantly impacting therapy response and prognosis . Lesions showing matched SSTR-PET and [18F]FDG PET/CT uptake may suggest a good probability of response to RLT, even in combination with chemotherapy .
Disease Staging, Monitoring, and Therapeutic Decision-Making: The decision to perform RLT is based on the presence of SSTRs on tumor cells. If GEP-NETs show SSTR expression, RLT may be considered. However, in cases of uncertain diagnostic presentations (such as non-conclusive findings on CT, MRI, or SSTR-PET) or rapid clinical progression, it is advisable to also perform [18F]FDG PET/CT for a comprehensive overview of the multi-metastatic disease.
Ultimately, the decision to perform RLT is multifaceted and should be made in consultation with a multidisciplinary team of specialists, considering the specific characteristics of the patient's tumors and their responses to various imaging modalities and previous therapies. The goal is to tailor the treatment plan to the individual patient's needs and the characteristics of their neuroendocrine lesions.
Recommendation
[18F]FDG PET/CT is recommended before RLT in cases with heterogeneous uptake at SSTR-PET and in patients with suspected rapidly progressive disease (3b - A).
Q4. What is the evidence for choosing RLT versus targeted agents after the failure of somatostatin analogs?
The phase III trials conducted in patients with intestinal NETs reported that median PFS was not reached for RLT with 177Lu-DOTATATE, whereas it was 11 months and 16.4 months for everolimus in non-functioning and functioning tumors, respectively . Although these studies were designed on populations that are not directly comparable, the higher antiproliferative efficacy of RLT compared with everolimus is now well established. This constitutes the first and most significant evidence in favor of choosing RLT after the failure of SSA treatment. The ORR was significantly higher with RLT than with everolimus . In patients with advanced panNET initially considered unresectable or borderline resectable, neoadjuvant treatment with 177Lu-DOTATATE enabled successful surgery in 31% of cases . Therefore, early use of RLT can alter the natural history of these tumors. Patients with GEP-NET who are candidates to receive SSA as first-line therapy typically present with low-proliferating tumors and a long life expectancy. In this setting, second-line therapy needs to be effective, but safety is of primary importance to avoid serious adverse events and related treatment interruptions or withdrawals. The ultimate goal is to achieve long-term tumor stabilization and a good QoL. For this purpose, RLT offers a better risk/benefit ratio than targeted therapies.
By comparing different therapeutic sequences, RLT was found to be safer than either everolimus or chemotherapy as a second-line therapy . From the patient's perspective, a French national survey indicated that RLT had the best median perceived tolerance compared to all other treatments, including everolimus, sunitinib, and chemotherapy . On the other hand, toxicity, rather than tumor progression, was the most frequent reason for discontinuation of everolimus and sunitinib . The long-term safety results of the NETTER-1 trial confirmed that 177Lu-DOTATATE is safe, and no new serious adverse events were reported during long-term follow-up . Beyond the low toxicity rate, RLT has been reported to significantly improve health-related quality of life in large randomized trials performed in gastroenteropancreatic NETs, improving both global health status and specific symptoms . The phase II non-comparative OCLURANDOM study recently randomized patients with advanced, progressive, SSTR-positive panNET to receive either 177Lu-DOTATATE or sunitinib. The 12-month PFS rate was 80.5% in the RLT arm versus 42% in the sunitinib arm , thus confirming that RLT outperforms targeted agents in patients progressing on first-line therapy with SSA. Two prospective, randomized, phase III trials (COMPETE and COMPOSE) are currently underway to compare the efficacy of RLT versus everolimus or versus the best standard of care (chemotherapy or everolimus, according to the investigator's choice) in patients with unresectable progressive GEP-NETs (ClinicalTrials.gov NCT03049189 and NCT04919226).
Recommendation
In patients with progressive G1-G2 GEP-NETs, RLT should be preferred as a second-line treatment over targeted agents (everolimus or sunitinib) after the failure of SSA due to its better expected efficacy and safety profile (2b - B).
Q5. What is the evidence for choosing RLT versus chemotherapy after the failure of somatostatin analogs?
Both retrospective and prospective evidence indicates that chemotherapy is effective in treating GEP-NETs . Specifically, alkylating agents such as streptozocin, dacarbazine, and temozolomide (alone or in combination with capecitabine) have demonstrated antitumor activity in panNETs . The prospective ECOG-ACRIN E2211 phase II trial recently compared temozolomide alone to temozolomide plus capecitabine in 144 patients with advanced progressive G1-G2 panNETs. The study showed a significant improvement in PFS in the combination arm (median PFS 22.7 vs. 14.4 months, respectively) and a trend towards improved ORR (40% vs. 34%) and median OS (58.7 vs. 53.8 months, respectively), although 45% of patients experienced G3/G4 toxicity . While most well-differentiated gastrointestinal NETs tend to be resistant to alkylating agents, fluoropyrimidine-based combinations (e.g., FOLFOX) show antitumor activity in this patient population, potentially causing rapid tumor shrinkage . A large, multicenter, retrospective study of 508 patients with advanced GEP-NETs recently showed that second-line therapy with RLT was associated with improved PFS compared to targeted therapies or chemotherapy (median 2.2 years [95% CI, 1.8–2.8 years] vs. 0.6 years [95% CI, 0.4–1.0 years], respectively, in the matched population; P < 0.001). This effect was consistent across different primary sites and hormonal statuses, though the advantage in PFS was not observed in tumors with a Ki-67 greater than 10% .
According to retrospective evidence, RLT is associated with improved survival outcomes in patients who did not receive chemotherapy before RLT initiation . Several clinical trials are currently comparing RLT with chemotherapy in patients with progressive disease (NCT05247905, NCT04919226), and results are eagerly awaited. Overall, many factors should be considered when choosing between RLT and chemotherapy in patients who are progressive on first-line SSA therapy. These include the pace of tumor growth and the need for rapid tumor shrinkage. While the density of SSTR expression on SSTR-PET can accurately preselect the patients most likely to respond to RLT, methylguanine-DNA methyltransferase testing might be helpful in predicting response to temozolomide-based regimens.
Recommendation
In patients with progressive G1-G2 GEP-NETs, RLT should be preferred as a second-line treatment over chemotherapy after the failure of SSA. However, chemotherapy remains an option to consider in the treatment of panNET patients who have a high tumor burden and/or tumor-related symptoms, or in cases of rapid progression, regardless of the primary tumor site (3b - A).
Q6. What is the evidence for choosing RLT versus high-dose somatostatin analogs after the failure of standard-dose somatostatin analogs in NF NETs?
While it is well established that escalating the dose of SSA can enhance symptom control in functioning tumors when the standard SSA dosage proves ineffective, the actual impact of increased SSA dosages on tumor growth, particularly in the clinical context of non-functioning tumors, remains ambiguous. Until recently, selecting a second-line therapy after failure of the standard SSA dose in well-differentiated G1-G2 GEP-NETs was notably challenging. Earlier retrospective studies suggested a potential improvement in PFS with increased SSA doses . However, this observation was not corroborated in prospective studies involving patients with radiologically confirmed progressive disease under standard SSA doses. In such clinical scenarios, the reported median PFS values, as indicated by the CLARINET FORTE study and the control arms of the NETTER-1 trial , ranged between 5 and 8 months. A recent meta-analysis of 11 studies including 783 patients found a rate of disease progression under high-dose SSA of 62 events per 100 patient-years (95% CI: 53–70) . Conversely, in the same clinical scenario of progressive well-differentiated GEP-NETs, RLT demonstrated a significantly higher PFS rate, as observed in both randomized controlled trials and real-world settings. Data from the phase III NETTER-1 trial, in which the median PFS was not reached in the initial analysis and was estimated at 25 months in the final analysis , align with findings from retrospective multicenter studies, which reported a median PFS of approximately 2.5 years . A similar trend was observed when considering the ORR as an endpoint. In the context of high-dose SSA, although earlier small-scale retrospective studies reported promising objective response rates of up to 31% , prospective trials indicated a significantly lower likelihood of achieving an objective tumor response, with rates ranging between 3 and 4% . On the other hand, when analyzing the ORR for RLT, the values vary significantly: the NETTER-1 study reported a rate of 18% , while the larger retrospective study by Brabander et al. indicated a range between 31 and 58% .
Based on these considerations, RLT has demonstrated greater efficacy than high-dose SSA in the various clinical settings evaluated, including both RCTs and retrospective real-world studies. This superiority is evident in terms of both PFS and ORR.
Recommendation
In patients with progressive G1-G2 GEP-NETs, RLT is recommended as a second-line treatment over high-dose SSA after the failure of standard-dose SSA due to its better expected efficacy. High-dose SSA remains an option as a temporary bridge until RLT initiation or in patients unfit for other antitumor treatments due to comorbidities (1b - A).
Q7. How and when should the efficacy of RLT be monitored after initiating treatment?
3D imaging, particularly contrast-enhanced CT or MRI, is the main method for evaluating treatment response by observing changes in lesion dimensions over time . Tumor size measurements are primarily conducted according to the Response Evaluation Criteria in Solid Tumours version 1.1 (RECIST 1.1) . However, assessing treatment response based solely on changes in tumor size presents several challenges, especially with GEP-NETs. These tumors may stabilize or initially increase in size even when responding to treatment. Additionally, the central tumor necrosis frequently reported during RLT complicates assessment with radiological criteria because of 'false-positive' size increases. Furthermore, shrinkage following RLT can be delayed . These factors underscore the limitations of the RECIST 1.1 criteria, suggesting that their use in evaluating slow-growing neoplasms such as GEP-NETs should be approached cautiously. To address these limitations, the Choi criteria have been introduced, assessing both the dimensional changes and the density variation of lesions on contrast-enhanced CT images. Numerous studies comparing the two criteria for NET evaluation consistently show equal or markedly superior results for Choi versus RECIST . However, it is important to note that while the arterial phase of CT is most commonly used in assessing GEP-NETs, given their vascularity, the Choi criteria rely on images obtained during the portal venous phase . This discrepancy represents a major limitation in applying the Choi criteria in the neuroendocrine context. In light of these challenges, new methods have been proposed to assess therapy response, including the application of long-established tools used for evaluating growth rates in other neoplastic pathologies . The tumor growth rate (TGR) is one emerging tool, based on the variation in the volume of target lesions normalized for the time between two radiological assessments (CT or MRI). Recent studies have also highlighted its application in the neuroendocrine field , showing that baseline TGR captures the heterogeneity of well-differentiated GEP-NETs and predicts increases in the Ki-67 index over time . Additionally, Weber et al. evaluated the utility of hybrid techniques such as SSTR-PET/MRI in a small-sample study. The results suggest that pre-therapeutic SSTR-PET/MRI may not be a reliable predictor of treatment response to RLT in NET patients. Conversely, patients treated with SSA exhibit variations in the apparent diffusion coefficient map on MRI compared with those treated with RLT. Finally, features extracted from SSTR-PET/MRI performed before RLT were not good predictors of treatment response .
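To make the TGR calculation introduced above concrete, the following is a minimal sketch assuming the formulation commonly used in the solid-tumor literature, in which tumor volume is modeled as growing exponentially and as proportional to the cube of the sum of the longest diameters of the target lesions. The function name and the example figures are illustrative and are not drawn from the studies cited above.

```python
import math

def tumor_growth_rate(d1_mm: float, d2_mm: float, interval_months: float) -> float:
    """Percent increase in tumor volume per month.

    Assumes exponential volume growth and volume proportional to the cube
    of the summed longest diameters (d1_mm, d2_mm = sums of target-lesion
    diameters at two consecutive radiological assessments).
    """
    if min(d1_mm, d2_mm, interval_months) <= 0:
        raise ValueError("diameters and interval must be positive")
    tg = 3.0 * math.log(d2_mm / d1_mm) / interval_months  # monthly growth exponent
    return 100.0 * (math.exp(tg) - 1.0)

# Example: target-lesion sum grows from 52 mm to 58 mm over 4 months
print(f"TGR = {tumor_growth_rate(52, 58, 4):.1f} %/month")  # ~8.5 %/month
```

A positive TGR denotes growth and a negative TGR shrinkage; what makes the metric informative is comparing its value before and during treatment.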
Recommendation
RECIST 1.1 criteria, evaluated by contrast-enhanced CT or MRI, should be used to monitor the efficacy of RLT during follow-up. Attention should also be paid to changes in tumor lesion morphology beyond modifications in size (3b - A).
Q8. How should frail patients who have to undergo RLT be managed?
Frailty is a syndrome with a complex, multifactorial physiopathology affecting up to 17% of the geriatric population . This clinical status implies major vulnerability across multiple health domains, including weakness, decreased functional performance, unintentional weight loss, cognitive impairment, increased risk of comorbidities, and organ dysfunction, leading to adverse health outcomes . As the prevalence of GEP-NETs and the proportion of elderly people increase globally, it is reasonable to expect that a progressively higher proportion of patients with GEP-NETs will be frail. Data from the Surveillance, Epidemiology, and End Results (SEER) analysis of 29,664 GEP-NET cases showed that the median age at diagnosis was 63 years, with the peak incidence observed at age 80. Additionally, another database analysis of 22,744 cases revealed the highest incidence rate of GEP-NETs in patients over 70 years old, with 16–17 cases per 100,000 . The frail oncological population tends to receive delayed or incomplete diagnostic evaluations and often suboptimal therapy, given these patients' comorbidities and higher risk of toxicity or complications, leading to an unfavorable therapeutic risk/benefit ratio . Regarding RLT, frail patients more commonly present with altered renal function or hematological disorders and therefore tend to be less frequently eligible for RLT. Currently, there are no standardized recommendations in the literature regarding the use of RLT in frail patients. Theiler et al. conducted a retrospective matched cohort study to assess the efficacy and safety of RLT with 90Y-DOTATOC or 177Lu-DOTATATE in elderly patients over 79 years of age with well-differentiated G1 or G2, SSTR-positive NETs compared to their younger counterparts. The exclusion criteria included ECOG performance status ≥ 3, hematological impairment (hemoglobin < 80 g/L, platelet count < 75 × 10^9/L), reduced eGFR (< 45 mL/min), or increased levels of AST/ALT (> 3 times the upper limit of normal). Overall, despite a higher baseline rate of comorbidities, renal and hematological impairment, and a lower ECOG performance status in the elderly cohort, RLT was found to be an effective strategy with a similar toxicity profile in both groups. Nevertheless, long-term adverse events, particularly renal dysfunction when 90Y-DOTATOC rather than 177Lu-DOTATATE is administered, cannot be completely ruled out. No statistically significant difference was observed in OS: the median OS was 3.4 years in the elderly group and 6.0 years in the younger group (p = 0.094) . These results suggest that RLT may be a valid and relatively safe therapeutic option in a carefully selected cohort of frail patients. However, more robust, large-cohort studies are warranted to explore the risk/benefit ratio of RLT in this subgroup of patients, including in the long term. Such initiatives would have a remarkable impact, considering that alternative medical options such as targeted drugs (everolimus or sunitinib) or systemic chemotherapy are generally associated with higher toxicity and deterioration of QoL.
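Purely as an illustration, the exclusion thresholds reported for the Theiler et al. cohort above can be encoded as a simple screening check. This is a sketch of the study's reported cut-offs, not a validated clinical decision rule; the function name and the example values are hypothetical.

```python
def meets_theiler_lab_criteria(ecog: int, hemoglobin_g_l: float,
                               platelets_1e9_l: float, egfr_ml_min: float,
                               ast_alt_x_uln: float) -> bool:
    """Return True if none of the study's exclusion thresholds is met:
    ECOG >= 3, Hb < 80 g/L, platelets < 75 x 10^9/L, eGFR < 45 mL/min,
    or AST/ALT > 3x the upper limit of normal."""
    excluded = (ecog >= 3 or hemoglobin_g_l < 80 or platelets_1e9_l < 75
                or egfr_ml_min < 45 or ast_alt_x_uln > 3)
    return not excluded

# Hypothetical patient who clears all of the reported thresholds
print(meets_theiler_lab_criteria(ecog=1, hemoglobin_g_l=95,
                                 platelets_1e9_l=160, egfr_ml_min=52,
                                 ast_alt_x_uln=1.2))  # True
```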
An interdisciplinary and multidimensional approach is fundamental to guide therapeutic decisions in such a vulnerable population, especially when standardized guidelines are lacking. To provide the best care for frail individuals, eligible patients must be identified carefully. Therefore, in a multidisciplinary context, validated assessment tools should be implemented to prudently evaluate important domains such as functional, cognitive, and nutritional status, potential limitations in activities of daily living, social setting, and comorbidities.
Recommendation
RLT should also be considered a valid therapeutic option in frail patients, despite the lack of specific supporting data. It is reasonable, especially in the elderly population with comorbidities, to pay greater attention to renal function and potential marrow toxicity before initiating therapy (5 - B).
Q9. Is there room for RLT in G3 GEP-NETs?
Retrospective evidence suggests that RLT can be a relevant therapeutic option in patients with SSTR-positive G3 GEP-NETs, leading to disease control rates ranging between 30% and 80% and median PFS between 9 and 23 months . In the recent NETTER-2 trial, 35% of the 226 enrolled patients had G3 tumors. Overall, treatment with RLT was associated with a significant improvement in PFS (median PFS: 8.5 months in the control arm versus 22.8 months in the investigational arm; stratified HR: 0.28, p < 0.0001) and ORR (9.3% in the control arm versus 43% in the investigational arm; stratified OR: 7.81, p < 0.0001) . Notably, the PFS and ORR improvements were consistent across all pre-specified subgroups, including the G3 subgroup. Based on these results, it is likely that first-line treatment with RLT will soon be approved by regulatory authorities, becoming the first standard treatment option supported by high-level evidence for patients with advanced, G2-G3, SSTR-positive GEP-NETs. Another prospective phase III trial, the COMPOSE trial, is currently underway to compare first- or second-line RLT versus the best standard of care (chemotherapy or everolimus, according to the investigator's choice) in patients with either G2 or G3 unresectable SSTR-positive GEP-NETs . The trial results are eagerly awaited, as they will provide much-needed information on treatment sequencing in patients with G3 GEP-NETs as well. No high-level evidence of antitumor activity currently exists for treatment modalities other than RLT in patients with metastatic G3 GEP-NETs. According to retrospective data and in light of the recent results of the NETTER-2 trial , SSA may exert some antiproliferative activity in patients with G3 GEP-NETs, although with significantly inferior outcomes compared to RLT. On the other hand, small series have documented the activity of either sunitinib or everolimus (alone or in combination with temozolomide) in G3 GEP-NETs . Alkylating-based (e.g., CAPTEM or STZ/5-FU) and fluoropyrimidine-based (e.g., FOLFOX) chemotherapy protocols appear effective in patients with G3 GEP-NETs . According to retrospective evidence, the CAPTEM regimen is associated with a median PFS ranging between 9 and 15 months in patients with advanced G3 tumors of the digestive tract . Responses to temozolomide-based regimens appear more frequent in the first-line setting and in pancreatic primaries. The efficacy of etoposide-platinum chemotherapy appears limited in advanced G3 NETs, with the response rate in this population inferior to that observed in patients with poorly differentiated NECs .
Overall, RLT might currently be considered a preferred option in the first-line treatment of patients with advanced SSTR-positive G3 GEP-NETs. Chemotherapy, particularly alkylating-based regimens, might be reserved for SSTR-negative G3 NETs or for patients progressing on RLT.
Recommendation
Once RLT is approved by regulatory authorities for this indication, it should be considered a valid option for patients with G2-G3 GEP-NETs expressing SSTR (1b - A).
Q10. Is there a rationale for repeating RLT treatment?
The rationale for repeating RLT in patients with GEP-NETs involves several factors, and the decision is typically individualized, based on a combination of clinical assessments, imaging, and biochemical evaluations. If there is evidence of disease progression or recurrence following the initial course of RLT, repeat treatment may be considered to target new or recurrent lesions. Initially, an SSTR-PET evaluation should be conducted to confirm the presence of somatostatin receptors on the NET lesions. According to the Delphi consensus, a partial response or stable disease must have been achieved for at least one year after the first RLT treatment . To accurately determine which patients could benefit from retreatment, implementing dosimetry in clinical practice is crucial. Dosimetry correlates tumor-absorbed dose with treatment effectiveness, especially in larger tumors . Recent studies have demonstrated the safety and efficacy of an RLT rechallenge with dosimetry calculations based on healthy organs such as the kidneys and bone marrow . These findings suggest that incorporating personalized dosimetry, aimed at identifying dose-limiting organs and determining the maximum tolerated cumulative activity, can enhance standard clinical practice by ensuring that therapeutic doses stay within safe limits for healthy organs. Notably, patients who reached the maximum tolerable absorbed dose of 23 Gy in the kidneys experienced nearly double the median PFS and OS . This highlights the significant potential benefit of adopting a personalized approach over fixed dosing in terms of oncological outcomes. The decision to repeat RLT is complex and requires careful consideration of various factors. Regular follow-up assessments, imaging studies, and ongoing communication between the patient and the dedicated tumor board are crucial for determining the most appropriate course of action in managing NETs.
Recommendation
Although not yet approved by regulatory authorities, retreatment with RLT should be considered a valid therapeutic option for patients who had a favorable response to initial RLT, at the time of disease progression. Dosimetry data, including those from the initial course of RLT, should be used to tailor the personalized dose for the retreatment approach (3b - B).
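As a rough, purely illustrative rendering of the dosimetry-guided reasoning in Q10, the sketch below estimates how much additional activity could still be administered before the cumulative kidney absorbed dose reaches the 23 Gy threshold mentioned above. It assumes a simple linear relationship between injected activity and kidney absorbed dose; the dose coefficient, the function name, and the example figures are placeholders, and actual treatment planning relies on patient-specific, per-cycle dosimetry.

```python
def remaining_activity_gbq(delivered_gy: float,
                           kidney_dose_per_gbq: float,
                           kidney_limit_gy: float = 23.0) -> float:
    """Upper bound on further injectable activity (GBq) before the
    cumulative kidney absorbed dose reaches the limit, assuming dose
    scales linearly with activity. kidney_dose_per_gbq is the
    patient-specific coefficient (Gy/GBq) estimated from post-cycle
    dosimetry."""
    if kidney_dose_per_gbq <= 0:
        raise ValueError("dose coefficient must be positive")
    headroom_gy = max(kidney_limit_gy - delivered_gy, 0.0)
    return headroom_gy / kidney_dose_per_gbq

# Example: 14 Gy delivered to the kidneys so far, measured 0.45 Gy/GBq
print(f"{remaining_activity_gbq(14.0, 0.45):.1f} GBq")  # 20.0 GBq of headroom
```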
This position paper strongly advocates for the early integration of RLT into the treatment algorithm for advanced SSTR-positive GEP-NETs following the failure of SSA. Before initiating RLT, [18F]FDG PET/CT is recommended for patients with heterogeneous uptake on SSTR-PET or those suspected of rapid tumor progression.
RLT with 177Lu-DOTATATE stands out as the preferred second-line treatment over targeted therapies, chemotherapy, or high-dose SSA for progressive G1-G2 GEP-NETs, owing to its superior efficacy and safety profile. This recommendation applies provided that the disease homogeneously expresses SSTRs and is neither rapidly progressing nor highly symptomatic. To assess the effectiveness of RLT, evaluation according to RECIST 1.1 criteria on contrast-enhanced CT or MRI is advised, emphasizing changes in tumor morphology. Looking forward, it is anticipated that, upon regulatory approval, RLT will be considered a valid treatment option for patients with well-differentiated, high-grade, SSTR-positive GEP-NETs. Additionally, retreatment with RLT will be suggested, upon disease progression, for those who have shown a favorable response to the initial treatment, ideally using tailored dosimetry. The key messages from this position paper are summarized in Table .
MNBDR: A Module Network Based Method for Drug Repositioning

The traditional process of drug development is particularly slow and costly, usually taking 12-15 years and billions of dollars . In addition, the rate at which new drug candidates gain Food and Drug Administration (FDA) approval has decreased, even though investment in pharmaceutical R&D has increased remarkably . At the same time, the "one gene, one drug, one disease" paradigm of rational drug design overlooks the inherent complexity of diseases . In this context, drug repositioning (or repurposing), which aims to identify novel disease indications for drugs with approved safety and pharmacology, is economical and efficient. Compared with the traditional process of drug development, repositioning a drug may reduce the development period to 6.5 years and the cost to an average of $300 million . Therefore, drug repositioning should be "the primary strategy in drug discovery for every broadly focused, research-based pharmaceutical company" . One of the seminal methods is the connectivity map (CMap) , whose underlying assumption is that a biological state can be described in terms of a genomic signature. Its authors measured genome-wide transcriptional expression data across multiple cell lines treated with small drug molecules and matched these profiles with disease perturbation gene expression profiles to find new associations between drugs and diseases. Although it is difficult to interpret the meaning of the predicted associations, the robustness of disease signatures and the effectiveness of the method have been experimentally validated . Inspired by the rationale behind the CMap method , numerous approaches for drug repositioning based on gene expression data and the connectivity map have been developed. Zhang et al. proposed a simple method to filter reference gene-expression profiles for the connection scoring scheme. In addition, further connection methods, such as the eXtreme Sum score (XSum) and XCos , were proposed to calculate the similarities between the gene expression patterns of diseases and drugs. Iorio et al. developed a drug repositioning method that constructed drug–drug similarity networks by comparing drug perturbation gene expression profiles. Saberian et al. presented a novel machine learning-based method, which explored the anti-similarity between drugs and diseases to uncover new uses for drugs. However, these previous methods ignored the fact that both the pathogenesis of diseases and the drug mode of action (MoA) have been revealed to be tightly connected with gene modules . Chung et al. developed the Functional Module Connectivity Map (FMCM), which uses functional gene modules as disease signatures to build a connectivity map; its performance was superior to that of traditional signature-based drug-repurposing methods. Jia et al. introduced a new framework incorporating gene expression data and pathway analysis, providing a new approach to explain the drug mode of action in a disease context. As we know, proteins, nucleic acids, and small molecules form a dense network of molecular interactions in a cell, and there may be cross-talks among different functional modules in the cell . Therefore, in drug repositioning, it may be helpful to consider the cross-talks among functional modules. However, to the best of our knowledge, no drug repositioning method has taken the cross-talks among modules into consideration.
To fill this gap, we present Module Network Based Drug Repositioning (MNBDR), a novel computational method for drug repositioning. We applied a module network to the field of drug repositioning for the first time and proposed two new indicators to evaluate the expression levels of modules and to score drug–disease pairs. First, dense clusters in the PPI network were detected as modules. After that, as described in our previous study , the cross-talks among modules were identified by testing whether the connections between the genes in two modules were significantly high. Based on both the gene expression data of disease samples and the module network, the PageRank algorithm was applied to rank the modules important in each disease. Lastly, the gene expression data of the important modules in drug-stimulation samples were pooled to calculate an overall connectivity score for each drug–disease pair. To validate our method, we applied MNBDR to 19 cancer datasets and compared it with several popular signature-based drug-repurposing methods, showing that MNBDR performed better than the previous methods. Finally, we analyzed the functions of the important modules in our module network to investigate the biological meaning of our method.
2.1. Data Set and Preprocessing

The drug stimulation data were downloaded from The Library of Integrated Network-Based Cellular Signatures (LINCS) program (level 5; accession number: GSE70138), which contains 118,051 gene expression profiles from multiple human cultured cell lines (treatment and control) treated with 1827 bioactive small chemical molecules at varying concentrations. Each expression profile consists of moderated z-score values for 12,328 genes. The LINCS team defined nine touchstone cell lines, and we used the five cell lines (PC3, A375, HA1E, MCF7 and HT29) with sample sizes of more than 10,000. The pre-processing procedure for the drug gene expression data is described in . Cheng et al. showed that the majority of compounds do not have sufficient therapeutic effects on cell lines. In our work, we applied the compound-filtering procedure described by Cheng et al. and used the expression signal strength (ESS) to filter the drug stimulation samples. The details are described in .

The microarray data for whole-genome mRNA expression of disease samples were downloaded from The Cancer Genome Atlas (TCGA) research network . To generate more stable disease features, only data sets with at least three normal and three disease samples were considered for further processing. In total, we obtained 3486 control samples and 60,460 disease samples from 19 cancer data sets. Then, for each cancer data set, we averaged the disease and control samples and calculated the corresponding fold changes for all genes.

The PPI data were derived from the STRING database . To reduce the false-positive interactions that probably originate from prediction methods, we followed the strategy of Zhou et al. and kept only the interactions with a confidence score of 770 or above. In total, there were 36,619 unique interactions among 9474 proteins in the PPI network.

2.2. Benchmark Standard

The gold standard of known drug indications was obtained from Quan et al. , who identified drug–indication relationships through the Drug–Gene Interaction database (DGIdb) , the Therapeutic Target Database (TTD) , and DrugBank . Only clinically supported or FDA-approved drug–disease relationships were used. In this study, we obtained a total of 2877 associations between 19 cancers and 477 drugs. All the drug–disease interactions used in this work are shown in .

2.3. Construction of the Module Network

To construct the module network, we adopted a strategy similar to that of our previous study . First, we used MCODE in Cytoscape to detect dense clusters in the PPI network, and only the clusters containing at least 5 nodes were retained as modules. After that, for each pair of modules, the number of edges (PPI interactions) between the two modules was calculated. Then, two random gene sets with the same numbers of genes as the two modules were randomly selected, and the edges between the two random gene sets were counted. This random process was repeated 1000 times, and the 1000 edge counts were used as a null distribution, from which the p-value of the cross-talk between the two modules was calculated. If the number of edges between two modules was significantly high (p-value < 0.01), there was a cross-talk between the two modules. Finally, all the modules and the cross-talks among them constituted the module network ( A).
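For illustration, a minimal Python sketch of this permutation test is given below. The function names, the edge representation, and the pseudocount added to the p-value are our own implementation choices rather than details specified above; whether the two random gene sets may overlap is likewise left unspecified in the text.

```python
import numpy as np

def crosstalk_pvalue(module_a, module_b, ppi_edges, all_genes,
                     n_perm=1000, seed=0):
    """Permutation test for cross-talk between two modules.

    module_a, module_b : collections of gene identifiers
    ppi_edges          : set of frozenset({u, v}) pairs from the PPI network
    all_genes          : array-like of every gene in the PPI network
    """
    rng = np.random.default_rng(seed)
    genes = np.asarray(all_genes)

    def n_between(set_a, set_b):
        # Count PPI edges with one endpoint in each gene set
        return sum(frozenset((u, v)) in ppi_edges
                   for u in set_a for v in set_b if u != v)

    observed = n_between(module_a, module_b)
    null = np.empty(n_perm, dtype=int)
    for k in range(n_perm):
        rand_a = rng.choice(genes, size=len(module_a), replace=False)
        rand_b = rng.choice(genes, size=len(module_b), replace=False)
        null[k] = n_between(rand_a, rand_b)

    # One-sided p-value: fraction of random gene-set pairs at least as connected
    return (np.sum(null >= observed) + 1) / (n_perm + 1)
```

A pair of modules would then be linked in the module network whenever this p-value falls below 0.01, as stated above.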
2.4. Feature Space Transformation

For each disease (or drug), we mapped the gene expression data from the gene feature space to the module feature space. Taking breast cancer as an example, the fold-change of each gene's expression level between the breast cancer samples and control samples was first calculated. Then, for all n dense clusters ($M_1, M_2, \ldots, M_i, \ldots, M_n$) in the PPI network, the importance ($Imp_i$) of module $M_i$ was calculated as follows:

$$Imp_i = \begin{cases} F_{max} - F_{min}, & \text{if } F_{max} > 0 \text{ and } F_{min} < 0 \\ \max(|F_{max}|, |F_{min}|), & \text{otherwise} \end{cases}$$

where $F_{max}$ and $F_{min}$ are the maximum and minimum fold-changes of all the genes in $M_i$. In the end, we obtained $\{Imp_1, Imp_2, \ldots, Imp_i, \ldots, Imp_n\}$, which characterizes the differential expression levels of all the modules in the disease.

2.5. Module Rank Based on PageRank

We assumed that modules with both important topological positions in the module network and significantly differential expression levels would be more essential in disease. We thus used a network propagation algorithm to simulate the cross-talks of functional modules, defined as follows:

$$P_k = \lambda W P_{k-1} + (1 - \lambda) P_0$$

where $W$ denotes a transition matrix, i.e., the column normalization of the adjacency matrix. In our work, the nodes of the adjacency matrix are modules, and the edges are the connections among modules in our module network. Here, $P_0$ represents our initial, or prior, information on the modules; in this work, we set $P_0$ to $\{Imp_1, Imp_2, \ldots, Imp_n\}$ of all the modules in the corresponding disease. If the propagation process is repeated too many times, the information eventually spreads out over the whole network and the local neighborhoods of the important nodes are missed . Therefore, a damping factor $\lambda$ ($0 < \lambda < 1$) was introduced to avoid this. In this study, $\lambda$ was set to 0.85, a typical value for PageRank .

2.6. Drug Prioritizing

Inspired by the normalized discounted cumulative gain (NDCG) , we proposed a new indicator $S$ to evaluate the drug–disease score between each drug and a specific disease. The indicator is defined as follows:

$$S = \sum_{i=1}^{n} \frac{V(i)}{|P(i) - i| + 1}$$

For the top n modules (1st, 2nd, ..., i-th, ..., n-th) in disease progression, $V(i)$ is the Imp of the i-th module in the drug response, and $P(i)$ is the position of the i-th module in the ranked module list of the drug response. That is, if the important modules in the disease are also ranked at the top of the module list of the drug response, a high score $S$ is obtained. Finally, for each disease, all the drugs were prioritized based on $S$.
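To make these three steps concrete, the following Python sketch computes Imp, runs the propagation, and evaluates S for one drug. It is a minimal illustration under our own naming conventions; the released package (linked in Section 2.8) should be consulted for the authoritative implementation.

```python
import numpy as np

def module_importance(fold_change, modules):
    """Imp_i of each module from gene-level fold-changes (Section 2.4)."""
    imp = np.empty(len(modules))
    for i, genes in enumerate(modules):
        fc = np.array([fold_change[g] for g in genes])
        fmax, fmin = fc.max(), fc.min()
        imp[i] = fmax - fmin if (fmax > 0 and fmin < 0) else max(abs(fmax), abs(fmin))
    return imp

def propagate(adj, p0, lam=0.85, n_iter=100, tol=1e-8):
    """PageRank-style propagation over the module network (Section 2.5).

    adj : module-by-module adjacency matrix; assumes every module has at
          least one cross-talk edge so that columns can be normalized.
    """
    w = adj / adj.sum(axis=0, keepdims=True)  # column-normalized transition matrix
    p = p0.copy()
    for _ in range(n_iter):
        p_next = lam * (w @ p) + (1 - lam) * p0
        if np.abs(p_next - p).max() < tol:
            break
        p = p_next
    return p

def score_drug(disease_top, drug_imp, drug_position, n_top=15):
    """Indicator S (Section 2.6) for one drug against one disease.

    disease_top   : module indices sorted by refined disease score, best first
    drug_imp      : dict module -> Imp in the drug-response profile
    drug_position : dict module -> 1-based rank in the drug's module list
    """
    return sum(drug_imp[m] / (abs(drug_position[m] - i) + 1)
               for i, m in enumerate(disease_top[:n_top], start=1))
```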
2.7. Evaluation Metrics

We used the area under the curve (AUC) of the receiver operating characteristic (ROC) to evaluate model performance. The ROC curve is drawn from the true-positive rates (TPRs) and false-positive rates (FPRs) at different cutoffs. TPR is the proportion of positive samples identified correctly among all positive samples, while FPR is the ratio of misidentified negative samples to all negative samples:

$$TPR = \frac{TP}{TP + FN}, \qquad FPR = \frac{FP}{TN + FP}$$

where TP and TN are the numbers of correctly identified positive and negative samples, and FN and FP are the numbers of positive and negative samples that are misidentified. We also used AUC0.1, which is widely used in the field of drug repositioning , to evaluate our algorithm. AUC0.1 is the area under the ROC curve restricted to the condition FPR ≤ 0.1. By restricting the FPR, it ensures that the indicator focuses on the early-retrieval performance of the model , which is essential because, in realistic drug-repositioning settings, only a small number of candidate drugs can be pursued. Thus, in this work, we applied AUC0.1 as the main index for model evaluation. To better compare model performance, we also used the average AUC (AvgAUC) over all diseases as an evaluation index. To determine the statistical significance of the results, we calculated non-parametric p-values by performing 10,000 runs with random permutations of the drug–disease relations.
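As an illustration, AUC0.1 can be computed directly from the ranked scores. The sketch below reports the unnormalized area over FPR ≤ 0.1 and ignores score ties, both of which are simplifying assumptions on our part.

```python
import numpy as np

def auc_at_fpr(scores, labels, fpr_cap=0.1):
    """Area under the ROC curve restricted to FPR <= fpr_cap (AUC0.1).

    scores : drug-disease scores, higher meaning a stronger candidate
    labels : 1 for benchmark (known) indications, 0 otherwise
    """
    scores = np.asarray(scores, float)
    y = np.asarray(labels, int)[np.argsort(-scores)]
    n_pos, n_neg = y.sum(), len(y) - y.sum()
    tpr = np.cumsum(y) / n_pos
    fpr = np.cumsum(1 - y) / n_neg
    keep = fpr <= fpr_cap
    # Prepend the (0, 0) point and integrate with the trapezoidal rule
    return np.trapz(np.r_[0.0, tpr[keep]], np.r_[0.0, fpr[keep]])
```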
2.8. Assessment

To assess the performance of MNBDR, we compared its predictions with those of several other methods. To investigate the impact of cross-talks between modules on prediction performance, we compared MNBDR with two other methods: a Gene based method and a Module based method. The Gene based method uses only each gene's fold-change to rank the genes and screens drugs with the ranked gene list, similar to the traditional CMap. The Module based method ranks the modules using gene expression levels alone (without taking the module network into account) and screens drugs based on the ranked module list. In addition, we compared the performance of MNBDR with six classic connectivity methods (GASE0Score , GASE1Score , GASE2Score , KSScore , ZhangScore , XSumScore ). Finally, the performance of MNBDR was compared with that of three recently published methods (LLE-DML , Cogena , and EMUDRA ). MNBDR and the Module based method are described above and have been implemented as a Python package, which is freely available at https://github.com/nbnbhwyy/MNBDR . Details about the other methods are available in .

3.1. Framework Overview

Based on the fact that cross-talks among functional modules can play important roles in drug response and disease progression, we proposed a computational method that uses a module network to identify essential modules in disease progression and to improve the CMap drug-screening strategy for drug repositioning. shows the pipeline of our method, which includes the following steps: (i) Since PPI networks exhibit a “scale-free” topology , communities exist in the PPI network. Because a dense cluster in the PPI network may work together as a functional unit, we mined the communities in the PPI network as functional gene sets (denoted as modules in this work). A permutation test was then applied to identify the cross-talks among these modules (Methods). As a result, we obtained 486 significant pairs among 116 modules, which are shown in . (ii) For each disease, the perturbation of the genes was calculated based on the gene expression data of disease and control samples and then mapped to the module space through the index Imp (Methods) to obtain the initial scores of the modules. Subsequently, we applied a network propagation algorithm to exploit the topology of the module network and refine the scores of the disease modules. In the propagation algorithm, λ is an important parameter, for which we adopted a typical value (0.85) ; we also varied λ and found the results to be robust . Finally, we selected the n modules with the highest scores to characterize the corresponding disease. In this study, n was set to 15, which is about 10% of all the nodes in the module network. To validate the robustness of our model, we also varied n from 3 to 50 and found the results to be stable, with the best performance achieved at n = 15 (details in ). (iii) For these modules, the perturbation scores after each drug's stimulation were calculated based on the gene expression data of samples stimulated by the corresponding drug and of control samples (Methods). Then, the new indicator S (Methods) was used to evaluate the effect of each drug on the specific disease. Finally, all the drugs for each disease could be ranked based on this indicator.

3.2. Comparing with CMap

CMap is the most famous method for screening drugs using gene expression data , and Cheng et al. made a systematic assessment of it. In this work, we used this method as a benchmark (Gene based method). In addition, to validate the hypothesis that cross-talk information among functional modules can facilitate drug repositioning, we compared our strategy (MNBDR), which integrates the module network and gene expression data to rank modules, with a simple expression-ranking strategy that prioritizes modules based on expression data only (Module based method). Cancer is one of the most serious threats to human health, and drug development for cancer is a major challenge . Here, we applied our method to 19 cancer data sets to compare the performance of MNBDR, the Gene based method, and the Module based method, adopting the same indices (AUC, AUC0.1 and p-value) as a previous work . The detailed results are shown in and . MNBDR had the best performance on both indices, AvgAUC and AvgAUC0.1 (FPR = 0.1, i.e., specificity higher than 0.9). Meanwhile, the Module based method performed better than the method using genes as features (the original CMap approach), consistent with a previous report ; this also indicates that the communities mined from the PPI network are indeed functional modules. Furthermore, because the main difference between MNBDR and the Module based method is that MNBDR uses the cross-talk information in the module network, the better performance of our method validates the hypothesis of our strategy. Lastly, we also validated the performance by randomly permuting the drug–disease relations of the benchmark standard and calculating the p-values of the two indices (Methods); these p-values also proved the power of our method. The detailed results are shown in .
3.3. Comparing with the Other Methods

The connectivity approach is essential for drug screening using gene expression data, and a previous paper compared several connectivity approaches . To evaluate the power of our method, we compared its performance with all of these classic connectivity methods. From these results , all the connectivity approaches achieved better performance than random methods (p-value < 0.01), with XSumScore being the best among them. Nevertheless, MNBDR outperformed all the connectivity methods on both indices. In addition to the traditional connectivity methods, we also compared our method with three recently published methods (LLE-DML , Cogena , and EMUDRA ) that use gene expression data to screen drugs for diseases. The results shown in indicate that MNBDR was more effective than LLE-DML, Cogena, and EMUDRA. More importantly, the differences in AUC0.1 among the four methods are more pronounced, and AUC0.1 is very valuable for drug development . We also found that Cogena had performance similar to that of the Module based method, suggesting that our approach may be useful for different kinds of functional modules, which is valuable for further research. Moreover, although LLE-DML achieved the second-best performance, it behaves as a “black box”, making it very hard to investigate the important genes in the modules. In contrast, our method can reveal the important modules in diseases and can be used to investigate the biological mechanisms underlying disease progression and drug response.

3.4. Function Analysis of the Important Modules in Diseases

We also investigated the functions of the important modules to reveal the underlying mechanisms in disease and drug response. In our study, we selected the modules that were important in all 19 cancers and denoted them GCF (generalized cancer features); 15 modules were selected. We then used GSEA to analyze which pathways the genes contained in the GCF were involved in. Some enriched KEGG pathways for GCF genes are shown in , and all the enriched pathways are shown in . Among the 47 significant pathways (FDR < 1.0 × 10−4), “Pathways in cancer” was at the top, with an FDR of 2.89 × 10−13. Moreover, many of its sub-pathways were enriched, such as “MAPK signaling pathway”, “PI3K-Akt signaling pathway”, “FoxO signaling pathway”, “Proteoglycans in cancer”, “Jak-STAT signaling pathway”, “Regulation of actin cytoskeleton”, “Focal adhesion” and “ErbB signaling pathway”. Among these sub-pathways, the “MAPK signaling pathway” is reported to be essential for cancer-immune evasion in human cancer cells , and the “PI3K-Akt signaling pathway” plays a major role not only in tumor development but also in the tumor's potential response to cancer treatment . Recent studies indicate that numerous components of the phosphatidylinositol-3-kinase (PI3K)/AKT pathway show more frequent amplification, mutation, and translocation than any other pathway in cancer patients . Regarding “Proteoglycans in cancer”, the available evidence indicates that both an up-regulation of ribosome production and changes in ribosome structure might causally contribute to neoplastic transformation . Forkhead box O (FOXO) transcription factors are involved in multiple signaling pathways and function as tumor suppressors in a variety of cancers . Apart from these, many pathways of specific cancers were also enriched, such as “Non-small cell lung cancer”, “Prostate cancer”, “Endometrial cancer”, and “Basal cell carcinoma”. In short, the module genes are significantly enriched in many cancer-related pathways.
3.5. Case Study in Breast Cancer

Breast cancer is one of the most common cancers, and drug screening for breast cancer is essential for its therapy . As described above, MNBDR achieved good performance on the breast cancer data set. The details of the drugs identified by MNBDR for breast cancer are included in . Most of the identified drugs are supported by the literature, in addition to being confirmed by the benchmark data. Romidepsin is predicted as an efficient drug for breast cancer by our method. Romidepsin is a histone deacetylase inhibitor approved for the treatment of adult patients with cutaneous T-cell lymphoma (CTCL) . It modulates additional targets involved in cancer initiation and progression, such as c-myc, Hsp90 and p53, and has shown anticancer effects through induction of apoptosis, cell differentiation, and cell-cycle arrest, either alone or in combination . Colchicine has been considered one of the most effective medications for alleviating crystal-induced joint inflammation , and inhibition of microtubule polymerization is its chief mechanism of action. Microtubules are among the main protein filaments of the cytoskeleton, which is crucial to the regulation of many cellular activities . To date, microtubule-targeting agents (MTAs) remain one of the most reliable classes of antineoplastic drugs in the treatment of breast cancer . Based on this evidence, colchicine, which inhibits microtubule polymerization, may have a potential therapeutic effect on breast cancer, an effect that has also received a certain degree of experimental verification ; accordingly, colchicine is predicted as one of the most efficient drugs for breast cancer by MNBDR. Ciclopirox is also one of the top-ranked drugs predicted by our method for treating breast cancer; this drug is able to suppress the growth of breast cancer cells . All these results indicate that our method can not only prioritize drugs that have already been approved but also identify new indications for existing drugs.
As cross-talks among modules may be important in disease progression and drug response, we proposed a module network based drug repositioning (MNBDR) method. We used a module network, built with a permutation-test approach, to describe the cross-talks among the modules in the PPI network; then, using the gene expression data of disease and control samples, a network diffusion method was applied to rank the important modules in each disease. The important modules in each drug's response were likewise identified using the gene expression data of samples stimulated by the drug. Finally, a new index, which reveals whether the modules important in disease progression are also important in the drug response, was proposed to evaluate the efficiency of a drug for a specific disease. We evaluated our method using gene expression data of more than 7000 samples from 19 different cancers obtained from TCGA, as well as around 118,051 drug-instance measurements from the LINCS database. The results showed that MNBDR consistently outperformed the other methods in terms of not only AUC but also AUC0.1, indicating that the proposed method performs well at placing effective drugs at the top of the ranked lists. Functional annotation of the genes in the modules shows that our method can indeed capture the important genes in disease and drug response. In addition, the case study in breast cancer showed that our method can not only prioritize drugs that have been approved but also identify new indications for existing drugs. To prevent overfitting, we did not train the model and instead adopted typical values for its parameters. As n (the number of important modules) is a key parameter, we also used 10-fold cross-validation to select the best n on a training set and compared the performance of our current strategy (n = 15) with that of the optimal model from 10-fold cross-validation. We repeated the 10-fold cross-validation 10 times, and the results are shown in . The results of the optimal model were similar to those of our current strategy, which indicates that our model does not suffer from a data-leakage problem. Of course, our work has some limitations. We validated our method only on cancer data sets, although drug repositioning for other diseases is also valuable; in future work, we will test our method on more diseases.
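For clarity, the cross-validated selection of n can be sketched as follows. The wrapper `mnbdr_score` and the reuse of `auc_at_fpr` from the evaluation sketch above are our own assumptions, and the exact fold design used in the study may differ.

```python
import numpy as np
from sklearn.model_selection import KFold

def select_n_by_cv(pairs, labels, mnbdr_score, candidates=range(3, 51),
                   n_splits=10, seed=0):
    """10-fold cross-validation sketch for choosing n, the number of top
    disease modules used by the indicator S.

    pairs       : list of (disease, drug) tuples in the benchmark
    labels      : 0/1 benchmark labels aligned with `pairs`
    mnbdr_score : callable (pairs, n) -> scores, wrapping the full pipeline
    """
    labels = np.asarray(labels)
    kf = KFold(n_splits=n_splits, shuffle=True, random_state=seed)
    mean_perf = {}
    for n in candidates:
        fold_perf = []
        for train_idx, _ in kf.split(pairs):
            sub = [pairs[i] for i in train_idx]
            fold_perf.append(auc_at_fpr(mnbdr_score(sub, n), labels[train_idx]))
        mean_perf[n] = float(np.mean(fold_perf))
    # n maximizing the mean AUC0.1 across training folds
    return max(mean_perf, key=mean_perf.get)
```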
Overall survival is comparable between percutaneous radiofrequency ablation and liver resection as first-line therapies for solitary 3–5 cm hepatocellular carcinoma

Hepatocellular carcinoma (HCC) is the most common primary cancer of liver-cell origin and the sixth most common cancer worldwide . It is also the second leading cause of cancer-related deaths in Taiwan ( https://www.mohw.gov.tw/cp-6650-79055-1.html ). The updated Barcelona Clinic Liver Cancer (BCLC) guidelines recommend that patients with a solitary HCC without macrovascular invasion or extrahepatic spread be considered for liver resection (LR) if they do not show clinically significant portal hypertension (CSPH) . The survival benefit offered by radiofrequency ablation (RFA) in patients with HCC of ≤ 3 cm may be competitive with that offered by LR; RFA could therefore be given priority because of its lower invasiveness and cost . For patients with a solitary HCC of 3.0–5.0 cm, LR is recommended as the first-line treatment in the absence of CSPH . However, there is little evidence to support this recommendation. In randomized controlled trials (RCTs) that together enrolled 234 patients and reported outcomes for HCC of 3–5 cm, there was no significant difference in overall survival (OS) or recurrence-free survival (RFS) between LR and RFA. Although RCTs provide the highest level of evidence, the case numbers in these trials were limited. A meta-analysis of RCTs compared LR and RFA for patients with HCC within the Milan criteria ; trial sequential analysis performed on these data showed that a study randomizing more than 10,000 patients would be needed to obtain stable results and to confirm whether LR is superior to RFA, and such a study is unlikely to be designed . To our knowledge, only two retrospective studies have compared LR and RFA for the treatment of a single HCC of 3.0–5.0 cm . Therefore, in this retrospective study, we aimed to compare the survival outcomes of patients undergoing LR or RFA for a solitary HCC of 3.0–5.0 cm. The Institutional Review Board of Chang Gung Memorial Hospital-Kaohsiung Branch approved this study (reference number: 202000398B0). Data were extracted from the Kaohsiung Chang Gung Memorial Hospital HCC registry.

Patient enrollment

In this retrospective study, we enrolled 424 patients with Child–Pugh class A liver disease and a solitary HCC of 3–5 cm at BCLC stage A; 310 of these patients underwent LR and 114 underwent percutaneous RFA (Fig. ). All patients who received LR underwent an R0 resection. The raw data for the unmatched cohort are available via the following link: https://www.dropbox.com/scl/fi/gavmbuzjraf3yvp06xahd/raw-data-single-hcc-3.0-5.0cm-unmatched.xlsx?rlkey=9ehwehvdnt0k8c5febq67iwse&st=6wx5cx5f&dl=0 The raw data for the matched cohort are available via the following link: https://www.dropbox.com/scl/fi/t2p8nb09st0da0bzatoe1/raw-data-single-hcc-3.0-5.0cm-matched.xlsx?rlkey=pd21ud8y22ivald7ps52j0e3q&st=e46tcb7y&dl=0

Decision-making about treatment modalities for patients with a solitary hepatocellular carcinoma of 3–5 cm

Each patient newly diagnosed with HCC was discussed by a multidisciplinary HCC team. In general, ideal surgical candidates (i.e., patients with well-preserved liver function, without severe comorbidities, and with good performance status) would be referred for LR.
Variables of interest

Our HCC registry data included the 7th edition American Joint Committee on Cancer (AJCC) stage and the original BCLC staging system stage . Cirrhosis was defined according to histology for patients who underwent surgery and according to imaging studies for patients who underwent non-surgical treatments. Laboratory data included alpha-fetoprotein (AFP), hepatitis B surface antigen (HBsAg), anti-hepatitis C virus antibody (anti-HCV), Child–Pugh class, and the Model for End-Stage Liver Disease (MELD) score . Major resection was defined as resection of three or more liver segments. Comorbidities, the etiology of chronic liver disease, post-treatment complications, recurrence modality, and treatments for recurrence were not recorded in our HCC registry. Because of the relatively large sample size of the present study, we manually reviewed these data from medical records only for the matched cohorts; however, we manually reviewed the treatments for recurrence of all patients. Alcoholic liver disease was designated according to the diagnosis of the physician in charge. Non-alcoholic fatty liver disease (NAFLD) was defined as the presence of hepatic steatosis on histology or imaging studies after excluding HBsAg-positive, anti-HCV-positive, and alcoholic cases . Severe post-treatment complications were defined as Clavien–Dindo grades III–V . OS was calculated as the time elapsed from the date of treatment to the date of the last follow-up or death. RFS was defined as the time from treatment to recurrence or the last follow-up.

The procedure for liver resection and percutaneous radiofrequency ablation and surveillance after curative treatment for hepatocellular carcinoma

The procedures for LR and for surveillance after LR or RFA for HCC were described in our previous publications . All RFA procedures were performed under general anesthesia and percutaneously under ultrasonographic guidance using the multiple-electrode switching system-RFA with a radiofrequency electrode (Covidien LLC, Mansfield, MA, USA).

Definition of well-preserved liver function in Child–Pugh class A liver disease

In an Italian study that enrolled 543 patients with HCC who underwent LR, postoperative liver decompensation was independently associated with a MELD score of > 9 (odds ratio [OR] = 2.26; 95% confidence interval [CI] = 1.10–4.58; p = 0.02) . Therefore, we assumed that compensated liver function could be stratified with additional granularity by using a MELD score of > 9 for patients with HCC undergoing LR.

Statistical analyses

Patient characteristics are presented as numbers or medians (interquartile range [IQR]). Categorical variables were analyzed using the chi-square test, and continuous variables were analyzed using the Mann–Whitney U test. The Kaplan–Meier estimator and log-rank test were used to compare OS and RFS between groups. Propensity score matching (PSM) was used to identify a cohort of patients receiving LR with preoperative characteristics similar to those of patients receiving RFA. Propensity scores were estimated using a multivariable logistic regression model, with the treatment approach as the dependent variable and the following preoperative characteristics as covariates: age (> 65 vs ≤ 65 years), sex, AFP (≥ 20 vs < 20 ng/ml), and MELD score (> 9 vs ≤ 9). PSM was performed with 1:1 matching without replacement using a caliper width equal to 0.2 of the propensity score. Standardized mean difference (SMD) values < 0.1 indicated a trivial difference in a covariate between treatment groups, whereas values > 0.5 indicated substantial differences.
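As an illustration, the matching step can be sketched in Python as below. This is a minimal greedy implementation under our own assumptions: the caliper is taken as 0.2 standard deviations of the estimated propensity score (one common reading of a "0.2 caliper"), whereas the analyses in the study itself were performed in SPSS.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def propensity_match(X, treated, caliper_factor=0.2, seed=0):
    """Greedy 1:1 propensity-score matching without replacement.

    X       : binary covariates (age > 65, sex, AFP >= 20, MELD > 9)
    treated : 1 = RFA, 0 = LR; LR controls are matched to RFA cases
    """
    treated = np.asarray(treated)
    ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]
    caliper = caliper_factor * ps.std()  # assumed reading of the "0.2" caliper
    cases = np.flatnonzero(treated == 1)
    controls = list(np.flatnonzero(treated == 0))
    rng = np.random.default_rng(seed)
    rng.shuffle(cases)
    pairs = []
    for c in cases:
        if not controls:
            break
        d = np.abs(ps[controls] - ps[c])
        j = int(np.argmin(d))
        if d[j] <= caliper:  # accept only matches within the caliper
            pairs.append((int(c), controls.pop(j)))
    return pairs, ps

def smd_binary(x_case, x_control):
    """Standardized mean difference for one binary covariate after matching."""
    p1, p0 = x_case.mean(), x_control.mean()
    pooled = np.sqrt((p1 * (1 - p1) + p0 * (1 - p0)) / 2)
    return abs(p1 - p0) / pooled if pooled > 0 else 0.0
```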
Local recurrence after treatment was analyzed in a competing risks framework, with non-local recurrence as the competing event; conversely, non-local recurrence was analyzed with local recurrence as the competing event. Cumulative incidence functions (CIFs) were estimated according to Kalbfleisch et al. . The Gray test was performed to assess CIF differences between the LR and RFA groups. All p-values were two-tailed, and a p-value of < 0.05 was considered statistically significant. All statistical analyses were performed using IBM SPSS Statistics, version 25.
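To make the competing-risks analysis concrete, the following Python sketch implements the standard nonparametric CIF estimator for one cause. The event coding is our own convention, and Gray's test itself is typically run with the R package cmprsk rather than re-implemented.

```python
import numpy as np

def cumulative_incidence(time, event, cause=1):
    """Nonparametric cumulative incidence function under competing risks.

    time  : follow-up time per patient
    event : 0 = censored, 1 = local recurrence, 2 = non-local recurrence
    cause : the event of interest; the other cause competes
    """
    order = np.argsort(time)
    t = np.asarray(time, float)[order]
    e = np.asarray(event, int)[order]
    at_risk = len(t)
    surv = 1.0   # overall event-free probability just before the current time
    cif = 0.0
    grid, values = [], []
    for u in np.unique(t):
        mask = t == u
        d_cause = int(np.sum(e[mask] == cause))   # events of interest at time u
        d_any = int(np.sum(e[mask] != 0))         # events of any cause at time u
        if at_risk > 0:
            cif += surv * d_cause / at_risk
            surv *= 1.0 - d_any / at_risk
        at_risk -= int(mask.sum())
        grid.append(u)
        values.append(cif)
    return np.array(grid), np.array(values)
```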
Characteristics of patients undergoing liver resection or percutaneous radiofrequency ablation therapy in the unmatched cohort

Tumor size in the LR group was larger than in the RFA group (p < 0.001). The proportion of male patients was higher (p = 0.005), and the proportions of patients aged > 65 years (p < 0.001) or with a MELD score of > 9 (p < 0.001) were lower, in the LR group compared with the RFA group. There were no significant differences in AFP level, HBsAg positivity, or anti-HCV positivity between the groups (Table ). Of the 310 patients who received LR, 126 (40.6%) underwent major resection; pathology data showed that 122 (39.4%) patients had AJCC stage 1, 187 (60.2%) stage 2, and 1 (0.3%) stage 3 disease; 169 (54.7%) patients were non-cirrhotic, and 186 (60%) patients showed microvascular invasion (MVI). We did not report cirrhosis prevalence in the RFA group because image-defined cirrhosis is vague and subjective. Six (1.9%) patients in the LR group and three (2.6%) patients in the RFA group eventually received liver transplants.

Treatments for recurrence in the unmatched cohort

Of the 310 patients who received LR, 101 (32.6%) developed recurrence.
The patients in the LR group underwent the following treatments for recurrence: 12 (11.8%) underwent LR, 40 (39.6%) underwent RFA, 3 (3.0%) received percutaneous ethanol injection (PEI), 37 (36.6%) underwent transarterial chemoembolization (TACE), 5 (5.0%) received targeted therapies (i.e., sorafenib or lenvatinib), 1 (1.0%) received atezolizumab + bevacizumab therapy, 1 (1.0%) was enrolled in a systemic therapy clinical trial, 1 (1.0%) was lost to follow-up, and 3 (3.0%) received best supportive care (BSC). Of the 114 patients who received RFA, 58 (50.9%) developed recurrence. The patients in the RFA group underwent the following treatments for recurrence: 3 (5.2%) underwent LR, 31 (53.4%) underwent RFA, 1 (1.7%) received PEI, 17 (29.3%) underwent TACE, 3 (5.2%) received targeted therapies, and 1 (1.7%) received BSC.
Five-year overall survival and recurrence-free survival of the unmatched cohort
The 5-year OS of the LR group was 70%, compared to 48% in the RFA group (p < 0.001) (Fig. ). The 5-year RFS of the LR group was 52% and that of the RFA group was 19% (p < 0.001) (Fig. ).
Five-year overall survival and recurrence-free survival of the unmatched cohort stratified by tumor size
Among all patients (n = 424), 208 underwent LR and 94 underwent RFA for a tumor size of 3.1–4.0 cm, and 102 underwent LR and 20 underwent RFA for a tumor size of 4.1–5.0 cm. Among patients with a tumor size of 3.1–4.0 cm, 5-year OS was 72% in the LR group and 51% in the RFA group (p = 0.0032; Fig. ), and 5-year RFS was 54% in the LR group and 22% in the RFA group (p = 0.0001; Fig. ). Among patients with a tumor size of 4.1–5.0 cm, 5-year OS was 67% in the LR group and 31% in the RFA group (p = 0.0012; Fig. ), and 5-year RFS was 48% in the LR group and unmeasurable in the RFA group due to a limited follow-up period (p = 0.0005; Fig. ).
Baseline characteristics of matched cohorts
There were no significant differences in the etiology of chronic liver disease; common comorbidities, including diabetes, hypertension, and cardio-cerebrovascular diseases; age; sex; MELD score; or AFP level between the two groups (Table ). One patient who underwent LR developed a severe post-treatment complication (i.e., massive right pleural effusion for which pigtail drainage was performed), whereas no patients in the RFA group developed severe complications (p = 1.000).
Five-year overall survival and recurrence-free survival of the matched cohort
The 5-year OS of the LR group was 58%, whereas that of the RFA group was 50% (p = 0.367) (Fig. ). The 5-year RFS of the LR group was 55% and that of the RFA group was 16% (p = 0.001) (Fig. ).
Five-year overall survival and recurrence-free survival of the matched cohort stratified by tumor size
After PSM, there were 99 patients in each of the LR and RFA groups. Among the 99 patients in the LR group, 64 (64.6%) had a tumor size of 3.1–4.0 cm and 35 (35.3%) had a tumor size of 4.1–5.0 cm. Among the 99 patients in the RFA group, 81 (81.8%) had a tumor size of 3.1–4.0 cm and 18 (18.1%) had a tumor size of 4.1–5.0 cm. Among the patients with a tumor size of 3.1–4.0 cm, 5-year OS was 69% in the LR group and 53% in the RFA group (p = 0.146; Fig. ), and 5-year RFS was 63% in the LR group and 18% in the RFA group (p = 0.0007; Fig. ). Among the patients with a tumor size of 4.1–5.0 cm, 5-year OS was 33% in the LR group and 33% in the RFA group (p = 0.6323; Fig.
), and 5-year RFS was 40% in the LR group and unmeasurable in the RFA group due to a limited follow-up period (p = 0.0333; Fig. ).
Characteristics of tumor recurrence and treatments for recurrence after propensity score matching
Local recurrence was significantly higher in the RFA group compared to the LR group (p = 0.005). There were no significant differences in the proportions of patients with recurrence beyond the Milan criteria (p = 0.548) or of patients who underwent curative treatments (p = 0.5) between the RFA group and the LR group (Table ). Thirty patients developed recurrence in the LR group and 52 patients in the RFA group. The details of recurrence modality are as follows: 8 (26.6%) patients were BCLC 0, 10 (33.3%) were BCLC A, 5 (16.6%) were BCLC B, and 9 (30%) were BCLC C in the LR group; 14 (26.9%) patients were BCLC 0, 23 (44.2%) were BCLC A, 7 (13.5%) were BCLC B, and 6 (11.5%) were BCLC C in the RFA group. The details of treatment modalities for recurrence are as follows: 3 (10.0%) patients underwent LR, 13 (43.3%) underwent RFA, 1 (3.3%) received PEI, 11 (36.6%) underwent TACE, 3 (10.0%) received targeted therapies (i.e., sorafenib or lenvatinib), and 1 (3.3%) received BSC in the LR group; 3 (5.7%) patients underwent LR, 26 (50.0%) underwent RFA, 1 (1.9%) received PEI, 16 (30.7%) underwent TACE, 3 (5.7%) received targeted therapies, and 1 (1.9%) received BSC in the RFA group.
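As a concrete illustration of the matching and competing-risks methods described earlier (the study itself used IBM SPSS Statistics), the following is a minimal R sketch under stated assumptions: the data frame df and its columns (treat, age_gt65, sex, afp_ge20, meld_gt9, time_months, event) are hypothetical, and MatchIt applies the 0.2 caliper on its default standardized propensity score scale.

```r
# Minimal sketch of 1:1 PSM and the competing-risks analysis; all object and
# column names are hypothetical. The published analysis was done in SPSS.
library(MatchIt)   # propensity score matching
library(cmprsk)    # cumulative incidence functions and Gray's test

# Propensity score from logistic regression of treatment on the four
# preoperative covariates; 1:1 nearest-neighbor matching without replacement
m <- matchit(treat ~ age_gt65 + sex + afp_ge20 + meld_gt9,
             data = df, method = "nearest", ratio = 1,
             replace = FALSE, caliper = 0.2)
matched <- match.data(m)

# Competing risks: event coded 0 = censored, 1 = local recurrence,
# 2 = non-local recurrence (the competing event)
ci <- cuminc(ftime   = matched$time_months,
             fstatus = matched$event,
             group   = matched$treat)
ci$Tests   # Gray's test comparing CIFs between the LR and RFA groups
plot(ci)   # cumulative incidence curves
```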
Cumulative incidence of local and non-local tumor recurrence in patients after propensity score matching
The cumulative incidence of local tumor recurrence was significantly higher in the RFA group compared to the LR group (p < 0.001) (Fig. ). The cumulative incidence of non-local recurrence did not differ between the two groups (p = 0.7) (Fig. ).
In the present study, the proportion of patients aged > 65 years and with a MELD score of > 9 was higher in the RFA group compared to the LR group in the unmatched cohort; these results suggest that patients of older age (older age being a surrogate marker of severe comorbidities) and those with inadequate liver function reserve (i.e., a MELD score of > 9) were referred for RFA. The inherent selection bias between the two treatment modalities resulted in better OS and RFS in the LR group of the unmatched cohort. After PSM, 5-year OS did not differ between the LR group and the RFA group, despite the 5-year RFS of the former being better. These results could be explained by local recurrence being higher in the RFA group (p < 0.001), whereas non-local recurrence did not differ between the two groups (p = 0.70). An Italian study reported that among patients with HCC who underwent RFA, the post-recurrence survival of those with local recurrence was better than that of patients with non-local recurrence; because local recurrence is considered to represent incomplete ablation with residual tumor, re-ablation could be performed effectively. However, non-local recurrence could be partly due to occult metastasis from the primary tumor, which indicates aggressive tumor biology and, consequently, a worse outcome . Tumor size is a well-known prognostic factor for patients with HCC . Therefore, we performed subgroup analyses stratified by tumor size (3.1–4.0 and 4.1–5.0 cm). Our results showed that 5-year OS was comparable between the two treatment groups after PSM, irrespective of tumor size; however, 5-year RFS was superior in the LR group compared to the RFA group, irrespective of tumor size. We used a MELD score of > 9 to indicate inadequate liver function reserve in the present study. Traditionally, the MELD score is used to evaluate the severity of deterioration of liver function reserve in patients with liver decompensation . However, numerous studies have shown its utility for patients with HCC undergoing LR, supporting the application of the MELD score in this setting. We used AFP ≥ 20 ng/ml as a covariate in the PSM. This cutoff value is from the American Association for the Study of Liver Diseases guidelines, which recommend that patients at risk of HCC undergo surveillance using contrast-enhanced computed tomography or magnetic resonance imaging if their AFP level is ≥ 20 ng/ml. Patients with HCC and AFP ≥ 20 ng/ml are also referred to as those with AFP-positive HCC . As tumor size increases, the risk of MVI also increases . Of the 310 patients who underwent LR enrolled in the present study, MVI was noted in 186 (60%). MVI indicates aggressive tumor biology and an increased risk of micro-metastasis, which could make even complete tumor resection less effective. Lei et al. enrolled 72 patients undergoing LR and 50 patients undergoing RFA to treat a single HCC of 3.0–5.0 cm. Their results showed that OS and RFS were comparable between the two groups.
Cox regression analysis showed that neither LR nor RFA was a significant risk factor for OS or RFS. However, that study included a limited number of cases, and 34.7% of the LR group and 26% of the RFA group had Child–Pugh class B liver disease . Ye et al. enrolled 196 patients who underwent LR and 192 patients who underwent RFA for a single HCC of 3.0–5.0 cm. After PSM, 5-year OS was 34% and 40% (p = 0.103) and 5-year RFS was 10% and 15% (p = 0.087) in the RFA group and LR group, respectively. In addition, 7.6% of the LR group and 8.8% of the RFA group had Child–Pugh class B liver disease . Postoperative liver decompensation is the most representative cause of morbidity and mortality after LR . Thus, the ideal candidates for LR should be those with well-preserved liver function. With the advent of locoregional therapies, patients with early-stage HCC and inadequate liver function reserve should be referred for such therapies if liver transplantation is not feasible . Accordingly, we only enrolled patients with Child–Pugh class A liver disease in the present study. In addition, these two previous studies did not analyze differences in local and non-local recurrence between the two treatment modalities, which is the key concept for explaining the comparability of OS between them. A Chinese multi-center study enrolled 1289 patients who underwent percutaneous microwave ablation (MWA) (n = 414) or laparoscopic liver resection (LLR) (n = 875) as the first-line therapy for a solitary HCC of 3–5 cm. After PSM, there were no differences in OS between MWA and LLR (hazard ratio [HR] = 0.88, 95% CI = 0.65–1.19, p = 0.420), and MWA was inferior to LLR in RFS (HR = 1.36, 95% CI = 1.05–1.75, p = 0.017) . Our findings are compatible with the results of the Chinese study despite the use of different thermal ablation modalities. The same group of authors conducted a study of the same patients, but with age restricted to > 60 years. The MWA group consisted of 309 patients and the LLR group of 363 patients. After PSM, OS was similar between the two groups (HR 0.98, p = 0.900) and RFS was inferior in the MWA group (HR 1.52, p = 0.007) . Microwave ablation has potential advantages compared to RFA, including the ability to achieve higher temperatures and larger ablation zones, with lower susceptibility to heat-sink effects. Despite these advantages, a recent systematic review and meta-analysis reported that the efficacy of MWA, as measured by incomplete ablation and complication rates, was similar to that of RFA for HCC less than 5 cm . This may be explained by the fact that the efficacy of thermal ablation is largely dependent on operator experience. RFA has been included in clinical guidelines as a curative treatment for early-stage HCC since the early 2000s, whereas MWA has been increasingly applied in clinical practice in the last decade . However, we would not select patients with perivascular tumors for RFA treatment. Stereotactic body radiotherapy has also been noted for its suitability for treating tumors located in anatomical sites where RFA would be challenging . Bridging therapies are used in patients meeting liver transplantation criteria to delay HCC progression and minimize the risk of delisting while on the waiting list . Due to the extreme shortage of donors in Taiwan, only 9 (2.1%) of the 424 patients in our study eventually received liver transplants.
The strength of our study is that we enrolled a relatively large number of patients with a single HCC of 3.0–5.0 cm and Child–Pugh class A liver disease who underwent percutaneous RFA or LR compared to previous studies , and our results are consistent with those of previous studies . A limitation of our study is that, as a single-center retrospective investigation, it may carry inherent selection bias. In addition, the study lacked data on tumor location (superficial vs deep) because this was not mentioned in our imaging reports. For patients with deep-seated HCC and the presence of clinically significant portal hypertension (CSPH), up-front liver transplantation is desirable but not always available. In general, these patients would be referred for RFA. The 5-year OS of patients with a solitary HCC of 3–5 cm was comparable between the LR and RFA groups after PSM. In addition, the two groups did not differ in severe post-treatment complications. Accordingly, percutaneous RFA could be the first-line treatment for patients with a solitary HCC of 3–5 cm who are reluctant to undergo surgery. The results of the present study, along with those of previous studies, can reassure physicians that the outcome of RFA is no worse than that of LR, even for patients with a single HCC of 3.0–5.0 cm. Therefore, clinicians should not recommend LR for patients who are not ideal candidates for it.
Mendelian randomization of plasma proteomics identifies novel ALS-associated proteins and their GO enrichment and KEGG pathway analyses | b1c2f8df-7a5a-48be-a86b-63592a4ec4e5 | 11874834 | Biochemistry[mh] | Amyotrophic lateral sclerosis (ALS), also known as Lou Gehrig's disease, is a progressive neurodegenerative disorder that affects motor neurons in the brain and spinal cord. Globally, two to three out of every 100,000 people develop ALS annually, with a higher prevalence among men . ALS can be classified into familial and sporadic forms, with familial ALS accounting for approximately 10% of cases . The disease is characterized by severe and progressive degeneration of motor neurons in the lower brainstem and upper cerebral cortex , leading to muscle atrophy, paralysis, stiffness, fasciculations, and spasticity. These symptoms result in difficulties with walking, hand coordination, speech, swallowing, and breathing . Unfortunately, ALS is often diagnosed only one year after symptom onset . Delayed diagnosis significantly hinders early therapeutic intervention, exacerbating disease progression and complicating treatment . Furthermore, ALS progresses rapidly, with an average survival period of 2 to 4 years post-diagnosis, making it one of the most lethal motor neuron diseases . By 2040, the global burden of ALS is projected to rise substantially, shifting from developed to developing nations and imposing a heavy strain on healthcare systems . The lack of biomarkers for early diagnosis, clinical stratification, and treatment monitoring severely impedes the development of novel ALS therapies . Although proteins encoded by genes such as SOD1, C9orf72, and FUS have been implicated in ALS pathogenesis, only tofersen, approved in the United States for adults with SOD1 mutations, has shown clinical efficacy . Given the economic and clinical significance of ALS and the incomplete characterization of its genetic underpinnings, identifying key plasma proteins involved in ALS pathogenesis is critical for developing new therapeutic strategies. To elucidate disease mechanisms, discover biomarkers, and uncover biological pathways, an increasing number of studies integrate proteomic and genomic data . Plasma proteins play vital roles in immune regulation, molecular transport, signal transduction, tissue repair, and homeostasis maintenance . As potential drivers of central nervous system disorders and major sources of drug targets, plasma proteins serve as diagnostic biomarkers and therapeutic intervention targets, holding significant value in human health and disease management . Consequently, identifying disease-associated plasma proteins can deepen our understanding of pathophysiology and offer molecular targets for drug development. Mendelian randomization (MR) is a statistical method that uses genetic variants as instrumental variables to infer causal relationships between exposures (e.g., proteomic factors) and outcomes (e.g., ALS). Because alleles are randomly allocated at conception, MR is less susceptible to confounding than traditional observational studies, and sensitivity analyses can be used to assess the validity of its assumptions . In MR, genetic variants associated with protein levels (protein quantitative trait loci, pQTLs) act as instrumental variables. By selecting cis-acting pQTLs (genetic variants near the target gene), MR provides functional annotations for disease-associated loci, prioritizes candidate genes from GWAS findings, and reveals tissue-specific disease mechanisms.
Integrating expression quantitative trait loci (eQTL) data and gene network analyses further enhances the exploration of gene interactions . MR has been widely applied to identify novel therapeutic targets and repurpose existing drugs . Leveraging large-scale blood proteome datasets (e.g., deCODE ), MR analyses can uncover genetic components of complex diseases influenced by circulatory factors. Identifying proteins causally linked to ALS may improve our understanding of its genetic architecture and highlight potential therapeutic targets. Here, we employed a multiomics dataset to assess the causal effects of 4,907 plasma proteins on ALS, aiming to discover novel drug targets and dissect their pathophysiological roles. Subsequent enrichment analyses were conducted to identify pathways implicated in ALS pathogenesis, providing a theoretical foundation for developing effective therapies. This study seeks to advance therapeutic strategies for ALS, a disease with profound clinical challenges and limited treatment options.
Data from ALS
ALS data come from the publicly accessible GWAS in the IEU OpenGWAS project ( https://gwas.mrcieu.ac.uk/ ); the GWAS ID is ebi-a-GCST005647. In all, 80,610 people of European ancestry participated in this GWAS, comprising 20,806 ALS patients and 59,804 controls. A total of 39,630,630 SNPs were examined (Fig. ).
Data from plasma protein quantitative trait loci (pQTL)
The pQTL data in this study come from the study by Ferkingstad et al. (2021), which produced the largest pQTL dataset to date . In summary, 35,559 Icelanders participated in a genome-wide association study (GWAS) by Ferkingstad and colleagues, which examined plasma proteins using 4,907 aptamers. They found 18,084 sequence variants associated with plasma protein levels; rare variants (minor allele frequency [MAF] < 1%) accounted for 19% of these associations. They discovered 257,490 associations by examining the relationships between plasma protein levels and 373 diseases and other traits. By combining pQTL data with genetic associations for traits and diseases, it was possible to identify 938 genes as potential drug targets, and 12% of the lead variants in the GWAS catalog were in high linkage disequilibrium with pQTLs.
Selection of instrumental variables
We investigated the causal relationship between plasma proteins and ALS using two-sample MR analysis . The MR method is based on the following assumptions: (i) instrumental variables are closely related to the exposure (plasma protein levels); (ii) instrumental variables affect the outcome (ALS risk) only through their effect on the exposure; (iii) instrumental variables are independent of any confounding factors. To obtain single nucleotide polymorphisms (SNPs) closely related to the exposure, we first set p < 5 × 10⁻⁸ in accordance with the MR assumptions . Second, we assessed linkage disequilibrium among the exposure SNPs using PLINK software, setting the clumping threshold at r² < 0.001 within a 10,000 kb window . Minor allele frequency (MAF) was also considered in SNP selection to ensure that the instrumental variables used are common enough to avoid weak instrument bias; SNPs with low MAF (typically < 1%) are excluded, as they may lead to imprecise estimates of the causal effect due to limited power . Horizontal pleiotropy, heterogeneity, and sensitivity analyses are important tools for quality control of MR results . We used the MR-PRESSO and MR-Egger regression techniques to investigate possible horizontal pleiotropy among instrumental variables . Heterogeneity among selected instrumental variables was assessed using Cochran's Q statistic and its associated p-value (p < 0.05 indicates heterogeneity is present; p > 0.05 indicates no heterogeneity) . To assess whether any particular SNP had an excessive impact on the overall causal estimate, a leave-one-out analysis was carried out by eliminating each SNP in turn and computing the combined effect of the remaining SNPs. Additionally, we computed the F statistic (F = beta²/se²), where beta is the SNP's effect size on the exposure and se is its standard error, to assess the validity of the included SNPs: an F statistic > 10 suggests a robust instrument, whereas an F statistic < 10 indicates a weak one (Supplementary Table ).
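As an illustration of this selection procedure, the following is a minimal sketch using the TwoSampleMR R package named in the next subsection; the object pqtl (pre-formatted pQTL summary statistics for a single protein, with TwoSampleMR's standard exposure columns) is a hypothetical input, not the authors' code.

```r
# Minimal sketch of instrument selection for one plasma protein.
# `pqtl` is a hypothetical data frame in TwoSampleMR exposure format.
library(TwoSampleMR)

# Keep genome-wide significant pQTLs (p < 5e-8)
exp_dat <- subset(pqtl, pval.exposure < 5e-8)

# LD clumping: r2 < 0.001 within a 10,000 kb window
exp_dat <- clump_data(exp_dat, clump_r2 = 0.001, clump_kb = 10000)

# Instrument strength: F = beta^2 / se^2; F > 10 suggests a robust instrument
exp_dat$F <- (exp_dat$beta.exposure / exp_dat$se.exposure)^2
exp_dat <- subset(exp_dat, F > 10)
```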
Mendelian randomization analysis
Mendelian randomization (MR) techniques, including MR-Egger, weighted median, inverse variance weighting (IVW), simple mode, and weighted mode, were employed in this study , with IVW serving as the primary method for evaluating causal estimates. By integrating these different methods, we can verify hypothesized causal relationships from different perspectives, thereby increasing the credibility and accuracy of causal inferences. R software was used for all statistical analyses, and the TwoSampleMR and MendelianRandomization packages were used for MR analysis. These packages provide tools for conducting MR analyses, testing hypotheses, and performing sensitivity analyses, offering a comprehensive framework for the statistical assessment of causal relationships in genetic epidemiology. In addition, to address false positives caused by multiple comparisons, this study applied a false discovery rate (FDR) correction to the p-values to control the false-positive rate in multiple hypothesis testing. By ranking all p-values and then correcting them, the false-positive rate can be effectively controlled at a high significance level. This correction procedure improves the reliability of the results, especially in large-scale gene association studies, while avoiding the false negatives caused by overly strict correction methods.
Analysis of plasma protein differences associated with ALS
We performed differential expression analysis on the positive results from the forward MR analysis. Genes with p < 0.05 and |log fold change| (|logFC|) ≥ 1 were regarded as differentially expressed genes (DEGs). We used a volcano plot to display the common DEGs among the plasma proteins, ultimately identifying a set of plasma protein genes associated with ALS. In the volcano plot, green indicates down-regulated genes and red indicates up-regulated genes.
GO and KEGG enrichment analysis
We used the R package clusterProfiler to conduct Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) analyses on the differentially expressed plasma protein genes, obtaining detailed information on cellular components (CC), molecular functions (MF), biological processes (BP), and KEGG pathways. We then used two R packages, ggplot2 and circular, to visualize the most representative results, specifically the top 30 KEGG pathways and the top 10 GO terms with the lowest p-values.
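To make the estimation and quality-control workflow above concrete, here is a minimal TwoSampleMR sketch; exp_dat carries the clumped instruments from the previous step, the outcome ID ebi-a-GCST005647 is the ALS GWAS named above, and the Benjamini–Hochberg procedure is an assumed implementation of the FDR correction described in this section.

```r
# Minimal sketch of MR estimation, sensitivity checks, and FDR correction.
library(TwoSampleMR)

out_dat <- extract_outcome_data(snps = exp_dat$SNP,
                                outcomes = "ebi-a-GCST005647")  # ALS GWAS
dat <- harmonise_data(exp_dat, out_dat)

# IVW as the primary estimator, plus the complementary methods
res <- mr(dat, method_list = c("mr_ivw", "mr_egger_regression",
                               "mr_weighted_median", "mr_simple_mode",
                               "mr_weighted_mode"))

# Quality control: Cochran's Q heterogeneity, MR-Egger intercept test for
# horizontal pleiotropy, and leave-one-out sensitivity analysis
mr_heterogeneity(dat)
mr_pleiotropy_test(dat)
mr_leaveoneout(dat)

# After looping over all proteins, adjust the pooled IVW p-values
# (assumed Benjamini-Hochberg implementation of the FDR correction):
# res_all$fdr <- p.adjust(res_all$pval, method = "BH")
```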
Plasma proteins associated with ALS
After strictly following the study's instrumental variable selection criteria, 1250 plasma proteins were included in the MR analysis; the corresponding SNP information is detailed in Supplementary Table . Notably, among these 1250 plasma proteins, the MR analysis, based on the IVW or Wald ratio results (p < 0.05) and the FDR-corrected results (FDR < 0.05), revealed that 491 plasma proteins (Supplementary Table ) may be associated with ALS. In this study, the odds ratio (OR) is used to measure the impact of a plasma protein on ALS: an OR > 1 indicates that the plasma protein is a risk factor, whereas an OR < 1 indicates a protective factor. The confidence interval (CI) reflects the uncertainty range of the estimate, with the 95% CI typically used to evaluate the confidence level. Among the 491 ALS-associated plasma proteins, 95 were identified as potential risk factors for ALS (β > 0, OR > 1), while 396 were judged to be potential protective factors (β < 0, OR < 1). We then performed an in-depth differential analysis of these ALS-associated plasma proteins. Among the top 20 most significantly different plasma proteins (Fig. ), 11 are up-regulated genes, namely C1QC, UMOD, SLITRK5, ASAP2, TREML2, DAPK2, F2, ARHGEF10, POLM, SST, and SIGLEC1. The down-regulated plasma protein genes include ADPGK, BTNL9, COLEC12, ADGRF5, FAIM, CRTAM, PRSS3, BAG5, and PSMD11. After completing the heterogeneity and horizontal pleiotropy analyses, we excluded F2 from subsequent studies due to heterogeneity (Cochran's Q p < 0.05) (Supplementary Table ). It is important to note that in the MR analysis of certain specific exposure factors (for example, ADPGK), we did not use the weighted median method, because this method requires at least three independent SNPs as valid instrumental variables to ensure the reliability and statistical power of the results. For genes like ADPGK, after detailed verification (Supplementary Table ), the number of SNPs meeting this requirement was fewer than three, so the weighted median method could not be applied. In the analysis of these specific exposure factors, we therefore relied on other appropriate methods such as IVW. It should be noted, however, that although the IVW method remains applicable when the number of SNPs is small, it has certain limitations: when pleiotropy exists, its robustness may be weakened, which in turn may affect the accuracy and reliability of the results. According to the MR analysis results (Fig. ), multiple proteins exhibit the characteristics of potential risk factors for the disease (OR > 1), including SIGLEC1, SLITRK5, and SST, among others. Notably, SIGLEC1 demonstrates the strongest risk effect, with an inverse-variance weighted OR of 1.658 (95% CI: 1.036–2.655, p = 0.035), suggesting that this protein may significantly promote disease progression; its underlying mechanisms warrant further investigation. Additionally, SLITRK5 shows significant risk associations with both methods (weighted median OR = 1.258, p = 0.029; inverse-variance weighted OR = 1.318, p = 0.003), indicating the robustness of its pathogenic role.
It is noteworthy that POLM, although analyzed only by the inverse-variance weighted method (nsnp = 2), still exhibits an OR of 1.289 (p = 0.023), highlighting its potential value as a novel risk factor. These results collectively unveil a complex disease risk regulatory network at the proteome level. On the other hand, the MR analysis results shown in Fig. indicate that multiple proteins show the characteristics of potential protective factors for the disease (OR < 1), including ADGRF5, ADPGK, BTNL9, and FAIM. Among them, ADPGK demonstrated the strongest protective effect, with an OR of 0.417, indicating that it may play a vital role in the prevention of the disease. In addition, a reverse MR analysis was performed on these 20 genes, and two plasma proteins, ADPGK and ADGRF5, were identified. These two genes were negatively correlated with ALS, meaning that decreases in ADPGK and ADGRF5 levels were closely related to the progression of ALS. From the forest plot (Fig. A), it is evident that the CIs for most individual SNPs cross zero, indicating that the effects of individual SNPs are not statistically significant. Furthermore, the overall effect estimate (marked by the red line) is also close to zero, which suggests that the causal effect of ALS on ADGRF5 may be weak or even non-significant. However, neither the MR-Egger nor the IVW method revealed significant horizontal pleiotropy. The leave-one-out plot (Fig. B) presents the sensitivity analysis results of sequentially removing each SNP, aiming to verify whether any specific SNP has a significant impact on the overall causal effect estimate. In this plot, the red dot represents the overall effect estimate, while the horizontal line indicates its confidence interval. It is evident from the figure that after each SNP is excluded, the change in the effect estimate is minimal and remains close to zero, demonstrating that no single SNP significantly influences the causal effect estimate of ALS on ADGRF5; this stable estimate indicates the robustness of the analysis. The funnel plot (Fig. C) is primarily used to assess potential bias in the MR analysis. The distribution of SNPs in the plot appears roughly symmetrical, without obvious skewness or clustering, suggesting that the likelihood of horizontal pleiotropy or selection bias is low. The effect estimates of the MR-Egger and IVW methods across different SNPs are generally consistent, which further supports the reliability of the causal effect analysis. The scatter plot shows the relationship between the exposure effect and the outcome effect of each SNP, with fitted lines of different colors representing the effect estimates of different MR methods (IVW, MR-Egger, weighted median, etc.). In Fig. D, the data points are mostly close to zero and relatively concentrated, indicating that the SNP effects are weak. The slopes of the fitted lines of the different MR methods are close to zero, further confirming that the overall causal effect of ALS on ADGRF5 is small or non-significant. However, the directional consistency of the fit is high, which supports the robustness of the MR analysis. Figure presents the results of analyzing the causal effect of ALS on ADPGK using the MR method. The forest plots (Fig.
A), leave-one-out plot (Fig. B), funnel plots (Fig. C), and scatter plots (Fig. D) all consistently indicate that the causal effect of ALS on ADPGK is weak or non-existent. Notably, both the scatter plot and the funnel plot show no significant bias in the MR analysis, and the leave-one-out analysis further strengthens the robustness of the results. In summary, while the causal effect of ALS on ADPGK provides preliminary evidence, its significance has not been conclusively established, and further verification is needed to ensure the accuracy and reliability of the findings.
Analysis of gene ontology and KEGG pathway enrichment
We performed GO analysis on the positive plasma protein genes found in the forward MR study using the clusterProfiler R package. The results revealed significant enrichment in 60 CC, 143 MF, and 982 BP terms, all of which were statistically significant (P < 0.05) (Supplementary Table ). Figure A shows a circular graph summarizing the results of the GO analysis of the positive plasma protein genes identified in the forward MR study. The diagram consists of four concentric circles. The outer ring shows the top 18 enriched categories of the GO analysis, with different colors representing the GO category: purple for MF, yellow for CC, and green for BP. The second circle represents the total number of genes in the genomic background, as well as the Q values of up-regulated genes in a particular biological process. Notably, GO:0062023 (collagen-containing extracellular matrix) had the highest number of genes (429) and the most significant enrichment. The third circle shows the number of differential genes in each enriched pathway. As can be seen from the figure, the collagen-containing extracellular matrix in CC and the negative regulation of the response to external stimuli in BP show the highest gene counts, indicating their important role in ALS pathogenesis. The fourth circle shows the enrichment factor for each GO term; GO:0035580 (specific granule lumen) had the highest enrichment factor, suggesting a potential immune-related mechanism in ALS. Based on the lowest P values, Fig. B displays the top GO terms in each category. The top five CC are collagen-containing extracellular matrix, cytoplasmic vesicle lumen, vesicle lumen, secretory granule lumen, and specific granule lumen. The top five MF are glycosaminoglycan binding, peptidase inhibitor activity, sulfur compound binding, endopeptidase regulator activity, and endopeptidase inhibitor activity. The top five BP are external encapsulating structure organization, extracellular matrix organization, extracellular structure organization, chemotaxis, and taxis. Through KEGG pathway analysis, we identified 50 signaling pathways (P < 0.05) (Supplementary Table ). The top-ranked signaling pathways are the PI3K-Akt signaling pathway, cytokine-cytokine receptor interaction, axon guidance, lipid and atherosclerosis, and the chemokine signaling pathway. These pathways collectively control inflammatory responses, cell survival, axon guidance, and metabolic processes, and are thus involved in the development and course of the disease (Fig. C).
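For concreteness, the following is a minimal clusterProfiler sketch of the enrichment workflow reported above; the input vector deg_symbols (the ALS-associated plasma protein gene symbols) is a hypothetical name.

```r
# Minimal sketch of the GO and KEGG enrichment analyses described above.
# `deg_symbols` is a hypothetical character vector of gene symbols.
library(clusterProfiler)
library(org.Hs.eg.db)

# GO enrichment across BP, CC, and MF
ego <- enrichGO(gene         = deg_symbols,
                OrgDb        = org.Hs.eg.db,
                keyType      = "SYMBOL",
                ont          = "ALL",
                pvalueCutoff = 0.05)

# KEGG enrichment requires Entrez IDs
ids   <- bitr(deg_symbols, fromType = "SYMBOL", toType = "ENTREZID",
              OrgDb = org.Hs.eg.db)
ekegg <- enrichKEGG(gene = ids$ENTREZID, organism = "hsa",
                    pvalueCutoff = 0.05)

# Quick visualizations of the top terms and pathways
dotplot(ego,   showCategory = 10)
dotplot(ekegg, showCategory = 30)
```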
To investigate potential associations between 4,907 circulating plasma proteins and ALS, this study employed MR methods and identified 19 proteins significantly linked to ALS risk. Among these, 10 proteins (C1QC, UMOD, SLITRK5, ASAP2, TREML2, DAPK2, ARHGEF10, POLM, SST, and SIGLEC1) were identified as potential risk factors for ALS. Conversely, nine proteins (ADPGK, BTNL9, COLEC12, ADGRF5, FAIM, CRTAM, PRSS3, BAG5, and PSMD11) may act as protective factors. Reverse MR analyses were subsequently conducted on these 19 proteins. Finally, GO enrichment and KEGG pathway analyses provided novel insights into ALS pathophysiology, highlighting potential therapeutic targets and mechanistic pathways. ALS is a neurodegenerative disease characterized by key neuropathological features, including endoplasmic reticulum stress, chronic neuroinflammation, impaired autophagy, mitochondrial dysfunction, oxidative stress, and DNA damage. These features are also shared with other neurodegenerative disorders such as Alzheimer's disease (AD) and Parkinson's disease (PD) . Additional ALS-associated mechanisms include Golgi apparatus fragmentation, excitotoxicity, axonal transport defects, deficient neurotrophic factors, altered glial function, viral infections, and genetic mutations . A pathological hallmark of ALS is the presence of neuronal cytoplasmic inclusions containing misfolded SOD1 aggregates in oligodendrocytes . Conformational changes in SOD1 are linked to accelerated aging processes and are observed in other age-related neurodegenerative diseases, including PD and AD . These findings suggest shared mechanisms involving SOD1 aggregation across ALS, PD, and AD. Notably, our study identified overlapping proteins implicated in the pathophysiology of PD and AD, supporting the existence of common molecular pathways among these disorders. The C1QC gene encodes the C1QC protein, a component of the complement system's C1 complex. This protein initiates the classical complement pathway, which is essential for pathogen clearance and the removal of apoptotic cells . Complement activation was first reported in ALS cases as early as the 1990s . Recent pathological studies have shown that C1q co-localizes with HLA-DR-positive microglia and GFAP-positive astrocytes in spinal cord tissues of deceased ALS patients, with upregulated expression in spinal neurons and glial cells . Evidence suggests that C1QC plays a critical role in synaptic pruning . Animal studies demonstrate that C1q alone can activate microglia into a pro-inflammatory state, leading to blood-brain barrier disruption . Furthermore, the C1Q complement gene is implicated in both AD and PD . In this study, C1QC was significantly upregulated and positively associated with ALS risk, suggesting that inhibiting complement signaling may represent a novel therapeutic strategy for ALS. The UMOD gene encodes uromodulin, also known as the Tamm-Horsfall protein . Researchers have found that mutant UMOD expression strongly upregulates mesencephalic astrocyte-derived neurotrophic factor (MANF). Notably, mutant uromodulin induces the unfolded protein response (UPR), disrupting endoplasmic reticulum (ER) function and proteostasis . UMOD mutations cause autosomal dominant tubulointerstitial kidney disease (ADTKD-UMOD), which, like ALS, is classified as an ER storage disorder and a proteinopathy caused by protein misfolding . Our findings further support that elevated UMOD expression may contribute to ALS pathogenesis, reinforcing its potential role as a risk factor.
SLIT and NTRK-like protein 5 (SLITRK5) and its family members are neural transmembrane proteins that are widely expressed in the central nervous system (CNS). SLITRK5 is believed to be a key factor in regulating essential functions such as neurite outgrowth, dendritic refinement, synapse development, and neuronal signaling in the CNS. Research by Salesse et al. found that overexpression of SLITRK5 in neurons induced more inhibitory inputs and promoted the formation of inhibitory synapses, which may reduce neuronal activity and inhibit dendritic growth. This led to a decrease in the number and length of neurites. Consistent with these findings suggesting SLITRK5’s involvement in CNS diseases including PD, our research indicates that the inhibitory effects of SLITRK5 may affect neural development and synaptic function, promoting ALS-related neurodegenerative changes. ASAP2, part of the Arf GTPase-activating protein family, is involved in actin-based endocytosis, macrophage macropinocytosis, and phagocytosis, acting as a regulator of actin. The ASAP2 gene (ADP Ribosylation Factor GTPase Activating Protein 2) encodes a GTPase-activating protein that primarily participates in intracellular membrane trafficking, cytoskeletal dynamics, and signal transduction. Neuroinflammation in ALS is characterized by lymphocyte and macrophage infiltration, activation of microglia and reactive astrocytes, and complement involvement. ASAP2 may regulate the involvement of macrophages in the development of neuroinflammation in ALS and impact skeletal muscle function, warranting further investigation. The TREML2 genomic region has recently been associated with AD susceptibility and encodes the TREML2 protein. Research by Wang et al. found that in the context of AD, the upregulation of TREML2 may exert pro-inflammatory and proliferative effects on microglia. They provided the first evidence that TREML2 modulates inflammation by regulating microglial polarization and NLRP3 inflammasome activation. Song et al. discovered that TREML2 can amplify the immune-related neuroinflammatory response, exacerbating this pathological process. These findings align with our study, where upregulation of TREML2 may exacerbate the development of neuroinflammation, increasing the risk of ALS. Death-associated protein kinase 2 (DAPK2) belongs to the pro-apoptotic Ca²⁺/calmodulin-regulated serine/threonine kinase family. It plays roles in autophagy, secretory pathways, and transforming growth factor-beta (TGF-β) signal transduction through protein-protein interactions. Studies demonstrate that transient overexpression of DAPK2 promotes apoptosis. Other studies indicate that neutrophils are highly activated in rapidly progressing ALS. DAPK2 activity has a pro-inflammatory effect and can positively regulate granulocyte migration. Increased DAPK2 activity may be one of the mechanisms inducing neuroinflammation in ALS. The ARHGEF10 gene encodes the ARHGEF10 protein, a Rho family guanine nucleotide exchange factor (GEF) with functions in regulating the cytoskeleton and cell motility. It has been reported that missense mutations in ARHGEF10 contribute to various central nervous system diseases and affect the expression of certain neurotransmitters, such as serotonin and norepinephrine. ARHGEF10 regulates the actin cytoskeleton and microtubule dynamics and participates in neuronal morphogenesis processes, including cell migration, axonal growth, and guidance. Additionally, it plays an important role in myelination.
Mutations in ARHGEF10 cause myelin to become thinner and nerve conduction to slow down. ARHGEF10 has been confirmed to activate RhoA, and a large number of studies have shown that the RhoA/Rho kinase pathway can exacerbate inflammation and oxidative stress. Therefore, overexpression of ARHGEF10 may increase the risk of ALS. The POLM protein encoded by the POLM gene, also known as DNA polymerase µ, plays a crucial role in DNA repair, especially in the non-homologous end joining (NHEJ) process of repairing DNA double-strand breaks (DSBs). In postmitotic cells, DSBs are repaired through the classic NHEJ pathway. This process may lead to genome structural variations and disruption of three-dimensional genome organization, potentially contributing to the progression of neurodegenerative diseases. Excess POLM may lead to imbalanced DNA repair, genomic instability, and neuroinflammation, thereby increasing the risk of ALS. The SST protein encoded by the SST gene is also known as somatostatin or growth hormone release inhibitory factor. This cyclic peptide can effectively inhibit hormone secretion and neuronal excitability. Research has found that in neurodegenerative diseases such as ALS, overactive somatostatin-positive interneurons (SST-ins) disinhibit layer 5 pyramidal neurons (L5 PNs), promoting their excitotoxicity. Hyperactivity of somatostatin interneurons can lead to inhibitory imbalance, resulting in glutamate excitotoxicity and further neuronal damage. Research also recommends drug development targeting somatostatin receptor subtype 4 (SST4), as it has been shown to mediate analgesic, antidepressant, and anti-inflammatory effects without endocrine effects. Consistent with our findings, overexpression of somatostatin may lead to glutamate-induced excitotoxicity, a key mechanism leading to neuronal death in ALS. SIGLEC1 (sialic acid-binding Ig-like lectin 1), also known as CD169, is a member of the glycoprotein family that plays a vital role in the immune system. SIGLEC1 is mainly expressed on the surface of immune cells such as macrophages and dendritic cells. Its main functions include recognizing and binding sialic acid-modified glycans, thereby playing a role in immune responses. Soluble SIGLEC-1 (sSIGLEC-1) has been reported as a novel circulating plasma biomarker of type I interferon (IFN) activity in systemic autoimmune, inflammatory, and infectious diseases. Studies have shown that in ALS transgenic mice, there is a significant increase in SIGLEC1-positive macrophages in the peripheral nervous system, which is closely related to disease progression and neuronal degeneration. Recently, Taylor et al. reported that SIGLEC1 perivascular macrophages in the central nervous system are highly correlated with vascular amyloid deposition following Aβ immunotherapy. This finding aligns with previous research suggesting that marginal zone macrophages regulate aging and neurodegeneration through extracellular matrix remodeling. Previous studies on SIGLEC1 have provided evidence suggesting that elevated SIGLEC1 expression may serve as a risk factor for ALS, consistent with our findings. ADP-dependent glucokinase, or ADPGK, is a glycolytic enzyme that plays a critical role in maintaining energy and metabolic homeostasis in cells by converting glucose to glucose-6-phosphate in the first step of glycolysis.
Ongoing exploration of ALS metabolic pathways suggests that genes involved in cellular energy production and metabolic regulation, such as ADPGK, may affect neuronal survival and function by influencing glycolytic pathways. In 2019, Imle et al. found that knocking out ADPGK promoted apoptosis and increased endoplasmic reticulum stress in Jurkat T cells. Experimental validation in zebrafish embryos showed that the absence of the ADPGK gene led to increased cell apoptosis, further metabolic imbalance, and phenotypes such as shortened body axis and elongated dorsum. These studies align with our findings that low levels of ADPGK may lead to metabolic dysregulation and increased neuronal damage in ALS. Reverse MR analysis indicates that ALS progression may lead to reduced ADPGK levels. Glucose metabolism is related to muscle function, and a study found that elite strength athletes carry more strength-related alleles, including in the ADPGK gene. Therefore, the downregulation of the ADPGK gene after ALS onset may contribute to muscle atrophy. Additionally, studies have shown reduced glucose utilization in the primary motor cortex and other brain regions of ALS patients, which may be related to decreased ADPGK levels following ALS onset. These results suggest that ADPGK is a viable target for future ALS treatments because it has a bidirectional causal relationship with ALS. BTNL9 (Butyrophilin-like 9) is a member of the butyrophilin and butyrophilin-like (BTNL) family, which regulates T cell activity and influences inflammatory diseases and cancer. Functional enrichment analysis shows that BTNL9 is involved in immune and tumor regulatory signaling pathways. Co-expression analysis by Zheng et al. indicates that BTNL9 is associated with reduced immune responses. The immune system is a crucial component of ALS pathogenesis, and changes in immune responses can contribute to the disease mechanisms in both human and mouse models of ALS. BTNL1 and BTNL9 are reported to have high homology. In autoimmune and asthma mouse models, administration of neutralizing antibodies against BTNL1 enhances T cell activation and exacerbates the disease. These findings are consistent with our study, where reduced expression of BTNL9 leads to decreased immune responses. This reduction in immune response may predispose the central nervous system to autoimmune reactions, potentially increasing the risk of ALS. The COLEC12 gene expresses COLEC12 (collectin subfamily member 12, also referred to as CL-12 or CL-P1), a pattern recognition molecule in the innate immune system. Bioinformatic analysis suggests that COLEC12 expression is strongly correlated with several immune infiltrating cells, including M2 macrophages, dendritic cells (DCs), neutrophils, and regulatory T cells (Tregs). Research findings indicate that knocking out COLEC12 significantly activates inflammatory functions, increasing inflammation in osteosarcoma both in vivo and in vitro. COLEC12 is a member of the C-type lectin family, a scavenger receptor that plays a crucial role in the binding and clearance of amyloid-beta (Aβ). This study suggests that COLEC12 plays a role in intercellular signaling and inflammatory responses, and as a possible protective factor, its downregulation may lead to reduced clearance of misfolded proteins in ALS, potentially exacerbating neuroinflammation and accelerating neurodegeneration.
ADGRF5 (Adhesion G Protein-Coupled Receptor F5), or GPR116 (G Protein-Coupled Receptor 116), is a transmembrane protein that belongs to the adhesion G protein-coupled receptor (GPCR) family. The ADGRF5 protein is involved in regulating various physiological processes, including immune responses and inflammation. Recent exciting discoveries have shown that adhesion GPCRs can regulate neuronal precursor migration, axon guidance, axon myelination, brain angiogenesis, and synapse formation. Kubo et al. observed that ADGRF5 knockout mice exhibited increased neutrophil development and enhanced type II immune response activity. This suggests that downregulation of ADGRF5 may lead to neurodegeneration and amplified inflammation, potentially being a risk factor for ALS, consistent with our findings. Additionally, research has shown that the lack of ADGRF5 in quiescent muscle stem cells (MuSC pool) results in time-dependent depletion and impaired tissue regeneration. Conversely, ALS onset may lead to downregulation of ADGRF5, manifesting in clinical symptoms such as muscle atrophy and paralysis, which supports the findings of our reverse MR analysis. Therefore, in our MR study, ADGRF5 expression shows bidirectional causality with ALS risk, and ADGRF5 may be a promising new target for anti-ALS drugs. FAIM (Fas Apoptosis Inhibitory Molecule) is a highly evolutionarily conserved 20 kDa protein that possesses anti-apoptotic and pro-survival properties. FAIM-L has been demonstrated to shield neural cells from Fas-induced apoptosis and is exclusively expressed in neural tissues. Kaku et al. discovered that FAIM counteracts the intracellular accumulation of mutant SOD1 protein aggregates by preventing protein aggregation and degrading cytotoxic substances. This is in line with our findings, which suggest that FAIM may be a protective factor for ALS. FAIM holds promise as a novel therapeutic target, potentially improving the condition of ALS patients by blocking or disrupting protein aggregation. CRTAM (Class-I Restricted T cell Associated Molecule) is a transmembrane protein highly expressed in activated T cells and natural killer (NK) cells, primarily involved in cell adhesion and signal transduction processes. CRTAM is highly expressed in the human cerebellum, particularly in Purkinje neurons. It has been reported that CRTAM-deficient mice exhibit reduced cytokine production of IFN-γ and IL-17 in CD4 T cells, as well as defects in cell polarity. Damage to the blood-brain barrier is a characteristic of several neurodegenerative diseases, including ALS. CRTAM plays a crucial role in the migration of neural stem cells induced by glioma cells, by promoting migration and regulating blood-brain barrier permeability. This is consistent with our findings that CRTAM may be one of the protective factors in ALS. The PRSS3 gene encodes trypsinogen and trypsinogen 4, with trypsin playing an important role in neurodevelopment, plasticity, and neurodegeneration. Research has shown that astrocytes play a crucial role in neurodegenerative diseases such as ALS, and mesotrypsin may selectively activate protease-activated receptor-1 (PAR-1) to regulate the function of astrocytes. Data from one study showed that axial symptoms in patients with Parkinson’s disease 5 years after deep brain stimulation were associated with PRSS3. Therefore, PRSS3 is enriched in the brain, and the trypsin it encodes may affect signal transduction.
Downregulation of PRSS3 may significantly impact the development and degeneration of ALS motor neurons, suggesting its potential as a therapeutic target for ALS. BAG5 (BCL2-associated athanogene 5) is a member of the BAG family and plays a regulatory role in apoptosis and protein folding. BAG5 interacts with Hsp70 and Hsp90 to prevent the refolding of misfolded proteins and the aggregation of intracellular proteins. Its role in regulating ubiquitination, protein aggregation, and cell death makes it a potential therapeutic target for neurodegenerative diseases such as Parkinson’s disease. BAG5 can also serve as a nucleotide exchange factor for Hsp70, promoting protein refolding. In addition, BAG5 protects cells from mitochondrial oxidative damage by regulating the degradation of the mitochondrial protective protein PINK1 (PTEN-induced kinase 1). Therefore, BAG5 may exert protective effects by maintaining protein homeostasis, alleviating the pathological progression of ALS, and reducing the risk of disease progression, suggesting its potential role as a protective factor in ALS. PSMD11 is an important component of the proteasome complex, responsible for protein degradation and maintenance of cellular proteostasis. It plays a key role in various physiological and pathological processes such as cell cycle regulation, apoptosis, DNA repair, and signal transduction. Under the regulation of cAMP/PKA, overexpression of PSMD11 can activate proteasome function and reduce the accumulation of certain aggregated proteins. Phosphorylated PSMD11 enhances proteasome activity and improves its ability to degrade misfolded proteins. PSMD11 is involved in the degradation of ubiquitinated proteins, and lack of PSMD11 leads to increased levels of ubiquitinated proteins in cells. This finding is consistent with our study suggesting that PSMD11 acts as a protective factor in ALS by maintaining proteasome activity and preventing the accumulation of misfolded proteins in cells. This emphasizes the importance of PSMD11 in protein quality control in ALS. Finally, we performed GO function and KEGG pathway enrichment analysis on significantly different proteins. The top three biological processes (BP) are “external encapsulating structure organization”, “extracellular matrix organization”, and “extracellular structure organization”. Within the cellular component (CC) category, “collagen-containing extracellular matrix” is emphasized, involved in the formation, assembly, and maintenance of all extracellular structures, including the extracellular matrix (ECM), cell walls, and capsules. In the MF category, “glycosaminoglycan binding” refers to the ability to bind glycosaminoglycans (GAGs). GAGs are long-chain polysaccharides composed of repeating disaccharide units, widely present in the ECM. Hyaluronic acid (a type of GAG and a major component of the extracellular matrix) has been shown to increase in the serum and skin of patients with longer ALS durations. The ECM comprises proteins and polysaccharides, providing structural support and signaling functions, whose changes may affect neuronal growth, survival, and regeneration abilities.
For instance, SIGLEC1 is involved in regulating ECM remodeling and neurodegeneration; TREML2, ASAP2, C1QC, and COLEC12 regulate neuroinflammation by upregulating immune cells; PSMD11 and BAG5 are involved in the degradation of ubiquitinated proteins; CRTAM and ADGRF5 mainly participate in cell adhesion and signal transduction; C1QC and CRTAM regulate blood-brain barrier permeability in ALS. By understanding the changes and mechanisms of these biological processes, researchers can gain a deeper understanding of the pathogenesis of ALS and develop potential treatments targeting these changes. These strategies may include stabilizing ECM components, suppressing inflammatory responses, and repairing the blood-brain barrier to slow or halt the progression of ALS. Through KEGG pathway analysis, we identified the three most important signaling pathways: the PI3K-Akt signaling pathway, cytokine-cytokine receptor interaction, and axon guidance. The PI3K-Akt pathway is a crucial intracellular signaling pathway that facilitates growth, angiogenesis, metabolism, proliferation, and cell survival by reacting to extracellular cues. In recent years, many researchers have found that the PI3K/AKT signaling pathway is closely related to ALS. Activation of the PI3K/AKT pathway has been shown to protect the cerebral cortex and astrocytes of ALS patients, reduce damage caused by oxidative stress, and improve cell survival and mitochondrial function. In addition, the PI3K/AKT signaling pathway is also involved in the adhesion and migration of reactive astrocytes. The cytokine-cytokine receptor interaction pathway is a crucial route for intercellular communication and is essential in the immune system, involved in regulating the activation, proliferation, and differentiation of immune cells. Some cytokines activate their receptors, triggering signaling pathways within neurons that lead to oxidative stress, mitochondrial dysfunction, and apoptosis. In ALS, elevated levels of pro-inflammatory cytokines (such as TNF-α, IL-1β, IL-6) can lead to neuronal damage and death. The cytokine-cytokine receptor interaction pathway influences neuronal survival and function in ALS by regulating neuroinflammation, cell stress responses, and apoptosis. The axon guidance pathway refers to the precise growth of neuronal axons to their target areas during development. This process is guided by a series of molecular signals to ensure correct connections within the complex neural network. According to some researchers, ALS is also a distal axonopathy, and the pathological alterations in motor axons and nerve terminals that are central to ALS pathogenesis may be caused by abnormalities in the expression or function of axon guidance proteins. Lesnick et al. found that specific axon guidance pathway genes or their transcripts or proteins are associated with the pathogenesis of ALS. Körner et al. also found evidence of increased axon guidance protein signaling in the motor cortex of ALS patients. In our study, ARHGEF10, ADGRF5, and others are involved in the axon guidance pathway. It is evident that abnormalities in the axon guidance pathway in ALS may lead to axon degeneration, abnormal neural network connections, and neuronal dysfunction.
Our study has several limitations. First, all participants in the GWAS were drawn from cohorts of predominantly European ancestry. While this provides valuable insights into the genetic architecture of ALS in this demographic, the limited population diversity may introduce potential biases. Therefore, validating protein associations and their relevance to ALS in non-European populations is essential to evaluate the generalizability of our findings. Second, publicly available datasets are inherently constrained and may lack comprehensive information. For instance, due to the absence of individual-level data, we could not perform additional analyses such as population stratification or disease risk stratification. Third, although MR is a robust method for causal inference, its findings require further validation through clinical and experimental studies to confirm causality and elucidate underlying mechanisms. Specifically, in vivo (e.g., animal models) and in vitro (e.g., cellular assays) experiments are needed to validate the roles of these proteins in ALS pathogenesis. Additionally, large-scale longitudinal studies and clinical trials in patient cohorts are necessary to assess the potential of these identified proteins as biomarkers.
This study identified 19 novel plasma proteins associated with ALS, including C1QC, UMOD, SLITRK5, ASAP2, TREML2, DAPK2, ARHGEF10, POLM, SST, SIGLEC1, ADPGK, BTNL9, COLEC12, ADGRF5, FAIM, CRTAM, PRSS3, BAG5, and PSMD11. Additionally, we performed GO functional analysis and KEGG pathway enrichment analysis. GO functional analysis revealed that these proteins are involved in several important biological processes, including external encapsulating structure organization, extracellular matrix organization, and extracellular structure organization. KEGG pathway analysis demonstrated significant enrichment of these proteins in key pathways, including the axon guidance signaling pathway, cytokine-cytokine receptor interactions, and the PI3K-Akt signaling pathway. In summary, our findings provide genetic evidence supporting the potential of these 19 proteins as novel biomarkers for ALS and their involvement in disease-related mechanistic pathways. Further clinical and experimental studies are warranted to validate these results.
Electronic supplementary material is available for this article (Supplementary Materials 1–6).
Do all anatomic stems perform equally at long-term survival? A regional registry-based study on 12,010 total hip arthroplasty implants according to stem length and neck modularity

Since the introduction of modern total hip arthroplasty (THA) by Charnley in 1959, advancements in femoral stem design have focused on improving length, shape, and fixation methods. Cementless fixation dominates in primary THA, accounting for 86% of procedures in the USA and 87.4% in Italy, with even higher rates in younger patients. Achieving precise geometric fit between the femoral component and bone is crucial for primary stability and successful osteointegration. Anatomic stems were designed to mimic the proximal femur’s natural geometry, enhancing stability and reducing stress shielding and subsidence, with long-term survival rates exceeding 90%. In recent years, bone-preserving short stems, often with an anatomic design, have gained popularity. These stems aim to minimize stress shielding, reduce thigh pain, and simplify revision surgery by focusing fixation at the metaphysis. Concurrently, modular stems have provided surgeons with greater intraoperative flexibility in adjusting limb length, offset, and anteversion. However, complications such as corrosion and taper fractures have raised concerns. Despite these innovations, limited comparative data exist on long-term outcomes of anatomic stems on the basis of length and modularity. No large registry studies have addressed this, leaving a critical gap in understanding how these variables influence survival rates and complications. Addressing this gap, our registry study leverages data from the Emilia-Romagna Region Registry of Orthopaedic Prosthetic Implants (RIPO) to evaluate the impact of stem length and modularity on implant survival and failure causes. Identifying optimal stem designs could reduce implant failures and the socioeconomic burden of revision surgeries. This observational retrospective registry study involved the analysis of data collected by the Emilia-Romagna (ER, Italy) Registry of the Orthopaedic Prosthetic Implants (RIPO). Established in 1990, RIPO records nearly 98% of arthroplasty implants performed in the Emilia-Romagna Region, including procedures conducted in both the national healthcare system and private orthopedic facilities, for a total of 62 participating hospitals. The study focused on THAs performed for primary degenerative hip osteoarthritis (OA) between 2000 and 2019. All patients treated by THA within this time frame and officially registered in the RIPO registry were included in the study. There were no restrictions on the inclusion criteria for patients based on age and gender. The study focused exclusively on patients residing within the ER region to mitigate potential bias originating from loss to follow-up. As a result, any THA performed on patients residing outside ER was deliberately excluded from the analysis. Revision THAs, cemented implants, hemiarthroplasties, resurfacing procedures, and the use of megaprostheses for neoplastic and non-neoplastic conditions were also excluded. Data extraction from the RIPO database was performed on 9 August 2023, and implant survival and failure were collected until 31 December 2019. RIPO standard reporting included stem manufacturer, implant model, and fixation, but it did not specify the geometric shape and the length of stems. Therefore, two researchers (M.B., V.R.)
independently selected the curved anatomic stems among the stems recorded in the RIPO, further dividing these into standard and short according to the traditional 120 mm length cutoff; moreover, for each stem the presence or absence of neck modularity was reported. In case of disagreement, the senior author (A.D.M.) determined the most appropriate stem attribution. The study considered several variables, including patients’ age at surgery, sex, body mass index (BMI), and the number of cementless anatomic femoral stems implanted in primary THAs during the study period, categorized according to their length and modularity. Implant survival was analyzed for each anatomic stem type, with failure defined as any surgery requiring revision of at least the femoral stem. All complications leading to the failure of femoral stems were analyzed. This comprehensive evaluation allowed us to assess, for each anatomic stem type, the percentage incidence and the magnitude of specific stem complications relative to the total causes of stem failure. Survival and complications were further analyzed on the basis of the presence of neck modularity, providing different stem version options. Ethical approval was not required for this study, since data collection is an ER standard practice, and the identity of the patients is concealed. Furthermore, no adjunctive clinical procedures were performed besides the analysis of registry data.

Statistical analysis

Descriptive statistics, such as median and range for continuous variables and frequency with percentage for categorical variables, were used for data reporting. The chi-squared test was employed to assess statistical significance of qualitative data, while the analysis of variance (ANOVA) test was used for continuous data. Kaplan–Meier survivorship analysis was performed using the revision of at least the femoral stem component as endpoint, with implant survival of non-revised THAs considered as the last date of observation (31 December 2019 or the date of death available from the ER database). The log-rank test was used to compare survivorship between groups. The Wald test was conducted to analyze the p-values for data achieved from the Cox multiple regression analyses. The proportional hazards assumption was estimated using the Schoenfeld residual method, and p-values < 0.05 were considered significant. Statistical analyses were conducted using SPSS 14.0, version 14.0.1 (SPSS Inc., Chicago, IL, USA), and JMP, version 12.0.1 (SAS Institute Inc., Cary, NC, USA, 1989–2007).
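Although the registry analyses were run in SPSS and JMP, the same pipeline is easy to express in code. The sketch below shows the equivalent steps with R's survival package; the data frame `ripo` and its columns are hypothetical stand-ins for the registry export, not the authors' actual variables.

```r
# Sketch of the survivorship analysis described above, using R's survival package.
# `ripo` is a hypothetical data frame: one row per implant, with follow-up time in
# years (time_yrs), revision status (revised: 1 = stem revised, 0 = censored),
# stem category (stem_type), and the adjustment covariates age and sex.
library(survival)

# Kaplan-Meier curves by stem type, with revision of the femoral stem as endpoint
km <- survfit(Surv(time_yrs, revised) ~ stem_type, data = ripo)
summary(km, times = c(1, 3, 5, 7, 10, 15, 17))  # survival at the reported follow-ups

# Log-rank test comparing survivorship between stem types
survdiff(Surv(time_yrs, revised) ~ stem_type, data = ripo)

# Cox regression adjusted for age and sex; summary() reports Wald-test p-values
# and hazard ratios analogous to the relative risks quoted in the Results
fit <- coxph(Surv(time_yrs, revised) ~ stem_type + age + sex, data = ripo)
summary(fit)

# Proportional hazards assumption via scaled Schoenfeld residuals
cox.zph(fit)
```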
Participants

A total of 12,010 cementless primary THAs using curved anatomic stems were performed in ER between 2000 and 2019 and formally registered in the RIPO registry. All demographic characteristics of study participants are presented in Table . The most frequently implanted anatomic stem (Table ) during the study period was the APTA Adler (Adler Ortho, Milan, Italy), a standard stem with a modular neck, accounting for 5041 implants (89.1% of overall standard anatomic stems). The most frequently implanted short anatomic stem was the ABGII Howmedica (Stryker Orthopaedics, Portage, USA), presenting a fixed neck, and accounting for 2031/6353 implants (32.0% of overall short anatomic stems). Among the short anatomic stems, the use of a fixed neck was reported in 4122/6353 surgeries (64.9%), while a modular neck was used in 2231 cases (35.1%) (Table ). Conversely, among standard anatomic stems, the modular neck represented most of the implants with 5041/5657 (89.1%), with a fixed neck used in 616/5657 cases (10.9%).

Stem failures requiring revision

During the study period, a total of 473 out of 12,010 recorded THAs (3.93%) experienced failure requiring revision surgery. Table provides a breakdown of the number of stem failures requiring revision surgery during the follow-up period, according to their length and modularity. During the follow-up period, short anatomic stems showed a higher incidence of stem failure (5.1%) compared with standard anatomic stems (2.6%); short-modular stems exhibited the highest incidence of stem failure, followed in decreasing order by short-fixed, standard-modular, and standard-fixed stems, which showed the lowest incidence of stem failure (0.6%).

Incidence of intraoperative periprosthetic fractures

The incidence of intraoperative periprosthetic fractures was then analyzed by dividing them according to stem length and then according to fracture area (calcar, acetabulum, diaphysis). The most significant result concerns the incidence of intraoperative calcar fracture using the short stem, at 0.6% (Table ).

Survivorship analysis between stem types

Implant survival according to anatomic stem length: Kaplan–Meier survivorship analysis revealed different survival rates at follow-up of 1, 3, 5, 7, 10, 15, and 17 years among the recorded anatomic stems divided according to their length (Fig. ). Pairwise comparisons considering the entire follow-up period showed significant differences in survival curves between anatomic standard stems and anatomic short stems ( p = 0.009), with standard stems showing higher survival compared with short stems throughout the entire follow-up period. Compared with anatomic standard stems, the relative risk (RR) of failure was 1.30 higher in patients with anatomic short stems, after adjusting for age and sex.

Implant survival according to anatomic stem length and modularity: Kaplan–Meier survivorship analysis revealed different survival rates at follow-up of 1, 3, 5, 7, 10, 15, and 17 years among the recorded anatomic stems divided according to their length and modularity (Fig. ).
Pairwise comparisons considering the entire follow-up period showed statistically significant differences in survival curves between standard-fixed stems and short-modular stems ( p = 0.0271; RR 3.09 higher for short-modular stems), between standard-modular stems and short-modular stems ( p = 0.0003; RR 1.53 higher for short-modular stems), and between short-fixed stems and short-modular stems ( p = 0.0027; RR 1.41 higher for short-modular stems).

Causes of stem failures

Periprosthetic fracture (PF) represented the most common cause of anatomic stem failure necessitating revision surgery in short stems, with an incidence of 2.0% and accounting for 38.6% of all failures. Conversely, implant breakage emerged as the primary cause of failure in standard stems, observed in 0.9% of cases (Table ). Considering all anatomic stems in terms of length and modularity, the type of stem that recorded the highest incidence of PF was the short-fixed (2.0%), followed in descending order by short-modular, standard-modular, and lastly, the standard-fixed (0.5%). Pairwise comparisons considering the entire follow-up period showed significant differences in the incidence of PF between short-fixed stems and standard-modular stems ( p = 0.0005; RR 2.04 higher for short-fixed stems), and between short-modular stems and standard-modular stems ( p = 0.0243; RR 1.71 higher for short-modular stems). Multivariate analysis showed a significant difference in the incidence of PF on the basis of patient gender ( p = 0.0082), with female patients being more at risk of developing this complication, with an RR of 1.59 compared with male patients. Among the other causes of stem failure, aseptic loosening of the stem showed the highest incidence in short-modular stems (1.6% of cases, accounting for 22.0% of failures), as did dislocation (0.9%), septic loosening, and global aseptic loosening. Primary instability showed the highest incidence in short-fixed stems, whereas pain without loosening appeared more frequent in short-modular stems. Implant breakage demonstrated the highest incidence among standard-modular stems (1.1% of cases, accounting for 36.6% of failures), followed in descending order by short-modular, short-fixed, and lastly, standard-fixed, which showed no implant ruptures during the entire follow-up period. The modular neck of anatomic stems represented the most involved area of implant breakage, accounting for 72.1% of cases [46 recorded cases for standard-modular stems, entirely APTA-Adler (Adler Ortho, Milan, Italy); 3 recorded cases for short-modular stems: 1 Cremascoli Anca-fit (Wright Orthopedics Corp, MS, USA) and 2 SPS Modular Symbios (Symbios Orthopédie, Yverdon-les-Bains, Switzerland)], representing all ruptures of the stem component. Other causes of implant breakage included, in descending order, cup-inlay, femoral head, and a combination of both. Focusing on ruptures of the modular neck, which represented the most involved area of implant breakage, Cox multivariate analysis showed an increased risk of neck fracture in obese patients compared with underweight–normal weight patients, with an RR of 7.47 ( p = 0.0002), and compared with overweight patients, with an RR of 5.39 ( p = 0.0001). Male patients showed a higher risk of implant breakage compared with female patients, with an RR of 5.87 ( p = 0.0001).
In the current study, stem modularity as well as length were compared; we found significant differences in terms of survival comparing standard-fixed stems to short-modular stems ( p = 0.0271; RR 3.09 for short-modular stems), standard-modular stems to short-modular stems ( p = 0.0003; RR 1.53 for short-modular stems), and short-fixed stems to short-modular stems ( p = 0.0027; RR 1.41 for short-modular stems). Our data align with the recent literature confirming that the presence of neck modularity in primary THAs negatively affects the long-term survival of the femoral component. For instance, Colas et al. in 2017 conducted a study on a French Nationwide Cohort of 324,108 patients, identifying a total of 8931 (3%) patients with exchangeable neck stem implants. They found that these modular implants were more likely to undergo revision compared with fixed neck stem designs (RR 1.36; CI, 1.24–1.49; p < 0.001). Most of current literature focus on survival and complication comparisons of modular versus fixed femoral stems, without considering the specific stem design and shape. When the anatomic design is specifically considered, there is a void in current literature. Looking at individual studies, survival rates of modular anatomic stems are extremely variable, but still show survival rates exceeding 90%, especially for standard stems. Castagnini et al. described a registry cohort of 1984 standard-modular anatomic stems with a reported 9-year survival of 98.6% (CI 97.9–99%). Similarly, in a 2017 registry study, Toni et al. reported on 300 THAs with standard-modular anatomic stems, finding a 15-year survival rate of 97.2% (95% CI 94.8–100%). Regarding short-modular anatomic stems, Mouttet et al. reported a series of 176 THAs using these stems, showing an excellent 5-year survival of the femoral component of 98.8%; however, at last follow-up, survival decreased to 93.2%. Tostain et al. included 61 primary THAs with anatomical short-modular stems, reporting a 10-year survival rate of 96% (CI 88–99%). Cossetto et al. reported on 185 THAs with short-modular anatomic stems, showing a 10-year survival of 99% (CI 97–100%). In our cohort, the most frequent stem-related complication was periprosthetic fracture (PF) in short stems (2.0% of cases, accounting for 38.6% of failures). The type of stem that recorded the highest incidence of PF was the short-fixed (2.0% of cases, accounting for 51.3% of failures); these were significantly more frequent in female patients ( p = 0.0082), with a relative risk (RR) of 1.59 compared with males. Pairwise comparisons showed statistically significant differences in the incidence of PF between short-fixed stems compared with standard-modular stems ( p = 0.0005; RR 2.04) and between short-modular stems compared with standard-modular stems ( p = 0.0243; RR 1.71). In a 2024 study performed by Turnbull’s group regarding the survival of 1000 consecutive Lubinus SP2 anatomic stem implants (standard anatomic stem with fixed neck) the incidence of periprosthetic fractures was 0.3%, which is consistent with the 0.5% reported in the current study . In a study involving 496 short-modular anatomic stems (ESOP stem, FH ® ), Martínez Martín et al. showed a periprosthetic fracture rate of 3.3%, a rate almost double compared with our findings (1.9%), supporting the decreased use of anatomic stems with full modularity (i.e., diaphyseal and metaphyseal). 
Regarding data on modular neck fractures in anatomical stems, the literature is mostly focused on neck fractures in non-anatomic implants; the trend of use of stems with neck modularity appears to be decreasing over time, as outlined by the latest Australian registry report in 2023. However, modular necks are frequently used by orthopedic surgeons worldwide, and the knowledge that obese male patients are less suitable for the use of these implants may support implant choice. There are several limitations to our research. First, it is retrospective and relies on observational data. As a result, it was not possible to establish cause–effect relationships or to evaluate individual factors that may have confounding effects. Moreover, intrinsic to the nature of the registry, it was not possible to estimate preoperative conditions, such as disease severity, functional aspects, and postoperative outcomes. Survivorship analyses are incomplete due to the loss of implants at risk during the follow-up period, and the analysis based on periprosthetic fractures was limited to two stem types (standard-fixed and short-modular) due to an insufficient number of cases. Nonetheless, this is the first registry study that examines the survivorship of anatomical stems on the basis of their length and modularity, including an analysis of the specific causes of failure. Future prospective research should look at the overall survival of prosthetic stems while considering patient lifestyle and physical activity rate. The initial choice of stem implants is critical to the long-term success of THA surgery, and the findings of this study are useful for optimizing implant selection; the most important clinical application arising from this study is the support of orthopedic surgeons during the selection of femoral stem implants in the preoperative planning of THA procedures. This registry-based study provides robust evidence regarding the long-term survival of anatomic femoral stems in primary total hip arthroplasty. Our findings indicate that stem length and modularity are significant factors influencing implant survival. Specifically, anatomic stems demonstrated overall optimal survival rates. The fixed standard stem showed the lowest failure rate, while modular short stems had the highest at long-term follow-up. Modular neck designs, while offering flexibility in surgical adjustment, were associated with a higher incidence of complications, including implant breakage at the modular interface. These results suggest that standard-length anatomic stems may be considered a useful option in THA, though caution is advised when using modular stems, in particular short anatomic ones, due to their associated risks.
Specialty grand challenge in adrenal endocrinology

The adrenal glands sit on top of each kidney and weigh around 4-6 grams each in an adult person; however, they can grow 50% during stress or pregnancy. The hormones from the adrenal glands, i.e., cortisol, aldosterone, adrenal androgens, and catecholamines, are involved in most of the body systems. Without cortisol, e.g., we would not be able to survive. If tumors or hyperplasia arise from the adrenal gland, any of these hormones can be produced in excess and give rise to disorders such as pheochromocytoma, primary aldosteronism, and Cushing syndrome. Some conditions are common (e.g., adrenal incidentalomas) while others are rare (e.g., adrenal medullary hyperplasia). However, one common feature is that all adrenal disorders are relatively unknown to physicians, and most patients have never heard of the adrenal glands. In the following few paragraphs, a few examples of adrenal disorders, together with associated challenges and new findings, will be presented. The most common form of primary adrenal insufficiency (PAI) in adults from high-income countries is autoimmune adrenalitis, while in children it is congenital adrenal hyperplasia (CAH). Another cause of PAI, adrenal tuberculosis, is claimed to be common in low- to middle-income countries, but very little is known of tuberculosis-induced PAI in high-income settings. Novel forms of PAI have emerged, such as immune checkpoint inhibitor (ICI)-induced PAI, even though ICIs more commonly induce secondary AI. Bilateral adrenal metastasis and bilateral adrenal hemorrhage may also result in PAI. It is estimated that 10-20/100,000 of the population have PAI. Long-term negative outcomes have been the focus lately, especially complications due to unphysiological glucocorticoid replacement but also adrenal crisis. New treatments have been introduced, such as modified-release glucocorticoids and hydrocortisone administered subcutaneously via pump. On the horizon, cell-based therapies are emerging, such as allogeneic adrenocortical cell transplantation and adrenal-like steroidogenic cell manufacturing from either stem cells or lineage conversion of differentiated cells. CAH is considered a rare group of disorders affecting steroid synthesis, of which 21-hydroxylase deficiency is by far the most frequent. CAH is one of many genetic disorders of the adrenal glands causing PAI, and the most common one. The incidence of classic CAH is around 1/15,000 according to neonatal screening programs. Non-classic CAH does not present apparent cortisol insufficiency but is usually diagnosed due to adrenal androgen excess or family screening. The prevalence of non-classic CAH in the general US Caucasian population has been claimed to be 1/200, while in a country such as Sweden, with around 10 million inhabitants, only 90 cases of non-classic CAH have been diagnosed. However, it can be assumed that most patients with non-classic CAH are probably never diagnosed. The prevalence needs to be studied in larger cohorts which have been screened for non-classic CAH to obtain more well-founded data. Moreover, long-term outcome data have emerged over the last few decades. However, most studied patients with CAH have been younger than 30 years of age, and only a few have been above 50 years of age, the age where most long-term outcomes can be expected to appear.
Since the introduction of glucocorticoids and mineralocorticoids in the 1950s, no major advancement had been seen until recently, when new therapies began to emerge, such as modified-release glucocorticoids, hydrocortisone administered subcutaneously via pump, the CYP17A1 inhibitor abiraterone, corticotropin-releasing hormone-receptor 1 antagonists, corticotropin antibodies, corticotropin receptor (melanocortin 2 receptor [MC2R]) antagonists, as well as gene- and cell-based therapies. Adrenal incidentalomas are found in approximately 2% of the adult population and pose a rising challenge for endocrinologists worldwide, demanding ever-increasing resources to manage. Even if overt Cushing syndrome is not present, autonomous cortisol secretion (ACS) seems to be associated with increased mortality, especially in women below 65 years of age. There are even indications that patients with non-functional adrenal tumors have increased mortality. Performing adrenalectomy in patients with ACS is controversial, but small randomized controlled trials (RCTs) have just begun to be published. Moreover, there are many rare forms of adrenal incidentalomas, such as adrenal myelolipomas, adrenal cysts, and other adrenal lesions, that require more studies to improve our understanding and management of these masses. Both adrenocortical cancer (ACC) and adrenal metastasis have a poor prognosis. Very few RCTs have been done, so most treatment recommendations are based on retrospective studies or small clinical trials. However, since only 1 new case of ACC is diagnosed per million per year, large multinational collaborations are required to find new treatments but also to find new genetic and molecular markers. The symptoms and signs of pheochromocytomas and paragangliomas (PPGLs) are diverse and can easily be misinterpreted. Although pheochromocytomas are found only in the adrenal glands, while paragangliomas are found anywhere in the human body, both conditions are usually grouped since both result in catecholamine excess. Cardiovascular manifestations of PPGLs can be dramatic and fatal. Nowadays, most pheochromocytomas are found in the workup of adrenal incidentalomas, and the proportion of pheochromocytomas that are found during yearly surveillance of a genetic syndrome increases. Phenoxybenzamine, a non-selective irreversible alpha-adrenoceptor antagonist, has been the standard preoperative treatment, but its use has decreased in favor of selective reversible alpha-adrenoceptor antagonists such as doxazosin, which are more readily available. Some patients use a calcium channel blocker instead, or no medication at all preoperatively, which is controversial; however, large RCTs are needed to tease out the best preoperative treatment. More and more genetic variants resulting in PPGLs are being found, increasing our understanding and changing our follow-up of these conditions. Metastatic PPGLs can appear many years after the initial surgery, and curative therapy can then be difficult to achieve. Primary aldosteronism (PA) is severely underdiagnosed. It has been estimated that 4-14% of all patients with hypertension in primary care have hypertension secondary to PA if properly investigated. However, the investigations are cumbersome, and easier algorithms are urgently needed.
Unilateral hypersecretion of aldosterone, usually due to an aldosterone-producing adenoma, is seen in approximately half of all patients with PA, with overproduction from both adrenals in the remaining cases, usually due to bilateral idiopathic hyperplasia. The treatment of choice, if the patient is operable, is adrenalectomy in unilateral PA and mineralocorticoid receptor antagonists (MRAs) for bilateral PA. Functional histopathology can both improve histological diagnosis and predict failure after adrenalectomy. If PA is left without specific treatment, the risks of cardiovascular disease, chronic kidney disease, and mortality are increased compared to essential hypertension. Unilateral adrenalectomy has been considered superior concerning cardiovascular outcomes and quality of life compared to medical treatment in unilateral disease. However, the dose of MRA is often suboptimal, and an optimal dose may be more nearly equivalent to adrenalectomy in unilateral disease, but this has to be investigated in future studies. Adrenal disorders are often misdiagnosed, and their management can be challenging. Some disorders are very rare while others are common, but knowledge about them among physicians and the general population is low. More RCTs and collaborations are required to improve the management and our understanding of adrenal disorders. Hopefully, the Section Adrenal Endocrinology at Frontiers in Endocrinology can be a place where new ideas and collaborations will thrive while the challenges are met. The author confirms being the sole contributor of this work and has approved it for publication. |
The effect of perioperative probiotics and synbiotics on postoperative infections in patients undergoing major liver surgery: a meta-analysis of randomized controlled trials | 582b6bca-1dae-4ac0-8390-0b6fa0a326c6 | 11841616 | Surgical Procedures, Operative[mh] | Surgical intervention, particularly liver resection and transplantation, remains the cornerstone of curative treatment for hepatocellular carcinoma (HCC) . For suitable candidates, surgical intervention offers the highest probability of complete remission for both primary and secondary cancers . Recent years have witnessed an increase in liver resection and transplantation procedures for HCC , accompanied by marked improvements in patient outcomes . However, despite advances in medical and surgical techniques, postoperative complications including intestinal barrier damage, bacterial translocation, hepatic injury, and endotoxin translocation remain frequent . Post-surgical oxidative stress leads to varying degrees of intestinal mucosal barrier damage, and this tissue invasion beyond the sterile intestinal tract increases susceptibility to postoperative infections . These infectious complications, including respiratory, intra-abdominal, and wound infections, represent independent risk factors for postoperative mortality in liver resection or transplantation patients . Probiotics and synbiotics have emerged as potential protective agents against postoperative infections . Preoperative antibiotic administration combined with surgical trauma disrupts gut microbiome balance and compromises intestinal epithelial barrier function, leading to bacterial translocation to mesenteric lymph nodes . Probiotics and synbiotics may help maintain intestinal barrier homeostasis by inhibiting bacterial translocation and enhancing both mucosal immune and non-immune mechanisms through competitive antagonism with potential pathogens . Studies have demonstrated their efficacy in reducing pulmonary, urogenital, and alimentary infections through pathogenic microorganism suppression . Multiple studies suggest that probiotics and synbiotics may reduce postoperative infection rates across various surgical procedures including colorectal surgery , gastrointestinal surgery , liver surgery , and abdominal surgery . However, current guidelines from the European Association for the Study of the Liver (EASL) and the American Association for the Study of Liver Disease (AASLD) do not recommend incorporating probiotics and synbiotics into HCC treatment protocols . Furthermore, randomized controlled trials (RCTs) assessing the effectiveness of probiotics and synbiotics in reducing post-liver surgery complications have produced conflicting results, possibly due to methodological variations and diverse outcome measures. While serious adverse effects such as bacteremia and fungemia are rare in patients with mild disease, these complications may pose greater risks for immunocompromised HCC patients . Therefore, a careful assessment of both benefits and risks is essential before recommending perioperative probiotic and synbiotic use. This updated meta-analysis aims to evaluate the impact of perioperative probiotics and synbiotics on postoperative infection rates following major liver surgery. This meta-analysis was conducted in accordance with the updated PRISMA statement , with the PRISMA checklist available in . The study protocol was prospectively registered on the Open Science Framework ( https://osf.io/xygvu ). 
A systematic literature search was conducted in PubMed, Embase, Scopus, and the Cochrane Library for English-language studies published through February 21st, 2024. Two authors performed the search using database-specific algorithms that included terms such as “probiotics”, “prebiotics”, “synbiotics”, “hepatectomy”, “liver transplantation”, and “randomized”. The complete search strategy is detailed in . Eligibility criteria Studies were eligible if they met the following criteria: (1) Population: Patients undergoing major liver surgeries, including liver resection and liver transplantation; (2) Intervention: Probiotics, prebiotics, or synbiotics. A probiotic was defined as a preparation containing live microorganisms that, when administered in sufficient amounts in a host compartment such as the gastrointestinal tract, provides health benefits. A prebiotic was defined as a nondigestible food ingredient that beneficially affects the host by selectively stimulating the growth and/or activity of one or a limited number of bacteria in the colon. A synbiotic was defined as a product that contains both probiotics and prebiotics; (3) Comparison: Placebo or no intervention; (4) Outcomes: The primary outcome of interest was the incidence of postoperative infections. Secondary outcomes were duration of antibiotic therapy, length of intensive care unit (ICU) stay, and length of hospital stay; (5) Type of study: Randomized trials. Data extraction and quality assessment Two authors (H.W., K.Z.) independently screened studies against the inclusion criteria, first reviewing titles and abstracts, then evaluating full texts of potentially eligible studies. Any discrepancies were resolved through adjudication by a third reviewer (Z.G.). Two authors (H.W., K.Z.) independently extracted data including first author, publication year, study period, population characteristics, intervention and control methods, intervention period, and infection definitions. Study quality was independently assessed by two authors (H.W., K.Z.) using the Cochrane risk of bias tool, with disagreements resolved by a third reviewer (L.Z.). Statistical synthesis and analysis Pooled risk ratios (RR) and corresponding 95% confidence intervals (CI) were computed for dichotomous outcomes, while mean differences (MD) and their 95% CIs were computed for continuous outcomes. Study heterogeneity was assessed using the Higgins inconsistency (I²) statistic. Due to anticipated clinical heterogeneity among the included trials, a random-effects model was employed for result pooling. Publication bias was assessed using both funnel plot analysis and Egger’s regression test. Predefined subgroup analyses stratified results by surgery type (liver resection versus liver transplantation) and timing of intervention (preoperative versus postoperative versus perioperative). Sensitivity analyses were conducted by excluding each study in turn to assess the influence of individual studies. Statistical analyses and bias risk assessment were performed using Review Manager Version 5.3 and the “meta” package in R software (version 4.3.1). Patient and public involvement None.
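To make the pooling step described above concrete, the following is a minimal sketch of inverse-variance pooling of risk ratios under a DerSimonian-Laird random-effects model, with Cochran's Q and the Higgins I² statistic. It is written in Python purely for illustration; the actual analysis was run in Review Manager 5.3 and the R "meta" package, and the 2x2 counts below are hypothetical.

    import numpy as np

    # Hypothetical per-study counts: infections/total in intervention and control arms.
    ev_i = np.array([3, 5, 2]); n_i = np.array([40, 50, 33])
    ev_c = np.array([12, 15, 9]); n_c = np.array([40, 50, 33])

    log_rr = np.log((ev_i / n_i) / (ev_c / n_c))   # study-level log risk ratios
    var = 1/ev_i - 1/n_i + 1/ev_c - 1/n_c          # delta-method variances of log RR
    w = 1 / var                                    # fixed-effect (inverse-variance) weights

    # Heterogeneity: Cochran's Q, DerSimonian-Laird tau^2, Higgins I^2.
    fixed = np.sum(w * log_rr) / np.sum(w)
    q = np.sum(w * (log_rr - fixed) ** 2)
    df = len(log_rr) - 1
    tau2 = max(0.0, (q - df) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
    i2 = 100 * max(0.0, (q - df) / q) if q > 0 else 0.0

    # Random-effects pooled RR with a 95% confidence interval.
    w_re = 1 / (var + tau2)
    mu = np.sum(w_re * log_rr) / np.sum(w_re)
    se = np.sqrt(1 / np.sum(w_re))
    rr, ci_lo, ci_hi = np.exp([mu, mu - 1.96 * se, mu + 1.96 * se])
    print(f"RR {rr:.2f} (95% CI {ci_lo:.2f}-{ci_hi:.2f}), I2 = {i2:.0f}%")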
Study identification and characteristics The literature search identified 538 articles, of which 210 were duplicates. After screening titles and abstracts, 288 studies were excluded. Following full-text assessment, 30 additional studies were excluded, leaving 10 studies for final analysis. The characteristics of the included studies are outlined in . A total of 588 patients were analyzed: 293 received probiotics or synbiotics, and 295 received placebo during the respective study periods. The number of patients ranged from 19 to 100 across studies. Two studies used probiotics alone, whereas eight used synbiotics. Twelve different probiotic species were used, with Lactobacillus casei being the most common. Five studies examined liver resection patients, and five examined liver transplantation patients. The timing and duration of interventions varied among the included studies: three studies administered probiotics or synbiotics preoperatively (14 days before surgery), three studies postoperatively (12 to 14 days after surgery), and four studies perioperatively. For trials reporting outcomes as median and interquartile range, we applied methodology to derive means and standard deviations.
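The sentence above notes that medians and interquartile ranges were converted to means and standard deviations without naming the method; one widely used approach is that of Wan et al. (2014), sketched below under that assumption. The function name and the example values (median 12, IQR 9-16, n = 50) are hypothetical.

    from scipy.stats import norm

    def mean_sd_from_median_iqr(q1, median, q3, n):
        # Three-point estimate of the mean from the quartiles and the median.
        mean = (q1 + median + q3) / 3.0
        # Sample-size-adjusted normal quantile spanning the interquartile range.
        z = norm.ppf((0.75 * n - 0.125) / (n + 0.25))
        sd = (q3 - q1) / (2.0 * z)
        return mean, sd

    # Example: hypothetical hospital stay reported as median 12 (IQR 9-16) days, n = 50.
    print(mean_sd_from_median_iqr(9, 12, 16, 50))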
Quality assessment The Cochrane risk of bias assessment identified four studies with high risk due to inadequate blinding and allocation concealment. Eight studies inadequately reported randomization methods and/or allocation concealment. Five trials showed unclear risk regarding outcome assessment blinding. Publication bias was assessed using Egger’s test and the funnel plot. Egger’s test revealed potential publication bias for antibiotic therapy duration (Egger’s test: P < 0.05). Trim-and-fill analysis continued to show reduced antibiotic therapy duration (MD −2.81, 95% CI [−3.11 to −2.50], P < 0.001, I² = 0%). No significant risk of publication bias was detected for other outcomes (Egger’s test, P > 0.05). Primary outcome Postoperative infection rates were 10.3% in the intervention group versus 33.2% in controls. Probiotic or synbiotic use significantly reduced infection rates (RR 0.36, 95% CI [0.24–0.54], P < 0.0001, I² = 6%). Subgroup analyses by surgery type showed reduced infection rates for both liver resection (RR 0.39, 95% CI [0.21–0.72], P = 0.002, I² = 23%) and transplantation (RR 0.28, 95% CI [0.13–0.59], P = 0.0008, I² = 38%). All intervention timings showed significant benefits: preoperative (RR 0.31, 95% CI [0.14–0.71], P = 0.005, I² = 0%), postoperative (RR 0.27, 95% CI [0.11–0.67], P = 0.005, I² = 38%), and perioperative (RR 0.44, 95% CI [0.21–0.95], P = 0.04, I² = 17%). Post-hoc subgroup analysis indicated that both probiotics and synbiotics were associated with a significant reduction in postoperative infection rates (probiotics: RR 0.14, 95% CI [0.03–0.72], P = 0.02, I² = 0%; synbiotics: RR 0.38, 95% CI [0.25–0.59], P < 0.0001, I² = 12%). Sensitivity analysis revealed no significant change in the postoperative infection rate result, indicating robustness. Secondary outcomes Five trials reported antibiotic therapy duration, showing a significant reduction with intervention (MD −2.82, 95% CI [−3.13 to −2.51], P < 0.001, I² = 0%). Seven trials reported length of stay in the ICU and eight reported length of stay in hospital, showing no significant differences for length of stay in the ICU (MD −0.25, 95% CI [−0.84 to 0.34], P = 0.41, I² = 64%) or in hospital (MD −1.25, 95% CI [−2.74 to 0.25], P = 0.10, I² = 56%). Subgroup analyses showed the same outcome as the original meta-analysis. Sensitivity analyses confirmed the robustness of our results.
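Egger's regression test, used above to probe funnel-plot asymmetry, regresses the standardized effect size on precision and asks whether the intercept differs from zero. A minimal sketch with hypothetical log effect sizes and standard errors follows; the published analysis itself used the R "meta" package.

    import numpy as np
    from scipy import stats

    log_effect = np.array([-1.2, -0.9, -1.4, -0.5, -1.0])  # hypothetical log RRs
    se = np.array([0.45, 0.38, 0.60, 0.30, 0.50])          # their standard errors

    precision = 1 / se
    z = log_effect / se
    res = stats.linregress(precision, z)

    # stats.linregress reports a p-value for the slope, so the intercept is
    # tested manually with a t-test on n - 2 degrees of freedom.
    n = len(z)
    resid = z - (res.intercept + res.slope * precision)
    s2 = np.sum(resid ** 2) / (n - 2)
    se_int = np.sqrt(s2 * (1 / n + precision.mean() ** 2
                           / np.sum((precision - precision.mean()) ** 2)))
    p_int = 2 * stats.t.sf(abs(res.intercept / se_int), n - 2)
    print(f"Egger intercept = {res.intercept:.2f}, p = {p_int:.3f}")
    # An intercept far from zero (small p) suggests small-study/publication bias.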
Liver surgery remains a complex procedure with substantial risks, carrying mortality and major postoperative complication rates of 3.8% and 15.8%, respectively. This meta-analysis of 10 RCTs demonstrates that perioperative probiotic or synbiotic administration significantly reduces postoperative infection rates by more than 60% and shortens antibiotic therapy duration. These benefits were observed across both liver resection and transplantation procedures, although no significant effects were found on ICU or hospital length of stay. The observed reduction in infections aligns with established mechanisms whereby probiotics and synbiotics inhibit bacterial translocation, enhance host immunity, and promote beneficial bacterial growth. In a comprehensive network meta-analysis by , the results demonstrate that synbiotic therapy was the most effective intervention for reducing surgical site infections, sepsis, pneumonia, antibiotic usage, and hospital stay.
Similarly, an analysis of 34 RCTs of elective abdominal surgery patients found reduced postoperative infection risk with probiotic or synbiotic use. Our analysis, the largest to date focusing specifically on liver surgery patients, corroborates these findings and previous systematic reviews. The optimal probiotic formulation remains unclear due to substantial variation in species and combinations across studies. While most trials utilized lactobacilli alone or in combination, seven studies incorporated bifidobacteria species, and four included galacto-oligosaccharides to enhance bifidobacteria growth. While our findings demonstrate overall efficacy, they apply specifically to the strains studied in individual trials. Future research should focus on identifying optimal probiotic strains and combinations for maximal clinical benefit. The discordance between reduced infection rates and unchanged length of stay merits discussion. This pattern parallels findings by , who reported reduced ventilator-associated pneumonia without corresponding reductions in mechanical ventilation duration or ICU stay. Length of stay is influenced by multiple factors beyond infection control, including host immunity, underlying conditions, illness severity, and perioperative management quality. The observed reduction in infection rates and antibiotic usage suggests potential benefits in limiting antimicrobial resistance, though this hypothesis requires validation in larger cohorts. Our study has several strengths. First, we implemented a comprehensive approach to study selection, employing rigorous inclusion criteria and robust statistical analysis methods. Second, by focusing on major liver surgery, we minimized within-study and between-study variability and heterogeneity. Our investigation provides current evidence on the efficacy of probiotic and synbiotic therapy in patients undergoing liver surgery. Furthermore, acknowledging clinical diversity among patients, we performed subgroup analyses stratified by surgery type, demonstrating potential benefits of probiotic and synbiotic therapy in liver resection and transplantation procedures. These findings provide valuable insights for perioperative management in this population. Nevertheless, several limitations warrant discussion. First, all included trials had small sample sizes (<100 patients per arm), potentially introducing small-study effect bias. The conversion of continuous variables from median and interquartile range to mean and standard deviation in some studies may have affected our results’ precision. Second, three included studies were conducted by the same research group (Rayes et al.), although each involved distinct patient populations without overlap. Third, probiotic preparations lacked standardization in terms of preparation methods, timing, and treatment duration. Variations in surgery types and illness severity among studies may have influenced outcomes. Additionally, the included studies primarily report short-term outcomes, limiting our ability to draw conclusions about long-term intervention effects. Future research should incorporate extended follow-up periods to provide a more comprehensive understanding of treatment outcomes.
The findings demonstrate that perioperative administration of probiotics or synbiotics may reduce postoperative infection rates and shorten antibiotic therapy duration in patients undergoing liver resection or transplantation. Healthcare providers may consider probiotics and synbiotics as adjunctive therapy to prevent postoperative infections among patients undergoing liver surgery. However, given the limited available evidence, larger RCTs are needed to validate these findings and evaluate the long-term effects of probiotics and synbiotics in perioperative liver surgery management. 10.7717/peerj.18874/supp-1 Supplemental Information 1 PRISMA checklist 10.7717/peerj.18874/supp-2 Supplemental Information 2 Search strategies 10.7717/peerj.18874/supp-3 Supplemental Information 3 List of excluded studies with reasons 10.7717/peerj.18874/supp-4 Supplemental Information 4 Heatmap, publication bias assessment by funnel plot and Egger’s test, sensitivity analyses, subgroup analyses 10.7717/peerj.18874/supp-5 Supplemental Information 5 The population, intervention, outcomes, and findings of the meta-analysis |
Pioneer of Cardiothoracic Surgery - Luiz Tavares da
Silva | eff38a67-5987-4478-a947-de11aaabe89f | 10653605 | Internal Medicine[mh] | Biography Luiz Carvalho Tavares da Silva came from a traditional Brazilian family. He was
born in the city of Recife (Pernambuco, Brazil) on April 16th, 1916,
and died on June 27th, 1994. His father, Arsenio Luiz Tavares da
Silva, was a professor of general surgery. His mother, Joana Miranda de
Carvalho, was a housewife from the State of Bahia. He had three siblings:
Manuel, Maria, and João. In 1948, he married Maria Dulce Coimbra de
Almeida Brennand, and they had seven children: Dulce, Francisca, Joana, Izabel,
Antonia, Luiz, and Manuel . High School, Medical School, and Postgraduate Studies Luiz Tavares had a privileged education at Colégio São Bento in the
city of São Paulo, having completed his medical studies at the
Universidade de São Paulo. After graduating in 1939, he returned to
Recife to work with his father in the Hospital Centenário de Recife
(HCR). His post-graduation in thoracic surgery was in England, specifically in
Leeds, Oxford, and London. In England, he had the opportunity to work with
famous thoracic and cardiovascular surgeons such as Sir Philip Allison (his
personal friend), Sir Russell Brock, and Sir Holmes Sellors, among others. Medical Career Luiz Tavares’s ambition from the beginning was to become a leader in surgery and
he lost no time in pursuing his goal. After a series of junior appointments as a
general surgeon, he took the decision to specialize in the chest area and become
a thoracic surgeon. He went first to Leeds to train with Professor Philip
Allison. After going back to Recife, he returned to work at HCR. His main
surgical contribution there was to set a practical example. He always insisted
that he was not just a thoracic surgeon, for his work extended over a wide
field. His surgical technique was outstanding, and he was immediately recognized
as a leader in his specialty. In the operating theatre, he combined boldness and
originality in conception with meticulous care in execution. He spent much time
instructing nurses so as to ensure optimal teamwork. His unit resembled a
small, closely-knit family, and his surgical team at HCR afforded a great
impulse to cardiac surgery, performing surgical procedures such as closed mitral
commissurotomy, ductus arteriosus ligation, Blalock-Taussig shunt, resection of
coarctation of the aorta, and resection of aortic aneurysms. As regards aortic
surgery, several techniques were performed at that time: aneurysmal sac wiring,
wrapped aorta with cellophane paper tapes, aneurysm resection, and a nylon graft
interposition using surface hypothermia at 28°C. Despite HCR being considered
the leading heart surgery unit in Pernambuco state, he decided to leave that
hospital, owing to its lack of investment in high-cost equipment, as he had
learnt that it was not possible to maintain one’s leadership in thoracic and
cardiac surgery without a high investment in equipment and human resources. The
existing difficulties led him to move to a new heart center in Hospital Dom
Pedro II, together with his surgical team comprising Mauro Arruda, Milton Lins,
and Eugênio Albuquerque. Cardiac Institute In 1956, Fernando Simões Barbosa, a physician and pioneering cardiologist
in northeast Brazil, created a heart center named Instituto de Cardiologia do
Recife (ICR), located in Hospital Dom Pedro II and affiliated to the
Universidade Federal de Pernambuco (UFPE). The creation of this cardiac center
was possible thanks to the financial support of the Rockefeller Foundation, the
CAPES, and the CNPq. The ICR played a key role in the growth of cardiology in
the northeast region of the country, which by that time had acquired a national
reputation. The collaboration between physicians and surgeons working as
partners in the same place resulted in major advances in local cardiology, with
a constantly renewed and expanding team, whose scientific production was
recognized throughout Brazil. At the ICR, health care and scientific activities
were carried out with great intensity. When Luiz Tavares moved to the ICR, despite considerable initial resistance to
investing in expensive equipment, he succeeded in creating a first-class
department. With direct financial support from the Dean of the UFPE and his own
financial resources, he went to London and purchased a complete surgical cardiac
unit from the Genyto-Urinary Company. The operating table was the type used at
Brompton Hospital, London, and was very sophisticated at that time. The surgical
drapes used during surgery were made from Irish linen in a green color. The
quality of the human resources was also a matter of concern to him, especially
nursing professionals, exemplified by his hiring of an outstanding French nurse,
Eliane Leveque. He also sent countless young doctors to study abroad, thereby
creating a particularly strong connection between England and Recife. The ICR
was already highly developed at that time, with an outpatient clinic, a clinical
pathology laboratory, a department of graphical methods, a hemodynamic
laboratory, vectorcardiography, phonocardiography, an operating room, and a
postoperative intensive care unit. Regular clinical meetings were held with all the
staff to discuss the best way to treat a patient. The ICR was a cradle of
scientific exchange, its facilities being visited by numerous distinguished
individuals in the field of cardiology, such as, among many others, Prof. Hugo
Fillipozzi (Brazil), Prof. Euryclides de Jesus Zerbini (Brazil), Sir Philip
Allison (United Kingdom), Enrique Cabrera (Mexico), David Watson (United States
of America), Peter Sleight (United Kingdom, the Queen’s doctor), Alf Gunning
(United Kingdom), Emmanuel Lee (United Kingdom), Marian Ionescu (United
Kingdom), and Christopher Lincoln (United Kingdom). Pioneer in Thoracic and Cardiovascular Surgery In the north and northeast regions of Brazil, the first cardiac surgery under
direct vision was performed at the ICR by Luiz Tavares and his assistants Mauro
Arruda, Milton Lins, and Mauricio Bouqvar. In January 1960, a patient with
pulmonary stenosis was successfully operated on using surface hypothermia and
total occlusion of the venae cavae. Three months later, the first patient was
operated on with cardiopulmonary bypass and cardiac arrest. The patient operated on bypass had a diagnosis of
atrial septal defect, and the surgery was a success. Seven years after the
world’s first such operation - performed by John Gibbon in the USA - Luiz Tavares and
his surgical team achieved this feat in Recife. Recife was the third city in
Brazil to perform open heart surgery with extracorporeal circulation, following
only São Paulo and Rio de Janeiro. Records show that at that time, even
Italy, a European country, had not yet performed its first open heart surgery
with the aid of a heart-lung machine. The pump machine used by Luiz Tavares was
a Pemco, with rollers and a Kay-Cross disk oxygenator. One year later, on April
7th, 1961, Luiz Tavares performed the first correction of
ventricular septal defect (VSD) with deep surface hypothermia and total
circulatory arrest for 32 minutes. After this achievement, the ICR became an
active, productive center, responsible for the training of countless clinical
cardiologists and surgeons, publishing several papers in various scientific
journals. Prior to the first open heart surgery in 1960, exhaustive experimental
work on extracorporeal circulation and deep hypothermia in dogs had already been
carried out. Academic Life In 1956, Luiz Tavares replaced his father in the chair of the 2nd Surgical Clinic of the Faculdade de Medicina do Recife through a public
examination. He was considered an outstanding candidate and remained there until
his retirement in 1978. During his academic life, he held two full
professorships. He defended the thesis: “Surgical medical study of Manson’s
schistosomiasis” in the competitive examination contest for the academic post of
“Docente Livre” at the Surgical Clinic of the Faculdade de Medicina do Recife.
This was the first publication on hepatosplenic schistosomiasis in Brazil. He
also defended the thesis “Diaphragmatic hernia, esophagitis and peptic ulcer of
the esophagus”. During his professional life, he presented numerous scientific
papers in congresses and produced a large number of scientific publications. Since 1950, each year a large number of students passed the
medical selection exam but were not admitted to medical school due to the
limited number of places. In response to the appeals of young
people who were unable to enter higher medical education, Luiz Tavares joined a
group of medical professors who decided to establish a new medical school named
Faculdade de Ciências Médicas (FCM). He became a founder member,
subsequently its chairman. At present, 72 years after the creation of FCM, more
than 9,000 physicians have graduated. He became Full Professor of Thoracic
Surgery at the two public universities existing in Recife at that time: the UFPE
and the Universidade de Pernambuco. In the year 2000, Ricardo de Carvalho Lima
replaced Luiz Tavares as Full Professor of Thoracic Surgery of FCM/UPE, as
a result of a public examination, and he has held this post to the present day.
Continuing the work started by Luiz Tavares at FCM, Ricardo Lima created the
first medical residency program in cardiothoracic surgery at UPE. This residency
program came to fill an important gap in the training of cardiac surgeons, since
the traditional residency program in cardiothoracic surgery at UFPE had been
discontinued in the early 1990s. Also in 2019, a new residency program in
pediatric cardiology was created by Ricardo Lima. From 1970 to 2023, hundreds of
doctors have been trained. Hospital Oswaldo Cruz The Hospital Oswaldo Cruz (HOC) has its origins in Hospital Santa Ageda, created
to treat patients during a smallpox epidemic in 1884. Between 1951 and 1954, a
thoracic surgeon, Joaquim Cavalcanti, made enormous contributions to Brazilian
medicine, pioneering the first surgery to correct a congenital heart disease
(systemic-pulmonary shunt - Blalock-Taussig surgery) and surgery for acquired
heart disease (mitral valve repair) in the Brazilian north and northeast
regions. He died prematurely but had already planted the seeds of heart surgery
in the State of Pernambuco. In the early 1970s, when the ICR ceased to exist due to reforms implemented by
the Brazilian Federal Government, Luiz Tavares turned his attention to HOC and
inaugurated a new heart center. In August 1972, an agreement for the purpose of
establishing a new center of cardiology was signed between HOC and the Instituto
Nacional de Assistência Médica da Previdência Social
(INAMPS). This new cardiology unit was linked to the FCM and from then on, local
cardiology achieved great progress. Luiz Tavares (FCM), Antonio Figueira (FCM),
and Alcedo Gomes (INAMPS) were responsible for the abovementioned agreement. The
new heart center remains in operation to the present day, training countless
clinicians and surgeons. In 1975, that agreement resulted in the creation of the
region’s first coronary unit, its first public cardiology emergency hospital,
and the first specialization course in cardiology. Luiz Tavares once again used his personal prestige in obtaining resources to
rebuild the surgical center and the intensive care unit for the exclusive use of
cardiology patients, with wards for both adults and children. Two operating
rooms were built with a high degree of sophistication, with electric tables, a
gasometer, and invasive monitoring. The postoperative intensive care unit was
directly connected to the operating room to facilitate patient transportation.
In 1971, the first surgery was performed there by Milton Lins on a patient with
rheumatic mitral stenosis, who underwent a digital mitral commissurotomy, and
the surgical team acquired great experience of this technique in Brazil.
Prestigious surgeons had the opportunity to operate at HOC, including Adib
Jatene (1972) and Christopher Lincoln (1977). The HOC cardiac center operated
uninterruptedly for 35 years (1971-2006) when in 2006 the unit moved to new
hospital facilities at the Pronto-Socorro Cardiológico
Universitário de Pernambuco Prof. Luiz Tavares (PROCAPE). Pronto-Socorro Cardiológico Universitário de Pernambuco Prof.
Luiz Tavares In 2006, after Luiz Tavares’ great contribution to cardiology, another
professor, Enio Lustosa Cantarelli, understanding the need to promote the
expansion of cardiology, using public funds, conceived, built, and inaugurated a
new cardiology center. In honor of Professor Tavares, the new school hospital
was named Prof. Luiz Tavares (or PROCAPE). This new hospital is a public
teaching hospital in cardiology and part of the health complex of UPE with 220
beds . From 1974 to 2022, 48,380
heart surgeries were performed (24,026 at HOC and 24,354 at PROCAPE). This
teaching hospital offers 299 vacancies for regular curriculum health internships
and 95 vacancies for medical and multidisciplinary residency training, in
addition to being a major research center. Creation of Fundação de Hematologia e Hemoterapia de
Pernambuco In 1977, there was no national policy on hematology in Brazil. A political
decision by the state government and the leadership of two doctors, Luiz Tavares
and Antônio Figueira, led to the creation of the Fundação
de Hematologia e Hemoterapia de Pernambuco (HEMOPE) and this became the first
public blood center in Brazil. The aim was to improve the quality of hematology
and hemotherapy in Brazil, and this quality improvement involved three goals:
creating the discipline of hematology at FCM, developing scientific research,
and producing blood products industrially. The project was completed in 2011,
when the third objective was achieved with the inauguration of Hemobrás,
with the aim of producing blood products on an industrial scale. HEMOPE was
responsible for the radical change in national policy on hemotherapy under the
direction of the Ministry of Health. Today the hematology and hemotherapy system
in Brazil is a source of pride and one of the safest systems in the whole world,
arising from a very well-structured project led by Luiz Tavares, and can be
considered the embryo of modern hemotherapy in Brazil. The entire structure of
HEMOPE was based on the French system, which is considered one of the best
systems in the world. Medical Exchange Training with England When Luiz Tavares completed his training in Leeds, he returned to Recife, having
established during this time a solid friendship with Professor Philip Allison
that would last for the rest of his life. Allison had enormous prestige
throughout the world and was responsible for the development of the heart-lung
machine in England, having influenced the professional career of Luiz Tavares.
It is fair to say that much of the innovative work performed during the Allison
era can be credited to his first assistant, Alfred James Gunning, who moved with
him from Leeds to Oxford. Gunning and Allison were pioneers in heart valve
homografts and pig xenografts, techniques subsequently used in many centers
around the world. Luiz Tavares’ friendship with those two brilliant English
surgeons established a solid basis for the medical exchange program between
Oxford and Recife. In 1970, Luiz Tavares consulted the British Council in Recife
and hired David Randall as an English teacher for the FCM students, with the aim
of improving their knowledge of English in preparation for the future training
of these doctors in England. This led to countless medical doctors from Recife
going to England for training, contributing in an unusual way to Brazilian
medicine. Among some of these doctors are: José Aécio Vieira,
Antonio Figueira, Ney Cavalcanti, Caio Souza Leão, Sávio Barbosa,
Edgar Victor, Luciano Raposo, Alcides Bezerra, Carlos Moraes, Hildo Azevedo,
Marcelo Azevedo, Catarina Cavalcanti, Fernando Cavalcante, Fátima
Militão, Paulo Almeida, Francisco Bandeira, Amaro Andrade, Cláudio
Lacerda, Cícero Rodrigues, George Teles, Pedro Arruda, Ricardo de
Carvalho Lima, Guido Corrêa de Araújo, Gustavo Gibson, Leila
Beltrão, Leandro Araújo, Tércio Barcelar, Geraldo Furtado,
Ricardo Pernambuco, Marcelo Maia, Renato Della Santa, Eugenia Cabral, Gustavo
Caldas and others. Sports and hobbies Luiz Tavares was interested in underwater fishing, motorcycles, chess, and
painting . Of all his hobbies,
chess was his greatest passion. In addition to being a physician and splendid
thoracic and cardiovascular surgeon, he was an outstanding chess player . He became president of the
Brazilian Chess Confederation and Brazilian Chess Champion. He is considered a
great protector and supporter of the World Grand Chess Master, Henrique da Costa
Mecking (Mequinho), having accompanied his development from the beginning to
occupy 3rd place in the world ranking. During an international chess tournament, he had
a chance to meet Pelé, the world’s King of Soccer. Luiz Tavares asked him
for an autograph for Mequinho, who was a very shy person. In response, he heard
from Pelé: “but Doctor, how can I give an autograph to the best in the
world using his head, if I am only the best using my feet”. Luiz Tavares was also the founder of the Clube de Xadrez do Recife. He was
runner-up in the Brazilian national chess tournament in 1956 and the Brazilian
Champion in 1957, even though he was an “amateur” chess player. His brilliant
intellect took him to the rank of a chess grandmaster, a thinker, who
always hovered above the banality of everyday life. Tributes Luiz Tavares received numerous honors from scientific societies in Brazil and
abroad, but the most significant tribute came from England where he was
recognized as an Honorary Member of the Royal College of Surgeons of England.
The granting of this title for a Brazilian surgeon was unprecedented , and during the award ceremony, the
distinguished English cardiac surgeon Mr. Christopher Lincoln compared him to
the famous English cardiac surgeon Lord Brock.
Luiz Carvalho Tavares da Silva came from a traditional Brazilian family. He was
born in the city of Recife (Pernambuco, Brazil) on April 16 th , 1916,
and died on June 27 th , 1994. His father, Arsenio Luiz Tavares da
Silva, was a professor of general surgery. His mother, Joana Miranda de
Carvalho, was a housewife from the State of Bahia. He had three siblings:
Manuel, Maria, and João. In 1948, he married Maria Dulce Coimbra de
Almeida Brennand, and they had seven children: Dulce, Francisca, Joana, Izabel,
Antonia, Luiz, and Manuel .
Luiz Tavares had a privileged education at Colégio São Bento in the
city of São Paulo, having completed his medical studies at the
Universidade de São Paulo. (After graduating in 1939, he returned to
Recife to work with his father in the Hospital Centenário de Recife
(HCR). His post-graduation in thoracic surgery was in England, specifically in
Leeds, Oxford, and London. In England, he had the opportunity to work with
famous thoracic and cardiovascular surgeons such as Sir Philip Allison (his
personal friend), Sir Russel Brock, and Sir Holmes Saylors, among others.
Luiz Tavares’s ambition from the beginning was to become a leader in surgery and
he lost no time in pursuing his goal. After a series of junior appointments as a
general surgeon, he took the decision to specialize in the chest area and become
a thoracic surgeon. He went first to Leeds to train with Professor Philip
Allison. After went back to Recife, he returned to work at HCR. His main
surgical contribution there was to set a practical example. He always insisted
that he was not just a thoracic surgeon, for his work extended over a wide
field. His surgical technique was outstanding, and he was immediately recognized
as a leader in his specialty. In the operating theatre, he combined boldness and
originality in conception with meticulous care in execution. He spent much time
instructing nurses so as to ensure an optimal teamwork. His unit resembled a
small, closely-knit family, and his surgical team at HCR afforded a great
impulse to cardiac surgery, performing surgical procedures such as closed mitral
commissurotomy, ductus arteriosus ligation, Blalock-Taussig shunt, resection of
coarctation of the aorta, and resection of aortic aneurysms. As regards aortic
surgery, several techniques were performed at that time: aneurysmal sac wiring,
wrapped aorta with cellophane paper tapes, aneurysm resection, and a nylon graft
interposition using surface hypothermia at 28°C. Despite HCR being considered
the leading heart surgery unit in Pernambuco state, he decided to leave that
hospital, owing to its lack of investment in high-cost equipment, as he had
learnt that it was not possible to maintain one’s leadership in thoracic and
cardiac surgery without a high investment in equipment and human resources. The
existing difficulties led him to move to a new heart center in Hospital Dom
Pedro II, together with his surgical team comprising, Mauro Arruda, Milton Lins,
and Eugênio Albuquerque.
In 1956, Fernando Simões Barbosa, a physician and pioneering cardiologist
in northeast Brazil, created a heart center named Instituto de Cardiologia do
Recife (ICR), located in Hospital Dom Pedro II and affiliated to the
Universidade Federal de Pernambuco (UFPE). The creation of this cardiac center
was possible thanks to the financial support of the Rockefeller Foundation, the
CAPES, and the CNPq. The ICR played a key role in the growth of cardiology in
the northeast region of the country, which by that time had acquired a national
reputation. The collaboration between physicians and surgeons working as
partners in the same place resulted in major advances in local cardiology, with
a constantly renewed and expanding team, whose scientific production was
recognized throughout Brazil. At the ICR, health care and scientific activities
were carried out with great intensity . When Luiz Tavares moved to the ICR, despite considerable initial resistance to
invest in expensive equipment, he succeeded in creating a first-class
department. With direct financial support from the Dean of the UFPE and his own
financial resources, he went to London and purchased a complete surgical cardiac
unit from the Genyto-Urinary Company. The operating table was the type used at
Brompton Hospital, London, and was very sophisticated at that time. The surgical
drapes used during surgery were made from Irish linen in a green color. The
quality of the human resources was also a matter of concern to him, especially
nursing professionals, exemplified by his hiring of an outstanding French nurse,
Eliane Leveque. He also sent countless young doctors to study abroad, thereby
creating a particularly strong connection between England and Recife. The ICR
was already highly developed at that time, with an outpatient clinic, a clinical
pathology laboratory, a department of graphical methods, a hemodynamic
laboratory, vectorcardiography, phonocardiography, an operating room, and a
postoperative intensive care. Regular clinical meetings were held with all the
staff to discuss the best way to treat a patient. The ICR was a cradle of
scientific exchange, its facilities being visited by numerous distinguished
individuals in the field of cardiology, such as, among many others, Prof. Hugo
Fillipozzi (Brazil), Prof. Euryclides de Jesus Zerbini (Brazil), Sir Philip
Allison (United Kingdom), Enrique Cabrera (Mexico), David Watson (United States
of America), Peter Sleight (United Kingdom), the Queen’s doctor), Aulf Gunning
(United Kingdom), Emmanuel Lee (United Kingdom), Marian Ionescu (United
Kindgdom), and Christopher Lincoln (United Kingdom).
In the north and northeast regions of Brazil, the first cardiac surgery under
direct vision was performed in ICR by Luiz Tavares and his assistants Mauro
Arruda, Milton Lins, and Mauricio Bouqvar. In January 1960, a patient with
pulmonary stenosis was successfully operated on using surface hypothermia and
total occlusion of the venae cavae. Three months later, the first patient was
operated on for cardiopulmonary and cardiac arrest . The patient operated on bypass had a diagnosis of
atrial septal defect, and the surgery was a success. Seven years after the first
operation - performed by John Gibbon in the USA - in the world, Luiz Tavares and
his surgical team achieved this feat in Recife. Recife was the third city in
Brazil performing open heart surgery with extracorporeal circulation, following
only São Paulo and Rio de Janeiro. Records show that at that time, even
Italy, a European country, had not yet performed its first open heart surgery
with the aid of a heart-lung machine. The pump machine used by Luiz Tavares was
a Pemco, with rollers and a Kay-Cross disk oxygenator. One year later, on April
7 th , 1961, Luiz Tavares performed the first correction of
ventricular septal defect (VSD) with deep surface hypothermia and total
circulatory arrest for 32 minutes. After this achievement, the ICR became an
active, productive center, responsible for the training of countless clinical
cardiologists and surgeons, publishing several papers in various scientific
journals. Prior to the first open heart surgery in 1960, exhaustive experimental
work on extracorporeal circulation and deep hypothermia in dogs had already been
carried out .
In 1956, Luiz Tavares replaced his father in the chair of the 2 nd Surgical Clinic of the Faculdade de Medicina do Recife through a public
examination. He was considered an outstanding candidate and remained there until
his retirement in 1978. During his academic life, he held two full
professorships. He defended the thesis: “Surgical medical study of Manson’s
schistosomiasis” in the competitive examination contest for the academic post of
“Docente Livre” at the Surgical Clinic of the Faculdade de Medicina do Recife.
This was the first publication on hepatosplenic schistosomiasis in Brazil. He
also defended the thesis “Diagrammatic hernia, esophagitis and peptic ulcer of
the esophagus”. During his professional life, he presented numerous scientific
papers in congresses and produced a large number of scientific publications. Since 1950, in each year there was a large number of students who passed the
medical selection exam but were not admitted to the medical school due to a
limited number of places for medicine. In response to the appeals of young
people who were unable to enter higher medical education, Luiz Tavares joined a
group of medical professors who decided to establish a new medical school named
Faculdade de Ciências Médicas (FCM). He became a founder member,
subsequently its chairman. At present, 72 years after the creation of FCM, more
than 9,000 physicians have graduated. He became Full Professor of Thoracic
Surgery at the two public universities existing in Recife at that time: the UFPE
and the Universidade de Pernambuco. In the year 2000, Ricardo de Carvalho Lima
replaced Luiz Tavares as Full Professor of Thoracic Surgery of FCM/UPE, as
result of a public examination, and he occupies this post to the present.
Continuing the work started by Luiz Tavares at FCM, Ricardo Lima created the
first medical residency program in cardiothoracic surgery at UPE. This residency
program came to fill an important gap in the training of cardiac surgeons, since
the traditional residency program in cardiothoracic surgery at UFPE had been
discontinued in the early 1990s. Also in 2019, a new residency program in
pediatric cardiology was created by Ricardo Lima. From 1970 to 2023, hundreds of
doctors have been trained.
The Hospital Oswaldo Cruz (HOC) has its origins in Hospital Santa Ageda, created
to treat patients during a smallpox epidemic in 1884. Between 1951 and 1954, a
thoracic surgeon, Joaquim Cavalcanti, made enormous contributions to Brazilian
medicine, pioneering the first surgery to correct a congenital heart disease
(systemic-pulmonary shunt - Blalock-Taussig surgery) and surgery for acquired
heart disease (mitral valve repair) in the Brazilian north and northeast
regions. He died prematurely but had already planted the seeds of heart surgery
in the State of Pernambuco. In the early 1970s, when the ICR ceased to exist due to reforms implemented by
the Brazilian Federal Government, Luiz Tavares turned his attention to HOC and
inaugurated a new heart center. In August 1972, an agreement for the purpose of
establishing a new center of cardiology was signed between HOC and the Instituto
Nacional de Assistência Médica da Previdência Social
(INAMPS). This new cardiology unit was linked to the FCM and from then on, local
cardiology achieved great progress. Luiz Tavares (FCM), Antonio Figueira (FCM),
and Alcedo Gomes (INAMPS) were responsible for the abovementioned agreement. The
new heart center continues existing to the present day, training countless
clinicians and surgeons. In 1975, that agreement resulted in the creation of the
region’s first coronary unit, its first public cardiology emergency hospital,
and the first specialization course in cardiology. Luiz Tavares once again used his personal prestige in obtaining resources to
rebuild the surgical center and the intensive care unit for the exclusive use of
cardiology patients, with wards for both adults and children. Two operating
rooms were built with a high degree of sophistication, with electric tables, a
gasometer, and invasive monitoring. The postoperative intensive care unit was
directly connected to the operating room to facilitate patient transportation.
In 1971, the first surgery was performed there by Milton Lins on a patient with
rheumatic mitral stenosis, who underwent a digital mitral commissurotomy, and
the surgical team acquired great experience of this technique in Brazil.
Prestigious surgeons had the opportunity to operate at HOC, including Adib
Jatene (1972) and Christopher Lincoln (1977). The HOC cardiac center operated
uninterruptedly for 35 years (1971-2006) when in 2006 the unit moved to new
hospital facilities at the Pronto-Socorro Cardiológico
Universitário de Pernambuco Prof. Luiz Tavares (PROCAPE).
In 2006, after the Luiz Tavares’ great contribution to cardiology, another
professor, Enio Lustosa Cantarelli, understanding the need to promote the
expansion of cardiology, using public funds, conceived, built, and inaugurated a
new cardiology center. In honor of Professor Tavares, the new school hospital
was named Prof. Luiz Tavares (or PROCAPE). This new hospital is a public
teaching hospital in cardiology and part of the health complex of UPE with 220
beds . From 1974 to 2022, 48,380
heart surgeries were performed (24,026 at HOC and 24,354 at PROCAPE). This
teaching hospital offers 299 vacancies for regular curriculum health internships
and 95 vacancies for medical and multidisciplinary residency training, in
addition to being a major research center.
In 1977, there was no national policy on hematology in Brazil. A political
decision by the state government and the leadership of two doctors, Luiz Tavares
and Antônio Figueira, led to the creation of the Fundação
de Hematologia e Hemoterapia de Pernambuco (HEMOPE) and this became the first
public blood center in Brazil. The aim was to improve the quality of hematology
and hemotherapy in Brazil, and this quality improvement involved three goals:
creating the discipline of hematology at FCM, developing scientific research,
and producing blood products industrially. The project was completed in 2011,
when the third objective was achieved with the inauguration of Hemobrás,
with the aim of producing blood products on an industrial scale. HEMOPE was
responsible for the radical change in national policy on hemotherapy under the
direction of the Ministry of Health. Today the hematology and hemotherapy system
in Brazil is a source of pride and one of the safest systems in the whole world,
arising from a very well-structured project led by Luiz Tavares, and can be
considered the embryo of modern hemotherapy in Brazil. The entire structure of
HEMOPE was based on the French system, which is considered one of the best
systems in the world.
When Luiz Tavares completed his training in Leeds, he returned to Recife, having
established during this time, a solid friendship with Professor Philip Allison
that would last for the rest of his life. Allison had enormous prestige
throughout the world and was responsible for the development of the heart-lung
machine in England, having influenced the professional career of Luiz Tavares.
It is fair to say that much of the innovative work performed during the Allison
era can be credited to his first assistant, Alfred James Gunning, who moved with
him from Leeds to Oxford. Gunning and Allison were pioneers in heart valve
homografts and pig xenografts, techniques subsequently used in many centers
around the world. Luiz Tavares’ friendship with those two brilliant English
surgeons established a solid basis for the medical exchange program between
Oxford and Recife. In 1970, Luiz Tavares consulted the British Council in Recife
and hired David Randall as an English teacher for the FCM students, with the aim of improving their knowledge of English in preparation for future training in England. This led to countless medical doctors from Recife going to England for training, contributing in a remarkable way to Brazilian medicine. Among these doctors are: José Aécio Vieira,
Antonio Figueira, Ney Cavalcanti, Caio Souza Leão, Sávio Barbosa,
Edgar Victor, Luciano Raposo, Alcides Bezerra, Carlos Moraes, Hildo Azevedo,
Marcelo Azevedo, Catarina Cavalcanti, Fernando Cavalcante, Fátima
Militão, Paulo Almeida, Francisco Bandeira, Amaro Andrade, Cláudio
Lacerda, Cícero Rodrigues, George Teles, Pedro Arruda, Ricardo de
Carvalho Lima, Guido Corrêa de Araújo, Gustavo Gibson, Leila
Beltrão, Leandro Araújo, Tércio Barcelar, Geraldo Furtado,
Ricardo Pernambuco, Marcelo Maia, Renato Della Santa, Eugenia Cabral, Gustavo
Caldas and others.
Luiz Tavares was interested in underwater fishing, motorcycles, chess, and painting. Of all his hobbies, chess was his greatest passion. In addition to being a physician and a splendid thoracic and cardiovascular surgeon, he was an outstanding chess player. He became president of the Brazilian Chess Confederation and Brazilian Chess Champion. He is considered a great protector and supporter of the chess Grandmaster Henrique da Costa Mecking (Mequinho), having accompanied his development from the beginning until he reached 3rd place in the world ranking. During an international chess tournament, he had the chance to meet Pelé, the world's King of Soccer. Luiz Tavares asked him for an autograph for Mequinho, who was a very shy person. In response, he heard from Pelé: "But Doctor, how can I give an autograph to the best in the world using his head, if I am only the best using my feet?". Luiz Tavares was also the founder of the Clube de Xadrez do Recife. He was runner-up in the Brazilian national chess tournament in 1956 and Brazilian Champion in 1957, even though he was an "amateur" chess player. His brilliant intellect took him to the rank of a chess grandmaster, a thinker who always hovered above the banality of everyday life.
Luiz Tavares received numerous honors from scientific societies in Brazil and abroad, but the most significant tribute came from England, where he was recognized as an Honorary Member of the Royal College of Surgeons of England. The granting of this title to a Brazilian surgeon was unprecedented, and during the award ceremony, the distinguished English cardiac surgeon Mr. Christopher Lincoln compared him to the famous English cardiac surgeon Lord Brock.
A comparative study of hospitalization costs of TKA inpatients before and after National Volume-based Procurement in Guangdong, China: an interrupted time-series analysis

According to the seventh national population census of China in 2020, individuals aged 60 and above numbered 190.64 million, accounting for 13.5% of the total population, an increase of 5.44% over the sixth census. As the population continues to age and the prevalence of obesity rises, the incidence of arthritis-related diseases is steadily increasing. Total knee arthroplasty (TKA) has become a widely adopted treatment for knee joint diseases, significantly transforming the management of patients with end-stage arthritis following the successful introduction of artificial joints. In China, approximately 1 to 1.5 million individuals require knee joint replacement surgery. However, the high cost of the procedure often deters many patients from undergoing the surgery. The expense of TKA materials, in particular, has led some patients to forgo the surgery. Research indicates that consumables represent the largest proportion of hospitalization costs for TKA patients. As such, reducing the cost of artificial joint consumables is crucial to lowering the overall cost of TKA. High-value medical consumables are procured through various models in developed countries, including direct hospital procurement, national or group centralized procurement, and group purchasing organization (GPO) procurement. Countries with social health insurance or national healthcare systems, such as the UK, France, Germany, and Australia, commonly employ national centralized procurement. The GPO model incorporates third-party commercial procurement companies, and GPOs play an important part in hospital purchasing in the United States. One study indicated that China could improve its centralized procurement system by adopting value-based purchasing and partial evaluation approaches, establishing a standardized coding system, and introducing GPOs while optimizing market surveillance mechanisms, drawing from developed countries' experiences. Volume-based procurement works by buying in bulk to help hospitals negotiate lower prices from manufacturers. It refers to a situation in which the quantity of pharmaceuticals or medical consumables to be purchased is communicated in advance to the suppliers. Suppliers then provide quotations based on the required purchase quantity through open bidding and inquiry processes. Essentially, it resembles the concept of collective purchasing, where hospitals act as the units for group purchases. In 2018, the Fifth Plenary Session of the Central Committee for Deepening Reform of the Communist Party of China approved the "Reform Plan for Deepening the Government Procurement System." It outlined the guiding ideology, principles, and objectives of the National Volume-based Procurement (NVBP) reform, also known as China's Volume-based Policy (CVBP). The meeting highlighted the following issues: for the procurement agencies, the need to establish a competitive mechanism; for the government, the importance of improving evaluation mechanisms and developing efficient transaction mechanisms; and, in terms of management, the need for a sound supervision system and a modern procurement system with clear responsibilities, efficient transaction rules, and advanced technical support.
The National Healthcare Security Administration (NHSA), an agency newly formed in 2018, administers most of the NVBP program. Since its initiation, China has organized multiple rounds of centralized volume-based procurement for pharmaceuticals. Over time, the scope of procurement has expanded from chemical pharmaceuticals to biopharmaceuticals and medical consumables, covering medicines and consumables in areas such as hypertension, diabetes, coronary heart disease, gastrointestinal disease, malignant tumors, and orthopedic trauma. In June 2021, the NHSA issued guidelines on the nationally organized procurement of medical consumables. The guidelines focus on national organization, alliance procurement, and platform operation. The main objective is to reduce the price of medical consumables, lessen the burden on patients, and ensure better access to medical care. In 2022, a policy named "Opinions on Supporting Measures for National Organized Volume-based Procurement and Use of High-Value Medical Consumables (Artificial Joints)" was issued by the NHSA and China's National Health Commission. The document emphasizes a series of measures, including strengthening policy coordination and leveraging medical insurance payment and incentives for medical institutions. These measures aim to ensure the successful implementation of the NVBP and to promote the high-quality development of the pharmaceutical industry. In 2022, Guangdong Province implemented a volume-based procurement system and issued supporting measures for high-value medical consumables, specifically artificial joints. Relevant issues were clarified, such as payment standards for medical insurance, price adjustments for medical services, and the responsibilities of concerned parties. Since April 15, 2022, the selected artificial joints have been in use at 330 hospitals in Guangdong Province, both public and private. These efforts aim to improve access to medical services for the insured and effectively reduce the burden of medical consumables on the public. There is a pressing need to reduce the financial burden of joint replacement surgery in China. Even in more developed regions, the cost of TKA remains high relative to average income levels, necessitating measures to make it more affordable for patients. Studies have revealed that the NVBP of drugs has promoted pharmaceutical affordability in China. However, further attention should be paid to high-value medical consumables because of their negative correlation with medication costs and their relatively late implementation in Chinese public hospitals. Domestic Chinese studies have found significant price reductions in consumables for coronary vascular interventions under volume-based procurement. Since the implementation of the NVBP in 2018, the significant reduction in consumable prices has alleviated the burden on patients and facilitated the adoption of artificial joints. However, one study also showed that the total cost of TKA did not decrease, with costs of diagnosis and treatment, anesthesia, nursing, and operation increasing significantly, even though the proportion of material costs was significantly reduced. Further analysis is needed that considers multiple factors, including the healthcare insurance system and cost structures. The indirect effects of the NVBP policy on other hospitalization costs require ongoing monitoring and investigation.
It is crucial to verify whether the reduction in consumable costs has translated into actual savings for patients. The ultimate goal is to ensure that the policy genuinely improves patients' access to medical services. Guangzhou, a major metropolitan hub in southern China, is well known for its advanced healthcare resources. The city's Healthcare Security Administration has led the way in pioneering innovative procurement strategies, particularly through a market-oriented, group volume-based procurement system. As part of this initiative, Guangzhou has implemented a regional procurement program for medical consumables. This study aims to evaluate the impact of Guangdong Province's volume-based procurement policy for artificial joints by analyzing hospitalization costs for TKA patients. By comparing costs before and after the policy's implementation, including changes in specific expense categories, the study seeks to provide practical evidence regarding the affordability of TKA for patients and to offer recommendations for policy refinement.
Data source

Data were extracted from the "Statistical Medical Records Database" of a tertiary hospital in Guangzhou, Guangdong Province, China, spanning from May 10, 2021, to December 26, 2023. Founded in 1953, the hospital is located in one of southern China's cities renowned for its rich medical resources. The orthopedic department is considered one of the hospital's leading specialties and has in recent years been ranked on the prestigious China Science and Technology Evaluation Metrics (STEM) hospital ranking list. The study population comprised 1,196 valid cases of patients who underwent TKA, identified using the Chinese surgical code 81.5400 (total knee arthroplasty). The following data were collected for analysis: (1) demographic characteristics of TKA patients: gender, age, type of medical insurance; (2) clinical information: diagnosis code, admission date, discharge date; (3) expenses: total expenses, self-financed expenses, consumables costs, miscellaneous service fees, diagnostic fees, treatment fees, rehabilitation fees, total Chinese medicine costs, western medicine costs, and blood and blood products costs.

Patients

To ensure the integrity and accuracy of the included cases, data preprocessing was conducted before the analysis. The preprocessing criteria were as follows: (1) exclusion of duplicate data: sample data with repetitive records were excluded based on demographic characteristics; (2) removal of missing data: cases with missing information on variables were excluded; (3) elimination of abnormal data: cases with obvious logical errors were excluded, such as cases with less than 1 day of hospitalization or cases where the total hospitalization costs did not match the sum of individual costs. Procurement of artificial joints was implemented from April 30, 2022, according to the local policy of the Health Security Administration of Guangdong Province. Thus, at this study site, the policy intervention date was set to May 1, 2022. The cost analysis was split into two periods based on the discharge date of TKA inpatients, as hospitalization expenses are calculated and billed on the day of discharge: a pre-policy implementation period (May 10, 2021, to April 30, 2022) and a post-policy implementation period (May 1, 2022, to December 26, 2023).

Outcome measures

Ten variables extracted from inpatient medical records were used as outcome indicators to assess the hospitalization expenses of TKA patients: total expenses, self-financed expenses, consumables costs, miscellaneous service fees, diagnostic fees, treatment fees, rehabilitation fees, total Chinese medicine costs, western medicine costs, and blood and blood products costs. All values were expressed in Chinese Yuan (CNY).

Statistical analysis

Descriptive statistical analysis

SPSS 26.0 was employed for statistical analysis at this stage. A descriptive analysis was conducted on the demographic characteristics of TKA inpatients before and after policy implementation, describing the sample size (n) and composition in terms of variables such as gender, age, days of hospitalization, and medical insurance type. Categorical variables were compared using the chi-square test. However, the total hospitalization expenses and individual cost variables in this study were found to deviate from a normal distribution based on the Kolmogorov–Smirnov test.
Therefore, a non-parametric Mann–Whitney U rank-sum test was employed to compare the data before and after policy implementation. Multiple linear regression analysis was used to examine the relationship between days of hospitalization and demographic variables. A p-value < 0.05 was considered statistically significant.

Grey relation analysis

Excel 2019 was used to perform grey relation analysis on the data. Grey system theory is a theoretical framework proposed to address the issues of data relationship ambiguity, uncertainty, and randomness in dynamic changes. Grey relation analysis, a key element of grey system theory, assesses the level of correlation between a particular indicator and other factors. It determines the relationship between the indicator and these factors by comparing the similarity of their curve shapes: greater similarity indicates a stronger correlation, while less similarity indicates a weaker one. Grey relation analysis was used to rank the influencing components of total expense among TKA inpatients.

Interrupted time-series analysis

Stata 16.0 was used for interrupted time-series (ITS) analysis. ITS analysis is a technique used to assess data presented in a time-series structure that involves interruptions or interventions, such as the implementation of a policy. This method finds extensive application in public health and policy evaluation, with the single-group ITS approach being the predominant methodology. In this research, a segmented linear regression model was utilized for analysis, as outlined in detail in . The evaluation of the policy concentrated on investigating both its immediate and enduring impacts. A p-value < 0.05 was considered statistically significant.
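In the single-group specification, the outcome is typically modeled as Y_t = β0 + β1·time_t + β2·policy_t + β3·time_after_t + ε_t, where policy indicates the post-implementation period and time_after counts periods since implementation, so that β2 captures the immediate level change and β3 the change in slope. A minimal sketch of this specification in Python follows; the monthly cost series is purely illustrative and not the study data, and a real analysis should also check residual autocorrelation (e.g., using Newey–West standard errors).

```python
# Single-group interrupted time-series sketch: beta2 estimates the immediate
# level change, beta3 the change in slope after the intervention.
# The monthly mean-cost series below is hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n_pre, n_post = 12, 20                                # months pre/post policy
t = np.arange(n_pre + n_post)                         # study month index
policy = (t >= n_pre).astype(int)                     # post-policy indicator
time_after = np.where(policy == 1, t - n_pre + 1, 0)  # months since policy

# Flat-ish pre-policy trend, a sharp drop at implementation, and a mild
# continuing decline afterwards, plus noise.
cost = (65000 - 30 * t - 30000 * policy - 100 * time_after
        + rng.normal(0, 800, t.size))

df = pd.DataFrame({"cost": cost, "time": t,
                   "policy": policy, "time_after": time_after})
fit = smf.ols("cost ~ time + policy + time_after", data=df).fit()
print(fit.params)   # intercept, beta1 (trend), beta2 (level), beta3 (slope)
```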
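As a companion illustration, the grey relational ranking described under "Grey relation analysis" can be sketched as follows. The sequences are hypothetical cost components rather than the study data, the reference sequence stands in for total expense, and ρ = 0.5 is the conventional resolution coefficient.

```python
# Grey relational analysis sketch: rank how closely each cost component
# tracks the total expense. All series below are hypothetical.
import numpy as np

total = np.array([6.5, 6.3, 6.4, 3.5, 3.4, 3.3])   # reference sequence
components = {
    "self_financed": np.array([3.1, 3.0, 3.2, 1.6, 1.5, 1.5]),
    "consumables":   np.array([4.3, 4.2, 4.2, 1.1, 1.1, 1.0]),
    "treatment":     np.array([0.6, 0.6, 0.7, 0.6, 0.5, 0.6]),
}

rho = 0.5                                  # resolution coefficient
ref = total / total.mean()                 # mean-normalization (initialization)
deltas = {k: np.abs(ref - v / v.mean()) for k, v in components.items()}
d_min = min(d.min() for d in deltas.values())   # global minimum difference
d_max = max(d.max() for d in deltas.values())   # global maximum difference

# Grey relational coefficient at each point, averaged into a grade per series.
grades = {k: np.mean((d_min + rho * d_max) / (d + rho * d_max))
          for k, d in deltas.items()}
for name, grade in sorted(grades.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {grade:.3f}")          # higher grade = closer to total
```

A higher grade indicates that a component's trajectory more closely mirrors total expense, which is the basis for the component ranking reported in the Results.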
General information of TKA inpatients before and after NVBP

After data preprocessing, a total of 1,196 valid cases were included in this study, with 290 cases before policy implementation and 906 cases after implementation. The general information on TKA inpatients is detailed in . In terms of gender distribution, the proportion of females before and after policy implementation was 82.8 and 82.7%, respectively, higher than the proportion of males at 17.2 and 17.3%, respectively. TKA inpatients ranged in age from 36 to 89 years, with a mean age of 67.8 ± 7.452 years. The highest proportion of TKA inpatients were aged 60–80 years (80% before and 82.9% after policy implementation). The differences in gender, age group, and type of medical insurance among TKA inpatients before and after policy implementation were not statistically significant (p > 0.05). However, there was a statistically significant difference in days of hospitalization (p = 0.05). Before the policy, similar numbers of cases had hospital stays of less than 10 days and of 10 to 20 days, each accounting for less than 50%. After the policy, however, the proportion of TKA inpatients with a hospital stay of 10 to 20 days increased to 54.7%. Multiple linear regression indicated that insurance type significantly influenced days of hospitalization: patients covered by resident basic medical insurance (β = −1.060, p = 0.038) had shorter hospitalizations, while self-pay patients (β = 1.504, p = 0.010) experienced longer stays.

Total and itemized hospitalization expenses of TKA inpatients

Total and itemized hospitalization expenses and composition of TKA inpatients before and after NVBP

The results of the Mann–Whitney U test are presented in , indicating that most variables showed statistically significant differences before and after the implementation of the NVBP policy (p < 0.05). The total expenses for TKA inpatients (Z = −25.15, p < 0.001) showed a statistically significant difference after policy implementation, decreasing from 65,324.73 CNY per case to 34,465.57 CNY per case, a reduction of 30,859.16 CNY per case. The consumables cost for TKA inpatients (Z = −25.42, p < 0.001) also showed a significant difference, decreasing from 42,829.83 CNY per case to 11,137.09 CNY per case, a reduction of 31,692.74 CNY per case. The self-financed expenses (Z = −15.22, p < 0.001), miscellaneous service fees (Z = −2.46, p < 0.05), western medicine costs (Z = −13.99, p < 0.001), and total Chinese medicine costs (Z = −12.58, p < 0.001) decreased slightly but significantly after policy implementation. However, the diagnostic fees (Z = −2.97, p < 0.05) increased after policy implementation. The comparison of total expenses of TKA inpatients before and after policy implementation is shown in . Before policy implementation, the majority of TKA inpatients had total expenses in the range of 60,000 to 79,999 CNY, accounting for 193 cases (66.6%); the fewest had total expenses of 80,000 CNY or more, accounting for 19 cases (6.6%). After the implementation, the proportion of TKA inpatients with total expenses in the range of 20,000 to 39,999 CNY increased from 0 to 88.3%, while the proportions with total expenses of 40,000 to 59,999 CNY, 60,000 to 79,999 CNY, and 80,000 CNY or more all decreased.
As shown in , a grey relation analysis was conducted to rank the influencing components of total expenses. The results revealed that the top three influencing factors on the total expense were self-financed expenses, consumables costs, and treatment fees, with correlation coefficients of 0.809, 0.788, and 0.741, respectively. In contrast, rehabilitation fees and blood and blood products costs had the least impact on the total expense, with correlation coefficients of 0.680 each.

Consumables cost of TKA inpatients before and after NVBP

Basic information on the costs of consumables for TKA inpatients before and after policy implementation is shown in . Before policy implementation, the majority of TKA inpatients had consumables costs in the range of 40,000 to 59,999 CNY, accounting for 168 cases (57.9%); the fewest had consumables costs of 80,000 CNY or more (1 case, 0.3%). After the implementation of the policy, the proportion of TKA inpatients with consumables costs below 20,000 CNY increased from 0 to 97.6% (884 cases), while the proportions with consumables costs of 20,000 to 39,999 CNY, 40,000 to 59,999 CNY, 60,000 to 79,999 CNY, and 80,000 CNY or more all decreased.

The interrupted time series analysis results

ITS analysis results are presented in . Upon the implementation of the procurement policy (May 1, 2022), TKA inpatients exhibited immediate decreases in total expenses (β2 = −28,240.17, p < 0.001), consumables costs (β2 = −31,302.72, p < 0.001), and self-financed expenses (β2 = −13,674.56, p < 0.001). Conversely, there were increases in miscellaneous service fees (β2 = 440.45, p < 0.05), diagnostic fees (β2 = 746.0, p < 0.05), and rehabilitation fees (β2 = 207.36, p < 0.001), with statistically significant differences observed. The remaining indicators did not show statistically significant differences (all p-values ≥ 0.05). Over the post-implementation period (May 1, 2022, to December 26, 2023), TKA inpatients showed decreasing trends in total expenses (β3 = −106.95, p < 0.05), consumables costs (β3 = −65.05, p < 0.05), diagnostic fees (β3 = −22.44, p < 0.05), treatment costs (β3 = −28.01, p < 0.05), total Chinese medicine costs (β3 = −9.98, p < 0.05), and blood and blood products fees (β3 = −5.88, p < 0.05). Guided by the component ranking , the ITS analysis was also conducted on the total and the four most relevant itemized hospitalization expenses across subgroups based on length of hospitalization . For TKA inpatients of any length of stay, a reduction in total expenses, self-financed expenses, and consumables costs was observed following the implementation of the NVBP on May 1, 2022 (all p-values < 0.05). Over the post-implementation period (May 1, 2022, to December 26, 2023), treatment fees showed a slightly increasing trend. In contrast, diagnostic fees increased only for TKA inpatients with a hospital stay ≥10 days (β2 = 864.25, p = 0.025) but exhibited a slight decreasing trend thereafter (β3 = −35.206, p = 0.001). The changes in total expenses, consumables costs, and self-financed expenses among TKA inpatients based on the ITS results are illustrated in – . At the time of policy implementation (May 1, 2022), the total expenses exhibited a significant instantaneous drop, decreasing by 30,859.16 CNY per case.
Similarly, the consumables costs showed an instantaneous drop, decreasing by 31,692.74 CNY per case, and the self-financed expenses decreased by 15,165.1 CNY per case. Further details regarding the changing trends of other variables among TKA inpatients can be found in .
This study investigated the hospitalization expenses of patients undergoing TKA at a tertiary hospital in Guangzhou, China, from May 10, 2021, to December 26, 2023. The primary aim was to evaluate the direct and indirect impacts of the NVBP policy on the hospitalization costs of TKA patients. The empirical findings presented in this study are intended to inform policy refinement and enhance actual savings for patients. Understanding the pricing mechanisms of medical consumables under government regulation is crucial in this context. Overall, the short-term impact of the policy was significant: both total expenditure and the cost of consumables fell immediately after the policy was implemented, and these costs continued to fall thereafter. First, the ITS analysis revealed that the implementation of the NVBP policy had a significant impact on total expenses, resulting in an immediate decrease of 30,859.16 CNY per case. This demonstrates the direct effect of the NVBP in controlling overall costs. In the post-implementation phase (May 1, 2022, to December 26, 2023), a consistent downward trend in total expenses was observed, further indicating a reduction and effective control of the overall costs for TKA inpatients. The grey correlation analysis identified self-financed expenses, consumables costs, and treatment fees as the factors most closely associated with total expenses. Based on these correlations, it is recommended that healthcare professionals adhere strictly to guidelines and prioritize medications listed in the national essential drug list (which are covered by medical insurance) to help regulate self-financed expenses. Additionally, the grey correlation analysis showed that miscellaneous service fees and rehabilitation fees for medical staff constituted a smaller proportion of total costs. This distribution contrasts with the allocation of treatment and nursing fees for medical personnel in Taiwan and the United States, highlighting notable differences in cost structures. Second, the consumables cost experienced an immediate decrease at the time of policy implementation (May 1, 2022), with a reduction of 31,692.74 CNY, and a downward trend in consumables cost was also observed after the policy implementation (May 1, 2022, to December 26, 2023). This finding underscores the considerable influence of the NVBP program on artificial joints, aligning with current studies suggesting that NVBP can substantially reduce medication expenses and enhance medication benefits for patients. Previously, the pricing of high-value medical consumables in China was mainly determined by companies, which often resulted in inflated prices due to a lack of transparency in the market and non-standardized operations. In this study, the ITS analysis showed a significant decrease in consumables costs under the policy's influence. Additionally, the correlation analysis revealed that the consumables cost was the second most influential factor on TKA inpatients' total expense. The NVBP policy undoubtedly reduced consumables costs, thereby alleviating the overall burden on TKA inpatients. China can also learn from the experiences of other countries. Nevertheless, the overall performance of orthopedic surgeons may be influenced by a reduction in surgical costs, and the issue of subjective motivation among orthopedic surgeons requires further investigation.
In the United States, the implementation of surgeon scorecards has enabled cost comparisons among peers, effectively motivating surgeons to participate in reducing the cost of TKA. Third, the ITS analysis also revealed a decrease in self-financed expenses following the policy implementation. According to the National Bureau of Statistics of China, the per capita disposable income of Chinese residents was approximately 28,200 CNY in 2018. Under the NVBP, the median of self-financed expenses was reduced to 16,864.9 CNY, making the procedure more affordable for patients. This reduction in out-of-pocket costs significantly alleviates the financial burden on patients. Previous studies from various countries, including China, have indicated that TKA is a highly cost-effective procedure for patients. Our findings further demonstrate that the NVBP leads to greater reductions in TKA-related expenses, thereby increasing the opportunities for arthritis patients to enhance their quality of life. In addition, at the time of policy implementation (May 1, 2022), miscellaneous service fees, diagnostic fees, and rehabilitation fees showed an immediate but slight increase. In China, miscellaneous service fees and rehabilitation fees reflect the value of technical labor. The immediate increase in these two variables at the time of policy implementation may suggest that the value of medical professionals' work is being better recognized. The structure of total expenses became more reasonable, in line with China's policy direction for the high-quality development of public hospitals. However, as a non-technical labor cost, the diagnostic fee also exhibited an immediate upward trend in this study. The grey correlation analysis showed a close relationship between the diagnostic fee and total expense. Research on the centralized procurement of intraocular lenses also showed that the diagnostic fee accounted for a larger proportion of patients' hospitalization costs. In this study, diagnostic fees comprised several components, including pathological diagnosis, laboratory testing, and medical imaging. Diagnostic costs increased for inpatients with longer hospital stays, who usually need more extensive screenings and diagnostic evaluations during prolonged hospitalization, likely reflecting improved affordability after the NVBP and greater flexibility in managing osteoarthritis patients. As overall hospitalization costs decrease and treatments become more affordable due to policy changes, orthopedic surgeons gain more flexibility to develop comprehensive and careful treatment plans for osteoarthritis patients, allowing a more patient-centered approach to care. However, it is equally important to emphasize careful and individualized treatment plans for patients with shorter hospital stays, for whom no significant expense changes were observed in this study. For these patients, timely interventions and accurate assessments are crucial to ensure effective treatment and to prevent premature discharge or inadequate care planning from hindering recovery. By striking a balance between cost-effective management and personalized care, the healthcare system can improve both efficiency and patient outcomes. Notably, after the policy implementation (May 1, 2022, to December 26, 2023), there was a slight downward trend in diagnostic fees, treatment fees, total Chinese medicine costs, and blood and blood products costs for TKA inpatients.
Previous research has shown that medication costs and consumables costs, as two major components of hospital revenue, appear to have a compensatory relationship. Other policies in China restricting medication profitability produced comparable results: a reduction in the proportion of drug costs was accompanied by an increase in consumable expenses. In this study, the decrease in consumables costs was associated with a downward trend in total Chinese medicine costs. It was understood that hospital management adjusted the charging items to optimize the fee structure, which might be one reason for the reduction in various costs. Therefore, when formulating volume-based procurement policies for artificial joint consumables, government departments should take into account the adjustment of other cost structures and provide clear cost standards for other items. In practical terms, the NVBP faces the challenge that the interests of healthcare companies cannot be guaranteed. A compensation mechanism for healthcare institutions should be explored and implemented. Based on the findings, the volume-based procurement policy for artificial joints requires further measures to protect and recognize the interests and values of healthcare providers. China's national procurement guidelines emphasize the importance of supporting measures, primarily focusing on increasing the proportion of medical service revenue and optimizing compensation systems to reflect healthcare professionals' labor value. However, hospital administrators in this study reported that the overall reduction in surgical costs has notably impacted departmental performance metrics. Enhancing healthcare providers' motivation to participate in high-value medical consumable reforms remains an area requiring policy refinement and improvement.
This study has several limitations. First, it is a single-center study conducted at a tertiary hospital in Guangzhou, which may limit the generalizability of the findings to a broader population. The results may not fully reflect the nationwide implementation of volume-based procurement of artificial joints in China, and regional differences in economic conditions and healthcare needs should be considered when interpreting the findings. Second, while a single-group ITS analysis allows for the inclusion of time-related factors, certain confounding variables may not have been fully accounted for. The study only conducted limited stratified analyses of hospitalization costs. Future research using a multi-group ITS analysis and a broader dataset could provide a more comprehensive understanding of the intervention’s overall effects and variations across different patient groups. A more detailed examination of how individual characteristics influence outcomes would also be beneficial. Third, this study focused primarily on the affordability of hospitalization costs for patients. However, a more thorough exploration of health outcomes and the quality of medical services provided to TKA patients is necessary to fully assess the policy’s impact on patient well-being. Future studies should aim to examine the broader implications of the policy on healthcare quality, thereby providing a more holistic evaluation of its effects. In summary, future research should seek to improve generalizability across different regions, account for confounding factors, and investigate the wider effects on health outcomes and the quality of care to offer a more comprehensive assessment of the policy’s impact.
Given China’s large and aging population, the demand for total knee arthroplasty surgery is steadily increasing. This study examines both the direct and indirect impacts of the NVBP policy on the costs associated with TKA inpatient care, offering empirical evidence to inform government policy optimization. The findings suggest that the NVBP for artificial joints in Guangdong Province has largely met its intended goals. The government plays a crucial role in shaping the pricing and distribution of medical consumables. The study reveals significant reductions in total expenses, consumable costs, and self-financed expenses for TKA patients, which enhances the accessibility of healthcare services and supports patient recovery. Moreover, while labor-related costs for healthcare professionals have seen a modest increase, the quality of treatment and medical planning has also improved. Ongoing attention to fluctuations in various fees is necessary. To further enhance the efficacy of the volume-based procurement policy, the government should consider adjusting medical service prices and refining compensation mechanisms to better motivate healthcare providers.
Proceedings of the fifth international Molecular Pathological Epidemiology (MPE) meeting

Cancer epidemiology has a long history of success in establishing relationships between exposures and cancer development in population-based settings. The statistical associations initially identified through epidemiologic studies often rely on laboratory-based experimental studies to illustrate the underlying biological mechanisms. In the last few decades, however, the integration of biospecimens into epidemiologic studies and the advent of high-throughput genomic profiling technologies have enabled molecular epidemiologists to increasingly take a "driver seat" in conducting mechanistic studies at an ever-expanding scope and ever-refining biological depth. The boundaries across epidemiology, genetics, statistics, and clinical and translational research have come down, giving rise to molecular epidemiology as a multidisciplinary tool at the forefront of our exploration into the complexity of cancer etiology and outcomes. It was under this "melting pot" backdrop that the concept of molecular pathological epidemiology (MPE) was first proposed to reflect the focus on the heterogeneity of cancer pathology and genomics. There are two main types of cancer heterogeneity on which MPE research currently focuses. The first is etiological heterogeneity, i.e., disease subgroups arising through distinct causal pathways driven by external environmental factors and internal host genetic background. The inter-individual diversity in exposures, the biological processes that internalize and respond to such exposures, plus stochastic variations during cell division and proliferation, dictate that no two cancers arise following the exact same pathway. Different tumorigenic processes leave unique characteristic imprints on the cancer genome and pathology, which can be profiled in tumor tissues and subsequently grouped based on shared similarities and related to exposome data for etiological inference. The second type is prognostic heterogeneity, i.e., disease subgroups behaving distinctively after diagnosis in cancer progression or response to cancer therapy, which again is likely shaped by a multitude of factors and the interplay between tumor and host and between tumor cells and their local microenvironment, including infiltrating immune cells, the importance of which has been increasingly appreciated thanks to the recent successes of cancer immunotherapy. Our understanding of cancer heterogeneity is driven largely by advances in pathological and molecular profiling technologies, beginning with morphological evaluation and immunohistochemical staining, followed by high-density microarrays, and more recently next-generation sequencing (NGS) approaches, spatial imaging analysis, and others. The rapid emergence and evolution of new technologies open doors to numerous unprecedented research opportunities for cancer epidemiologists. The transdisciplinary nature of MPE research makes it a quintessential team science that benefits from a broad, vibrant, and engaging community consisting of investigators from a wide range of disciplines. The International MPE Meeting Series provides a dedicated platform for this community to convene and exchange ideas, to communicate discoveries and challenges, and to network.
The meeting series has grown from the inaugural local meeting of 10 investigators at the Harvard School of Public Health in April 2013 to more than 200 attendees from 16 countries at its fourth meeting in May/June 2018, held at the Dana-Farber Cancer Institute in Boston, MA, USA. Due to the COVID-19 pandemic, the Fifth International MPE Meeting originally scheduled for 2020 was postponed to May 2021, and the meeting was held virtually through a teleconference platform. The meeting continued its tradition of being open and free to the research community and attracted more than 490 registrants from 21 countries around the world. Herein, we share the proceedings from this two-day meeting with 21 presentations, including two keynote lectures. Moreover, three Meet-the-Experts sessions were also held virtually to provide opportunities for the meeting attendees to communicate directly with the speakers; these sessions were well attended, with live discussions. Consistent with our intent to provide an open forum for the broad research community to communicate and discuss a wide range of studies in which MPE principles may apply, the proceedings are organized into three broad themes, consisting of integrative MPE studies, novel cancer profiling technologies, and new statistical and data science approaches. A list of speakers and presentation titles is provided in Table . For clarity and unambiguous communication, we use HUGO Gene Nomenclature Committee-approved symbols for genes and gene products (proteins), along with common colloquial names in parentheses if appropriate, following the standardized nomenclature recommended by an expert panel. Dr. Peter Campbell opened the meeting with a presentation that summarized efforts aimed at integrating genetic and MPE approaches toward a better understanding of the connection between obesity and colorectal cancer risk. High body mass index (BMI) has been an established risk factor for colorectal cancer for more than 15 years; however, associations are often higher for tumors that occur in the colon than in the rectum, and associations are also higher in men than in women. This concept of etiologic heterogeneity led him and colleagues to investigate potential heterogeneity between BMI and colorectal cancer risk and prognosis in the Colorectal Cancer Family Registry (Colon-CFR) according to tumor microsatellite instability (MSI) status. In a 2010 publication, they showed that high BMI was associated with the more common non-MSI-high tumors, but not with MSI-high tumors. Given that the Colon-CFR is enriched for patients with Lynch syndrome, these results suggested that BMI is not a risk factor for MSI-high colorectal cancers due to Lynch syndrome but may not be generalizable to MSI-high tumors due to methylation of MLH1. Dr. Campbell then presented more recent, unpublished work from the large Genetics and Epidemiology of Colorectal Cancer Consortium (GECCO) of more than 10,000 cases, with findings that BMI was indeed associated with non-MSI-high tumors and with MSI-high tumors due to methylation, but not with tumors consistent with Lynch syndrome. He also presented additional unpublished work from the GECCO consortium regarding high BMI and associations with specific mutations in tumor tissues, including three examples in which BMI was more convincingly associated with tumors with or without specific mutations in genes or pathways known to regulate energy balance or metabolism. In the final part of his talk, Dr.
Campbell presented results from a recent publication on genome-wide association study (GWAS) × BMI interactions, in which BMI was more strongly associated with colorectal cancer risk among women with certain variants at a common SMAD7 locus. Since SMAD7 is known to influence other bowel diseases, the authors would like to use an MPE approach in the future to identify any potential gene × environment × tumor molecular phenotype associations in this context. Such an approach will necessitate considerable resources and broad collaborations to amass the number of cases and the amount of data necessary for this endeavor. Dr. Shuji Ogino’s lecture provided another high-level overview of the status of MPE research, highlighting the integration of immunology into MPE as a major development in this field. Cancer immunology has been rapidly advancing, and immunoprevention and immunotherapy strategies have a great potential to reduce the cancer burden. Neoplasms and cancers represent heterogeneous pathological processes due to the interactive influences of the exposome (including the microbiome), the immune system, and neoplastic cells. To address this, investigators can examine the influences of exposures on tumor-immune interactions using the MPE research framework, which can link the exposures with tumor pathological signatures. This is the so-called “immunology-MPE” approach. Using archival tumor tissue of over 1,500 colorectal cancers in the Nurses’ Health Study and the Health Professionals Follow-up Study, Dr. Ogino and colleagues conducted several proof-of-principle studies to provide evidence for influences of smoking, aspirin, vitamin D, inflammatory diet, and marine omega-3 fatty acids on tumor immunity and cancer incidence. Recently, Dr. Ogino and collaborators further developed and validated multiplex immunofluorescence assays for in-depth phenotyping of immune cells, as well as microbial assays for putative cancer pathogens. The immunology-MPE approach is also expected to advance research on early-onset cancers. The new research paradigms to integrate microbial and immune assays on archival tumor tissue in large-scale population studies can provide possible paths for precision prevention and public health. Dr. Shelley Tworoger spoke about her group’s efforts to evaluate the risk factors that predict tumor immunological response in ovarian cancer, using tumor tissue microarrays (TMA), which are well suited for large epidemiologic studies. Dr. Tworoger discussed several operational issues that researchers face. First, she spoke about missing or folded tissue in the TMA cores, which affects accounting for the total evaluable area and thus the density of immune markers. Special consideration is needed during sample preparation and image analysis (e.g., using semi-automated segmentation) to minimize this problem, and post hoc analyses are necessary to adjust for the evaluable area of the sample. Additionally, immunological markers can often be sparse or only observed in a small proportion of tumors; as such, many markers do not follow a normal distribution. This leads to statistical challenges that can be mitigated by using categories, present/absent values, and specialized models (e.g., beta-binomial models). Further, correlations between TMA cores within the same person are lower for immune markers that are rarer, possibly necessitating the use of full slides for some cell types.
Finally, batch effects across TMAs may be present, which can be attributed to differences in the antigenicity of the samples, which is often lower in older samples. Such batch effects may require imaging reanalysis and statistical methods for batch correction. When thinking about the relationship between host exposures and tumor-immune response, Dr. Tworoger discussed the importance of evaluating factors that are not necessarily associated with cancer risk, as such factors could influence the tumor-immune response independently of tumorigenesis. For example, in preliminary data, her team saw no association between early life abuse and ovarian cancer risk, but women who experienced early life abuse had suggestively decreased helper T cells and B cells in their ovarian tumors. Overall, there is a need to understand exposures that affect the tumor-immune microenvironment to increase our understanding of cancer development and progression. To do this, novel epidemiological methods need to be developed to evaluate the tumor-immune milieu, especially considering sample preparation, batch variation, and marker distributions. Dr. Mustapha Abubakar’s presentation demonstrated how MPE strategies could be adapted to identify novel tissue biomarkers for predicting the risk of future invasive breast cancer development among women with pre-cancerous lesions. By integrating computational pathology and epidemiology, Dr. Abubakar’s group conducted a case–control study nested in a large cohort of women who were biopsied for benign breast disease (BBD) at Kaiser Permanente Northwest (1971–2006) and followed through mid-2015. Patients who developed incident invasive breast cancer at least one year after BBD diagnosis and those who did not were matched on BBD diagnosis age and plan membership duration. By applying supervised machine learning algorithms to digitized H&E-stained slides, they generated quantitative tissue composition metrics, including epithelium, stroma, and adipose tissue, and determined their association with future invasive breast cancer diagnosis, overall and by BBD histological classification. They found that increasing epithelial area on BBD biopsy was associated with increased breast cancer risk, irrespective of BBD histological classification. Conversely, increasing stromal area was associated with decreased risk in non-proliferative disease (NPD) but with increased risk in proliferative disease (PD), supporting a context-dependent role of the stroma in either preventing or promoting tumor formation. A metric of the proportion of fibroglandular tissue that is epithelium, relative to stroma, i.e., the epithelium-to-stroma proportion (ESP), was independently and strongly predictive of increased breast cancer risk. In combination with mammographic breast density (MBD), women with high ESP and high MBD had substantially higher breast cancer risk than those with low ESP and low MBD. The findings were particularly striking for women with NPD (comprising approximately 70% of all BBD patients), for whom relevant predictive biomarkers of subsequent breast cancer development are lacking. These findings could thus have important implications for risk stratification and clinical management of women with NPD upon breast biopsy. Epigenetic alterations, including DNA methylation, are widespread in tumor genomes.
Given the dynamics and versatility of DNA methylation regulation in response to changes in the internal and external environment, it may provide a window into the biological effects of etiological factors on the cancer genome. Dr. Christine Ambrosone presented the newest work from her group, following up on findings presented at the MPE meeting in 2018, which showed differential DNA methylation in breast tumors from Black and White women, particularly in ESR1 (estrogen receptor 1, ER)-negative cancer in Black women. Because one of the top differentially methylated loci, FOXA1, was highly methylated in women who had children but did not breastfeed, and is important to the differentiation of luminal cell progenitors, Dr. Ambrosone’s group sought to verify these findings using a number of approaches. Immunohistochemical staining of 1,329 breast tumors for FOXA1 protein showed that FOXA1 expression was lower in ESR1 (ER)-negative tumors and was lowest in parous women, with the relationship attenuated among women who breastfed. In another study based on women without breast cancer from the Komen Tissue Bank, breast tissues from 52 Black women who did not have children, 53 who were parous and did not breastfeed, and 51 who breastfed their children were subjected to a targeted bisulfite sequencing approach using SureSelect Methyl-Seq. Results showed trends similar to those in the breast tumors, with methylation lowest in nulliparous women and higher in parous women who did not breastfeed. To further address how methylation or down-regulation of FOXA1 may affect progenitor cell pools in the breast, the team used a Foxa1-knockout mouse and created strains to obtain experimental genotypes, dissecting the mammary glands and using flow cytometry to separate epithelial cell populations. Depletion of Foxa1 led to dramatic changes in the proportions of mammary gland epithelial cell populations, with abnormal accumulation of differentiation-arrested luminal progenitors and marked decreases in the number of ESR1 (ER)-positive cells. Mouse models were also developed to mimic the human reproductive scenarios. In virgin mice, luminal progenitors comprised 40% of the cell composition, which increased to 50% in parous mice. This increase in luminal progenitors was reduced when mice were allowed to nurse their pups, consistent with the hypothesis that parity results in increases in luminal progenitors, which are lowered with breastfeeding. Together, these findings could provide an epigenetic mechanism for the higher prevalence of ESR1 (ER)-negative breast cancer in Black women. Cancer health disparities have been well described in the literature; they may have multi-level causes, with both biological and non-biological factors, as well as interactions between the two, at play. Dr. Rebbeck posed the question “is there a biological basis for cancer disparities?”. He noted that Self-Identified Race and Ethnicity (SIRE) is a social, not biological, construct. SIRE is a manifestation of numerous underlying complex correlates, including social factors such as culture, behavior, and environment. These factors are driven by historic systemic racism and other socio-political features. Ancestry is also correlated with SIRE and includes genomic architecture and phenotypes determined by continental origin. Genomic architecture overall and the distribution of disease-associated genetic variation are also known to vary across SIRE and ancestral groups.
In African Americans, these associations are complex and are driven by distant evolutionary and population genetic forces as well as more recent admixture due to the transatlantic slave trade. Dr. Rebbeck presented data suggesting that the genomic architecture of cancer susceptibility and molecular signatures in tumors vary by SIRE. While this information has important implications for our understanding of the etiology of disease, risk assessment, and applications of precision medicine, these data do not imply that SIRE is a biological construct. Instead, diversity in etiology, prevention, and treatment by SIRE can both inform our understanding of cancer etiology and lead to improved application of genomics in clinical and public health practice. Three presentations at the meeting provided a timely update on the burgeoning field of mutational signatures, which are a finite set of recurring patterns of nucleotide substitution within trinucleotide contexts that can be deciphered mathematically from tumor mutation data. Some of these signatures have been linked to known cancer risk factors and endogenous biological processes, thus providing a “forensic” tool to detect cancer causes not only at an aggregated population level but potentially at an individual level. Dr. Paul Brennan presented the findings on esophageal squamous cell carcinoma (ESCC) from the Mutographs project. ESCC shows remarkable geographic variation, with high incidence in regions of Asia, Africa, and South America, yet the variation cannot be fully explained by known lifestyle and environmental risk factors. A total of 552 ESCC cases from eight countries with varying incidence rates were whole-genome sequenced, which revealed similar mutational profiles across all countries studied. Eight single base substitution (SBS) mutational signatures dominated within each country, led by the APOBEC-associated mutational signatures SBS2 and SBS13, found in 88% and 91% of cases, respectively. Several etiologic associations were identified, including SBS3 with deleterious BRCA1/BRCA2 variants, SBS16 with alcohol consumption, and a novel T > C signature (SBS288J) with long-term opium use, all of which had a modest impact on mutation burden; yet no association was found between any of the mutational signatures and other major risk factors for ESCC, including hot drinks, indoor air pollution, and poor diet. As a result, no mutational signature linked with an exogenous exposure could explain the geographic variation in ESCC incidence. These findings suggest that not all carcinogens generate distinct mutational signatures or increase mutation burden, whereas most mutations arise from tissue-specific endogenous processes. Dr. Maria Teresa Landi spoke about her group’s efforts analyzing the pilot dataset in the Sherlock-Lung study, aiming to elucidate the mutational landscape and etiology of lung cancer in never smokers (LCINS). Lung cancer is the leading cause of cancer death, with LCINS accounting for 10–25% of the disease burden. Despite its prevalence, the genomic landscape of LCINS is not well characterized. Analyses of high-coverage whole-genome sequencing of 232 LCINS cases revealed three molecular subtypes defined by distinct copy number aberrations. While the dominant subtype (‘piano’) is characterized by quiet copy number profiles, the other subtypes are associated with specific arm-level amplifications and EGFR mutations (‘mezzo-forte’), and whole-genome doubling (‘forte’).
Piano tumors are characterized by UBA1 mutations, germline AR variants, and stem cell-like features, including low mutational burden, depleted TP53 alterations, high intra-tumor heterogeneity, long telomeres, and slow growth, with cancer driver genes acquired several years prior to tumor diagnosis. In contrast, driver mutations in mezzo-forte and forte tumors are generally late clonal events acquired close to tumor diagnosis, thus potentially facilitating target identification with a single biopsy. Future studies utilizing single-cell RNA-sequencing and genome-wide DNA methylation will be required to verify the stem cell-like state of piano tumors and identify cancer-initiating events in those with no apparent drivers. Strong tobacco smoking signatures were not detected in LCINS, even in cases with exposure to second-hand tobacco smoke. Patients with the piano subtype overall displayed better survival, particularly those with carcinoids or SETD2 mutations. While mutations in genes within the receptor tyrosine kinase (RTK)-RAS pathway have various impacts on survival, mutations in TP53, CHEK2, EGFR, or loss of 15q or 22q are associated with increased mortality. These genomic alterations in forte and mezzo-forte subtypes, and the stem cell-like features of piano tumors, create avenues for personalized therapeutic strategies in LCINS. The keynote lecture delivered by Dr. Stephen Chanock also delved into the newest effort to study tumor somatic changes in relation to exogenous exposures, in this case radioactive contaminants after the Chernobyl accident and incident papillary thyroid carcinoma (PTC). The radioactive fallout as a potential carcinogenic exposure led to increased PTC incidence in contaminated regions. The established etiological causality herein provides a rare opportunity to investigate the impact of the environmental exposure on PTC genomics. Tumor samples from a total of 440 PTC patients from Ukraine, including 359 exposed to radioactive 131I during childhood or in utero and 81 unexposed children, were profiled using multi-omic platforms. With increased estimated radiation dose, there was an increased number of small deletions and simple structural variants, which are hallmarks of nonhomologous end-joining repair, suggesting DNA double-strand breaks as early events in radiation-caused PTC. Moreover, an estimated 94% of PTCs were driven by alterations in the mitogen-activated protein kinase (MAPK) pathway, with a radiation dose-dependent enrichment of fusion versus mutation drivers. The effects on small deletions, simple structural variants, and fusions were most prominent among patients with radiation exposure at a younger age. In mutational signature analysis, 7 COSMIC SBS signatures and 6 insertion/deletion signatures were identified, a majority of which were attributable to two clock-like signatures, yet none were correlated with environmental radiation exposure. Analyses of de novo mutational signatures revealed no novel signatures specific to radiation, either. The above three studies represent to date some of the most comprehensive and sophisticated endeavors to identify etiological causes of tumor somatic changes. The numerous discoveries that emerged from these studies are remarkable. The lack of a clear explanation by mutational signatures for either the geographic variation in ESCC incidence or the elevated PTC incidence following environmental radiation exposure is similarly intriguing.
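For readers less familiar with how signatures are “deciphered mathematically,” the core operation in many signature-extraction tools is a non-negative matrix factorization (NMF) of a mutation catalog: a matrix of mutation counts (96 trinucleotide-context channels × samples) is decomposed into signature profiles and per-sample activities. The following minimal sketch illustrates the idea on simulated data; the matrix sizes, signature count, and data are hypothetical placeholders, not the Mutographs, Sherlock-Lung, or Chernobyl pipelines.

```python
# Minimal sketch of mutational-signature extraction via non-negative matrix
# factorization (NMF). All data below are simulated placeholders.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
n_channels, n_samples, k = 96, 200, 3   # 96 trinucleotide contexts; k signatures

# Simulate a catalog: each tumor's 96-channel mutation counts are a mixture
# of k underlying signature profiles with sample-specific activities.
true_sigs = rng.dirichlet(np.ones(n_channels), size=k).T       # (96, k); columns sum to 1
activities = rng.gamma(shape=2.0, scale=50.0, size=(k, n_samples))
catalog = rng.poisson(true_sigs @ activities)                  # (96, n_samples) counts

# Factorize catalog ~= W @ H under non-negativity: W holds signature profiles,
# H holds per-sample signature activities.
model = NMF(n_components=k, init="nndsvda", max_iter=2000, random_state=0)
W = model.fit_transform(catalog.astype(float))                 # (96, k)
H = model.components_                                          # (k, n_samples)

scale = W.sum(axis=0)
signatures = W / scale           # each column: a probability distribution over 96 channels
exposures = H * scale[:, None]   # mutations attributed to each signature per sample
print(signatures.shape, exposures.shape)
```

In practice, signature tools must additionally choose the number of signatures, assess the stability of the solution across resampled runs, and match de novo signatures against the COSMIC catalog, as in the studies summarized above.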
A larger sample size of tumor genomic data with accurate and in-depth epidemiological annotation of the exposome may be necessary in future studies. Modern pathological and genomic technologies have fundamentally advanced our understanding of cancer. No longer considered a single homogeneous disease entity, cancers manifest intra- and inter-tumor heterogeneities in almost every way they are dissected, whether by a scalpel, a microscope, or a DNA sequencer. When aggregated at a population level in an epidemiological setting, these heterogeneities can be grouped based on shared features that provide a potential new understanding of cancer etiology and prognosis. The pace of the development and improvement of new technologies is impressive. As a result, we now have an unprecedented variety of tumor profiling tools at our disposal, and even newer ones are emerging. At this meeting, several speakers shared some of the most recent developments in technologies that may be adapted toward future MPE studies. In a keynote lecture, Dr. Todd Golub presented his group’s work on forging a path toward cancer precision medicine. He discussed the development of the Cancer Dependency Map (DepMap), a large-scale effort of the Broad Institute to systematically perturb human cancer models genetically and pharmacologically, thereby defining the vulnerabilities of each cell line and predictive biomarkers of such vulnerabilities. Over 800 cancer cell lines have been comprehensively characterized at the DNA, RNA, protein, and metabolite levels. The cell lines were also subjected to genome-wide CRISPR/Cas9 loss-of-function screening, thereby identifying the genetic dependencies of each model. Dr. Golub discussed the follow-up of two DepMap findings, including the dependency of ovarian cancer cells on the phosphate exporter XPR1, and the sensitivity of cells over-expressing the multi-drug resistance gene MDR to the compound tepoxalin, whose mechanism of cancer cell killing remains unknown. Moreover, Dr. Golub described his group’s development of the PRISM barcoding method, whereby each cell line is molecularly barcoded with a unique 24-nucleotide sequence, thereby allowing cell lines to be pooled together. Barcode abundance is then measured, either by sequencing or by hybridization of Luminex beads coupled to anti-barcode tags, before and after small molecule treatment. The PRISM method was used to characterize the response of more than 500 cell lines to each of more than 4,000 drugs from the Broad’s drug repurposing library. Lastly, Dr. Golub described his group’s efforts to extend the PRISM method to the in vivo setting, where the metastatic potential of 488 DepMap cell lines was characterized in immunodeficient mice, thereby creating a Metastasis Map (MetMap) with the potential to reveal, for the first time at scale, tumor-microenvironment interactions. Dr. Guillermo Tearney discussed his group’s most recent efforts developing novel technologies for cellular tissue phenotyping. Living cells exhibit active intracellular molecular motion that reflects their functional states. Traditional microscopy techniques that solely capture high-resolution static images of cells miss the opportunity to capture the wealth of information provided by complex intracellular activity. Dr.
Tearney’s group recently developed dynamic micro-optical coherence tomography (DμOCT), an extension of μOCT that achieves near-isotropic sub-cellular resolution in all three dimensions (2 µm lateral × 1 µm axial) for assessing the metabolism of cells in cross-sectional and 3D tissues. DμOCT substantially enhanced the contrast of cells and organelles while revealing stratified, depth-dependent dynamics in the epithelial layers by acquiring a time series of µOCT images and conducting power frequency analysis of the temporal fluctuations that arise from intracellular motion on a pixel-per-pixel basis. His group has expanded the application of DμOCT to encompass imaging of human skin in vivo, evaluating responses to drugs delivered via an implantable microdevice, and determining the dose response of melanoma spheroids to anti-cancer drugs. The results demonstrated the potential utility of DμOCT for cellular phenotyping across a wide range of tissue types and for diverse bioscience and biomedical applications. Future work will focus on validating DμOCT for discriminating disease, cell activation state, and response to therapy, as well as developing technology for conducting DμOCT in vivo inside the human body. In the last several years, the rapid uptake of cancer immunotherapy in the clinical management of many cancers highlights the importance of continuous research efforts to deepen our understanding of tumor-immune interactions and heterogeneities in the tumor microenvironment. Dr. Scott Rodig presented his group’s findings using classical Hodgkin lymphoma as a model system to study immune evasion in cancer. Classical Hodgkin lymphoma is a malignancy affecting mostly young adults and the elderly. The disease-defining feature of classical Hodgkin lymphoma is the presence of Hodgkin Reed-Sternberg (HRS) cells, which have a very high tumor mutation burden and are highly immunogenic. However, these cells reside in a specialized immunosuppressive microenvironmental niche, characterized by high expression of CD274 (PD-L1)/PDCD1LG2 (PD-L2) to suppress T-cell activation and loss of MHC class I/B2M and/or MHC class II to suppress antigen presentation. Using multiplex immunofluorescence and digital image analysis, Dr. Rodig’s group found abundant CD274 (PD-L1)-positive tumor-associated macrophages (TAMs) that colocalized with CD274 (PD-L1)-positive HRS cells in the tumor microenvironment. Further, CD274 (PD-L1)-positive TAMs were more likely to be in contact with T cells, and CD274 (PD-L1)-positive HRS cells were more likely to be in contact with CD4 + T cells, a subset of which were positive for PDCD1 (PD-1). In another study from Dr. Rodig’s group using multiplex immunofluorescence, enriched CTLA4-positive T cells in contact with HRS cells outnumbered PDCD1 (PD-1)-positive and LAG3-positive T cells. Moreover, in classical Hodgkin lymphomas that recurred despite therapy with PDCD1 (PD-1) blockade, CTLA4-positive cells were found to be present and to focally contact HRS cells, suggesting that patients refractory to PDCD1 (PD-1) blockade might benefit from CTLA4 blockade. Similar findings were also made in T-cell/histiocyte-rich large B-cell lymphoma, an aggressive, rare lymphoma composed of malignant B cells within a robust but ineffective immune cell infiltrate.
Unbiased clustering of spatially resolved immune signatures revealed increased CD274 (PD-L1)-expressing macrophages and PDCD1 (PD-1)+ T cells in tumor-immune “neighborhoods” in T-cell/histiocyte-rich large B-cell lymphoma, which could be used to distinguish it from related subtypes of B-cell lymphoma. Lastly, Dr. Rodig also described a new workflow based on multiplex ion beam imaging (MIBI) for assessing the intact tumor microenvironment in diffuse large B-cell lymphoma, which has recapitulated their prior work and knowledge in diffuse large B-cell lymphoma not otherwise specified and T-cell/histiocyte-rich large B-cell lymphoma. Dr. Faisal Mahmood’s lecture focused on novel deep learning-based computational pathology methods his group developed and their applications in clinical research. Clustering-constrained-attention multiple-instance learning (CLAM) is a data-efficient, weakly supervised computational pathology method for whole-slide images (WSIs) that was developed to solve challenges often seen in deep-learning methods for pathology image analysis, including the requirement for extensive annotation data from large datasets of WSIs and poor domain adaptation and interpretability. CLAM uses attention-based learning to identify subregions of high diagnostic value to accurately classify whole slides, and instance-level clustering over the identified representative regions to constrain and refine the feature space. In several applications of CLAM, Dr. Mahmood demonstrated superior performance for the subtyping of renal cell carcinoma and non-small-cell lung cancer, as well as for the detection of lymph node metastasis in breast cancer, in comparison to standard weakly supervised classification algorithms. CLAM was readily adaptable to independent test cohorts, varying tissue content, smartphone microscopy, and standalone 3D-printed microscopy. To predict the origins of cancers of unknown primary, Dr. Mahmood’s group developed another computational pathology algorithm, Tumor Origin Assessment via Deep Learning (TOAD), using routinely acquired histology slides. The model was first trained with whole-slide images of tumors with known primary origins to simultaneously identify the tumor as primary versus metastatic and to predict its origin. In subsequent testing with tumors of known primary origins, TOAD achieved top-1 and top-3 accuracies as high as 83% and 96%, respectively. In another test with cancer cases of unknown origins for which a differential diagnosis was assigned, predictions from the algorithm resulted in concordance for 61% of cases and a top-3 agreement of 82%. In unpublished work from Dr. Mahmood’s group, the algorithm was also successfully applied to assess endomyocardial biopsies, achieving an accuracy comparable to that of human experts. These artificial intelligence (AI)-based computational pathology tools have the potential to be used in conjunction with, or in lieu of, ancillary tests for more accurate and efficient diagnosis. In the last decade, there has been increasing research attention to the concept of liquid biopsy, which uses non-invasive collection of various types of body fluids, usually peripheral blood, for detecting and profiling certain types of cancer, selecting treatment, and monitoring treatment response. Liquid biopsy has the potential to inform on tumor development and progression through the detection of a multitude of cell-derived materials released from tumor cells, the tumor microenvironment, and the systemic response to the tumor.
Some liquid biopsy-based tests have already been adopted clinically. At this meeting, several speakers discussed findings of research on liquid biopsy from their groups. Dr. Curtis Harris lectured on cancer-derived liquid biopsy metabolomics, which provides a non-invasive means of cancer screening. Correlation of the identified metabolites with specific cancers creates biomarker profiles that can be utilized for the diagnostic and prognostic evaluation of many types of human cancer. Through metabolomic analyses, creatine riboside has been identified and shown to be a companion-diagnostic biomarker in multiple types of human cancer, including lung, liver, pancreas, breast, and brain cancers. In lung cancer, creatine riboside was increased in tumor tissues, and urine levels were highly positively correlated with the levels of the metabolite in tumors and in blood. When paired with other identified urinary metabolite biomarkers, such as N-acetylneuraminic acid, creatine riboside demonstrated improved diagnostic capability and prognostic reliability in lung cancer. These foundational studies have validated the use of urinary metabolite screening and have led to further investigation into liquid biopsy-based biomarkers in association with human cancer. Dr. Viktor Adalsteinsson spoke about his group’s efforts to enhance the sensitivity of liquid biopsies to detect minimal residual disease (MRD) after cancer therapy. Firstly, he showed that MRD could be detected after curative-intent treatment for breast cancer with up to 39 months of lead time to clinical recurrence by tracking up to hundreds of patient-specific tumor mutations in cell-free DNA (cfDNA). Then, he described three new technologies, including Duplex-Repair, MAESTRO, and CODEC, to maximize the accuracy and efficiency of mutation tracking in cfDNA and improve the detection of MRD. Duplex-Repair addresses the challenge that existing sequencing library preparation methods may copy base damage errors from one strand of a DNA duplex to both strands, rendering them indistinguishable from true mutations on both strands. MAESTRO overcomes the high cost of rare mutation detection by converting low-abundance mutations into high-abundance mutations prior to sequencing. CODEC converts existing NGS instruments into massively parallel ‘single duplex’ sequencers that can read both strands of each DNA duplex at 100-fold lower cost. Lastly, he showed that by tracking one thousand individualized mutations, it is possible to resolve one part-per-million tumor DNA in cfDNA, which is up to 100-fold more sensitive than prior liquid biopsy tests. His team is now working to put these technologies together and apply them to larger clinical studies to determine whether enhanced analytical sensitivity translates into better detection of minimal residual disease, longer lead time to recurrence, more precise therapy, and improved outcomes for patients. Dr. Samir Hanash’s presentation focused on the use of liquid biopsy to study the dynamic changes associated with the development and progression of pancreatic cancer by implementing a mouse-to-human approach. Genetically engineered mouse models complement the use of human biospecimens, as they overcome the substantial heterogeneity of human subjects that is unrelated to disease and the potential biases in the collection of human samples.
Moreover, mouse models allow sampling of mice at defined stages of tumor development and identification of markers linked to pathways for tumor initiation and progression that are turned on in the genetically engineered mouse models. This work led to the identification of circulating proteins that are released at the earliest PanIN stages of pancreatic cancer development. Early on, his group identified 51 potential markers derived from mouse model studies, mass spectrometry studies of early-stage pancreatic cancer plasmas, and the literature, which were further screened to identify the most informative markers that are upregulated at early stages of pancreatic ductal adenocarcinoma (PDAC). Subsequent validation studies focused on three markers, consisting of CA19-9, LRG1, and TIMP1, to determine their lead-time trajectory for PDAC early detection using prospectively collected samples from the NCI Prostate, Lung, Colorectal, and Ovarian (PLCO) Cancer Screening Trial cohort. Increases in marker levels compared to baseline were observed starting two years before diagnosis, with a steep rise closer to diagnosis, pointing to the merit of monitoring biomarker levels in subjects at risk to detect PDAC at the earliest stages. Dr. Karl Kelsey presented novel data defining the DNA methylation profile of lymphocyte memory, noting that this yields an enhanced library for the deconvolution of peripheral blood. Epigenetic mechanisms, including DNA methylation, are critical drivers of immune cell lineage differentiation and activation. His group scanned genome-wide differences in DNA methylation among CD4 and CD8 naïve and memory cell states and combined these data with similar data on naïve and memory B-cell states. Overall, their findings were consistent with the literature describing the DNA methylome as a major driver of individual central and effector T-cell memory states, as well as of memory B cells. Dr. Kelsey observed unusually large differences in DNA methylation in thousands of CpG sites in hundreds of genes associated with the development of memory in each lineage. The data similarly describe considerable overlap in genes with altered DNA methylation in the T-cell lineage, with primarily a loss of DNA methylation in over 125,000 CpGs significantly associated with the generation of central memory in both CD4 and CD8 cells. Furthermore, their analyses revealed specific CpG dinucleotides in both CD4 and CD8 cells whose methylation pattern is consistent with the circular model of memory generation. As evidence of common pathways in the generation of immune memory, they highlighted 22 gene loci, including several within the promoter region of the AIM2 gene, with dramatically altered DNA methylation in all three memory lineages. The description of the immune memory profile also allowed for the enhancement of reference-based deconvolution of blood DNA methylation to include 12 leukocyte subtypes (neutrophils, eosinophils, basophils, monocytes, naïve and memory B cells, CD4 + and CD8 + naïve and memory cells, natural killer cells, and T regulatory cells). Including derived variables, this enhanced method provided up to 56 immune profile variables. The IDOL (IDentifying Optimal Libraries) algorithm was used to identify libraries for the deconvolution of DNA methylation data for both current and retrospective platforms. The accuracy of deconvolution estimates obtained using these enhanced libraries was validated using artificial mixtures and whole-blood DNA methylation samples with known cellular composition from flow cytometry.
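The computational core of reference-based deconvolution is a constrained regression: a blood sample’s methylation profile is modeled as a mixture of reference cell-type profiles, and the mixing proportions are estimated. The sketch below illustrates this on simulated data; the reference matrix, CpG count, and noise model are hypothetical placeholders rather than the published IDOL libraries.

```python
# Minimal sketch of reference-based cell-type deconvolution of blood DNA
# methylation (the constrained-projection idea behind reference-based methods).
# Reference profiles and the mixture are simulated placeholders.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)
n_cpgs, n_celltypes = 300, 6   # e.g., a selected CpG library and 6 leukocyte types

# Reference matrix R: mean methylation (beta values) of each cell type at each CpG.
R = rng.uniform(0.0, 1.0, size=(n_cpgs, n_celltypes))

# A whole-blood profile is (approximately) a convex mixture of cell-type profiles.
true_props = rng.dirichlet(np.ones(n_celltypes))
mixture = R @ true_props + rng.normal(0.0, 0.01, size=n_cpgs)  # plus measurement noise

# Non-negative least squares, then renormalization to proportions summing to one
# (a simple stand-in for the fully constrained quadratic program).
coef, _ = nnls(R, mixture)
est_props = coef / coef.sum()

print(np.round(true_props, 3))
print(np.round(est_props, 3))
```

Library-selection procedures such as IDOL improve this step by choosing CpGs that maximally discriminate the cell types, which is what makes the 12-subtype resolution described above feasible.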
This pioneering work enabled a more detailed understanding of lymphocyte memory, as well as an enhanced representation of immune-cell profiles in blood using only DNA, and facilitates a standardized, thorough investigation of the immune system in human health and disease. One of the transformative changes in MPE research is the integration of multi-level, high-dimensional molecular and pathological data into the framework of cancer epidemiology, which requires a full embrace of biostatistics, bioinformatics, and data sciences. Several speakers at this meeting shared novel methodological and resource development in this space. Dr. Nikolaus Schultz discussed his group’s efforts to develop methods and resources for the interpretation of genomic variants in cancer. With prospective clinical sequencing of tumors emerging as a mainstay in cancer care, there is an urgent need for clinical support tools that identify the clinical implications associated with specific mutation events. To this end, his group has developed several tools for the interpretation and visualization of cancer variants, enabling researchers and clinicians to make discoveries and treatment decisions: (1) Cancer Hotspots is a method and resource that identifies recurrent mutations in cancer genes resulting in amino acid substitutions. These variants, so-called hotspots, are more likely to be drivers in cancer. (2) OncoKB is a precision oncology knowledge base that annotates the biologic and oncogenic effects as well as the prognostic and predictive significance of somatic molecular alterations. Potential treatment implications are stratified by the level of evidence that a specific molecular alteration is predictive of drug response. (3) The cBioPortal for Cancer Genomics is a web-based analysis tool for the visualization and analysis of cancer variants. Through its intuitive interface, it makes complex cancer genomics data easily accessible to researchers and clinicians without bioinformatics experience. It integrates information from Cancer Hotspots and OncoKB to enable the identification of potential driver mutations and therapeutic options. These resources are used routinely at Memorial Sloan Kettering Cancer Center in clinical sequencing and by countless cancer genomics researchers and clinicians around the globe. Dr. Jonas Almeida described distributed computational systems designed for data integration, from digital to molecular pathology, by having the code, rather than the data, do the traveling between sensitive, user-governed subsystems. Primarily, these integrative designs were developed to address the logistics of reusable analytical workflows satisfying the principles of FAIR stewardship of scientific data. Just as importantly, however, they do so with the data remaining under the control, and compliance, of the various stakeholders. The integration of molecular pathology data with whole-slide images was illustrated with hands-on demonstrations of AI tools applied to TCGA data at https://mathbiol.github.io/tcgatil , and of segmentation by gene mutation. Finally, this approach was shown to be applicable to a variety of data integration systems developed as Data Commons, as illustrated by an application to real-time tracking of mortality by COVID-19 at https://episphere.github.io/mortalitytracker .
Dr. Almeida further argued that data science infrastructure is becoming available to many research organizations as part of cloud computing platforms, such as those made available through the NIH Science and Technology Research Infrastructure for Discovery, Experimentation, and Sustainability (STRIDES) Initiative, promising far more scalable, governable, and user-friendly approaches to the development of integrative “AI-first” pathology data commons. Dr. John Quackenbush discussed the importance of studying gene regulatory networks in addition to analyzing gene expression, and described his group’s development and application of a suite of systems biology tools to identify drivers of disease and therapeutic targets. Although differential expression and co-expression analyses are often used to identify genes associated with disease states, Dr. Quackenbush argued that these do not explain what processes drive the observed expression differences and the disease itself, factors that might be missed in analyses focusing on single genes. Instead, by identifying gene regulatory networks linking transcription factors (or other regulators) to their target genes, one can find key regulators that are central to the diverse processes active in disease development, progression, and response to therapy. He presented several network inference algorithms from his group, including Passing Attributes between Networks for Data Assimilation (PANDA) and Linear Interpolation to Obtain Network Estimates for Single Samples (LIONESS). As an example of network inference analysis, when PANDA and LIONESS were compared to differential expression and co-expression methods in analyzing transcriptome data of pancreatic cancer in TCGA, several major biological processes, including immune-related, epigenetic, and cell cycle processes, were identified only when using the network inference algorithms. Dr. Quackenbush also described applications of network analysis to investigate sexual dimorphism in multiple tissues and in colorectal cancer, as well as an analysis of glioblastoma multiforme that found differential regulation of processes, including PDCD1 (PD-1) signaling, that was associated with survival. Lastly, Dr. Quackenbush described netZoo, an integrated collection of gene regulatory network inference and analysis tools that he and his group developed. A collection of more than 180,000 gene regulatory networks, generated using the netZoo tools to analyze human tissues, cancers, cell lines, and small molecule drug perturbations, is curated in an online database, GRAND. Included in GRAND are a variety of search tools and analyses, including tools to identify drugs that alter specific regulatory processes. Dr. Molin Wang spoke about her group’s efforts in developing analytical methods for addressing the selection bias problem caused by missing tumor marker data in MPE analyses. The high percentage of cases with missing tumor marker data is often due to tissue unavailability or insufficient quality of tumor tissues. Standard data analysis methods using only complete data can lead to biased estimates and misleading scientific findings. When disease subtypes are classified by multiple markers, Wang’s group has developed an augmented inverse probability weighting (AIPW) Cox proportional hazards model method to evaluate the effect of exposures on disease subtypes in the presence of partially or completely missing biomarkers.
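The weighting idea underlying such methods can be shown schematically: the probability that a case has observed marker data is modeled first, and complete cases are then up-weighted by the inverse of that probability in the outcome model. The sketch below uses hypothetical data and variable names and shows only the plain IPW component with a weighted Cox model; the augmentation term and the subtype-specific outcome formulation of the actual AIPW method are omitted.

```python
# Schematic of inverse probability weighting (IPW) for exposure-outcome analysis
# when tumor marker data are missing at random. Hypothetical data and names.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from lifelines import CoxPHFitter

rng = np.random.default_rng(2)
n = 2000
df = pd.DataFrame({
    "exposure": rng.binomial(1, 0.4, n),
    "tumor_stage": rng.integers(1, 5, n),   # auxiliary tumor variable supporting MAR
    "time": rng.exponential(5.0, n),
    "event": rng.binomial(1, 0.5, n),
})
# Marker availability depends only on observed variables (an MAR-style mechanism).
p_obs = 1.0 / (1.0 + np.exp(-(0.5 + 0.4 * df["tumor_stage"] - 1.0)))
df["observed"] = rng.binomial(1, p_obs)

# Step 1: model the probability that the marker is observed.
miss_model = LogisticRegression().fit(df[["tumor_stage", "exposure"]], df["observed"])
df["p_hat"] = miss_model.predict_proba(df[["tumor_stage", "exposure"]])[:, 1]

# Step 2: fit a weighted Cox model on cases with observed markers, weighting
# each case by the inverse of its estimated observation probability.
complete = df[df["observed"] == 1].copy()
complete["ipw"] = 1.0 / complete["p_hat"]
cph = CoxPHFitter()
cph.fit(complete[["time", "event", "exposure", "ipw"]],
        duration_col="time", event_col="event", weights_col="ipw", robust=True)
print(cph.summary[["coef", "se(coef)"]])
```

The AIPW estimator adds to this an augmentation term built from a model for the marker itself, which is the source of the double robustness described next.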
The AIPW method is valid under the missing at random (MAR) assumption, is typically more efficient than the IPW method, and enjoys the double robustness property, which means that the method leads to valid estimates and inferences even if either the missingness probability model or the tumor marker model is mis-specified. The MAR assumption may often be achieved by including auxiliary tumor variables, such as tumor stage and tumor location, in the missingness probability and tumor marker models. However, the MAR assumption cannot be verified empirically using the observed data. To address the potential issue of missingness not at random, Wang’s group has developed a partial likelihood-based method to obtain valid estimates for the effect of exposures on disease subtypes. This method can often be used as a sensitivity analysis. R functions implementing the methods are available at Dr. Wang’s software page ( https://www.hsph.harvard.edu/molin-wang/software/ ). After being delayed by a year and migrated to a virtual platform due to the COVID-19 pandemic, the Fifth International Molecular Pathological Epidemiology Meeting drew the largest audience in the meeting’s history. The feedback received from meeting attendees via a post-meeting survey was overwhelmingly positive. It was apparent from this meeting that MPE, as a burgeoning research space, continues to attract experts from diverse fields ranging from epidemiology, pathology, and oncology to genetics, biostatistics, bioinformatics, and data sciences, who have been working collaboratively toward a shared goal of deepening our understanding of cancer heterogeneities and their implications for cancer etiology, prognosis, and treatment. The momentum of the fast-advancing research agenda and the resilience of this vibrant research community are equally remarkable. Looking ahead to the next few years, we anticipate the expansion and fruition of MPE research on many fronts, particularly immune-epidemiology, mutational signatures, liquid biopsy, and health disparities. We plan to reconvene for the Sixth International Molecular Pathological Epidemiology (MPE) Meeting, tentatively scheduled for May 2023 in Buffalo, NY, USA.
Editorial—Special Issue: Foreword to the Special Issue on NIKE: Neuroendocrine Tumors, Innovation in Knowledge and Education | a28c449b-2ce7-4e3c-b2b0-b7360427d58f | 8281451 | Physiology[mh] | AF and AC both contributed to the development of this article by summarizing the results of all scientific manuscripts included in the Research Topic NIKE. All authors contributed to the article and approved the submitted version.
This study was partially supported by the ministerial research project PRIN2017Z3N3YC.
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
|
Skin necrosis after autologous fat grafting for augmentation rhinoplasty: a case report and review of the literature | 6c58f795-1f31-4c85-b4b2-94255bd67572 | 11750145 | Surgical Procedures, Operative[mh] | Adipose tissue can be reintroduced into patients as a graft, thus serving as an adjunct to improve function and body aesthetics. Nowadays, fat transplantation is used to replace soft tissue defects, improve facial and body contours, rejuvenate the face, and repair scar deformities. With the development of fat transplantation technology, various complications and sequelae have arisen, such as bleeding, poor contour, fat absorption, vascular accidents, tissue necrosis, stroke, postoperative infection, and septic shock. Vascular embolism is one of the most serious complications of autologous fat transplantation. With the recent increase in fat transplantation surgery, serious vascular complications have increased, such as blindness caused by ophthalmic artery embolism, and stroke and even death caused by cerebral artery embolism. Because the pathogenesis of vascular embolism is not clear, there are few completely effective treatment methods. In the past, drug therapy was used to reduce intraocular and intracranial pressure, dilate blood vessels, and promote tissue perfusion, while surgical therapy was used to remove the embolus. However, the cure rates of these methods were not ideal. The experience obtained from the analysis of different cases suggests that the keys to successful treatment are timely detection of symptoms and timely treatment. We treated a case of local ischemic skin necrosis caused by blockage of nasal microvessels after autologous fat transplantation. Based on previous treatment methods, we proposed, for the first time, a comprehensive “biological + physical + drug” treatment strategy and achieved satisfactory results.
A 24-year-old woman presented to the plastic surgery department of our hospital with nasal skin necrosis and ulceration. The patient had undergone autologous fat grafting for augmentation rhinoplasty at another hospital seven days prior. After satisfactory local anesthesia of the abdominal liposuction area, an appropriate volume of fat had been extracted and processed into stromal vascular fraction (SVF) gel through a series of physical procedures. The SVF gel was injected into the patient’s nasal dorsum and nasion using a 1 ml syringe. The specific SVF gel preparation process and the transplanted volume were unknown. The patient returned home without any discomfort after the surgery. One day after the operation, the skin of the nasal dorsum was cyanotic; however, the patient paid little attention to it and did not seek treatment. Three days after the operation, the patient noticed that the area of discolored skin on the nasal dorsum had enlarged. In addition, the skin of the nasion and nasal tip was locally pale, with skin ulceration and a tingling sensation. The patient visited the Second People’s Hospital of Guangdong Province and underwent nasal debridement in the plastic surgery department. The specific treatment process was unknown, but the skin cyanosis and tingling sensation were relieved after treatment. Six days after the operation, the patient observed dark skin with irregular edges on the nasal dorsum and nasal tip, without obvious pain. To seek further treatment, the patient visited the Plastic Surgery Outpatient Department of Guangzhou Chinese Overseas Hospital. She was then admitted to our department for treatment of “skin necrosis after autologous fat grafting for augmentation rhinoplasty.” Physical examination after admission revealed stable vital signs (temperature, 36.5°C; respiration, 15 cpm; heart rate, 82 bpm; and blood pressure, 112/85 mmHg), a well-nourished conscious patient, and free movement. Both pupils were round and of equal size, with a diameter of 3.0 mm, and were sensitive to light reflex. Vision and eyelid closure functions were normal. Muscle tone in the upper and lower limbs was level 6, with free movement. Physiological reflexes were present, and no pathological reflexes were elicited. Specialist physical examination revealed an abnormal skin area of approximately 5 × 2 cm on the left side of the nose, which manifested as partial redness of the skin scattered with irregular pale patches, especially on the left nasal ala. A triangular dark necrotic skin area could be seen on the left side of the nasal tip, without swelling or abnormal exudation, and clearly demarcated from the surrounding skin. Regarding auxiliary examinations, all routine blood, biochemical, and coagulation indices were within the normal range. The admission diagnosis was skin necrosis after autologous fat grafting for augmentation rhinoplasty. Treatment process: On the first day after admission, the patient was administered safflower yellow pigment to promote blood circulation and remove blood stasis, dexamethasone sodium phosphate injection to mitigate inflammation, and vitamin C to promote healing. In the afternoon of that day, the nasal wound was cleaned and the dressing was changed. Red light irradiation was first applied to the nasal wound for 20 min, and then 0.05% chlorhexidine acetate solution was used to disinfect the wound.
Compound polymyxin B ointment was applied after wound cleaning, and finally, a sterile dressing was wrapped over the wound. On the second day after admission, the wound was treated with TOT therapy in a hyperbaric oxygen chamber (medical air pressurized oxygen chamber, LYC32–34, East Medical Treatment, China), in addition to daily red light irradiation and dressing changes. Regarding the specific treatment process, an airtight mask was used to cover the wound to form a locally closed environment, and an oxygen tube was then connected between one side of the mask and a hyperbaric oxygen source. The wound was exposed directly to 100% pure concentrated oxygen in a 0.2 MPa high-pressure environment twice daily for the first 3 days and once daily for the next 4 days, for a total of 10 sessions. On the seventh day after admission, after surgical contraindications were excluded, platelet-rich fibrin (PRF) was applied externally to the wound. Autogenous venous blood (60 mL) was extracted, placed into 10 ml tubes without anticoagulant, and centrifuged immediately at 3000 rpm for 10 min. After centrifugation, a fibrin clot was obtained in the middle of each tube, between the red blood cells at the bottom and the supernatant serum at the top. The clot was removed and placed on sterile gauze. It was then squeezed to drive out the fluids remaining in the fibrin matrix to obtain autologous fibrin membranes, namely the required PRF. After the wound was cleaned and disinfected, the PRF was affixed to the surface of the nasal wound and covered with sterile gauze. The dressing was opened after 48 h, and the wound was examined. On the seventh day after admission, the dark necrotic area of the wound was significantly reduced, and the entire abnormal skin area was also reduced. On the 15th day after admission, PRF therapy was administered to the wound again. The patient was discharged the same day. On the third day after discharge, the abnormal skin area on the patient’s nose was significantly reduced, and the color had gradually returned to a normal skin color. The pale and dark necrotic areas had scabbed over, and the general skin condition had significantly improved. The patient was content with the esthetic results after treatment.
Fat transplantation has a 100-year history. In 1893, Gustave Neuber in Germany used small cubes of fat to fill depressed scars around the skin of the orbit and achieved good results, pioneering the practice of fat transplantation. In 1974, Giorgio in Italy proposed the use of needle aspiration to obtain granular adipose tissue. In 1986, Illouz proposed the theory of fat granule transplantation based on the method of obtaining fat by needle aspiration and applied fat granules to the face to achieve contour improvement. In 1992, Coleman proposed the structural fat transplantation theory and standardized the techniques of liposuction and fat injection, which marked the arrival of the era of modern fat transplantation. Over the past 100 years, autologous fat transplantation has gained increasing attention in reconstructive and cosmetic surgery. Autologous fat transplantation technology has continuously developed and improved. Fat products such as nano-fat, concentrated nano-fat, high-density fat, and SVF gel have gradually emerged. However, there is still considerable room for improvement. Improving the long-term retention rate of fat and reducing the incidence of complications are major challenges for surgeons, among which the most critical step may be fat placement. However, improper fat placement can cause serious vascular complications. Vascular embolism caused by fat is one of the most serious complications of fat transplantation, which often causes ophthalmic and cerebral artery embolisms, resulting in serious irreversible consequences. Kai Wang conducted a retrospective analysis of global reports of serious vascular complications after facial fat transplantation and found 111 cases of serious vascular complications caused by fat, including 68 cases of ophthalmic artery embolism and 43 cases of cerebral artery embolism, with five deaths. The common graft sites included the glabella, temple, forehead, and nose. Anastomotic branches exist between the ophthalmic, facial, and superficial temporal arteries. When fat improperly enters the blood vessels, the injection pressure can drive the fat into the blood circulation, resulting in vascular embolism. Although the number of reported cases of serious vascular complications is increasing, clinical guidelines and effective treatments are lacking. The effective cure rates of conventional pressure-lowering drug therapy and surgical thrombolysis are low. Among the available reports of vascular embolism after fat transplantation, only two patients have been successfully cured, one with ophthalmic artery embolism and one with cerebral embolism after fat transplantation. Through an analysis of a large number of cases, we found that the timing of rescue is extremely critical. In the conventional treatment model, the earlier the rescue is conducted, the higher the cure rate. In this case, SVF gel was injected for nasal fat transplantation. SVF gel, first proposed by Yao in 2017, is a product of adipose-derived stem cells and the extracellular matrix (mainly composed of collagen, elastin, and mucopolysaccharides). SVF gel has achieved good efficacy in improving wrinkles, facial rejuvenation, local filling, and wound repair. As a type of adipose tissue product, SVF gel inevitably carries the risk of the aforementioned vascular complications. Fortunately, in this case, the embolism occurred in the nasal microvessels, and the embolus did not enter larger vessels to cause serious consequences beyond nasal skin necrosis.
In view of the patient's condition, we proposed a comprehensive "biological + physical + drug" therapy. In addition to conventional drug therapy, such as promoting blood circulation, removing blood stasis, and reducing inflammation, TOT and PRF application were proposed for the first time. TOT can improve the blood oxygen supply of local tissues, and PRF contains growth factors that are conducive to wound regeneration and repair. In clinical practice, TOT is a new treatment method that can accelerate wound healing by delivering a high concentration of oxygen into a locally confined space to raise the partial pressure of oxygen in wound tissues and strengthen the metabolic and synthetic functions of cells. Compared with tissue engineering and stem cell therapy, TOT has been increasingly applied in tissue injury repair, especially in ischemic and infectious wounds, owing to its advantages of simple operation and low equipment requirements. PRF is a platelet concentrate that can be used as a biological healing dressing. Rapid collection and centrifugation are key to PRF extraction. PRF techniques can assemble all elements that are beneficial to wound healing from the blood and the immune components of fibrin clots; by driving out the trapped liquids in the fibrin matrix, durable autologous fibrin membranes can be obtained. PRF contains a fibrin matrix, platelets, leukocytes, circulating stem cells, and cytokines such as basic fibroblast growth factor (bFGF), vascular endothelial growth factor (VEGF), and platelet-derived growth factor (PDGF). Although these active ingredients promote angiogenesis, migration, division, and phenotype change of endothelial cells, as well as immune regulation, the fibrin matrix that supports the aggregation of these ingredients is the key factor that determines its therapeutic potential. It has been widely documented that PRF plays a role in tissue regeneration, wound healing, and angiogenesis. After 10 sessions of TOT and two rounds of PRF treatment, the necrotic skin on the wound surface was significantly reduced, and the healing process was markedly accelerated. The patient was followed up for one month after discharge; the scabs over the necrotic skin area of the nose had detached, leaving a small amount of light-red hypertrophic scarring. There were no other abnormal discomforts, and the prognosis was good. The patient was satisfied with the effects of the treatment.
Given the increase in reported cases of serious vascular complications, both physicians and patients should be aware of the risks associated with facial fat transplantation. The prevention of vascular embolism is a priority in fat transplantation, and timely, early treatment of vascular accidents is crucial. For local skin necrosis caused by vascular embolism, the comprehensive "biological + physical + drug" therapy proposed by our team achieved good efficacy. PRF and TOT therapies provide a new treatment strategy for tissue necrosis caused by vascular infarction.
Managing Paediatric Growth Disorders: Integrating Technology Into a Personalised Approach

There have been few articles specifically linking the human component of growth management, i.e. specialist and nurse interaction with the patient, psychological support and training of healthcare professionals in motivational interviewing, together with digital innovations such as electronic monitoring of growth hormone (GH) injections. Both the human and digital components are recognised to contribute to GH adherence, but it is the necessity of their partnership that we emphasize.

What this study adds

A review of the holistic approach to personalised growth management by multi-disciplinary professionals, stressing the key importance of the human and technical partnership. Contributions are also provided by a professional coach who is an expert in motivational interviewing and by personnel from the UK patient support group, the Child Growth Foundation.
The management of paediatric growth disorders presents a multidisciplinary challenge to healthcare professionals (HCPs) responsible for affected patient care. Several medical HCPs may be involved, including the primary care physician who identifies the initial growth problem, the family general practitioner who refers the child for hospital investigation, the hospital-based paediatrician who sees the child at the initial consultation and the specialist paediatric endocrinologist to whom the child is then referred for an expert opinion and further management. In addition, in many hospital paediatric endocrinology units, the developing role of the paediatric endocrinology nurse specialist has directly improved the quality of liaison with the family and contributes to the care of the child through the addition of a skilled HCP to the management team. Pharmacists, biochemists, psychologists, patient support groups and personnel from the pharmaceutical industry also make important contributions to the three key phases of growth management, namely identification of the initial short stature, investigation and diagnosis of the cause, and treatment with hormone therapy, where indicated, all of which imply a long-term commitment to a potentially invasive therapy. Early diagnosis and early initiation of growth hormone (GH) therapy are associated with improved long-term height gain. The pressures experienced by the patient and family in successfully engaging in such a diagnostic and therapeutic journey are also challenging. There are key facts about the nature and implications of the diagnosis to understand and process, including the emotional commitment required for therapy to be successful and produce normal growth and adult height. In addition, maintenance of a therapeutic regimen designed to bring long-term improvement, rather than short-term benefit, requires engagement and maturity. These aspects of short stature management will be discussed in this article. A further component of care, which has emerged in recent years, is the electronic tools available to aid therapy and adherence. These tools will also be addressed, with emphasis on the importance of the human-eHealth partnership, which is necessary to make patient care optimally beneficial. We will discuss the challenges encountered by the patient and family through the experience of staff of the UK Child Growth Foundation (CGF), a patient support charity which advises families of patients with short stature. Current unmet medical needs of growth management will also be discussed, followed by a description of the psychological basis and management of poor adherence to GH treatment regimens. eHealth innovations will be covered, followed by the importance of HCP training in relation to the acquisition of motivational skills for improved recognition of, and intervention in, poor adherence situations. Finally, the emerging role of the paediatric endocrinology specialist nurse will be summarised, with conclusions highlighting the rationale for joint human-eHealth collaboration to achieve optimal personalised management of the short child.
Early recognition of pathological short stature, as opposed to variants of normal height, remains a challenge, particularly in the UK, where routine height surveillance has been reduced to two measurements, at primary school and secondary school entry points. The age of diagnosis of disorders of abnormal growth, such as coeliac disease and Turner syndrome, is significantly later than in other countries, such as the Netherlands and Finland, where investment in primary care identification of growth disorders has resulted in earlier diagnoses. Historically, a high proportion of children treated with GH therapy for a variety of growth disorders have not demonstrated a satisfactory degree of catch-up growth during the first year of therapy. A number of reasons may underlie this, including incorrect diagnosis, incorrect dose of GH at initiation of therapy and inadequate attention to factors predicting individual growth responses. The correct management of poor response to GH remains a priority in such patients. However, it is the presence of poor adherence to the GH treatment regimen which has emerged as a key factor, either alone or in combination with other elements that have an impact on growth response. This issue of non-adherence will be discussed in detail below.
Digital health, defined as the use of information and communication technologies for health, is becoming a reality in clinical practice and medical education and has made a significant impact on the day-to-day management of diabetes mellitus in children. Its application to the treatment of growth disorders is more challenging because therapy is geared to long-term responses and benefits, rather than short-term metabolic control. However, one area where digital technology has been effective is in the electronic monitoring of GH injections. The use of an electromechanical auto-injector, which records every injection that is given and communicates the data both to the patient and the HCP, is a major advance. It is known that self-reporting of adherence tends to be inaccurate and to report artificially high values, compared with digital recording of injections. The difference between reported and recorded adherence, using the electronic device, is significant. In a large international study of GH therapy using electronic recording, adherence was shown to be good during the first year of treatment, but gradually decreased to approximately 60% after five years. These data give two key messages: first, that accurately measured adherence decreases over time, and second, that intervention by the HCP is indicated to prevent and correct this trend. The injection device can also demonstrate suboptimal adherence which may not be obvious from auxological measurements.
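To illustrate how such recorded data translate into an adherence figure, the sketch below computes adherence as the proportion of prescribed daily doses actually logged by the device. The log format, dates and values are hypothetical assumptions and do not correspond to any specific commercial auto-injector.

```python
# A minimal sketch of computing percentage adherence from an electronic
# injection log. The log format and values are hypothetical; real devices
# export richer data, but the summary statistic is the same idea.
from datetime import date, timedelta

# Hypothetical log: dates on which the device recorded an injection
recorded = {
    date(2024, 1, 1), date(2024, 1, 2), date(2024, 1, 4),
    date(2024, 1, 5), date(2024, 1, 7),
}

start, end = date(2024, 1, 1), date(2024, 1, 7)
prescribed_days = [start + timedelta(days=i)
                   for i in range((end - start).days + 1)]

taken = sum(day in recorded for day in prescribed_days)
adherence = taken / len(prescribed_days)
print(f"Adherence: {adherence:.0%} ({taken}/{len(prescribed_days)} daily doses)")
# -> Adherence: 71% (5/7 daily doses)
```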
Adherence or compliance can be defined as the extent to which the patient follows a prescribed therapeutic regimen, and in the case of GH, the extent to which daily GH treatment is taken. There are three phases in understanding the way adherence develops. First, there is the uptake stage, which describes the way in which the patient begins to accept the treatment and indeed whether they actually start to take the treatment. It is known that 10% to 15% of patients never start taking the treatments they are prescribed. This is known as primary non-adherence. The second phase, which is really critical for long-term progress, is the way in which the patient, or the family, incorporates the treatment into the habitual pattern of daily life. The last phase describes how long the patient stays with the treatment. It is known that patients may give up after months or years of treatment, and there is evidence for a wide range of adherence to GH therapy. Overall, there are figures of up to 50%, 60%, or even 70% of patients not taking GH treatment in a regular and useful way, with a clear relationship demonstrated between non-adherence and not achieving linear growth targets. Given that GH therapy is evidence-based, the question is: why are patients not adherent? Older explanations were essentially based around the idea that people did not follow treatment because they did not understand or remember what they had to do. This was often taken to be a symptom of poor communication in healthcare, so interventions were designed to improve communication and patient understanding and the ability to remember and plan treatment. This, unfortunately, is only a small part of the answer.
It is now clear that there are different categories and certainly different causes of non-adherence. Two distinct types are recognised, known as intentional and unintentional non-adherence, which have very different drivers, or different origins. The reasons for the two different categories can be summarised in terms of what is known as the COM-B model. In the COM-B acronym, C stands for capability, O for opportunity and M for motivation. In intentional non-adherence, many patients know what they have got to do, i.e., it is not a question of misunderstanding or not remembering, but they are reluctant to adhere, because either the treatment does not make sense to them, or they have worries or concerns about it. In unintentional non-adherence, some of the older factors can be responsible, such as poor communication, or poor experience or satisfaction with the organisational challenges of doing something regularly on a daily basis. There may also be other barriers outside the individual, such as financial or practical constraints.
If this is mapped onto the COM-B model we can see that under Capability, there is a range of factors, such as psychological difficulties, e.g., people not remembering or not being able to plan. There are also some physical capability issues, e.g., not being able to administer the treatment in a way that is effective. Under Opportunity, there are physical factors such as getting access or having barriers to treatment, which lie outside the patient, together with psychological barriers, such as poor support and communication from people close to the patient. However, the really important factors for many patients, particularly related to intentional non-adherence, are the Motivational influences, such as negative or mistaken beliefs about their condition and their treatment.
Accepting this variety of factors, it is not surprising that there is a range of ways of working with families and patients to improve their adherence. These can involve both human and digital interventions. Two available strategies are equally important. It is fundamental to use the direct experience in the healthcare situation, i.e. the consultation, to understand the patient's issues and perspectives and to anticipate factors around non-adherence which can be managed. Going beyond that, there is a range of digital and personalised interventions available; for example, an initial brief screening questionnaire to identify the particular problems each patient and family may be experiencing. Then, following that, interventions can be developed which are tailored to each patient. In terms of the consultation, a structure is recommended for each family to analyse their understanding of the primary short stature condition and the treatment regimen they are being asked to follow. It is important to make sure that they have a clear rationale for the need for treatment and for daily injections. A recent study in adults with GH deficiency showed that non-adherence was related to lack of understanding of the primary disorder, which can be improved through focused education. A practical plan needs to be agreed for how, where and when the GH injections are given to ensure that treatment is administered more regularly. More generally, factors which cause adherence problems for each individual need to be identified. At the beginning and during treatment, brief screening questionnaires can be used to identify relevant personal issues. Information from the screening questionnaires can be used to start a personalised conversation to understand what is going wrong. From there, basic behaviour change approaches, such as motivational interviewing by HCPs, can attempt to target individual factors. Beyond the consultation, many other digital approaches are available which patients and parents can access on a daily basis. These could be personalised web-based tools, mobile phone applications, daily text messaging or interactive programmes which address particular issues.
The main role of a professional coach in the healthcare environment is to support HCPs in learning how to help patients to make healthy choices and decisions in their lives. This can be challenging because patients can struggle to make such choices, particularly when emotional barriers block the logical courses of action. A number of questions can be asked. How can HCPs really influence the behaviour of patients and families, particularly when they have decided they do not want to change? Why can some patients move forward when others are resistant to making progress? These questions and observations have led to the exploration of motivational interviewing practised by HCPs which can be applied in the clinical scenario of outpatient consultations to help patients with adherence to GH therapy. It is proposed that motivational interviewing skills can motivate patients and families to overcome the practical and emotional barriers related to therapy.
Motivational Interviewing, which is based on the work of Miller and Rollnick, is a collaborative conversation style which aims to strengthen a person's motivation and commitment to change. It is a structured, person-centred approach which helps patients and families to draw on their own inner motivation, which can be translated into improved adherence to GH therapy. Motivational interviewing is a skill which needs to be taught and thus learnt by both medical and nursing HCPs. Examples of the benefits of motivational interviewing can be taken from experience in making healthy life choices, such as giving up smoking, reducing alcohol intake or eating in a healthier way. When considering these choices, reaction to the individual can be unhelpful, such as not listening or negatively encouraging regressive behaviour. By contrast, a helpful response to the same life choices would consist of positive reactions such as genuine empathetic listening and exploration of the individual's feelings without judgement. This behaviour typifies the spirit of motivational interviewing. The principles of motivational interviewing are collaboration, acceptance and compassion. Collaboration is very important because partnership on an equal level with the patient is a key aim. Acceptance leads to better understanding, without judgement, of the decisions and choices that patients and families are making. These choices are accepted and the HCP responds with guidance. Compassion is a further component that is combined with evocation, which means drawing out a patient's inner motivation and commitment, and building on this to effect change.
Core skills in motivational interviewing can be discussed under the acronym OARS, which stands for Open questions, Affirmations, Reflective listening and Summarising. The conversation can be structured by following these headings. Open questions such as what, how and why will open conversations and evoke dialogue. Other examples would be 'what are your hopes for your consultation today?' and 'I am curious to learn how you have been getting on with your injections?' These questions can be prefaced by saying 'help me understand …', and the conversation can develop by inviting the patient or family to talk about what is on their mind and what their needs and priorities are. Affirmations are about helping patients to recognise their own strengths and positive beliefs that are going to help them to adhere to GH therapy. Examples could be to say to a patient 'I can see it took courage for you to try this out today' or to a parent 'your creative ideas around this are very helpful'. Reflective listening consists not only of listening and reflecting back what is said; it also helps in verbalising the thinking and feelings that lie underneath, showing a depth of empathy that leads to further conversations. The last skill here is summarising, which serves the useful purpose of wrapping up conversations and can be started by saying 'let me see if I have got this right, you are feeling this on one hand and perhaps feeling this on the other?'
When patients and families are asked about the difficulties they face related to the management of short stature, a wide range of opinions and comments are given. The UK CGF (https://www.childgrowthfoundation.org) is a non-profit patient support group, which was originally founded as a charity in 1977 (UK Registered Charity number 1172807). The CGF receives many requests for information and support and delivers management advice on a wide range of growth disorders. In relation to adherence to GH therapy, the CGF reports that, in the consultation setting, some HCPs do not have sufficient time or experience of GH treatment, which results in them giving conflicting advice to families. Insufficient knowledge of the primary growth disorder results in communication of inadequate or incorrect information. In particular, the patient may not realise how effective and worthwhile long-term therapy with GH can be. Insufficient education of the patient by the HCP can result in the family seeking alternative advice on the internet and thus receiving more confusing, incorrect and worrying messages. More accurate information needs to be available regarding the benefits of GH therapy, with advantages beyond growth being emphasised, such as improved general health and self-esteem. Accurate information regarding GH injection devices needs to be given, with the choice of the most suitable injection device made by the family before the initiation of therapy. Size, comfort and storage requirements should also be considered, together with family dynamics and travel.
The concept of patient choice is an organisational decision which is not universally adopted in the framework of growth consultations. Ideally, however, the patient and family should be offered the choice of GH brand and injection device, and this has been demonstrated to increase the likelihood of good adherence. In 2019, the CGF conducted an online survey amongst its members about the initiation of GH therapy. One hundred and eleven responses were received, mostly from patients with GH deficiency, multiple pituitary hormone deficiencies, Silver Russell syndrome, small for gestational age and intrauterine growth retardation. The two most relevant questions were, 'Were you offered a choice of GH brand and device?' and 'How often does your child miss a GH dose?' Out of 111 responses, 31% of patients were not offered a choice of GH brand or injection device, demonstrating that, within the UK, patient choice remains very inconsistent. Guidelines for England and Wales regarding GH treatment (https://www.nice.org.uk/guidance/ta188/chapter/1-Guidance) are not being followed. The survey indicated that 58% of patients never missed a GH dose, with values ranging from 30% in non-GH-deficient cases to 78% in multiple pituitary hormone deficiency cases.
From many years' experience of handling requests for information and of managing the CGF Facebook page, the CGF reports frequently recurring topics related to barriers to good GH adherence. The first of these concerns logistical barriers. A daily subcutaneous injection should become part of the family's routine, provided the routine is not disturbed. However, when changes do occur, such as a play-date, a school trip, a sleep-over, a camping trip where refrigeration is needed, or particularly when the child's care is shared between parents in different locations or with grandparents, the first casualty is the GH injection. As the effect of missing one or several GH injections is not immediately apparent, the long-term objective of regular therapy tends to be forgotten, leading to chronic poor adherence. Another practical aspect is the maintenance of regular GH supplies, which may not occur if a family waits until the last minute to order a new supply.
Children receive GH treatment because they have a long-term health condition but may develop needle phobia, with fear of the pain of the injection sometimes combined with fear of the noise of the injection device. A vicious cycle of events can develop and escalate in importance, predictably leading to missed injections. The anticipation of the injection and then its attempted administration can be very stressful. In the longer term, a child might start to feel different from their peers, especially around friends, few of whom will be having daily injections. Bullying and exclusion of the patient can occur. Peer pressure increases during adolescence, when additional stresses, such as exams, provide further opportunities to miss GH injections and for poor adherence to become habitual.
Availability of communication with other patients having similar experiences can be very supportive and can significantly reduce stress and the sense of isolation. Peer support organisations such as the CGF can support and advise their own patients and the HCPs who are responsible for them. Many host social media groups, providing a 24/7 online community for chats, questions, discussions and mutual support. The CGF holds an annual convention, but with e-technology, geographical boundaries have diminished and Facebook groups, educational websites, mobile phone applications and helplines can all contribute to enhanced patient and family support.
The roles of the paediatric endocrinology specialist nurse have developed at different rates in different countries. In the USA, UK, Canada, Australia and Scandinavia this nursing speciality has grown, with funding now established for positions in most university paediatric endocrinology departments. In other countries, paediatric endocrinology nursing is much less developed. We will discuss roles and responsibilities related to short stature management and specifically GH adherence. Paediatric endocrine specialist nurses are uniquely positioned to offer a highly valued support network to HCPs, patients and their families, by being the regular first point of contact at consultation visits. Relationships, incorporating the whole family, are established and built on trust, specialised knowledge and expertise that is pivotal for families when starting GH therapy. Involvement in the initiation of GH treatment is key to establishing a fruitful relationship with the patient. 'Ideal' and 'worst-case' scenarios regarding initiation of GH therapy are shown in . If possible, meeting the family before the medical consultation can be very beneficial. Obtaining knowledge of the medical history, and of whether the family has studied the diagnosis on the internet, can also be very valuable. Communication skills are important and, as discussed above, training in motivational interviewing can play an essential role in the specialist nurse becoming an effective member of the growth management team and contributing to optimal GH adherence. Organising the patient's choice of GH brand and injection device is a further responsibility and needs to be based on specialist knowledge of the different GH devices. Education in injection technique will logically lead to the establishment of a network of regular contacts and availability for the patient and family. Contact and support by phone and internet have become inherent in the nurse specialist's responsibilities. In terms of adherence, the use of electronic monitoring of injections with feedback to the nurse and endocrinologist allows adherence to be examined, so that a human-eHealth partnership develops to support the family. At consultation visits, it is logically the nurse specialist who can take the lead in non-judgemental interviewing to investigate actual or potential non-adherence. In the long term, the paediatric endocrinology specialist nurse maintains support and positive relationships with the family and the patient. Everyone needs to continue to work together, ensuring encouragement and a combined, committed goal of optimal response to GH therapy. Finally, by using a personalised approach, technology can be positively integrated into care to assist adherence and optimise outcomes.
The successful management of paediatric growth disorders, involving GH therapy, can be judged by the achievement of catch-up growth, followed by growth within the normal centile lines leading to an adult height within the genetic target of the family. Relatively few cases achieve this ideal triad and a combination of personalised input by medical and nursing HCPs and the use of technological tools can improve the chances of success. Understanding the personal psychological barriers to good GH adherence in each patient can be combined with the use of an electronic GH injection recorder to monitor and communicate accurate adherence data. Motivational interviewing and a non-judgemental approach are also beneficial. This human-eHealth partnership gives synergistic advantages and improves the likelihood of a clinically beneficial long-term growth outcome.
ROC curve analysis: a useful statistic multi-tool in the research of nephrology

During the past decade, scientific research in the area of Nephrology has focused on evaluating the clinical utility and performance of various biomarkers for diagnosis, risk stratification and prognosis. Before suggesting the use of a biomarker in everyday clinical practice to identify a specific disease or condition (for example, troponin for the diagnosis of acute myocardial infarction), specific statistical measures should evaluate the diagnostic accuracy and performance of this marker. The first, fundamental tests used to statistically evaluate the diagnostic performance of a new marker are sensitivity and specificity. Sensitivity, or true positive rate, determines the proportion of diseased subjects with a positive test result for the new marker, whereas specificity, or true negative rate, determines the proportion of disease-free subjects with a negative test result. Thus, sensitivity (defined as the ratio of true positives to the sum of false negatives and true positives) evaluates the ability of the test/marker to correctly identify patients who actually have a certain disease or condition. On the other hand, specificity (defined as the ratio of true negatives to the sum of false positives and true negatives) tests the capacity of this marker to correctly classify patients as disease-free. Accuracy is a measure of the overall correctness of a diagnostic test: it represents the proportion of correctly classified cases (both true positives and true negatives), calculated by dividing the sum of true positives and true negatives by the sum of all cases. Two additional mathematical tests, positive predictive value (the proportion of true positives among all positive results) and negative predictive value (the proportion of true negatives among all negative results), provide answers to the clinically relevant question of whether an individual will be correctly diagnosed as having the disease according to the test/marker result (Table ).
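As a minimal illustration of these definitions, the following sketch computes all five indices from the four cells of a confusion matrix; the counts are invented purely for demonstration.

```python
# Diagnostic indices from a confusion matrix, following the definitions above.
# The counts below are invented for illustration only.
tp, fn = 80, 20   # diseased subjects: test positive / test negative
fp, tn = 30, 70   # disease-free subjects: test positive / test negative

sensitivity = tp / (tp + fn)            # true positive rate
specificity = tn / (tn + fp)            # true negative rate
accuracy    = (tp + tn) / (tp + fn + fp + tn)
ppv         = tp / (tp + fp)            # positive predictive value
npv         = tn / (tn + fn)            # negative predictive value

print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}, "
      f"accuracy={accuracy:.2f}, PPV={ppv:.2f}, NPV={npv:.2f}")
# -> sensitivity=0.80, specificity=0.70, accuracy=0.75, PPV=0.73, NPV=0.78
```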
Therefore, a novel, proposed diagnostic test should have high discriminatory power to accurately classify all tested subjects as healthy or diseased. However, the test or biomarker available often consists of a continuous variable, and there is a need to identify a cut-off threshold capable of discriminating between healthy and diseased subjects. A recent example that everyone might be familiar with is the self-administered rapid test for the COVID-19 antigen: two red lines (one in the Control region and one in the Test region), regardless of intensity, indicate a positive result, whereas a sole red line in the Control region and no colored line in the Test region indicates a negative result. These diagnostic tests are dichotomous, as they provide a "yes" or "no" answer regarding whether a subject has the disease or not. However, the biomarkers on which these tests are based, such as the COVID-19 antigen, are quantitative, expressed in continuous terms, and need to be transformed into a dichotomous variable. The Receiver Operating Characteristic (ROC) curve is a statistical method used to assess the discriminatory ability of a quantitative marker across the whole range of its values, when subjects are correctly categorized as diseased and non-diseased (or with and without an incident event) by a gold standard, reference test. A typical ROC curve is shown in Fig. , where the x-axis represents the false positive rate, defined as 1 minus specificity, and the y-axis represents the sensitivity (true positive rate). The ROC curve of a given test/biomarker is built up by specific algorithms implemented in most statistical software. The algorithms calculate, for a series of thresholds of the variable being tested, sensitivity (i.e., the true positive rate) and 1-specificity (i.e., the false positive rate). True positives (y scale) and false positives (x scale) as derived by the procedure are reported in a Cartesian graph, and the conjunction of the coordinates generated by the various thresholds provides the ROC curve. In the figure, the dotted diagonal line (iii) represents a diagnostic test with the lowest possible discriminatory power, similar to chance, with an AUC of 0.5, whereas the black line (i) is the perfect test, with 100% sensitivity and specificity and an AUC of 1.0. The gray line (ii) is a typical good curve, with an accuracy of approximately 80%. For instance, consider that for each threshold of the variable being tested the true and false positive rates are identical. As a consequence, the area under the ROC curve will be 50% (the dotted diagonal line, iii), implying that the variable is absolutely useless for identifying the condition of interest. Otherwise, if for each threshold of the variable being tested the true positive rate is 100% and the false positive rate is zero, the area under the ROC curve will be 100% (the black line, i), implying that the variable is absolutely accurate for identifying the condition of interest. The area under the curve (AUC) of the ROC curve represents the discriminatory power of the curve/test, with values ranging from 0.5 (lowest) to 1.0 (highest and most accurate), and is reported with 95% confidence intervals (CIs). The CI is a range of values that provides an estimate of the uncertainty associated with the true underlying ROC curve and helps to assess the reliability of the ROC curve analysis and the performance of a classification model in distinguishing between classes. Typically, it is computed using methods like bootstrapping or resampling techniques. Ideally, after ROC curve analysis investigates the discriminatory power of a novel diagnostic test (compared to the gold standard, reference test), these results should be validated in another (external) population, to avoid under- or overfitting of the statistical model on which the test was based. Historically, ROC analysis was first used during World War II to assist radar operators in deciding whether a blip on the radar screen corresponded to noise or to a real moving object, and it was later adopted by diagnostic statistical research. During recent years, ROC curves have been used quite broadly. A quick search in the Medline database shows that over the past decade, the use of ROC curves in clinical nephrology research has significantly increased. Using the search key words "ROC curve" AND "CKD" OR "ESKD" OR "Hemodialysis" OR "Peritoneal Dialysis" in the abstract or title generated 307 papers in the decade 2003–2013, a number that quadrupled in the past decade (2013–2023) to 1332 papers. This increase might be attributed to an exponential growth of clinical and epidemiological research evaluating the potential accuracy or predictive ability of various biomarkers in the field of nephrology in these recent years.
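The threshold-sweeping procedure just described can be reproduced in a few lines of code. The sketch below builds the ROC coordinates from first principles on simulated marker values (all data invented for illustration) and integrates the AUC with the trapezoidal rule.

```python
# Building a ROC curve exactly as described above: for each candidate
# threshold of the marker, compute sensitivity (true positive rate) and
# 1-specificity (false positive rate), then join the points. All marker
# values below are simulated for illustration only.
import numpy as np

rng = np.random.default_rng(42)
marker = np.concatenate([
    rng.normal(50, 15, 200),    # simulated disease-free subjects
    rng.normal(70, 15, 100),    # simulated diseased subjects (score higher)
])
diseased = np.concatenate([np.zeros(200, dtype=bool), np.ones(100, dtype=bool)])

# Sweep thresholds from above the maximum (nobody positive) downwards
thresholds = np.concatenate(([np.inf], np.sort(marker)[::-1]))
tpr, fpr = [], []
for t in thresholds:
    positive = marker >= t
    tpr.append(np.sum(positive & diseased) / np.sum(diseased))
    fpr.append(np.sum(positive & ~diseased) / np.sum(~diseased))

# AUC by the trapezoidal rule over the (FPR, TPR) coordinates;
# 0.5 corresponds to chance and 1.0 to a perfect test
tpr, fpr = np.array(tpr), np.array(fpr)
auc = np.sum(np.diff(fpr) * (tpr[1:] + tpr[:-1]) / 2)
print(f"AUC = {auc:.3f}")   # expected around 0.83 for these distributions
```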
The applications of ROC curve analysis include the estimation of the discriminatory performance of a novel diagnostic test, the identification of the optimal cut-off value for this test that maximizes sensitivity and specificity, the assessment of the predictive value of a certain biomarker, and the evaluation of multivariate risk scores based on multiple variables or risk factors. Herein, through practical clinical examples, we aim to present a simple methodological approach explaining in detail the principles and applications of ROC curve analysis in the field of nephrology.

Example 1: Using ROC curve to evaluate the discriminatory performance of a novel diagnostic tool

ROC curve analysis can be used to evaluate whether a novel biomarker might be useful in the diagnosis of a certain condition or disease. This was done by Zhou et al., who investigated the possible association between a new biomarker, asprosin, and metabolic syndrome (MS) in a cross-sectional study enrolling 134 hemodialysis (HD) patients. According to the definition by the International Diabetes Federation (the gold standard), MS was diagnosed in 51 patients. First, the authors found that HD patients with MS had significantly higher circulating levels of asprosin than patients without MS (502.2 ± 153.3 vs 371.5 ± 144.9 ng/mL, respectively; Mann–Whitney test). Second, regression analysis showed that asprosin was independently associated with MS, with an odds ratio of 1.008, after adjustment for several well-known risk factors. Third, the authors explored whether asprosin could predict MS through a ROC curve analysis. They found an AUC of 72.5% with a 95% confidence interval (CI) of 63.9–81.1 (Fig. ). Another characteristic of the ROC curve, besides the AUC, is the CI; narrower CIs (i.e., a smaller range between minimum and maximum values) correspond to a better and stronger discriminatory ability of the diagnostic test. Finally, based on the ROC curve, the authors determined the optimal cut-off value of asprosin for identifying MS, which was set at 369.85 ng/mL. This cut-off point is provided by the best combination of sensitivity and specificity, which were 82.4% and 51.8%, respectively (Fig. ). It is clear that, although the first analyses (Mann–Whitney test and multiple regression analysis) showed a possible association between asprosin levels and MS, the ROC curve indicates that asprosin was not such a good marker for predicting MS, given its low specificity, relatively modest sensitivity and broad CIs.
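A minimal sketch of how such an optimal cut-off can be located on a ROC curve is shown below. The asprosin-like values are simulated from the group means and standard deviations reported above, not the actual data of Zhou et al.; scikit-learn is assumed to be available, and Youden's index is used as one common formalisation of the "best combination of sensitivity and specificity".

```python
# A sketch of locating an 'optimal' cut-off on a ROC curve, in the spirit of
# Example 1. The asprosin-like values are simulated from the reported group
# means/SDs, NOT the actual data of Zhou et al.; scikit-learn is assumed.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(7)
n_no_ms, n_ms = 83, 51                        # group sizes as in the study
asprosin = np.concatenate([
    rng.normal(371.5, 144.9, n_no_ms),        # simulated patients without MS
    rng.normal(502.2, 153.3, n_ms),           # simulated patients with MS
])
has_ms = np.concatenate([np.zeros(n_no_ms), np.ones(n_ms)])

fpr, tpr, thresholds = roc_curve(has_ms, asprosin)
print(f"AUC = {roc_auc_score(has_ms, asprosin):.3f}")  # expected near 0.73

# Youden's J = sensitivity + specificity - 1 is one common way to formalise
# the 'best combination of sensitivity and specificity'
j = tpr - fpr
best = int(np.argmax(j))
print(f"Cut-off ~ {thresholds[best]:.1f} ng/mL "
      f"(sensitivity {tpr[best]:.1%}, specificity {1 - fpr[best]:.1%})")
```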
Example 2: Using ROC curve to evaluate the prognostic role of a biomarker and to determine the optimal cut-off value

The urea-to-albumin ratio (UAR) is a novel predictor of adverse events, including mortality, in septic patients hospitalized in the intensive care unit (ICU). Recently, Rodrigues et al. investigated the potential predictive value of UAR for mortality in critically ill COVID-19 patients. They conducted a retrospective study enrolling 211 high-risk patients admitted with COVID-19 infection to the ICU. As expected, the mortality rate in this cohort was high (64.9%). The authors aimed to investigate the classification accuracy of this marker for ICU mortality using a ROC curve analysis. Thus, the first step consisted of determining the disease status for all patients using a gold standard method, i.e., distinguishing between survivors and non-survivors. To evaluate the discriminatory ability of UAR for predicting ICU mortality in COVID-19 patients, the authors performed a ROC curve analysis across all paired values of sensitivity and 1 minus specificity (i.e., the false positive rate), Fig. . The authors then proceeded with determining the optimal cut-off value of UAR to predict this outcome. This value can be identified from the ROC curve and should ensure the highest combination of sensitivity and specificity possible. However, as clearly shown in Fig. , a trade-off exists between sensitivity and specificity, where the choice of a higher sensitivity is inevitably accompanied by a lower specificity. The AUC measures the entire two-dimensional area underneath the ROC curve; in this example (Fig. ), the AUC derived from the ROC curve is 0.72 (95% CI: 0.66–0.78). Moreover, in this example, the optimal threshold of UAR to predict ICU mortality was found to be above 12.17, which corresponded to a sensitivity of 83.21% and a specificity of 60.81%. A sensitivity of 83.21% implies that out of 100 individuals who will die, 83 would be correctly classified as non-survivors. However, this also means that among these 100 patients, 17 would not have a positive test result (in this example, 16.79% of patients who died had a UAR below 12.17). A specificity of 60.81% indicates that out of 100 patients who will survive, 61 would be correctly classified as survivors by the test and 39 incorrectly classified as non-survivors (in this example, 39.19% of survivors had a UAR above 12.17). Subsequently, the authors computed all the indices of using the UAR threshold of 12.17 to identify non-survivors among patients hospitalized in the ICU due to COVID-19 infection:
– sensitivity = 83.21%;
– specificity = 60.81%;
– false positive rate (1-specificity) = 39.19%;
– positive predictive value (PPV) = 79.72%;
– negative predictive value (NPV) = 66.18%; and
– accuracy = 75.36%.
The ROC curve combines sensitivity (y-axis) and 1 minus specificity (x-axis), representing all possible cut-off values the test might have. Typically, a ROC curve with an AUC above 75% is considered to be clinically useful. However, determining the cut-off value usually depends on the goal of the test, due to the trade-off between specificity and sensitivity. The authors reported that, compared to survivors, those who died had an increased UAR, with a mean difference of 12.8. Additionally, Cox regression analysis (the final step of their analysis) showed that a UAR value above 12.17 doubled the risk of all-cause mortality in this cohort. In this example, ROC curve analysis was used to assess the prognostic value of a biomarker and to determine the optimal cut-off value.
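The indices listed above can be verified by back-calculating the underlying confusion matrix from the published summary statistics. Because the counts are recovered by rounding, this reconstruction is approximate and should not be mistaken for the authors' raw data.

```python
# Back-calculating the confusion matrix behind the UAR > 12.17 threshold from
# the summary statistics reported above (n = 211, mortality 64.9%,
# sensitivity 83.21%, specificity 60.81%). Counts are rounded, so the
# reconstruction is approximate, not the authors' raw data.
n = 211
non_survivors = round(0.649 * n)          # 137 patients who died
survivors = n - non_survivors             # 74 patients who survived

tp = round(0.8321 * non_survivors)        # 114 non-survivors with UAR > 12.17
fn = non_survivors - tp                   # 23 non-survivors missed by the test
tn = round(0.6081 * survivors)            # 45 survivors with UAR <= 12.17
fp = survivors - tn                       # 29 survivors falsely flagged

ppv = tp / (tp + fp)
npv = tn / (tn + fn)
accuracy = (tp + tn) / n
print(f"PPV = {ppv:.2%}, NPV = {npv:.2%}, accuracy = {accuracy:.2%}")
# -> PPV = 79.72%, NPV = 66.18%, accuracy = 75.36%, matching the reported values
```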
Next, the authors developed a simple risk prediction score by assigning1 point to each of these 4 risk factors. Compared to patients with none of these factors, patients with 1,2, and over 3 factors had significantly, graded risk for CV disease (HRs: 3.29, 7.42, and 15.43, respectively). The subsequent step was to evaluate and validate the calibration of this risk score, by comparing the difference between predicted and actual risk with the ROC curve analysis. To increase the validity of their analysis, the authors initially performed this risk analysis first in their population and then applied it to a bootstrap validation data set. In both sets, this risk prediction score showed acceptable discriminatory performance for CV disease (AUC = 0.70, 95% CI 0.65–0.75 and AUC = 0.69, 95% CI 0.66–0.72, respectively). The authors concluded that given the high prevalence of CV disease in this high-risk population, a simple CV risk model based on variables that are easily obtained with little cost could be of clinically useful. Example 4: Using ROC curve to compare two risk prediction models and to determine their relative performance Another application of the ROC curve analysis is the comparison of two risk prediction models, to determine their relative performance and identify the one with superior predictive ability, thus enabling a quantitative evaluation of their overall accuracy. In a recent study conducted by our team, we aimed to find a risk predictive model that was simple, quick, easy to evaluate and accurate for assessing the CV risk in subjects with diabetic kidney disease . One hundred fifty-eight patients with different degrees of renal function and type II diabetes for at least 10 years were enrolled. At baseline, various demographic, clinical, anthropometric, and biochemical variables were collected. Moreover, carotid intima-media thickness (cIMT) was evaluated by ultrasound as a surrogate marker of subclinical atherosclerosis. All patients were followed for a long period of 7 years, with fatal or nonfatal CV events as the primary endpoint (75 events). To assess the predictive value of various variables collected at baseline, non-CV death was considered as a competitive event, using Fine-Gray regression models. Survival analyses revealed that among all variables, male gender, history of CV disease, long duration of diabetes, low hemoglobin, low estimated glomerular filtration rate (eGFR), high albuminuria, low high density lipoprotein (HDL) cholesterol, low serum albumin and high cIMT were independently associated with the CV outcome. Then, in multivariate models, only history of CV disease, eGFR and albuminuria remained significantly associated with the outcome of interest. Next, a risk model was developed with all these nine variables. However, the assessment of all these data are time consuming, expensive and laborious. To address the clinical question regarding the clinical utility of cIMT measurement and simplify the existing full, 9-variable model, a simpler (nested) risk model with only three variables (eGFR, albuminuria and history of CV disease) was constructed. The performance of these two models, was compared with various statistical tests, with different purposes. First, a log likelihood test demonstrated that there was no significant difference between the data fitting of these two models ( x 2 = 9.48, 6 df, p = 0.15). 
Example 4: Using ROC curve to compare two risk prediction models and to determine their relative performance

Another application of ROC curve analysis is the comparison of two risk prediction models, to determine their relative performance and identify the one with superior predictive ability, thus enabling a quantitative evaluation of their overall accuracy. In a recent study conducted by our team, we aimed to find a risk prediction model that was simple, quick, easy to evaluate and accurate for assessing CV risk in subjects with diabetic kidney disease. One hundred fifty-eight patients with different degrees of renal function and type II diabetes for at least 10 years were enrolled. At baseline, various demographic, clinical, anthropometric, and biochemical variables were collected. Moreover, carotid intima-media thickness (cIMT) was evaluated by ultrasound as a surrogate marker of subclinical atherosclerosis. All patients were followed for a long period of 7 years, with fatal or nonfatal CV events as the primary endpoint (75 events). To assess the predictive value of the various variables collected at baseline, non-CV death was considered as a competing event, using Fine-Gray regression models. Survival analyses revealed that, among all variables, male gender, history of CV disease, long duration of diabetes, low hemoglobin, low estimated glomerular filtration rate (eGFR), high albuminuria, low high-density lipoprotein (HDL) cholesterol, low serum albumin and high cIMT were independently associated with the CV outcome. Then, in multivariate models, only history of CV disease, eGFR and albuminuria remained significantly associated with the outcome of interest. Next, a risk model was developed with all these nine variables. However, the assessment of all these data is time-consuming, expensive and laborious. To address the clinical question regarding the clinical utility of cIMT measurement and to simplify the existing full, 9-variable model, a simpler (nested) risk model with only three variables (eGFR, albuminuria and history of CV disease) was constructed. The performance of these two models was compared with various statistical tests, with different purposes. First, a log-likelihood test demonstrated that there was no significant difference between the data fitting of these two models (χ2 = 9.48, 6 df, p = 0.15). Second, the Hosmer–Lemeshow test revealed that the simplified model was better calibrated than the full model (χ2 = 9.24, p = 0.32 and χ2 = 11.09, p = 0.20, respectively). Finally, the discrimination performance of these two models was compared using ROC curve analysis (Fig. ). This test confirmed that the two risk models had nearly identical and high accuracy in predicting CV events (full model: AUC 0.87, 95% CI: 0.81–0.92; simplified model: AUC 0.84, 95% CI: 0.78–0.90). Based on these analyses, we concluded that the simple risk model, consisting of three variables that are easy, quick and cheap to measure, might be used to predict CV events in diabetic CKD. Time-consuming and elaborate diagnostic tests such as cIMT measurement do not actually offer much in the risk assessment. However, for these results to be adopted in everyday clinical practice, they should be validated in different, external, large-scale cohort studies.

Example 5: Using ROC curve analysis to reevaluate the epidemiology of a disease in a specific population

Another, innovative use of ROC curve analysis is to reevaluate the epidemiology of a certain disease in a specific population. Long-standing epidemiologic evidence suggests that increased triglycerides (TGs) are associated with CV morbidity and mortality. However, the exact value at which risk starts to increase has not yet been identified. This topic has been investigated in a large, multicenter, national, population-based cohort, the URRAH study, which included 14,189 subjects followed for a long period of 11.2 years, with the incidence of CV events as the outcome. The authors performed a ROC curve analysis to find the optimal, early, prognostic cut-off of TGs for predicting CV events, using the incidence of CV events as the dichotomous (yes or no) classification variable and TGs as the basic variable. They computed the pairs of sensitivity and specificity across the whole range of TG values. The ideal cut-off value was determined by the Youden index, which identifies the threshold value corresponding to the point of the curve nearest the upper-left corner (corresponding to 100% specificity and 100% sensitivity), as described before, with the following equation: J = max(sensitivity + specificity − 1). From the ROC curve, the optimal cut-off value of TGs to predict CV events was found to be 89 mg/dL, which had a sensitivity of 76.6%, a specificity of 34.1% and an AUC of 0.569 (95% CI 0.561–0.578). It should be noted that the CIs are quite narrow, indicating that this (modest) discriminatory performance was estimated with high precision, and that the cut-off value identified had the maximum sum of sensitivity + specificity. This optimal cut-off value of 89 mg/dL is lower than the conventionally used cut-off value of 150 mg/dL (sensitivity 33.0% and specificity 74.3%). The next step of the statistical analysis in this paper was to separately insert both the prognostic (> 89 mg/dL, as found by the ROC curve) and the standard, conventional 150 mg/dL threshold values as independent variables in multivariate Cox models (adjusted for well-established CV risk confounders), with CV events as the dichotomous, dependent variable. Based on the HRs (conventional = 1.211, 95% CI 1.063–1.378; prognostic = 1.15, 95% CI 1.021–1.295), both cut-off values were considered acceptable, independent predictors of CV events in the whole cohort.
Therefore, the authors concluded that a significantly lower cut-off level of TGs (61 mg/dL below the standard) predicts CV disease, and that these subjects should therefore be carefully monitored in primary care. Although in this example the two different thresholds (89 vs 150 mg/dL) were both independent predictors of CV events, it should be noted that they have different sensitivity and specificity, with the conventionally used threshold (150 mg/dL) having a low sensitivity (33%) and high specificity (74%), and the new threshold proposed by the researchers (89 mg/dL) having a high sensitivity (77%) and a low specificity (34%). This should be taken into consideration if the new threshold were used for monitoring patients in primary care, because not only would more patients who will have the event actually be identified (and the events prevented), but there would also be more false positives, implying more frequent exams and visits for patients who will not eventually develop the CV event. Therefore, the "best" threshold for a test depends on the test's goal from a clinical and practical perspective.
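The practical consequence of this trade-off can be quantified with simple arithmetic. The sketch below applies the sensitivities and specificities reported in the URRAH analysis to a hypothetical screening population of 100 subjects; the assumed 20% event rate is illustrative only and is not a figure from the study.

```python
# Quantifying the sensitivity-specificity trade-off between the two TG
# thresholds discussed above, using the sensitivities/specificities reported
# in the URRAH analysis. The 20% event rate per 100 screened subjects is an
# illustrative assumption, not a figure from the study.
thresholds = {
    "89 mg/dL (prognostic)":    {"sens": 0.766, "spec": 0.341},
    "150 mg/dL (conventional)": {"sens": 0.330, "spec": 0.743},
}
events, no_events = 20, 80    # assumed: 20 future events per 100 subjects

for name, t in thresholds.items():
    tp = t["sens"] * events             # future events correctly flagged
    fp = (1 - t["spec"]) * no_events    # event-free subjects flagged anyway
    print(f"{name}: ~{tp:.0f}/20 events detected, ~{fp:.0f}/80 false positives")
# -> the lower cut-off detects about twice as many future events (15 vs 7)
#    at the cost of roughly 2.5x as many false positives (53 vs 21)
```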
Thus, the first step consisted of determining the disease status for all patients, using a gold standard method, i.e., distinguishing between survivors and non-survivors. To evaluate the discriminatory ability of UAR for predicting ICU mortality in COVID-19 patients, the authors performed a ROC curve analysis, across all paired values of sensitivity with 1 minus specificity (i.e., the false positive rate), Fig. . The authors then proceeded with determining the optimal cut-off value of UAR to predict this outcome. This value can be identified by the ROC curve and should ensure the highest specificity and sensitivity combination possible. However, as clearly shown in Fig. , a trade-off exists between sensitivity and specificity, where the choice of a higher sensitivity is inevitably accompanied by a lower specificity. Since AUC measures the entire two-dimensional area underneath the entire ROC curve, in this example (Fig. ), the AUC derived from the ROC curve is 0.72 with 95% CI: 0.66–0.78). Moreover, in this example, the optimal threshold of UAR to predict ICU mortality was found to be above 12.17, which corresponded to a 83.21% sensitivity and a specificity of 60.81%. A sensitivity of 83.21% implies that out of 100 individuals who will die, 83 would be correctly classified as non-survivors. However, this also means that among these 100 patients, 17 would not have a positive test result (in this example, 16.79% of patients that died had UAR below 12.17). A specificity of 60.81% indicates that out of 100 patients who will survive, 61 would be correctly classified as survivors by the test and 39 incorrectly classified as non-survivors (in this example, 39.20% of survivors had increased UAR above 12.17). Subsequently, the authors computed all the indices of using the UAR threshold of 12.17 to identify survivors in patients hospitalized in ICU due to COVID-19 infection: –sensitivity = 83.21; –specificity = 60.81; –false positive rates (1-specificity) = 39.19; –positive predictive value (PPV) = 79.72; –negative predictive value (NPV) = 66.18% and; –accuracy = 75.36% in this example. Τhe ROC curve combines sensitivity (y-axis) and 1 minus specificity (x-axis), representing all possible cut-off values the test might have. Typically, a ROC with an AUC above 75% is considered to be clinically useful. However, determining the cut-off value usually depends on the goal of the test, due to the trade-off between specificity and sensitivity. The authors reported that compared to survivors, those who died had increased UAR, with a mean difference of 12.8. Additionally, Cox regression analysis (the final step of their analysis) showed that a UAR value above 12.17 doubled the risk of all-cause mortality in this cohort. In this example, ROC curve analysis was used to assess the prognostic value of a biomarker and to determine the optimal cut-off value. ROC curve can also be used to develop and validate novel risk prediction models or scores. For instance, You et al. , aimed to construct a risk prediction model for CV disease in HD patients and conducted a prospective study in 388 maintenance HD, followed for a mean of 3.27 years with the occurrence of CV events as endpoint. During the follow-up period, 132 patients had a CV event. To build the new prediction model, the first step was to identify risk factors for the outcome. 
Among the 26 candidate prognostic variables that were tested, stepwise Cox regression analysis revealed that hypertension, diabetes mellitus, age ≥ 65 years and abnormal white blood cell count were the sole independent predictors. Next, the authors developed a simple risk prediction score by assigning 1 point to each of these 4 risk factors. Compared to patients with none of these factors, patients with 1, 2, and 3 or more factors had a significantly graded risk for CV disease (HRs: 3.29, 7.42, and 15.43, respectively). The subsequent step was to evaluate and validate this risk score, by comparing the difference between predicted and actual risk and by assessing its discrimination with ROC curve analysis. To increase the validity of their analysis, the authors performed this risk analysis first in their own population and then applied it to a bootstrap validation data set. In both sets, this risk prediction score showed acceptable discriminatory performance for CV disease (AUC = 0.70, 95% CI 0.65–0.75 and AUC = 0.69, 95% CI 0.66–0.72, respectively). The authors concluded that, given the high prevalence of CV disease in this high-risk population, a simple CV risk model based on variables that are easily obtained at little cost could be clinically useful. Another application of ROC curve analysis is the comparison of two risk prediction models, to determine their relative performance and identify the one with superior predictive ability, thus enabling a quantitative evaluation of their overall accuracy. In a recent study conducted by our team, we aimed to find a risk predictive model that was simple, quick, easy to evaluate and accurate for assessing the CV risk in subjects with diabetic kidney disease . One hundred fifty-eight patients with different degrees of renal function and type II diabetes for at least 10 years were enrolled. At baseline, various demographic, clinical, anthropometric, and biochemical variables were collected. Moreover, carotid intima-media thickness (cIMT) was evaluated by ultrasound as a surrogate marker of subclinical atherosclerosis. All patients were followed for a long period of 7 years, with fatal or nonfatal CV events as the primary endpoint (75 events). To assess the predictive value of the variables collected at baseline, non-CV death was considered as a competing event, using Fine-Gray regression models. Survival analyses revealed that among all variables, male gender, history of CV disease, long duration of diabetes, low hemoglobin, low estimated glomerular filtration rate (eGFR), high albuminuria, low high density lipoprotein (HDL) cholesterol, low serum albumin and high cIMT were independently associated with the CV outcome. Then, in multivariate models, only history of CV disease, eGFR and albuminuria remained significantly associated with the outcome of interest. Next, a risk model was developed with all these nine variables. However, the assessment of all these data is time consuming, expensive and laborious. To address the clinical question regarding the clinical utility of cIMT measurement and to simplify the existing full, 9-variable model, a simpler (nested) risk model with only three variables (eGFR, albuminuria and history of CV disease) was constructed. The performance of these two models was compared with various statistical tests, with different purposes. First, a log likelihood test demonstrated that there was no significant difference between the data fitting of these two models (χ2 = 9.48, 6 df, p = 0.15).
Second, the Hosmer–Lemeshow test revealed that the simplified model was better calibrated than the full model (χ2 = 9.24, p = 0.32 and χ2 = 11.09, p = 0.20, respectively). Finally, the discrimination performance of these two models was compared using ROC curve analysis (Fig. ). This test confirmed that the two risk models had nearly identical and high accuracy to predict CV events (full model: AUC 0.87, 95% CI: 0.81–0.92; simplified model: AUC 0.84, 95% CI: 0.78–0.90). Based on these analyses, we concluded that the simple risk model, consisting of three variables that are easy to measure, quick and cheap, might be used to predict CV events in diabetic CKD. Time-consuming and elaborate diagnostic tests such as the cIMT measurement do not actually offer much in the risk assessment. However, for these results to be adopted in everyday clinical practice, they should be validated in different, external, large-scale cohort studies. Another, innovative use for ROC curve analysis is to reevaluate the epidemiology of a certain disease in a specific population. Long-standing epidemiologic evidence suggests that increased triglycerides (TGs) are associated with CV morbidity and mortality. However, the exact value at which risk starts to increase has not yet been identified. This topic has been investigated in a large, multicenter, national, population-based cohort, the URRAH study , which included 14,189 subjects followed for a long period of 11.2 years, with the incidence of CV events as the outcome. The authors performed a ROC curve analysis to find the optimal, early, prognostic cut-off of TGs for predicting CV events, using the incidence of CV events as the dichotomous (yes or no) classification variable and TGs as the basic variable. They computed the pairs of sensitivity–specificity for the whole range of TG values. The ideal cut-off value was determined by the Youden index, which identifies the threshold value corresponding to the point of the curve nearest the upper-left corner (corresponding to 100% specificity and 100% sensitivity), as described before , with the following equation: J = max(sensitivity + specificity − 1). From the ROC curve, the optimal cut-off value of TGs to predict CV events was found to be 89 mg/dl, which had a sensitivity of 76.6%, a specificity of 34.1% and an AUC of 0.569 (95% CI 0.561–0.578). It should be noted that the CIs are quite narrow, indicating a “solid” estimate of the discriminatory performance, and that the cut-off value identified had the maximum sum of sensitivity + specificity. This optimal cut-off value of 89 mg/dL is lower than the conventionally used cut-off value of 150 mg/dL (sensitivity 33.0% and specificity 74.3%). The next step of the statistical analysis in this paper was to separately insert both the prognostic (> 89 mg/dl, as found by the ROC curve) and the standard, conventional 150 mg/dl threshold values as independent variables in multivariate Cox models (adjusted for well-established CV risk confounders), with CV events as the dichotomous, dependent variable. Based on the HRs (conventional = 1.211, 95% CI 1.063–1.378; prognostic = 1.15, 95% CI 1.021–1.295), both cut-off values were considered acceptable, independent predictors of CV events in the whole cohort. Therefore, the authors concluded that a significantly lower (61 mg/dl lower than standard) cut-off level of TGs predicts CV disease and, therefore, that these subjects should be carefully monitored in primary care.
Although in this example the two different thresholds (89 vs 150 mg/dL) were independent predictors of CV events, it should be noted that they have different sensitivity and specificity, with the conventionally used threshold (150 mg/dL) having a low sensitivity (33%) and high specificity (74%), and the new threshold proposed by the researchers (89 mg/dL) having a high sensitivity (77%) and a low specificity (34%). This should be taken into consideration if the new threshold were used for monitoring patients in primary care, because not only would more patients who will have the event actually be identified (and the events prevented), but there would also be more false positives, implying more frequent exams and visits for patients who will not eventually develop the CV event. Therefore, the “best” threshold for a test depends on the test’s goal from a clinical and practical perspective. ROC curve analysis is an important and widely used statistical test with various applications in the research field of nephrology. This statistical method evaluates the diagnostic performance of a novel test or marker, assesses the predictive ability of a marker, identifies the ideal cut-off values of a test and allows the comparison of the diagnostic performance between two or more risk prediction models. Since the current decade is the era of biomarkers and predictive tests in nephrology, ROC analysis is an essential and easy tool to validate the actual clinical utility of proposed markers and tests. Measures like reclassification, the calibration statistic, the net reclassification index, and the integrated discrimination improvement may not be as widely adopted or as easily interpretable as ROC curves, limiting their utility in certain contexts in clinical research. However, in specific scenarios or when combined with ROC analysis, they can provide complementary information for a more thorough assessment of model performance. The main advantages of ROC curves are the following: (1) this analysis evaluates the performance of a test/biomarker across all possible thresholds, providing insights into the overall discriminatory power of the biomarker of interest without being dependent on a specific threshold; (2) ROC curves illustrate the trade-off between true positives and false positives, allowing for a visual representation of how well a biomarker discriminates across various threshold values; (3) the AUC derived from ROC curves provides a single, interpretable summary of a model’s discriminatory power, and higher AUC values indicate better overall performance, making it easy to compare models and determine their relative effectiveness. Although the ROC curve has several important applications in research and clinical practice, there are also certain limitations and pitfalls that should be taken into consideration. First, it provides a graphical representation of the diagnostic accuracy of the test across all possible thresholds, but it does not directly indicate the optimal threshold for making decisions in practical applications. Second, while ROC curves are effective for binary classification, extending them to multi-class problems can be challenging. Finally, ROC analysis assumes independence of observations.
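To make the quantities discussed above concrete, the following minimal sketch reproduces the basic ROC workflow on simulated data: the diagnostic indices at a fixed threshold (sensitivity, specificity, PPV, NPV, accuracy), the full ROC curve with its AUC, and the Youden-optimal cut-off (J = sensitivity + specificity − 1). The marker values and outcome labels are invented for illustration only; they are not the UAR, asprosin, or TG data from the studies discussed, and scikit-learn is assumed to be available.

# Minimal ROC workflow on synthetic data (illustration only; not study data).
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(42)
# Simulated biomarker: non-events ~ N(10, 3), events ~ N(14, 4)
y = np.concatenate([np.zeros(300), np.ones(150)]).astype(int)
marker = np.concatenate([rng.normal(10, 3, 300), rng.normal(14, 4, 150)])

def indices_at(threshold, marker, y):
    """Diagnostic indices for the rule 'positive if marker > threshold'."""
    pred = (marker > threshold).astype(int)
    tp = np.sum((pred == 1) & (y == 1))
    tn = np.sum((pred == 0) & (y == 0))
    fp = np.sum((pred == 1) & (y == 0))
    fn = np.sum((pred == 0) & (y == 1))
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "PPV": tp / (tp + fp),
        "NPV": tn / (tn + fn),
        "accuracy": (tp + tn) / len(y),
    }

print(indices_at(12.0, marker, y))

# Full ROC curve, AUC, and Youden-optimal cut-off
fpr, tpr, thresholds = roc_curve(y, marker)
print("AUC =", round(roc_auc_score(y, marker), 3))
youden = tpr - fpr                     # J = sensitivity + specificity - 1
best = thresholds[np.argmax(youden)]   # threshold maximizing J
print("Youden-optimal cut-off =", round(float(best), 2))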
Primary implant stability of two implant macro-designs in different alveolar ridge morphologies: an in vitro study | b1eb99fa-4717-443f-a149-1914b2a022fa | 11885739 | Dentistry[mh] | Dental implants are a well-established and reliable option for replacing missing teeth in both partially and fully edentulous patients. The stability of dental implants and their long-term success are ensured through the process of osseointegration . Osseointegration refers to the direct structural and functional connection between living bone and the surface of a load-bearing implant . This process requires achieving primary stability at the time of implant placement, followed by undisturbed wound healing, facilitating a series of critical biological events that culminate in osseointegration and peri-implant tissue stability . Primary implant stability during placement is attained through the direct mechanical engagement with the surrounding alveolar bone . Over the course of 4 to 8 weeks, primary stability is gradually superseded by secondary stability, which is driven by a biological bone remodeling around the implant . Insufficient primary stability may jeopardize the process of osseointegration, as micromovements between implant and surrounding bone exceeding 100 μm potentially disrupt bone healing and lead to fibrous encapsulation rather than osseointegration . Comprehensive treatment planning for failing teeth and dental implant therapy is complex, encompassing numerous factors such as the choice of the ideal implant design characteristics, the appropriate timing of implant placement following tooth extraction and the subsequent loading protocols. The selection of these treatment options should aim to predictably achieve long-term treatment success, including optimal esthetic outcomes and a low risk of complications, while also striving to reduce the number of surgical and clinical procedures, whenever feasible . As patient interest in shorter treatment times continues to grow, immediate implant placement has gained popularity, particularly when paired with immediate restoration, with or without immediate loading . However, the success of immediate protocols depends significantly on achieving high primary stability at the time of implant placement, which is often challenged by local morphological factors when comparing implant engagement in fresh extraction sockets versus late implant placement in healed alveolar ridges . Several additional factors also influence primary implant stability, including alveolar bone density and dimensions, implant design characteristics, and surgical technique . Although the precise threshold of adequate primary stability for immediate restoration or loading remains unclear, a minimum insertion torque of 35 Ncm during implant placement is frequently recommended . To address this challenge of adequate primary stability, particularly in immediate implant placement scenarios, implants with modified macro-designs have been developed in recent years. These modifications, which include changes to implant shape, surface topography, and thread design (depth, pitch, and shape), are intended to enhance primary stability . While a recent review suggested only minimal differences in primary stability between tapered and non-tapered implants , multiple in vitro and in vivo studies indicate that tapered designs generally provide higher primary stability compared to cylindrical implants . 
Despite these findings, there is only limited information on the effect of alveolar ridge morphology on primary implant stability and the influence of various implant macro-designs. Consequently, there is a need for recommendations on selecting specific implant specifications tailored to different clinical scenarios involving immediate placement and loading protocols. Therefore, the primary aim of this in vitro investigation was to assess the influence of two different alveolar ridge morphologies on the primary implant stability. The secondary aim was to assess the impact of two implant macro-designs on primary stability and to examine the reliability of resonance frequency analysis (RFA) in comparison to final insertion torque as a measure for primary implant stability. The null hypotheses were as follows: alveolar ridge morphology (H01), implant macro-design (H02), and their interactions (H03) do not influence primary implant stability during implant placement.
Models and virtual implant planning
The present in vitro study was designed and conducted in the Department of Oral Surgery and Stomatology at the University of Bern, Switzerland, from November 2021 to February 2022. Standardized partially edentulous models mimicking a cortico-spongious alveolar bone density D2 were used (BoneModels, Castellón de la Plana, Spain) . Each model presented six single-tooth edentulous sites corresponding to the FDI teeth positions 16, 14 and 25, simulating healed alveolar ridge morphologies, and to the FDI teeth positions 12, 21 and 23, simulating fresh extraction sockets (Fig. ). For each model, virtual implant planning was performed in a dedicated software package (coDiagnostiX 10.5, Dental Wings Inc, Montreal, Canada) based on a Cone Beam Computed Tomography (CBCT) scan (8 × 5 cm, 80 μm voxel size, 90 kVp, 1 mAs; 3D Accuitomo 170, J. Morita Corp, Osaka, Japan) and a surface scan using a laboratory scanner (3Shape 4, 3Shape Inc, Copenhagen, Denmark). After superimposing the files, the ideal 3D implant position for each site was planned based on a digital wax-up (Zirkonzahn.Modellier, Zirkonzahn GmbH, Gais, Italy) for screw-retained single implant crowns by an experienced clinician (C.R.). In extraction socket sites, an apical implant engagement of at least 4 mm was respected. Subsequently, the surgical guide was designed with a material thickness of 3.5 mm and a guide-to-tooth offset of 0.15 mm. Multiple fenestrations were included to allow for a visual verification of the guide's fit on the model. The guides were manufactured for each model using a transparent, light-cured resin for stereolithography (ProArt Print Splint, Ivoclar Vivadent AG, Schaan, Liechtenstein) in a 3D printer (PrograPrint PR5, Ivoclar Vivadent AG, Schaan, Liechtenstein).
Guided implant placement and study groups
To recreate the clinical scenario as closely as possible, the models were mounted in phantom heads. Afterwards, fully guided static computer-assisted implant surgery (sCAIS) procedures according to the manufacturer's protocols were carried out using a surgical motor (iChiropro, Bien-Air, Bienne, Switzerland). The study involved two bone-level type implants, each with distinct macro-design features (Fig. ): (1) a shallow-threaded, parallel-walled implant body with a thread pitch of 0.8 mm (BL 4.1 × 12 mm RC, Straumann AG, Basel, Switzerland), representing a conventional design available for decades to address a broad range of clinical indications; and (2) a deep-threaded, tapered implant body with a thread pitch of 2.25 mm (BLX 4.0 × 12 mm RB, Straumann AG, Basel, Switzerland), a recently introduced design intended to achieve high primary stability, particularly in immediate implant placement protocols. These implants were randomly assigned to the edentulous sites, ensuring equal sample sizes for each group.
Measurement of primary implant stability
The primary stability of all the implants was assessed using the following two methods: (1) continuous measurement of the insertion torque (Ncm) over time during implant placement using the surgical motor (iChiropro, Bien-Air, Bienne, Switzerland); and (2) resonance frequency analysis (RFA) after final implant placement using hand-tightened, implant-specific transducers and an RFA device (Osstell ISQ, Integration Diagnostics Ltd., Goteborgsvagen, Sweden). The RFA assessment was conducted three times in both the mesio-distal and bucco-lingual orientations, recording the lowest value from each orientation. The mean of these two lowest values was then calculated (Fig. ).
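As a small, purely illustrative sketch of the RFA aggregation rule just described (the lowest of three ISQ readings per orientation, then the mean of the two orientation minima), consider the following; the six readings are invented examples, not study data.

# Sketch of the mean-RFA rule: lowest of three ISQ readings per orientation,
# then the mean of the two orientation minima. Readings are invented examples.
mesio_distal = [68, 70, 69]
bucco_lingual = [65, 66, 66]
mean_rfa = (min(mesio_distal) + min(bucco_lingual)) / 2
print(mean_rfa)  # -> 66.5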
Statistical analysis
The primary outcome of the present study was the comparison of final torque values between the different alveolar ridge morphologies, followed by the secondary outcome of the same variables for the different implant macro-designs. Finally, the correlation between RFA and final torque values was investigated. All collected data were presented as mean and standard deviation (SD). Two-way analysis of variance (ANOVA) was used for the primary and secondary outcomes to verify the effects of the independent variables (alveolar ridge morphology and implant macro-design) on the dependent variables (torque and mean RFA). Main and interaction effects were tested, and multiple comparisons used Sidak's post hoc test. Effect sizes and observed power were calculated, and interaction plots were designed. The correlation between torque and mean RFA was assessed using Pearson's bivariate correlation coefficient. All the analyses were carried out using IBM SPSS v.26 software, adopting a significance level of 5%.
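The analysis described above was carried out in SPSS; as a rough, purely illustrative sketch of the same model structure in open-source tools, the code below fits a 2 × 2 factorial ANOVA with interaction and computes a Pearson correlation on simulated torque/RFA values. All group means and noise levels are invented (they only loosely echo the direction of the reported effects), statsmodels and scipy are assumed to be available, and the Sidak-adjusted post hoc comparisons are omitted for brevity.

# Illustrative re-analysis sketch in Python (the study itself used IBM SPSS v.26).
# Synthetic data stand in for the real measurements; only the model structure
# (2x2 factorial with interaction, plus a torque-RFA correlation) mirrors the paper.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
n = 36  # implants per cell, as in the study (144 total across 4 groups)
rows = []
for design in ("BL", "BLX"):
    for site in ("healed", "socket"):
        base = 35 if site == "healed" else 20          # assumed mean torques (Ncm)
        base += 5 if design == "BL" else 0             # assumed design effect
        torque = rng.normal(base, 6, n)
        rfa = 40 + 0.6 * torque + rng.normal(0, 3, n)  # RFA loosely tracks torque
        for t, r in zip(torque, rfa):
            rows.append({"design": design, "site": site, "torque": t, "rfa": r})
df = pd.DataFrame(rows)

# Two-way ANOVA with interaction: torque ~ design + site + design:site
model = ols("torque ~ C(design) * C(site)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Pearson correlation between final torque and mean RFA (paper: r = 0.742)
r, p = pearsonr(df["torque"], df["rfa"])
print(f"Pearson r = {r:.3f}, p = {p:.2e}")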
Study sample
A total of 144 implants (BL n = 72, BLX n = 72) were equally distributed to static computer-assisted implant placement in single-tooth sites with healed alveolar ridge (n = 72) or extraction socket morphology (n = 72) in 36 models.
Alveolar ridge morphology
Higher final torque values were observed when implants were placed in healed ridge sites compared to extraction sockets (p < 0.001). Notably, the insertion torque increased linearly, with a steeper incline in healed ridge sites compared to extraction sockets (Fig. ). Similarly, higher mean RFA values were observed for implants in healed ridges compared to extraction sockets (p < 0.001). A positive and statistically significant correlation was found between final insertion torque and mean RFA values (r = 0.742; p < 0.001), as illustrated in Fig. . Descriptive statistics and corresponding box plots are displayed in Table ; Fig. . The main effects and multiple comparisons between implant type and alveolar ridge morphology for mean RFA and final torque are presented in Tables and .
Implant macro-design
Higher final torque values were observed in BL implants compared to BLX implants (p < 0.001). BL implants exhibited a more linear torque increase in healed sites, whereas BLX implants showed a more progressive torque formation curve (Fig. ). Similarly, higher mean RFA values were recorded for BL implants compared to BLX implants (p < 0.001). Descriptive statistics and corresponding box plots are displayed in Table and Fig. . The main effects and multiple comparisons between implant type and alveolar ridge morphology for mean RFA and final insertion torque are illustrated in Tables and .
Interactions
The alveolar ridge morphologies were compared within each implant macro-design group. The BL implants presented statistically significantly higher final torque and mean RFA values in healed sites compared to extraction socket sites (p < 0.001). Likewise, in the BLX implant group, statistically significantly higher final torque and mean RFA values were observed in healed sites compared to extraction socket sites (p < 0.001). Conversely, the implant macro-design was analyzed according to the alveolar ridge morphologies. When placed in extraction socket sites, the BL implants presented statistically significantly higher final torque and mean RFA compared to BLX implants (p < 0.001). Single outliers in torque and RFA values were observed in both the BL and BLX groups at socket sites, reflecting the challenging anatomical features that potentially compromise the predictability of primary stability in immediate implant placement (Fig. ). When placed in healed sites, statistically significantly higher final torque values for BL implants compared to BLX implants could be achieved (p = 0.037). However, no statistically significant difference was observed between the mean RFA values of BL and BLX implants in fully healed sites (Tables and , and Table ). The interactions of implant type and alveolar ridge morphology had a statistically significant effect on the final torque (p = 0.025) and on the mean RFA (p = 0.003).
The present in vitro study examined the primary stability of implants with two different macro-designs placed into simulated fresh extraction sockets compared to healed alveolar ridges. The results of this investigation demonstrate higher final torque and RFA values in fully healed compared to extraction socket sites and for BL compared to BLX implants. The final insertion torque and RFA values were positively correlated, demonstrating the reliability of RFA values in implant stability assessment. Therefore, H01, H02, and H03 were rejected. The present study demonstrates that the morphology of the alveolar ridge significantly impacts primary implant stability, with extraction sockets demonstrating lower final implant insertion torque and RFA values compared to healed alveolar ridges. This is in line with the results from a clinical trial reporting insertion torques of 65.5 Ncm versus 53.7 Ncm and RFA values of 72.8 versus 63.9 for healed sites as compared to extraction sockets . Similarly, another in vitro study reported insertion torques of 49 Ncm versus 28 Ncm and RFA values of 62 versus 53 for full embedment in bone compared to circular defects . The significantly lower primary implant stability in extraction socket sites might be attributed to the incomplete embedding in bone . To achieve sufficient primary stability in these cases, it is recommended that the implant osteotomy extend 3–4 mm apically beyond the socket, or that the drilling protocol be modified by underpreparing the osteotomy . Contrarily, implants in healed alveolar ridges are fully embedded in bone, a factor also contributing to implant positioning accuracy . Significantly higher positional deviations between planned and final implant positions, pointing towards the zone of least resistance, were found for extraction socket sites . These deviations may affect apical implant engagement and, consequently, primary implant stability . While higher primary implant stability values are a prerequisite for immediate loading protocols, excessively high insertion torque does not necessarily enhance the process of osseointegration . In fact, high insertion torques could induce pronounced local bone necrosis, potentially compromising osseointegration . Conversely, and in conjunction with conventional implant loading, low insertion torque values do not negatively affect osseointegration as long as implant stability remains above 10 Ncm . In addition to local anatomical characteristics, the macro-design of the implant plays a significant role in achieving primary stability during implant placement . Interestingly, the present study demonstrated lower primary implant stability for BLX implants compared to BL implants across both simulated clinical scenarios. These results are supported by an in vitro study that reported higher RFA and final torque values for BL implants across various bone densities compared to BLX implants . Contrarily, an ex vivo study reported higher RFA and final torque values for BLX implants compared to BL implants in low-density scenarios using cancellous porcine iliac crest blocks . Despite the differences observed in the present study, both implant designs provided sufficient primary stability for conventional loading protocols in extraction sockets, as final torque values exceeded the 10 Ncm threshold . However, neither the BL nor the BLX design met the recommended 35 Ncm threshold for immediate loading in this study .
Interestingly, in healed sites, the influence of implant design on primary stability was less pronounced, with both designs potentially qualifying for immediate implant loading protocols. The higher primary implant stability of BL implants may be attributed to their smaller thread pitch compared to BLX implants. A smaller thread pitch increases the implant surface area, leading to greater bone-to-implant contact and enhanced mechanical anchorage . Additionally, the core diameter of the BLX implant (3.5 mm) is significantly smaller than that of the BL implant (4.1 mm). Increased implant diameters and non-self-cutting threads are also associated with higher primary implant stability . Conversely, tapered implant body designs have been suggested to achieve higher primary implant stability than cylindrical-shaped implants . This is likely due to greater compression of the surrounding bone, which may provide favorable stress on the tissue and reduce the risk of micromovement . Therefore, an under-preparation of the implant bed for tapered BLX implants could potentially result in higher primary stability and might reach the threshold for immediate implant loading. This is supported by a recent randomized controlled study demonstrating significantly higher primary stability for implants placed in sites with under-preparation compared to those inserted following a conventional drilling sequence . Consequently, the implant specifications, macro-design, and osteotomy protocols should be tailored to the individual site-specific tissue characteristics . The implant designs investigated in this project were suitable for conventional loading protocols in both clinical scenarios, with BL implants consistently demonstrating higher primary stability. However, selecting BLX implants may be advantageous in cases where anatomical restrictions in the apical region of the osteotomy favor the use of a tapered implant design. The findings of this study indicate that achieving primary stability compatible with immediate loading protocols during immediate implant placement was not predictable for either implant design. Therefore, this treatment protocol should be limited to carefully selected cases, with conventional loading recommended in situations where primary stability is uncertain. Primary implant stability is commonly assessed at the time of placement using insertion torque. However, this method is limited to a single-point measurement, as repeated assessments would disrupt the osseointegration process. RFA offers an alternative, allowing for non-invasive monitoring of implant stability post-placement by providing an Implant Stability Quotient score ranging from 1 to 100 . In this study, both final insertion torque and RFA values were recorded, and a positive, statistically significant correlation between the two was observed. This finding aligns with prior research from both in vitro and clinical studies, which also report a positive correlation between final insertion torque and RFA values . These results support RFA as a reliable tool for evaluating primary implant stability, particularly when compared to insertion torque. However, caution is warranted in long-term monitoring, as conflicting evidence exists regarding the relationship between RFA measurements, marginal bone loss, and other clinical parameters . Several limitations of this study should be acknowledged.
First, as an in vitro study, the generalizability of the results is limited, and caution is needed when extrapolating them to clinical scenarios. The acrylic models used mimic the D2 density of human cortico-spongious bone but do not fully replicate the clinical environment, with its complexities at a specific location in the alveolar ridge and the variety of different sites throughout the maxilla and mandible. Furthermore, anatomical limitations, such as limited vertical and horizontal bone, can occur in clinical situations and were not considered in this study. These could require bone augmentation procedures or the selection of narrower implant diameters and shorter implants, potentially leading to reduced primary implant stability. Second, this study compared two implants with multiple differing macro-design features, potentially obscuring the individual effects of each feature and making it difficult to attribute the outcomes to a specific design characteristic. Additionally, adjustments to implant specifications, such as using longer implants for enhanced apical engagement or wider implants for increased lateral bone engagement, would influence the primary stability. Third, only one bone density and one drilling protocol were examined, leaving the influence of other factors unclear. Future studies should explore a broader range of bone densities and include different alveolar ridge morphologies with horizontal and vertical bone defects and their impact on primary implant stability. Further, different surgical techniques should be considered with regard to the potential of reaching the threshold for immediate implant loading in immediate placement procedures. Additionally, investigating implants with singularly distinct macro-design features would provide more clarity. Clinical validation is needed to overcome the limitations of generalizability and is recommended to assess osseointegration and secondary implant stability over time during follow-up periods.
Within the limitations of this in vitro study, it can be concluded that:
(1) Implants inserted in healed alveolar ridges show higher final insertion torques and RFA values as compared to fresh extraction sockets.
(2) BL implants were found to have higher final insertion torques and RFA values compared to BLX implants in both simulated clinical scenarios.
(3) RFA was shown to be a reliable and repeatable method to assess primary implant stability, as compared to the insertion torque values.
null | 3c401157-cf43-4808-bdcd-c776e2a0c9f6 | 9572931 | Pharmacology[mh] | Elsholtzia ciliata (Thunb.) Hyland belongs to the genus Elsholtzia , family Lamiaceae. In the clinical application of traditional Chinese medicine, the aerial parts of Mosla chinensis Maxim (MCM) and Mosla chinensis Maxim cv. Jiangxiangru (JXR) are used as E. ciliata . MCM is mostly wild, and JXR is the cultivated product of MCM, which in the past was often confused with Elsholtzia splendens Nakai ex F. Maek. . However, Ganpei Zhu believes that JXR shows obvious morphological differences from MCM. The plant height of JXR can reach 25–66 cm. The stem has gray, white curly pubescence. The leaf blade is broadly lanceolate to lanceolate, and the leaf margin is obviously serrate. The bracts are obovate and ovate. Calyx lobes are triangular-lanceolate in shape. There is a hair ring at the base of the crown tube. Nutlets are yellowish brown and nearly round, with a lightly carved surface, reticulate and flattened inside. MCM plants are shorter. Stem inversely pilose. Leaf blade linear to linear-lanceolate, leaf margin serrate, inconspicuous. Bracts ovate-orbicular. Calyx lobes subulate. There is no hairy ring at the base of the crown. Nutlets are nearly spherical, brown, with deep carving on the surface, and uneven in the mesh . Therefore, JXR should be listed as an independent variety . E. ciliata is a herbaceous plant distributed in Russia (Siberia), Mongolia, Korea, Japan, India, the Indochina peninsula, and China, and it has also been introduced and cultivated in Europe and North America. In China, it is produced almost all over the country, except Xinjiang and Qinghai. It has low requirements for its growth environment and a short growth cycle, with a flowering period from July to October and harvest in summer and autumn . Traditional Chinese medicine theory holds that E. ciliata has a spicy flavour and a lukewarm nature. It also has the effect of inducing diaphoresis and relieving the superficies, removing dampness to regulate the stomach, and inducing diuresis to remove edema. The following is a review of its chemical compositions and pharmacological activities. A total of 352 compounds have been identified from E. ciliata . Among the chemical components of E. ciliata , flavonoids and terpenoids are the main components, which give E. ciliata its pronounced antimicrobial, anti-inflammatory, and antioxidant effects. Terpenoids such as 3-carene and some aromatic compounds such as carvacrol exhibit antimicrobial activity. Some polysaccharides can inhibit the proliferation of tumor cells and show positive effects in immunoregulation. Compounds 1 – 48 are flavonoids, 49 – 77 are phenylpropanoids, 78 – 193 are terpenoids, 194 – 202 are alkaloid compounds, and 203 – 352 are other compounds. Compounds 1 – 352 are listed in and the structures 1 – 352 are listed in . In the traditional application of Chinese medicine, E. ciliata is mainly used for the treatment of summer cold, cold aversion and fever, headache without sweat, abdominal pain, vomiting and diarrhea, edema, and poor urination. Modern pharmacological studies show that E. ciliata has antioxidant, anti-inflammatory, antimicrobial, insecticidal, antiviral, hypolipidemic, hypoglycemic, analgesic, antiarrhythmic, antitumor, anti-acetylcholinesterase, and immunoregulatory activities.
3.1. Antioxidant Activity
Oxidative stress refers to a state of imbalance between oxidation and antioxidant effects in vivo.
It is a negative effect caused by free radicals in the body and is considered to be an important factor in aging and disease. It was reported that the essential oil of E. ciliata could increase catalase (CAT) activity in the brain of mice by 26.94%, which may be related to the decomposition of hydrogen peroxide by CAT to reduce oxidative stress . The phenolic substance osmundacetone is present in E. ciliata ethanol extract. In a DPPH experiment, the IC50 value of osmundacetone was 7.88 ± 0.02 µM, indicating a certain antioxidant capacity. The inhibitory effect of osmundacetone on glutamate-induced oxidative stress in HT22 cells was studied by the reactive oxygen species (ROS) method. The results showed that osmundacetone significantly reduced the accumulation of ROS and could be used as a potential antioxidant . By studying the effect of E. ciliata methanol extract on J774A.1 murine macrophages, the evaluation of antioxidant activity showed that all the tested compounds had significant effects on ROS release under oxidative stress at the highest concentration (10 µM), especially luteolin-7- O -β-D-glucopyranoside, luteolin, and 5,6,4’-trihydroxy-7,3’-dimethoxyflavone . Various scholars have studied extracts of E. ciliata of different polarities. According to the free radical scavenging experiment of Huynh Xuan Phong, E. ciliata extract had a certain scavenging ability against 2,2-diphenyl-1-picrylhydrazyl (DPPH) and 2,2’-azino-bis(3-ethylbenzothiazoline-6-sulfonic acid) (ABTS), with IC50 values of 495.80 ± 17.16 and 73.59 ± 3.18 mg/mL, respectively . In a DPPH experiment, the EC50 values of the dichloromethane extract, crude ethanol extract and n-hexane extract were 0.041 µg/µg, 0.15 µg/µg and 0.46 µg/µg, respectively, showing strong antioxidant activity. Such antioxidant capacity may be related to the non-polar flavonoids and phenols contained in E. ciliata ; the dichloromethane fraction had a total phenol content of 96.68 ± 0.0010 µg GAEs/mg extract and a total flavonoid content of 71.5 ± 0.0089 µg QEs/mg extract, and therefore showed the strongest antioxidant capacity . Jing-en Li et al. partitioned JXR ethanol extract with petroleum ether, ethyl acetate and water-saturated n-butanol, respectively, and studied the antioxidant activities of the three fractions and the aqueous phase. The results indicated that the ethyl acetate fraction showed good antioxidant activity in ferric reducing antioxidant power (FRAP), DPPH, and β-carotene assays, which may be related to the higher flavonoid content of this extract . The antioxidant ability of mytilus polysaccharide-I (MP-I) contained in JXR water extract was concentration-dependent. When the concentration was 16 mg/mL, the chelation rate of MP-I with Fe2+ was 87.80%; when the concentration was 20 mg/mL, the scavenging rate of DPPH free radicals was 81.32% and the scavenging rate of hydroxyl radicals was 81.94% . The DPPH test IC50 values of MCM essential oil and methanol extract were 1230.4 ± 12.5 and 1482.5 ± 10.9 μg/mL, respectively; the reducing power test EC50 values were 105.1 ± 0.9 and 313.5 ± 2.5 μg/mL, respectively; and the β-carotene bleaching assay EC50 values were 588.2 ± 4.2 and 789.4 ± 1.3 μg/mL, respectively. The total phenolic content of the essential oil was about 1.7 times that of the methanol extract, which further verified the stronger antioxidant capacity of the essential oil . Different parts of E. ciliata have different antioxidant capacities. Lauryna Pudziuvelyte et al.
used DPPH, ABTS, FRAP, and cupric ion reducing antioxidant capacity (CUPRAC) assays to evaluate the antioxidant activity of different parts of E. ciliata . The DPPH and ABTS results showed that the ethanol extracts of E. ciliata flower, leaf and whole plant had the highest total phenolics content (TPC) and total flavonoids content (TFC) and the strongest antioxidant activity. The results of the FRAP and CUPRAC tests showed that the ethanol extract of E. ciliata flower had the highest antioxidant activity. Among the ethanol extracts of the different parts, the stem extract had the lowest content of quercetin glycosides, phenolic acids, TPC, and TFC, and the lowest antioxidant activity . The ethyl acetate fraction of E. ciliata was purified by macroporous resin with 80% ethanol to obtain fraction E. In the DPPH experiment, the EC50 value of fraction E was 0.09 mg/mL, showing the strongest antioxidant and free radical scavenging ability. The EC50 value of fraction E was lower than those of the positive controls butylated hydroxytoluene (0.45), butylated hydroxyanisole (0.21), and vitamin C (0.41), indicating stronger activity. Hence, it can be seen that E. ciliata has the potential to prevent cardiovascular diseases, cancer, and other diseases caused by excess free radicals .
3.2. Anti-Inflammatory Activity
The compounds pedalin, luteolin-7- O -β-D-glucopyranoside, 5-hydroxy-6,7-dimethoxyflavone, and α-linolenic acid from E. ciliata were investigated in a lipopolysaccharide (LPS)-induced inflammatory reaction. They can inhibit ROS release, but the mechanism deserves further study . LPS-induced inflammation was evaluated by the levels of inflammatory mediators, i.e., tumor necrosis factor-α (TNF-α), interleukin (IL)-6, and prostaglandin E2 (PGE2). E. ciliata ethanol extract could significantly inhibit the secretion of inflammatory mediators: TNF-α and IL-6 could be effectively inhibited by the stem and flower parts, and the PGE2 pathway could be inhibited by the leaf part . The effect of E. ciliata on inflammation was further verified by studying LPS-induced pyretic rats and LPS-stimulated RAW264.7 mononuclear macrophages. E. ciliata essential oil and water decoction reduced the contents of PGE2, TNF-α and other inflammatory factors to different degrees, and reduced the content of nitric oxide (NO) in serum . Excessive NO can induce the production of pro-inflammatory factors, such as TGF-α and IL-1β, and aggravate the inflammatory response . JXR alleviates dextran sulfate sodium-induced colonic inflammation in mice by affecting the release of NO, PGE2 and other inflammatory mediators and cytokines . Carvacrol in MCM can inhibit the expression of the pro-inflammatory cytokines interferon-γ (IFN-γ), IL-6, and IL-17 and up-regulate the expression of the anti-inflammatory factors TGF-β, IL-4, and IL-10, thus reducing the level of inflammatory factors, reducing the damage to cells, and achieving anti-inflammatory effects . In the formalin-induced licking response test, at a dose of 100 mg/kg, E. ciliata crude ethanol extract and dichloromethane extract shortened the licking time in the late phase, and the n-hexane extract shortened the licking time in the early phase, which may be related to an anti-inflammatory effect . Water extract of E. ciliata has anti-allergic inflammatory activity, which may be related to the inhibition of calcium, p38 mitogen-activated protein kinase, and nuclear factor-κB expression in the human mast cell line .
3.3. Antimicrobial Activity
Different polar extracts of E. ciliata demonstrated significant differences in their inhibitory ability against microorganisms. The results showed that the dichloromethane fraction had the strongest inhibitory activity against Candida albicans , with a minimum inhibitory concentration (MIC) of 62.5 µg/mL, while the n-hexane fraction had the strongest inhibitory effect on Escherichia coli , with a MIC of 250 µg/mL . The ethyl acetate extract of JXR had a strong inhibitory effect on Rhizopus oryzae , with an inhibition zone diameter of 13.7 ± 2.7 mm, a MIC of 5 mg/mL and a minimum bactericidal concentration (MBC) of 5 mg/mL . The MICs of JXR petroleum ether extract, n-butanol extract and ethanol extract against Escherichia coli , Staphylococcus aureus and Bacillus subtilis were 31.25 μg/mL, and the MIC of the ethyl acetate extract was 15.60 μg/mL . The carbon dioxide extract of E. ciliata demonstrated a certain inhibitory effect on Staphylococcus aureus , Salmonella paratyphi , and other microorganisms. When the concentration of the extract was 0.10 g/mL, the inhibitory effect on Staphylococcus aureus was the most obvious, and the diameter of the inhibition zone was 19.7 ± 0.1 mm . According to existing research reports, E. ciliata is rich in essential oil, which contains abundant antibacterial ingredients and can inhibit a variety of microorganisms, so it has research significance and value. The main antibacterial active components of the essential oil of E. ciliata are thymol, carvacrol, and p-cymene, which have inhibitory effects on Staphylococcus aureus , methicillin-resistant Staphylococcus aureus and Escherichia coli . The MICs were 0.39 mg/mL, 3.12 mg/mL and 1.56 mg/mL, and the diameters of the inhibition zones were 21.9 ± 0.1230, 18.2 ± 0.0560, and 16.7 ± 0.0115 mm, respectively . The essential oils from E. ciliata flowers, stems, and leaves had inhibitory effects on Escherichia coli , Staphylococcus aureus , Salmonella typhi , Klebsiella pneumoniae, and Pseudomonas aeruginosa. Both had the strongest inhibitory effect on Staphylococcus aureus , with inhibition zone diameters of 12.2 ± 0.4 and 11.2 ± 0.1 mm, respectively . Other relevant findings suggest that JXR essential oil may affect the formation of Staphylococcus aureus biofilm, thereby achieving a bacteriostatic effect on its growth. The MIC of JXR essential oil against Staphylococcus aureus was 0.250 mg/mL. At a concentration of 4 × MIC, the inhibition rate of biofilm formation by Staphylococcus aureus reached 91.3%, and the biofilm clearance rate was 78.5%. The MICs of carvacrol, thymol, and carvacryl acetate against Staphylococcus aureus were 0.122, 0.245, and 0.195 mg/mL, respectively; these are the effective antibacterial components of the essential oil. Carvacrol, carvacryl acetate, α-cardene, and 3-carene had strong inhibitory effects on the formation of Staphylococcus aureus biofilm, with inhibition rates of more than 80% at 1/4 MIC (0.0305, 1.4580, 0.1267 and 2.5975 mg/mL, respectively) . In another study, Li Cao et al. studied the inhibitory effect of MCM essential oil on 17 kinds of microorganisms, among which it significantly inhibited Chaetomium globosum , Aspergillus fumigatus and Candida rugosa . The antibacterial zone diameters were 16.3 ± 0.58, 15.0 ± 1.00, and 16.0 ± 0.00 mm, and the MICs were 31.3, 62.5, and 62.5 μg/mL, respectively . It also has an obvious inhibitory effect on Bacillus subtilis and Salmonella enteritidis , which might be related to the terpenes contained, but this opinion remains to be verified .
Thymol and carvacrol are the main antibacterial components of MCM. Caryophyllene oxide can be used in the treatment of dermatomycosis, especially in the short-term treatment of mycosis ungualis . The bactericidal mechanism of the essential oil may be due to the fact that active components such as carvacrol can damage cell membranes and alter their permeability . The extract of MCM had a significant inhibitory effect on the spore germination of Aspergillus flavus and could significantly change the morphology of Aspergillus flavus mycelia, foot cells, and conidiophores, with a MIC of 0.15 mg/mL . The germination rate of Penicillium digitatum treated with carvacrol decreased significantly; the mechanism may be that carvacrol changes the surface morphology of the mycelia, and the cavity rate of the mycelia increased with increasing carvacrol concentration. The permeability of the fungal cell membrane increases, causing an electrolyte imbalance; as a result, the sugar content and nutrients in the cells are reduced, thereby achieving inhibition. The MIC and MBC of carvacrol against Penicillium digitatum were 0.125 and 0.25 mg/mL, respectively .
3.4. Insecticidal Activity
Some studies have shown that E. ciliata has an insecticidal effect. The repellency rate of E. ciliata essential oil against Blattella germanica was 64.50%, with no significant difference from the positive control diethyltoluamide (DEET) (p > 0.05). The RD50 of E. ciliata essential oil was 218.634 µg/cm², which was better than that of DEET (650.403 µg/cm²) . The contact toxicity IC50 of E. ciliata essential oil against Liposcelis bostrychophila was 145.5 μg/cm², and the fumigation toxicity IC50 was 475.2 mg/L. (R)-carvone, dehydroelsholtzia ketone and elsholtzia ketone are the active components of E. ciliata essential oil against Liposcelis bostrychophila . Their contact toxicity IC50 values were 57.0, 151.5, and 194.1 μg/cm², and their fumigation toxicity IC50 values were 417.4, 658.2, and 547.3 mg/L, respectively . Carvone and limonene are the two main components of E. ciliata essential oil. The activity of E. ciliata essential oil, carvone, and limonene against Tribolium castaneum larvae and adults was evaluated by a contact toxicity test and a fumigation assay. The contact toxicity test showed that the LD50 values of E. ciliata essential oil, carvone, and limonene against Tribolium castaneum adults were 7.79, 5.08, and 38.57 mg/adult, respectively, and 24.87, 33.03, and 49.68 mg/larva against Tribolium castaneum larvae. The results of the fumigation toxicity test showed that the LC50 values for Tribolium castaneum adults were 11.61, 4.34, and 5.52 mg/L air, respectively, and the LC50 values for Tribolium castaneum larvae were 8.73, 28.71, and 20.64 mg/L air, respectively . Thymol, carvacrol, and β-thymol contained in JXR essential oil had significant fumigation toxicity against Mythimna separata , Myzus persicae , Sitophilus zeamais , Musca domestica , and Tetranychus cinnabarinus , among which β-thymol had the strongest activity. The IC50 values for the five pests were 10.56 (9.26–12.73), 14.13 (11.84–16.59), 88.22 (78.53–99.18), 10.05 (8.63–11.46), and 7.53 (6.53–8.79) μL/L air, respectively . Determined by the immersion method, the LC50 values of MCM essential oil against fourth-instar Aedes albopictus larvae and pupae were 78.820 and 122.656 μg/mL, respectively. The repellent activity of MCM essential oil was evaluated by the effective protection time on locally coated human skin. When the dose was 1.5 mg/cm², the complete protection time against Aedes albopictus was 2.330 ± 0.167 h .
From this point of view, E. ciliata essential oil has development potential as a natural insect-repellent agent and provides a basis for the development and utilization of pesticide dosage forms. Leishmania mexicana can cause cutaneous leishmaniasis. E. ciliata essential oil had anti-leishmanial activity, with an IC50 of 8.49 ± 0.32 nL/mL; after treatment, the survival rate of Leishmania mexicana mexicana was 0.38 ± 0.00%. The selectivity indices were 5.58 and 1.56 for the mammalian cell lines WI38 and J774, respectively. This provides a reference for the treatment of cutaneous leishmaniasis . E. ciliata water extract has an obvious anti- Trichomonas vaginalis effect, i.e., it can destroy the structure of the parasite and thereby kill it. The results of in vitro experiments showed that the lowest effective concentration of E. ciliata water extract was 62.5 mg/mL, and the shortest effective time was 12 h. When the concentration was 250 mg/mL, all Trichomonas vaginalis could be killed within 4 h. This experiment provides a new idea for the clinical treatment of vaginal trichomoniasis .
3.5. Antiviral Activity
T helper 17 (Th17) cells play an important role in maintaining adaptive immune balance, and an excess of Th17 cells can cause inflammation. Carvacrol plays an anti-influenza role by reducing the proportion of Th17 cells, which is significantly increased by influenza A virus infection. It can be used as a potential antiviral drug and can also be used to control the inflammation caused by influenza A virus infection . Mice with viral pneumonia modeled by A/PR/8/34 (H1N1) virus were treated with low, medium and high doses of MCM total flavonoids. The lung indices of the three dose groups were 12.81 ± 3.80, 11.65 ± 2.58, and 11.45 ± 2.40 mg/g, respectively; compared with the infection group (16.05 ± 3.87 mg/g), the inhibition rates were 20.18%, 27.41%, and 28.66%, respectively . E. ciliata ethanol extract has an inhibitory effect on the proliferation of avian infectious bronchitis virus, which may be related to the increased expression of three antiviral genes, suppressor of cytokine signaling 3 (SOCS3), 2′-5′-oligoadenylate synthetase-like (OASL), and signal transducer and activator of transcription 1 (STAT1), in H1299 cells treated with the extract; this inhibitory effect shows a certain concentration dependence. In addition, the extract had no cytotoxicity at concentrations below 0.3 g/mL . The above experiments provide new possibilities for the treatment of inflammation caused by viruses. A/WSN/33/2009 (H1N1) virus was used to infect Madin-Darby canine kidney cells to explore the antiviral activity of phenolic acids from MCM in vitro. The survival rates of the cells treated with the compounds 3-(3,4-dihydroxyphenyl) acrylic acid 1-(3,4-dihydroxyphenyl)-2-methoxycarbonylethyl and methyl lithospermate were higher than 80%, and their inhibition rates of the virus at 100 μmol/L were 89.28% and 98.61%, respectively . In another study, the lung indices of mice infected with A/PR8 influenza virus and treated with low, medium, and high doses of MCM water extract were 1.21 ± 0.22%, 1.12 ± 0.17%, and 0.94 ± 0.21%, respectively. Compared to the virus-infected group (1.80 ± 0.29%), the inhibition rates were 32.78%, 37.78% and 47.78%, respectively. The extracts of the three dose groups can increase the amounts of IL-2 and IFN-γ in the serum of mice and promote the antiviral ability of the body directly or indirectly . Fluoranthene is a compound with antiviral activity extracted from E. ciliata .
Fluoranthene is a compound with antiviral activity extracted from E. ciliata . It has a certain inhibitory effect on two enveloped viruses, Sindbis virus and murine cytomegalovirus, with lowest effective concentrations of 0.01 and 1.0 μg/mL, respectively. However, its biological effects are complex, and its clinical safety and effectiveness need further research .

3.6. Hypolipidemic Activity

The hypolipidemic activity of E. ciliata ethanol extract was evaluated by determining its effects on the serum triglyceride and total cholesterol contents of mice in vivo and on the proliferation of 3T3-L1 preadipocytes in vitro. The results showed that serum triglyceride and total cholesterol levels were decreased in mice treated with the extract, and the differentiation and lipid accumulation of 3T3-L1 preadipocytes were also effectively inhibited. The levels of genes associated with adipogenesis, such as peroxisome proliferator-activated receptor γ (PPARγ), fatty acid synthase (FAS), and adipocyte fatty acid-binding protein 2 (aP2), were also significantly reduced. In addition, the serum leptin content in the E. ciliata ethanol extract treatment group was lower than that in untreated obese mice, which may be due to the reduction in fat content. By this token, E. ciliata may lower blood lipids by inhibiting the expression of genes related to adipocyte formation; however, the specific mechanism needs further study .

3.7. Antitumor Activity

Pudziuvelyte, L. et al. extracted essential oil from E. ciliata fresh herbs, lyophilized herbs, and dried herbs, respectively. In in vitro experiments, the three essential oils significantly inhibited the proliferation of human glioblastoma (U87), pancreatic cancer (PANC-1), and triple-negative breast cancer (MDA-MB231) cells, with EC50 values ranging from 0.017% to 0.021%. However, E. ciliata ethanol extract did not show cytotoxicity in this experiment . The antitumor activity of the origin processing integration technology and the traditional cutting processing technology of E. ciliata was evaluated by measuring the effect of the decoction and essential oil on the average optical density of TNF-α in rat lung tissue. The average optical densities for the water-decocted solution and essential oil of traditionally cut E. ciliata were 0.530 ± 0.071 and 0.412 ± 0.038, respectively, and those for the origin processing integration technology were 0.459 ± 0.051 and 0.459 ± 0.051, respectively; compared with the blank group (0.299 ± 0.028), these represent increases of varying degrees . In vitro experiments with the JXR pectin polysaccharide MP-A40 showed that the proliferation of the human leukemia cell line K562 was affected by MP-A40; at a concentration of 500 μg/mL, the inhibition rate was 31.32% .

3.8. Immunoregulatory Activity

Macrophages can regulate apoptosis by producing NO and other effector molecules. Macrophage RAW 264.7 cells treated with the JXR pectin polysaccharide MP-A40 showed a marked, concentration-dependent increase in NO production; even at a concentration as low as 10 μg/mL, NO production was still 15 times that of the negative control . Mice treated with cyclophosphamide had elevated free-radical levels, increased damage to immune organs, and decreased thymus and spleen indices. The polysaccharide MP can scavenge free radicals and promote the proliferation of ConA-induced T cells and LPS-induced B cells; to a certain extent, it can alleviate the immunosuppression induced by cyclophosphamide .
However, the potential immunomodulatory mechanism of the polysaccharide remains to be further studied.

3.9. Others

Different polar ethanol extracts of JXR inhibited α-glucosidase activity to different degrees; JXR therefore has a certain hypoglycemic activity. At an extract concentration of 4.0 mg/mL, the inhibition rate of the petroleum ether extract was 93.8% (IC50 0.339 mg/mL), and that of the ethyl acetate extract was 92.8% (IC50 0.454 mg/mL). The essential oils prepared by steam distillation, petroleum ether cold extraction, and petroleum ether reflux extraction also showed significant inhibition of α-glucosidase at a concentration of 0.25 mg/mL, with inhibition rates above 90% . The results of the formalin-induced licking test showed that E. ciliata crude ethanol extract has an analgesic effect in the early phase of the reaction (0–5 min) . The Langendorff-perfused isolated rabbit heart model was also used: when E. ciliata essential oil was added to the perfusate at increasing concentrations in the range 0.01–0.1 μL/mL, the QRS interval increased, the QT interval shortened, the action potential upstroke amplitude decreased, and the activation time was prolonged, all in a concentration-dependent manner. This may be because sodium channel block can raise the threshold for action potential generation, prolong the effective refractory period, and inhibit the phase-0 depolarization underlying delayed afterdepolarizations, while the reduction of action potential duration can reduce the occurrence of early afterdepolarizations. This experiment provides a theoretical basis for the use of E. ciliata in the treatment of arrhythmia . 7-O-(6-O-acetyl)-β-D-glucopyranosyl-(1→2)[(4-O-acetyl)-α-L-rhamnopyranosyl-(1→6)]-β-D-glucopyranoside in the methanol extract of E. ciliata was hydrolyzed to obtain acacetin. The IC50 of acacetin against acetylcholinesterase was 50.33 ± 0.87 μg/mL, a significant inhibitory effect that may hold promise for Alzheimer's disease treatment . Oxidative stress refers to a state of imbalance between oxidation and antioxidant defenses in vivo; it is a negative effect caused by free radicals in the body and is considered an important factor in aging and disease. It was reported that the essential oil of E. ciliata could increase catalase (CAT) activity in the brains of mice by 26.94%, which may be related to the decomposition of hydrogen peroxide by CAT to reduce oxidative stress . E. ciliata ethanol extract contains the phenolic substance osmundacetone. In the DPPH assay, the IC50 value of osmundacetone was 7.88 ± 0.02 µM, indicating a certain antioxidant capacity. The inhibitory effect of osmundacetone on glutamate-induced oxidative stress in HT22 cells was studied using a reactive oxygen species (ROS) assay; the results showed that osmundacetone significantly reduced the accumulation of ROS and could be used as a potential antioxidant . In a study of the effect of E. ciliata methanol extract on J774A.1 murine macrophages, the evaluation of antioxidant activity showed that all the tested compounds significantly reduced ROS release under oxidative stress at the highest concentration (10 µM), especially luteolin-7-O-β-D-glucopyranoside, luteolin, and 5,6,4'-trihydroxy-7,3'-dimethoxyflavone . Various scholars have studied E. ciliata extracts of different polarities.
In the free-radical scavenging experiments of Huynh Xuan Phong, E. ciliata extract showed a certain scavenging ability against 2,2-diphenyl-1-picrylhydrazyl (DPPH) and 2,2'-azino-bis(3-ethylbenzothiazoline-6-sulfonic acid) (ABTS) radicals, with IC50 values of 495.80 ± 17.16 and 73.59 ± 3.18 mg/mL, respectively . In a DPPH experiment, the EC50 values of the dichloromethane extract, crude ethanol extract, and n-hexane extract were 0.041, 0.15, and 0.46 µg/µg, respectively, indicating strong antioxidant activity. Such antioxidant capacity may be related to the non-polar flavonoids and phenols contained in E. ciliata ; the dichloromethane extract had a total phenol content of 96.68 ± 0.0010 µg GAEs/mg extract and a total flavonoid content of 71.5 ± 0.0089 µg QEs/mg extract, and it therefore showed the strongest antioxidant capacity . Jing-en Li et al. partitioned JXR ethanol extract with petroleum ether, ethyl acetate, and water-saturated n-butanol, respectively, and studied the antioxidant activities of the three fractions and the aqueous phase. The results indicated that the ethyl acetate fraction showed good antioxidant activity in the ferric reducing antioxidant power (FRAP), DPPH, and β-carotene assays, which may be related to the higher flavonoid content of this fraction . The antioxidant ability of mytilus polysaccharide-I (MP-I) contained in JXR water extract was concentration-dependent: at 16 mg/mL, the chelation rate of MP-I with Fe2+ was 87.80%, and at 20 mg/mL, the scavenging rates of DPPH radicals and hydroxyl radicals were 81.32% and 81.94%, respectively . The DPPH IC50 values of MCM essential oil and methanol extract were 1230.4 ± 12.5 and 1482.5 ± 10.9 μg/mL, respectively; the reducing power EC50 values were 105.1 ± 0.9 and 313.5 ± 2.5 μg/mL, respectively; and the β-carotene bleaching assay EC50 values were 588.2 ± 4.2 and 789.4 ± 1.3 μg/mL, respectively. The total phenolic content of the essential oil was about 1.7 times that of the methanol extract, which further verified the stronger antioxidant capacity of the essential oil . Different parts of E. ciliata have different antioxidant capacities. Lauryna Pudziuvelyte et al. used DPPH, ABTS, FRAP, and cupric ion reducing antioxidant capacity (CUPRAC) assays to evaluate the antioxidant activity of different parts of E. ciliata . The DPPH and ABTS results showed that the ethanol extracts of E. ciliata flowers, leaves, and the whole plant had the highest total phenolic content (TPC) and total flavonoid content (TFC) and the strongest antioxidant activity, while the FRAP and CUPRAC tests showed that the ethanol extract of E. ciliata flowers had the highest antioxidant activity. Among the ethanol extracts of the different parts, the stem extract had the lowest contents of quercetin glycosides, phenolic acids, TPC, and TFC, and the lowest antioxidant activity . The ethyl acetate fraction of E. ciliata was purified on macroporous resin with 80% ethanol to obtain fraction E. In the DPPH experiment, fraction E had an EC50 value of 0.09 mg/mL, showing the strongest antioxidant and free-radical scavenging ability; its antioxidant capacity exceeded that of the positive controls butylated hydroxytoluene (EC50 0.45 mg/mL), butylated hydroxyanisole (0.21 mg/mL), and vitamin C (0.41 mg/mL). Hence, E. ciliata has the potential to prevent cardiovascular diseases, cancer, and other diseases caused by excess free radicals .
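The IC50/EC50 values quoted throughout this antioxidant section come from dose-response curves. As an illustration only, with made-up scavenging data rather than values from any cited study, an IC50 can be estimated by interpolating on a log-concentration scale:

```python
import numpy as np

def ic50_log_interp(conc, scavenging_pct):
    """Estimate the concentration giving 50% scavenging by linear
    interpolation of the response against log10(concentration).
    Both arrays must be sorted by increasing concentration."""
    return 10 ** np.interp(50.0, scavenging_pct, np.log10(conc))

# Hypothetical DPPH dose-response data (illustration only)
conc = np.array([10.0, 50.0, 100.0, 500.0, 1000.0])    # µg/mL
scavenging = np.array([12.0, 31.0, 46.0, 72.0, 88.0])  # % DPPH scavenged
print(f"IC50 ≈ {ic50_log_interp(conc, scavenging):.0f} µg/mL")
```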
The compounds pedalin, luteolin-7-O-β-D-glucopyranoside, 5-hydroxy-6,7-dimethoxyflavone, and α-linolenic acid in the essential oil of E. ciliata were investigated in a lipopolysaccharide (LPS)-induced inflammatory reaction; they can inhibit ROS release, but the mechanism deserves further study . LPS-induced inflammation was evaluated by the levels of inflammatory mediators, i.e., tumor necrosis factor-α (TNF-α), interleukin (IL)-6, and prostaglandin E2 (PGE2). E. ciliata ethanol extract significantly inhibited the secretion of these inflammatory mediators: the stem and flower extracts effectively inhibited TNF-α and IL-6, while the leaf extract inhibited the PGE2 pathway . The effect of E. ciliata on inflammation was further verified in LPS-induced pyretic rats and LPS-stimulated RAW264.7 mononuclear macrophages: E. ciliata essential oil and water decoction reduced the contents of PGE2, TNF-α, and other inflammatory factors to different degrees and reduced the content of nitric oxide (NO) in serum . Excessive NO can induce the production of pro-inflammatory factors, such as TGF-α and IL-1β, and aggravate the inflammatory response . JXR alleviates dextran sulfate sodium-induced colitis in mice by affecting the release of NO, PGE2, and other inflammatory mediators and cytokines . Carvacrol in MCM can inhibit the expression of the pro-inflammatory cytokines interferon-γ (IFN-γ), IL-6, and IL-17 and up-regulate the expression of the anti-inflammatory factors TGF-β, IL-4, and IL-10, thus reducing the level of inflammatory factors, reducing damage to cells, and achieving an anti-inflammatory effect . In the formalin-induced licking response test, E. ciliata crude ethanol extract and dichloromethane extract at a dose of 100 mg/kg shortened the licking time in the late phase, and the n-hexane extract at 100 mg/kg shortened the licking time in the early phase, which may be related to their anti-inflammatory effects . The water extract of E. ciliata has anti-allergic inflammatory activity, which may be related to the inhibition of calcium, p38 mitogen-activated protein kinase, and nuclear factor-κB expression in a human mast cell line . Different polar extracts of E. ciliata showed significant differences in their inhibitory activity against microorganisms: the dichloromethane fraction had the strongest inhibitory activity against Candida albicans , with a minimum inhibitory concentration (MIC) of 62.5 µg/mL, while the n-hexane fraction had the strongest inhibitory effect on Escherichia coli , with a MIC of 250 µg/mL . The ethyl acetate extract of JXR had a strong inhibitory effect on Rhizopus oryzae , with an inhibition zone diameter of 13.7 ± 2.7 mm, a MIC of 5 mg/mL, and a minimum bactericidal concentration (MBC) of 5 mg/mL . The MICs of JXR petroleum ether, n-butanol, and ethanol extracts against Escherichia coli , Staphylococcus aureus , and Bacillus subtilis were all 31.25 μg/mL, and the MIC of the ethyl acetate extract was 15.60 μg/mL . The carbon dioxide extract of E. ciliata showed a certain inhibitory effect on Staphylococcus aureus , Salmonella paratyphi , and other microorganisms; at an extract concentration of 0.10 g/mL, the inhibitory effect on Staphylococcus aureus was the most pronounced, with an inhibition zone diameter of 19.7 ± 0.1 mm . According to existing research reports, E. ciliata is rich in essential oil, which contains abundant antibacterial ingredients and can inhibit a variety of microorganisms, so it has research significance and value.
The main antibacterial active components of the essential oil of E. ciliata are thymol, carvacrol, and p-cymene, which have inhibitory effects on Staphylococcus aureus , methicillin-resistant Staphylococcus aureus , and Escherichia coli ; the MICs were 0.39, 3.12, and 1.56 mg/mL, and the inhibition zone diameters were 21.9 ± 0.1230, 18.2 ± 0.0560, and 16.7 ± 0.0115 mm, respectively . The essential oils from E. ciliata flowers and from stems and leaves had inhibitory effects on Escherichia coli , Staphylococcus aureus , Salmonella typhi , Klebsiella pneumoniae , and Pseudomonas aeruginosa ; both had the strongest inhibitory effect on Staphylococcus aureus , with inhibition zone diameters of 12.2 ± 0.4 and 11.2 ± 0.1 mm, respectively . Other relevant findings suggest that JXR essential oil may interfere with the formation of Staphylococcus aureus biofilm and thereby inhibit its growth. The MIC of JXR essential oil against Staphylococcus aureus was 0.250 mg/mL; at 4× MIC, the inhibition rate of the essential oil against Staphylococcus aureus biofilm formation reached 91.3%, and the biofilm clearance rate was 78.5%. The MICs of carvacrol, thymol, and carvacryl acetate against Staphylococcus aureus were 0.122, 0.245, and 0.195 mg/mL, respectively; these are the effective antibacterial components of the essential oil. Carvacrol, carvacryl acetate, α-cardene, and 3-carene had strong inhibitory effects on the formation of Staphylococcus aureus biofilm, with inhibition rates above 80% at 1/4 MIC (0.0305, 1.4580, 0.1267, and 2.5975 mg/mL, respectively) . In another study, Li Cao et al. studied the inhibitory effect of MCM essential oil on 17 kinds of microorganisms; it significantly inhibited Chaetomium globosum , Aspergillus fumigatus , and Candida rugosa , with inhibition zone diameters of 16.3 ± 0.58, 15.0 ± 1.00, and 16.0 ± 0.00 mm and MICs of 31.3, 62.5, and 62.5 μg/mL, respectively . It also has an obvious inhibitory effect on Bacillus subtilis and Salmonella enteritidis , which might be related to the terpenes contained, but this opinion remains to be verified .
This paper summarizes the pharmacological activities of E. ciliata ; antioxidant, anti-inflammatory, antimicrobial, and insecticidal activities are the main ones, but antiviral, hypolipidemic, hypoglycemic, and antitumor activities have also been reported. In addition, the 352 chemical constituents identified from E. ciliata were summarized; according to their structure types, they can be divided into flavonoids, phenylpropanoids, terpenoids, alkaloids, and other compounds. According to the existing in vivo and in vitro pharmacological results, E. ciliata dichloromethane extract, ethyl acetate extract, and essential oil all show good pharmacological activity, and the carvacrol contained in E. ciliata is the main antibacterial active ingredient. At present, research on the pharmacological activity of E. ciliata mainly focuses on the essential oil, with some studies involving the alcohol extract, water extract, and polysaccharides; there is relatively little research on pharmacological activities such as analgesia, immune regulation, hypoglycemia, and hypolipidemia. Whether E. ciliata has further pharmacological activities still needs to be confirmed by additional tests, and the safety of clinical doses also deserves extensive attention. Some representative mechanisms of action of E. ciliata are briefly illustrated here for reference; the possible processes are shown in the figure. The mitogen-activated protein kinase (MAPK) signaling pathway consists of a chain of three protein kinases, MAP3K–MAP2K–MAPK, which transmit upstream signals to downstream responsive molecules through sequential phosphorylation. MAPK includes four subfamilies: ERK, p38, JNK, and ERK5. MAPK activity is thought to be regulated by dual phosphorylation sites in the activation loop, which contains a characteristic threonine-X-tyrosine (T-X-Y) motif; phosphorylation of these two residues activates the MAPK pathway, while MAP kinase phosphatases (MKPs) can dephosphorylate them and inactivate the pathway. The extract inhibited the activation of the MAPK signaling pathway by blocking the phosphorylation of p38, JNK, and ERK . When stimulated, tissue cells release arachidonic acid (AA), and cyclooxygenase (COX) catalyzes the conversion of AA into a series of bioactive substances such as prostaglandins (PGs), causing inflammation. The extract can affect the COX-2 pathway by modulating the release of TNF-α, IL-6, and PGE2, which are key mediators released by macrophages during bacterial infection, thereby achieving an anti-inflammatory effect .
Carvacrol can significantly inhibit the mRNA expression of toll-like receptor 7 (TLR7), interleukin-1 receptor-associated kinase 4 (IRAK4), TNF receptor-associated factor 6 (TRAF6), interferon-β promoter stimulator 1 (IPS-1), and interferon regulatory factor 3 (IRF3) in mice, thereby affecting the TLR7/RLR immunomodulatory signaling pathways and exerting an anti-H1N1 influenza virus effect . As such studies deepen, the gradual clarification of the mechanisms of action will create the conditions for these agents to be used more effectively. Two representative mechanisms of action, the MAPK and COX-2 pathways, are shown in the corresponding figures. E. ciliata is an abundant resource with low requirements for its growth environment; it can be cultivated artificially and has a short growth cycle. Its rich essential oil content makes it possible for E. ciliata to be used as a flavor and food additive. Undoubtedly, the development of new dosage forms of E. ciliata and its application in medicine, food, and other fields will offer broad prospects in the future.
Hybrid thermosensitive-mucoadhesive

The cornea is a transparent ocular tissue at the front of the eye with both protective and refractive functions. Its normal structure and function can be adversely affected by many factors, such as trauma, surgery, and applied ocular drugs (Ljubimov & Saghizadeh, ); the ultimate outcome is corneal ulcers and corneal blindness if left untreated. The available treatment options are limited to ocular lubricants and antibiotics, with no effective drug therapy for promoting corneal wound healing. L-carnosine is a native dipeptide biosynthesized from β-alanine and L-histidine through carnosine synthase (Mendelson, ). L-carnosine is an antioxidant commonly found in human tissues, such as muscle and brain (Cao et al., ). Recent reports have highlighted its benefits for treating age-related ocular diseases, such as cataracts and corneal disorders, owing to three main favorable characteristics attributed to this dipeptide drug (Litwack, ): L-carnosine has antioxidant, metal-chelating, and antiglycation properties (Turner et al., ). These features protect aging ocular tissues from oxidative stress, glycation, and post-translational modification of structural (lens crystallins) as well as functional (enzymes) proteins in the human eye (Babizhayev et al., , ). Recent reports also highlight the role of L-carnosine as a potential anticancer agent (Gaafar et al., ; Turner et al., ); pegylated liquid crystalline nanoparticles loaded with L-carnosine have been investigated for superior antitumor activities compared to L-carnosine alone and L-carnosine phytosomes (Gaafar et al., ). L-carnosine has been reported to promote corneal wound healing without scar formation; these wound healing properties are attributed to repairing impaired metabolism in the cornea by protecting native corneal proteins from oxidative damage and modulating the inflammatory responses (Quinn et al., ). There are scarce reports on the development of eye formulations for L-carnosine; however, the more lipophilic prodrug derivative N-acetyl carnosine has been patented in the USA and is available as eye drops containing 1% N-acetyl carnosine (Can-C®). N-acetyl carnosine undergoes biotransformation into L-carnosine upon topical ocular administration (Babizhayev et al., ). Preformulation studies on the active form L-carnosine indicated that the drug has considerable chemical stability and a log P value of approximately 0.03; therefore, L-carnosine has balanced hydrophilicity-hydrophobicity attributes, and permeation through the lipophilic corneal barrier is likely to be the rate-determining step in its ocular absorption (Abdelkader et al., ). In situ gelling drug delivery systems (also called gel-forming systems) can offer several advantages over preformed gels: they are suitable for simple and scalable preparation, offer the convenience of being administered as free-flowing solution eye drops, and convert to a gel on the ocular surface. The in situ gels retain the drug at the selected superficial region and potentially reduce the frequency of administration (Cassano et al., ). In this respect, poloxamers have tissue penetration enhancing properties that could help hydrophilic drugs like L-carnosine access the corneal lipid barrier.
Poloxamer 407-based hydrogels have been investigated for their mucoadhesive properties, time-release behavior, and tissue tolerability (Zhang et al., ; Giuliano et al., ). Chloramphenicol (antibiotic) in situ gels for ocular delivery based on the combination of poloxamer 407 and hydroxypropyl methyl cellulose showed optimized viscosity, pH, and gelling capacity (Kurniawansyah et al., ). Poloxamers are a unique class of synthetic non-ionic polymers with surface-active properties, owing to an inner hydrophobic core of poly(propylene oxide) and outer hydrophilic chains of poly(ethylene oxide). Poloxamers have favorable physiological properties, such as thermal-dependent gelation and self-assembly, as well as acceptable biocompatibility, high drug-loading capacity, and tissue tolerability, which render poloxamer-based gels promising drug delivery systems (Zarrintaj et al., ; Carvalho et al., ). Poloxamers can be considered safe for both oral and dermal application, with an LD50 of approximately 5 g/kg; when applied once daily for 14 days, no skin erythema or sensitization was recorded (Carvalho et al., ). However, thermal gelation usually occurs at relatively high poloxamer concentrations (≥15% w/w) under physiological eye surface conditions, which might pose toxicological and irritation concerns for the ocular tissues. In addition, the onset of gelation of poloxamer alone varies from seconds to minutes, which can be long enough for significant drug loss (Fathalla et al., ). Such behavior might not suit the dynamic conditions on the surface of the eye arising from frequent blinking and the rapid turnover of tear fluid (Lang et al., ); retardation of the sol-to-gel transition is likely to lead to loss of the instilled dose through rapid dilution by precorneal tear fluid and reflex tearing. To the best of our knowledge, there are limited reports on hybrid gels comprising poloxamers combined with mucoadhesive polymers for optimized ocular drug delivery in terms of rheological, mechanical, mucoadhesive, and spreading properties, as well as overall ocular safety and efficacy. The goal of this work was to explore the possibility of combining poloxamer with other mucoadhesive polymers, such as chitosan (CS) and methylcellulose (MC), in an attempt to develop an in situ gelling formulation with optimized gelation time and temperature, mucoadhesive characteristics, and improved corneal wound healing properties. Gelation of poloxamer mainly relies on micelle packing and entanglement (Cabana et al., ), and the inclusion of drugs or additives has been reported to interfere with micelle formation and, subsequently, with the sol-to-gel transition temperature (Tsol-gel) (Edsman et al., ). This study therefore reports the effects of CS and MC on the Tsol-gel of the poloxamer 407 used. The in situ gelling poloxamer-based preparations loaded with L-carnosine (LC) were evaluated for their mechanical, rheological, spreading, and mucoadhesive properties, and ex vivo permeation studies were carried out to establish the possibility of using in situ gelling combinations of poloxamer-MC and poloxamer-CS as novel hybrid polymeric carrier systems for delivering LC to the ocular surface.
L-carnosine (LC), poloxamer 407 (P407, culture tested), high-molecular-weight chitosan (CS; Brookfield viscosity 800,000 cP), methyl cellulose (MC), porcine mucin, and benzalkonium chloride (BKC) were purchased from Sigma-Aldrich, UK.
Preparation of in situ gels

P407 solutions were prepared using the cold method. In brief, accurately weighed amounts of the polymers were added to cold aqueous solutions of CS or MC that were set at 4 °C (as shown in ). The polymer solutions were kept in a cold room for 24 h under constant stirring to ensure complete dissolution. CS was dissolved in acetic acid (1% v/v), and the final pH of the CS solution was raised to 5.5 using an aqueous solution of sodium hydroxide (1 M). For drug-loaded gels, LC was added and dissolved in the CS or MC solutions, and P407 was then dissolved to form the final LC (1% w/v)-loaded in situ gels.
Gelation time and gelation temperature

The time required for the onset of gelation and the subsequent sol-to-gel transition was termed the gelation time. This parameter was recorded using aluminum pans mounted on a hot plate prewarmed to 35 °C. Once the aluminum pan was hot enough, a few drops of the test formulation were placed on the pan using a micropipette, and the pan was tilted at a right angle (90°) to see whether the formulation had turned into a gel or was still liquid. The final gelation time is the point at which the instilled drops became thick and ceased moving upon tilting; a stopwatch was used to record this time. The same procedure was repeated for all the prepared in situ gelling formulations, and the results are presented as the average of triplicate samples ( n = 3). The temperature at which the sol-gel transition occurred was termed the gelation temperature (Tsol-gel) and was recorded using the visual tube inversion method. Each formulation was kept at fridge temperature (4–8 °C) and transferred into a glass test tube; a thermometer was placed in the test solution, which was left at ambient conditions. Once the solution reached room temperature, the test tube was transferred into a water bath at 25 ± 1 °C. The temperature was raised gradually at a rate of 1 °C/min, and the temperature at which gelation occurred (the surface remained immobile when the tube was tilted to the horizontal position) was recorded (Ur-Rehman et al., ).

Rheological characteristics

The viscosity of the developed in situ gels was determined at different rotational speeds (10–100 rpm) and constant temperature using a rotational viscometer (Brookfield DV-II, Essex, UK) equipped with spindle 62.

Texture analysis

Mechanical properties of the prepared in situ gels were studied using a TA-XT-plus Texture Analyzer (Stable Micro Systems, Surrey, UK) as previously reported (Fujimoto et al., ). Sample formulations (35 g each) were placed in 50-mL glass beakers. An analytical probe (1 cm diameter) was immersed twice in each gel sample at a predetermined rate of 1 mm/s to a depth of 10 mm, allowing a delay period of 10 s between immersions. The maximum force required to penetrate to that depth is the gel strength. Measurements were performed at two temperatures (4 and 35 °C). From the force-distance curve created by the Texture Exponent 32 software, the following texture parameters were estimated: gel strength (hardness) is the maximum force (mN) of the positive peak; cohesiveness is the area under the curve of the positive region (AUC1, mN·mm); and adhesiveness is the area under the curve of the negative region (AUC2, mN·mm), as shown in .

Spreading ability of L-carnosine-loaded formulations

Contact angle and spreading coefficient

The contact angle ( θ ) is the angle formed where the liquid-vapor interface meets the solid surface; it was determined experimentally using a drop shape analyzer (Kruss Drop Shape Analysis, Hamburg, Germany). Complete wetting of the solid surface is achieved when θ equals zero. Wetting in which a liquid spreads over the solid surface is known as spreading, and the tendency to spread can be assessed by determining the spreading coefficient ( S ) (Florence & Attwood, ):

S = γ(cos θ − 1) (1)

where S is the spreading coefficient, γ is the surface tension of the liquid placed onto the solid substrate, and θ is the contact angle. The γ values for the L-carnosine in situ gels were determined using a torsion balance (Malvern Wells, UK).
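Equation (1) is a one-line computation once γ and θ have been measured. A minimal sketch, with illustrative numbers only rather than measured values from this study:

```python
import math

def spreading_coefficient(gamma: float, theta_deg: float) -> float:
    """Spreading coefficient S = γ(cosθ − 1), returned in the units of γ.
    S is 0 at θ = 0 (complete wetting) and grows more negative as the
    contact angle, i.e. the resistance to spreading, increases."""
    return gamma * (math.cos(math.radians(theta_deg)) - 1)

print(spreading_coefficient(45.0, 30.0))  # ≈ -6.03 (e.g., mN/m)
print(spreading_coefficient(45.0, 0.0))   # 0.0 -> complete wetting
```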
Ocular irritation studies

HET-CAM

An in vitro ocular irritation assay based on the modified hen's egg chorioallantoic membrane (HET-CAM) test was adopted to investigate the conjunctival irritation of selected in situ gels (Abdelkader et al., ). Fertilized White Leghorn eggs were incubated at a temperature of 37.5 ± 0.5 °C and a relative humidity of 66 ± 5% for 3 days. After 3 days of incubation, the eggshells were opened by cracking, and the contents were poured into growing Petri dishes. The yolk sacs were examined for any visible rupture, and living embryos with an intact yolk sac were incubated further and used for the irritation assay. The following controls were employed for validation purposes: sodium hydroxide (1 M) as a positive control, propylene glycol as a mild-to-moderate irritant control, and saline as a negative control. Once a test formulation was placed on the CAM, a time-dependent numerical score was assigned for the signs of conjunctival irritation, namely hyperemia, hemorrhage, and clotting, as described before (Abdelkader et al., ).

Mucoadhesion studies

Mucoadhesion of selected in situ gels was studied using the Texture Analyzer (Stable Micro Systems, Surrey, UK) as previously described (Abdelkader et al., ). Specified amounts (0.25 g) of porcine mucin were compressed into 10 mm disks using an IR hydraulic press under a force of 10 tons for 30 s. The disks were fixed to the lower end of the Texture Analyzer probe (10 mm in diameter) using double-sided adhesive tape. A 25 g sample of each selected in situ gelling formulation was pre-equilibrated at 35 °C in a water bath. The probe carrying the mucin disk was gradually lowered onto the gel surface, and a force of 5 g was exerted for 3 min to ensure intimate contact between the mucin disk and the gel surface. The probe was then pulled up at a speed of 0.5 mm/s to a distance of 0.5 cm. The force (mN) needed to separate the disk from the gel was recorded as the force of adhesion, and a second mucoadhesion parameter, the work of adhesion (mN·mm), was estimated from the area under the force-distance curve (Xu et al., ).
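Both the texture parameters and the mucoadhesion parameters above are peak forces and areas under sampled force-distance curves. A minimal sketch of how such a trace could be reduced to those two numbers, assuming a hypothetical detachment trace rather than measurements from this study:

```python
import numpy as np

def peak_and_work(distance_mm, force_mN):
    """Return the peak force (force of adhesion or hardness, mN) and the
    trapezoidal area under the curve (work of adhesion, mN·mm)."""
    d = np.asarray(distance_mm, dtype=float)
    f = np.asarray(force_mN, dtype=float)
    return f.max(), np.trapz(f, d)

# Hypothetical detachment trace (illustration only)
d = np.linspace(0.0, 5.0, 11)  # probe travel, mm
f = np.array([0, 18, 35, 42, 38, 30, 21, 13, 7, 2, 0], dtype=float)  # mN
peak, work = peak_and_work(d, f)
print(f"force of adhesion ≈ {peak:.0f} mN, work of adhesion ≈ {work:.1f} mN·mm")
```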
Scanning electron microscopy

The surfaces of selected L-carnosine in situ gels (F-P3/CS0.5 and F-P3/CS1) were imaged using a Carl Zeiss EVO 50 SEM (Cambridge, UK) equipped with a tungsten source and operated at an acceleration voltage of 10 kV. The gel surfaces were sputter-coated with gold.

In vitro release

One-mL aliquots of selected in situ gel formulations were transferred into the donor compartment of Franz diffusion cells (Logan Instrument Corp., NJ, USA). The receptor compartment was filled with PBS (12 mL) under stirring, and a dialysis membrane (12–14 kDa molecular weight cut-off) separated the two compartments. The temperature was maintained at 35 ± 0.5 °C. The amount of LC released was quantified using an HPLC method published previously (Abdelkader et al., ). The HPLC system consisted of an isocratic mobile phase of trifluoroacetic acid (0.1% v/v):acetonitrile (98:2 v/v) at a flow rate of 1 mL/min, a Supelcosil C18 column (5 µm; 25 × 0.46 cm, Supelco Corporation, PA, USA) maintained at 40 °C, a UV detector set at 220 nm, and an injection volume of 30 µL. The cumulative release data were fitted to kinetic models (zero-order, first-order, and Higuchi diffusion models) to elucidate the drug release mechanisms from the selected gel formulations.

Transcorneal penetration studies

The transcorneal permeation studies were performed using Franz diffusion cells (Logan Instrument Corp., NJ, USA). Bovine eyes were collected from a local abattoir and were treated and dissected as previously described (Gaballa et al., ). The recipient compartment was filled with PBS (12 mL), and the donor compartment was filled with 1 mL of LC formulation. Two LC-loaded in situ gels (PCS12 and PMC9) were studied, with a drug solution (10 mg/mL) used as a control; 1 mL of each sample, equivalent to 10 mg/mL of LC, was transferred into the donor compartment. The diffusion-cell system was maintained at 35 ± 0.5 °C. The amount of LC permeated across the mounted cornea (surface area 1.77 cm²) was analyzed by the HPLC method described in the previous section. The cumulative amounts of LC permeated were plotted against time and corrected for surface area, and the apparent permeability coefficient (Papp) was estimated through:

Papp = F/(A × C0) (2)

where F is the flux, i.e., the slope of the plot of cumulative drug permeation vs. time, A is the surface area, and C0 is the initial drug concentration.

In vivo pharmacodynamic study (corneal ulcer induction and healing)

This study was approved by the Commission on the Ethics of Scientific Research under project code no. ES 10/2020, Faculty of Pharmacy, Minia University. Ten rabbits, weighing between 1.5 and 2.0 kg, were divided into two groups: group 1 received LC solution in their left eyes, and group 2 received F-P3/CS1 in their left eyes. The right eyes of both groups were left untreated and served as controls. Corneal ulcers were induced in both eyes using 70% ethyl alcohol by the alcohol delamination method as described before (Abdelkader et al., ). After induction of the corneal ulcers, a single drop of each test formulation was instilled every 12 h for 3 days. Percentage changes in ulcer size were determined using:

% Δ ulcer size = ((D1 − Dn)/D1) × 100 (3)

where D1 is the diameter of the ulcer at day 1, and Dn is the diameter of the ulcer at day 2 or day 3.

Statistical analysis

Flux, apparent permeability coefficients, and cumulative irritation scores are represented as mean values ± standard deviation (SD). Statistical analysis was performed using one-way ANOVA; p < .05 and p < .001 were considered statistically significant. Tukey's pair-wise comparison was conducted at a 95% confidence interval. Analyses were performed using GraphPad Software Version 3.05 (San Diego, CA, USA).
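Equations (2) and (3) likewise reduce to short calculations once the raw data are tabulated. A closing sketch with hypothetical permeation numbers, not results from this study: the flux is taken as the slope of a linear fit to the cumulative-permeation data, Papp follows from Equation (2), and Equation (3) gives the ulcer-size change (a similar regression against the square root of time would give the Higuchi fit mentioned above).

```python
import numpy as np

def apparent_permeability(t_h, cumulative_ug, area_cm2, c0_ug_per_ml):
    """Eq. (2): Papp = F / (A * C0). F (µg/h) is the regression slope of the
    cumulative amount permeated vs. time; with C0 in µg/mL (= µg/cm³),
    Papp comes out in cm/h."""
    flux = np.polyfit(t_h, cumulative_ug, 1)[0]  # µg/h
    return flux / (area_cm2 * c0_ug_per_ml)

def ulcer_size_change_pct(d_day1_mm, d_dayn_mm):
    """Eq. (3): percentage reduction in ulcer diameter relative to day 1."""
    return (d_day1_mm - d_dayn_mm) / d_day1_mm * 100

# Hypothetical transcorneal data (illustration only)
t = np.array([0.5, 1, 2, 3, 4, 5, 6])             # h
q = np.array([55, 110, 215, 330, 440, 548, 660])  # µg permeated
print(f"Papp ≈ {apparent_permeability(t, q, 1.77, 10_000):.2e} cm/h")
print(f"day-3 ulcer reduction ≈ {ulcer_size_change_pct(6.0, 2.5):.0f}%")
```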
The time required for the onset of gelation and subsequent transition from sol-to-gel was called gelation time. This parameter was recorded employing aluminum pans that were mounted on a hot plate prewarmed to 35 °C. Once the aluminum pan was hot enough, a few drops of the test formulations were placed on the hot pan using a micropipette. The aluminum pan was tilted at a right angle (90°) to see if the formulation has turned into the gel or still liquid. The final gelation time is the point at which the instilled formulation drops became thick and ceased moving upon tilting. A stopwatch was used to record the time of gelation. The same procedure was repeated for all the prepared in situ gelling formulations and the results were presented as the average of triplicate samples ( n = 3). The temperature at which the sol-gel transition occurred was called gelation temperature (Tsol-gel). This temperature was recorded using the visual tube inversion method. Each formulation was kept at fridge temperature (4–8 °C), transferred into a glass test tube; a thermometer was placed in the test solutions left at ambient conditions; once raised to the room temperature, the test tube was transferred into a water bath at a temperature of 25 ± 1 °C. The temperature was gradually raised at a rate of 1 °C/min and the temperature at which gelation occurred (the surfaces remained immobile by tiling the tubes to the horizontal position) was recorded (Ur-Rehman et al., ).
The viscosity of the developed in situ gels was determined at different rotational speeds (10–100 rpm) and constant temperature using a rotational viscometer (Brookfield DV-II, Essex, UK) equipped with spindle 62.
Mechanical properties of the prepared in situ gels were studied using a TA-XT-plus Texture Analyzer (Stable micro-Systems, Surrey, the UK) as previously reported (Fujimoto et al., ). Sample formulations (35 g each) were placed in 50-ml glass beakers. An analytical probe (1 cm diameter) was immersed twice in each gel sample at a predetermined rate and depth of 1 mm/s and 10 mm, respectively, allowing a delay period of 10 s between each immersion. The maximum force required to penetrate to that depth is called gel strength. Measurements were performed at two temperatures (4 and 35 °C). From the force–distance curve created by the Texture Exponent 32 software; the following texture parameters were estimated: Gel strength (hardness) is the maximum force (mN) of the positive peak; cohesiveness is the area under the curve (AUC) 1 of the positive area in mN·mm; adhesiveness is (AUC) 2 of the negative area in mN·mm, as shown in .
Contact angle and spreading coefficient Contact angle ( θ ) is the angle formed where the liquid-vapor interface meets the solid surface. This was experimentally determined by using a drop shape analyzer (Kruss Drop Shape Analysis, Hamburg, Germany). Complete wetting of the solid surface is achieved when θ is equal to zero. Wetting in which a liquid spreads over the solid surface is known as spreading. The tendency of spreading can be assessed by determining the spreading coefficient ( S ) as expressed by (Florence & Attwood, ): (1) S = γ ( cos θ – 1 ) Where S is the spreading coefficient, γ is the tension the surface tension of the liquid placed onto the solid substrate and θ is the contact angle. The γ values for L-carnosine in situ gels were determined using a Torsion balance (Malvern Wells, UK).
Contact angle ( θ ) is the angle formed where the liquid-vapor interface meets the solid surface. This was experimentally determined by using a drop shape analyzer (Kruss Drop Shape Analysis, Hamburg, Germany). Complete wetting of the solid surface is achieved when θ is equal to zero. Wetting in which a liquid spreads over the solid surface is known as spreading. The tendency of spreading can be assessed by determining the spreading coefficient ( S ) as expressed by (Florence & Attwood, ): (1) S = γ ( cos θ – 1 ) Where S is the spreading coefficient, γ is the tension the surface tension of the liquid placed onto the solid substrate and θ is the contact angle. The γ values for L-carnosine in situ gels were determined using a Torsion balance (Malvern Wells, UK).
HET-CAM The in vitro ocular irritation based on modified hen’s egg chorioallantoic membrane (HET-CAM) assay was adopted to investigate the conjunctival irritation of selected in situ gels (Abdelkader et al., ). Fertilized White Leghorn eggs were incubated at temperature and relative humidity of 37.5 ± 0.5 °C and 66 ± 5%, respectively for 3 days. After 3 days of incubation, the eggshells were opened by cracking and the content was poured into growing Petri dishes. The yolk sacs were examined for any visible rupture. Living embryos with an intact yolk sac were incubated further and utilized for the irritation investigation assay. The following samples were used: Sodium hydroxide (1 M) as positive control; propylene glycol as mild-to-moderate irritant control; saline was used as a negative control. These three controls were employed for validation purposes. Once a test formulation is placed on the CAM a time-dependent numerical score was adopted for the signs of conjunctival irritation of hyperemia, hemorrhage, and clotting as described before (Abdelkader et al., ). Mucoadhesion studies Mucoadhesion of selected in situ gels was studied using the Texture analyzer (Stable micro-Systems, Surrey, the UK) as previously mentioned (Abdelkader et al., ). The specified amounts (0.25 g) of porcine mucin were compressed into 10 mm disks using an IR hydraulic press under a force of 10 tons for 30 s. The disks were fixed to the lower end of the Texture analyzer probe (10 mm in diameter) using a double adhesive tape. A sample of selected in situ gelling formulations equal to 25 g was pre-equilibrated at 35 °C in a water bath. The probe with mucin disk was gradually forced onto the gel surface. A force (5 g) was exerted for 3 min to ensure intimate contact between the mucin disk and the surface of the gel. The probe was pulled at a speed of 0.5 mm/s to a distance of 0.5 cm. The force (mN) needed to separate the disk from the gel was recorded and called the force of adhesion. Another mucoadhesion parameter called the work of adhesion (mN.mm) was estimated from the area under force (Xu et al., ).
The in vitro ocular irritation based on modified hen’s egg chorioallantoic membrane (HET-CAM) assay was adopted to investigate the conjunctival irritation of selected in situ gels (Abdelkader et al., ). Fertilized White Leghorn eggs were incubated at temperature and relative humidity of 37.5 ± 0.5 °C and 66 ± 5%, respectively for 3 days. After 3 days of incubation, the eggshells were opened by cracking and the content was poured into growing Petri dishes. The yolk sacs were examined for any visible rupture. Living embryos with an intact yolk sac were incubated further and utilized for the irritation investigation assay. The following samples were used: Sodium hydroxide (1 M) as positive control; propylene glycol as mild-to-moderate irritant control; saline was used as a negative control. These three controls were employed for validation purposes. Once a test formulation is placed on the CAM a time-dependent numerical score was adopted for the signs of conjunctival irritation of hyperemia, hemorrhage, and clotting as described before (Abdelkader et al., ).
Mucoadhesion of selected in situ gels was studied using the Texture analyzer (Stable micro-Systems, Surrey, the UK) as previously mentioned (Abdelkader et al., ). The specified amounts (0.25 g) of porcine mucin were compressed into 10 mm disks using an IR hydraulic press under a force of 10 tons for 30 s. The disks were fixed to the lower end of the Texture analyzer probe (10 mm in diameter) using a double adhesive tape. A sample of selected in situ gelling formulations equal to 25 g was pre-equilibrated at 35 °C in a water bath. The probe with mucin disk was gradually forced onto the gel surface. A force (5 g) was exerted for 3 min to ensure intimate contact between the mucin disk and the surface of the gel. The probe was pulled at a speed of 0.5 mm/s to a distance of 0.5 cm. The force (mN) needed to separate the disk from the gel was recorded and called the force of adhesion. Another mucoadhesion parameter called the work of adhesion (mN.mm) was estimated from the area under force (Xu et al., ).
SEM
The surface of selected L-carnosine in situ gels (F-P3/CS0.5 and F-P3/CS1) was imaged and studied using a scanning electron microscope (Carl Zeiss EVO 50, Cambridge, UK) equipped with a tungsten source and operated at an acceleration voltage of 10 kV. The surface of the gel was sputter-coated with gold.
In vitro release
A 1 ml aliquot of each selected in situ gel formulation was transferred into the donor compartment of a Franz diffusion cell (Logan Instrument Corp., NJ, USA). The receptor compartment was filled with PBS (12 ml) under stirring, and a dialysis membrane (12–14 kDa molecular weight cut-off) separated the two compartments. The temperature was adjusted to 35 ± 0.5 °C. The amount of LC released was quantified using an HPLC method published previously (Abdelkader et al., ). The HPLC system consisted of an isocratic mobile phase of trifluoroacetic acid (0.1% v/v) and acetonitrile (98:2% v/v) at a flow rate of 1 ml/min; a Supelcosil C18 column (5 µm; 25 × 0.46 cm, Supelco Corporation, PA, USA) maintained at 40 °C; a UV detector set at 220 nm; and an injection volume of 30 µl. The cumulative release data were fitted to kinetic models (zero-order, first-order, and Higuchi diffusion models) to elucidate the drug release mechanisms from the selected gel formulations.
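As a minimal sketch of this model fitting (with made-up cumulative release values; the linearized forms of the three models are standard), one could write:

```python
import numpy as np

t = np.array([1, 2, 4, 6, 8], dtype=float)   # sampling times (h)
q = np.array([6.0, 9.0, 13.0, 15.5, 18.0])   # cumulative % LC released (mock)

def r_squared(x, y):
    """R^2 of a linear least-squares fit of y on x."""
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    return 1 - resid.var() / y.var()

# Linearized forms: zero-order Q = k*t; first-order ln(100-Q) = -k*t + c;
# Higuchi Q = k*sqrt(t). The best model is the one with the highest R^2.
models = {
    "zero-order (Q vs t)":          (t, q),
    "first-order (ln(100-Q) vs t)": (t, np.log(100 - q)),
    "Higuchi (Q vs sqrt(t))":       (np.sqrt(t), q),
}
for name, (x, y) in models.items():
    print(f"{name}: R^2 = {r_squared(x, y):.4f}")
```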
The transcorneal permeation studies were performed using Franz diffusion cells (Logan Instrument Corp., NJ, USA). Bovine eyes were collected from a local abattoir and were treated and dissected as previously described (Gaballa et al., ). The receptor compartment was filled with PBS (12 ml), and 1 ml of each sample, equivalent to 10 mg/ml of LC, was transferred into the donor compartment. Two LC-loaded in situ gels (PCS12, PMC9) were studied, and a drug solution (10 mg/ml) was used as a control. The diffusion-cell system was maintained at 35 ± 0.5 °C. The amount of LC permeated across the mounted cornea (surface area 1.77 cm²) was analyzed by the HPLC method described in the previous section. The cumulative amounts of LC permeated were plotted against time and corrected for surface area. The apparent permeability coefficient (P_app) was estimated using Equation (2):
(2) P_app = F / (A · C_0)
where F is the flux (the slope of the cumulative drug permeation versus time plot), A is the surface area, and C_0 is the initial drug concentration.
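A minimal sketch of estimating the flux and P_app from Equation (2) (with hypothetical sampling data; the corneal area of 1.77 cm² and donor concentration of 10 mg/ml follow the text) might look like this:

```python
import numpy as np

t_h = np.array([1, 2, 3, 4, 6], dtype=float)   # sampling times (h)
q_mg = np.array([0.8, 1.7, 2.5, 3.3, 5.0])     # cumulative LC permeated (mg), mock

area_cm2 = 1.77   # mounted corneal surface area
c0 = 10.0         # initial donor concentration (mg/ml = mg/cm^3)

# F is the slope of cumulative permeation vs. time (mg/h); dividing by A
# gives the area-corrected flux, and dividing by C0 gives P_app (cm/h).
slope_mg_per_h = np.polyfit(t_h, q_mg, 1)[0]
flux = slope_mg_per_h / area_cm2
p_app_cm_per_h = flux / c0

print(f"Flux = {flux:.3f} mg cm^-2 h^-1; P_app = {p_app_cm_per_h:.4f} cm/h")
```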
In vivo pharmacodynamic study (corneal ulcer induction and healing)
This study was approved by the Commission on the Ethics of Scientific Research under project code no. ES 10/2020, Faculty of Pharmacy, Minia University. Ten rabbits, weighing between 1.5 and 2.0 kg, were divided into two groups: group 1 received LC solution in their left eyes, and group 2 received F-P3/CS1 in their left eyes. The right eyes of both groups were left untreated and served as controls. Corneal ulcers were induced in both eyes using 70% ethyl alcohol by the alcohol delamination method as described before (Abdelkader et al., ). After induction of the corneal ulcers, a single drop of each test formulation was instilled every 12 h for 3 days. Percentage changes in ulcer size were determined using Equation (3):
(3) %Δ ulcer size = ((D_1 − D_n) / D_1) × 100
where D_1 is the diameter of the ulcer on day 1 and D_n is the diameter of the ulcer on day 2 or day 3.
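As a trivial worked sketch (with hypothetical diameters), Equation (3) translates directly to:

```python
def pct_change_ulcer_size(d1_mm, dn_mm):
    """Equation (3): percentage reduction in ulcer diameter from day 1."""
    return (d1_mm - dn_mm) / d1_mm * 100

# e.g., a hypothetical 4.0 mm ulcer shrinking to 1.5 mm by day 3:
print(f"{pct_change_ulcer_size(4.0, 1.5):.1f}% reduction")  # 62.5%
```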
Statistical analysis
Flux, apparent permeability coefficients, and cumulative irritation scores were presented as mean values ± standard deviation (SD). Statistical analysis was performed using one-way ANOVA; p < .05 and p < .001 were considered statistically significant. Tukey’s pair-wise comparison was conducted at a 95% confidence interval. Analyses were performed using GraphPad Software (Version 3.05; San Diego, CA, USA).
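For illustration, the same analysis can be reproduced in Python (the study itself used GraphPad; the replicate values below are hypothetical):

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical P_app replicates (arbitrary units) for three formulations.
solution = [0.052, 0.048, 0.055]
gel_cs05 = [0.031, 0.029, 0.033]
gel_cs1  = [0.022, 0.024, 0.021]

# One-way ANOVA across the three groups.
f_stat, p_value = stats.f_oneway(solution, gel_cs05, gel_cs1)
print(f"One-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Tukey's pair-wise comparison at a 95% confidence interval.
values = np.concatenate([solution, gel_cs05, gel_cs1])
groups = ["solution"] * 3 + ["F-P3/CS0.5"] * 3 + ["F-P3/CS1"] * 3
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```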
Twenty-two hybrid in situ gelling formulations were prepared and studied with different poloxamer 407 (P) concentrations (14–30% w/v) in combination with chitosan (CS) or methylcellulose (MC) at three different concentrations (0.5–1.5% w/v) of these two polymers. The prepared in situ gelling systems showed a pH range of 6–7.4 . This indicates that the prepared gels were physiologically compatible with the ocular surface, which has a pH in the range of 7.11 ± 1.5 (Lim et al., ).

Gelation time and gelation temperature
Both gelation time and gelation temperature (Tsol-gel) were recorded for the prepared in situ gel-forming systems . Both parameters were markedly dependent on the poloxamer 407 (P) concentration and the overall composition, whereas changes in MC concentration (0.5–1.5%) did not bring any observable changes in either gelation time or Tsol-gel. For example, the Tsol-gel was significantly ( p < .05) lowered from 40 to 32 °C and the gelation time was dramatically reduced from over 7 min to 30 s on changing the P concentration from 14% (F-P1) to 16% (F-P2), respectively. Furthermore, hybrid formulations of poloxamer 407 at the optimized concentrations of 16% and 18% with three different concentrations (0.5, 1.0, and 1.5%) of CS did not markedly change the Tsol-gel; however, the gelation time was reduced. F-P3/CS1 displayed a short gelation time (13.5 s) and a Tsol-gel (34.5 °C) comparable to the physiological ocular surface temperature. Similar results were obtained for F-P3/MC0.5, with a gelation time and temperature of 13 s and 34 °C, respectively. The mechanism of thermal gelation of poloxamer is well established: temperature is the trigger that induces swelling of micelles comprising hydrated polyethylene oxide (PEO) chains in the outer hydrophilic shell and polypropylene oxide (PPO) chains in the inner hydrophobic core. Generally, increasing the poloxamer concentration increases the number of micelles and consequently reduces the gelation time and temperature. Similar results were reported elsewhere (Collaud et al., ). Systems formulated with poloxamer 407 concentrations below 14% w/v did not display gel characteristics until temperatures well above body temperature and had gelation times >7 min; on the contrary, concentrations >20% w/v resulted in gel characteristics at ambient conditions, reducing the Tsol-gel to sub-physiological, near-ambient temperatures (20–25 °C). Also, using a relatively higher concentration (1.5%) of MC markedly lowered the gelation temperature and prolonged the gelation time. This was recorded for F-P2/MC1.5 and F-P3/MC1.5 compared to F-P2 and F-P3, where the gelation temperature was reduced by 10 °C (from 34 to 24 °C) and the gelation time almost doubled from around 20 to >40 s. Whilst shortening the gelation time is a desirable characteristic for an ophthalmic formulation, lowering the gelation temperature is not, as it might promote undesirable gelation of the formulation at ambient temperature (20–25 °C) before instillation onto the surface of the eye, which has a temperature of around 35 °C. Nevertheless, a shorter gelation time is advantageous because it reduces the time required for the instilled dose to transform into a viscous gel, and thus the likelihood of the instilled dose being rapidly diluted and lost via nasolacrimal drainage.
Formulations containing 0.5% w/v CS (F-P2/CS0.5) showed immediate gelation but reversed back to the ‘sol’ state after a few minutes. Increasing the concentration of CS may promote poloxamer entanglements; thus, the transition time becomes shorter and the erosion time of the formed gel could be prolonged. The gelation times recorded for in situ gelling formulations containing CS indicated that increasing the CS concentration in the presence of P, up to a certain limit, caused a significant decrease ( p < .05) in gelation time. For formulations F-P3/CS0.5 and F-P3/CS1, the gelation time was 17.9 ± 3.5 and 13.5 ± 2.4 s, respectively, compared with formulation F-P3, where the gelation time was 20.3 ± 1.9 s . However, increasing the CS concentration up to 1.5% w/v did not have a significant ( p > .05) influence on gelation time, which may be ascribed to the already increased viscosity of the formulation at higher CS concentrations. On the other hand, MC is a viscosity-enhancing agent that facilitates polymer chain entanglement in the P407-based formulations, with the direct consequence of promoting more rapid sol-to-gel conversion at a lower temperature as the MC content increases. Similar results were recorded for levofloxacin poloxamer 407 gels and levofloxacin gellan gum-poloxamer 407 hybrid gels, where both gelation time and temperature were dependent on poloxamer concentration. The gelation temperature was 40 and 35 °C for 12 and 16% poloxamer 407 gels, respectively. Further, hybrid gels of gellan gum-poloxamer 407 reduced the gelation temperature (36 °C) and prolonged the gelation time (12 min), compared to a gelation temperature of 38 °C and a gelation time of 5 min for the poloxamer 407 (14%)-based gel alone (Sapra et al., ).

Rheological properties
The rheological characteristics of the selected in situ gels, chosen on the basis of their superior gelation time and temperature as discussed in the previous section, were studied at 4 °C. This study was performed to evaluate the viscosity of the formulations while in the solution phase, before transformation into a gel. The initial viscosity gives an indication of how well the prepared in situ gelling formulations could resist initial rapid dilution by resident tears and subsequently prolong the precorneal residence time : the more viscous the formulation, the less likely it is to undergo dilution by resident tears and the more likely it is to resist nasolacrimal drainage. and show the viscosity values of the LC solution (1%) and the selected formulations at different shear rates (corresponding to different rotational speeds expressed in rpm) measured at 4 °C. shows a typical Newtonian flow behavior for the LC solution, whose viscosity remains essentially constant with increasing shear rate. On the contrary, non-Newtonian flow behaviors were recorded for F-P3 and the other hybrid gels. shows the effect of the composition of the gel formulations on the viscosity. The viscosity of the hybrid gels F-P3/CS0.5 and F-P3/MC0.5 increased significantly ( p < .001) compared to the poloxamer gel alone, with viscosity values 2.7 and 3.2 times greater than that of F-P3, respectively. Moreover, these increases were dependent on the MC and CS concentrations: the viscosity enhancement of the F-P3/MC/CS systems was 1.5-fold when the MC and CS concentrations increased from 0.5 to 1% .
More interestingly, the non-Newtonian rheological behavior of F-P3 reversed from shear-thickening to shear-thinning upon the addition of MC and CS. The former behavior could be attributed to micellar entanglement and packing with increasing shear rate. The latter (shear-thinning) behavior of the tested formulations is a desirable rheological characteristic in ocular drug delivery settings, because such features offer less interference with blinking and more comfort to the eye compared with formulations having shear-thickening behavior (Greaves et al., ; Cao et al., ).
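For illustration, shear-thinning versus shear-thickening behavior can be quantified by fitting viscosity-shear rate data to the Ostwald-de Waele (power-law) model, η = K·γ̇^(n−1), where a flow-behavior index n < 1 indicates shear-thinning and n > 1 shear-thickening. A minimal sketch with mock readings:

```python
import numpy as np

# Mock viscosity (mPa·s) vs. shear rate (1/s) readings for one formulation.
shear_rate = np.array([5.0, 10.0, 20.0, 50.0, 100.0])
viscosity  = np.array([820.0, 560.0, 380.0, 230.0, 155.0])   # hypothetical

# Power-law model eta = K * shear_rate**(n - 1) is linear in log-log space.
slope, intercept = np.polyfit(np.log(shear_rate), np.log(viscosity), 1)
n = slope + 1          # flow-behavior index
K = np.exp(intercept)  # consistency index

behavior = "shear-thinning" if n < 1 else "shear-thickening" if n > 1 else "Newtonian"
print(f"n = {n:.2f}, K = {K:.0f} -> {behavior}")
```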
Texture analysis of in situ gel formulations
This experiment was performed to understand the mechanical properties of the investigated systems; specifically, hardness, cohesiveness, and adhesiveness were recorded for the prepared in situ forming gels. These properties could simulate certain sensory parameters in vivo and hence help develop an ocular dosage form that offers better patient compliance (Gratieri et al., ). Hardness is a measure of the force required to produce gel deformation. Significantly lower hardness values were recorded for the selected in situ gels at 4 °C (at which they are actually viscous liquids) compared to those measured after gelation at 35 °C . The addition of additives such as CS and MC did not produce noticeable changes in hardness at 4 °C. However, in the gel state at 35 °C, the hardness was reduced to almost half upon the addition of 0.5% CS, while raising the CS concentration to 1% led to a substantial recovery of the overall hardness of the mixed system compared to the F-P3 gel. On the contrary, the addition of MC showed concentration-dependent increases in hardness . A previous study reported a relatively very low hardness for a chitosan gel (44.6 g) compared to a poloxamer 407 gel (753 g) measured at room temperature (Hurler et al., ); hybrid gels of CS and P can therefore understandably show lower hardness than poloxamer 407 alone. At a temperature of 35 °C, the formulation is expected to be in the gel state; yet it is desirable that the formulation possesses an appreciable hardness (resistance to deformation) to withstand tear dilution and nasolacrimal drainage (Ferrari et al., ). The data presented in show that the hardness values of the gel preparations were comparatively high when measured at 35 °C with respect to those at 4 °C. For example, the hardness values at 4 and 35 °C for F-P3/CS0.5 were 22.6 ± 1.6 and 101.3 ± 2.7 mN, respectively. These results correlate well with the viscosity data. The same applies to the gel formulations containing different concentrations of MC, where the hardness of the formulations at 35 °C increased as the MC concentration increased . The experimental data further supported our initial hypothesis that both CS and MC promote further association and entanglement of P chains, thereby dramatically increasing the hardness of the formulations when the polymer is in its ordered state at 35 °C. Adhesiveness is a measure of the work necessary to detach the probe from the sample (Xu et al., ). shows the adhesiveness of the different in situ gel formulations, either alone or in the presence of the additives used. The adhesiveness dropped significantly when the formulations were in the solution state at 4 °C; on the contrary, slight increases in adhesiveness were recorded in the gel state with the addition of MC and CS. The cohesiveness of the in situ gels is an indication of the attractive force between molecules of the investigated systems. shows the cohesiveness of the tested in situ gels; behaviors similar to those described for the adhesiveness measurements were recorded.

Surface tension, contact angle, and spreading ability of LC-loaded in situ gelling formulations
shows the γ, θ, and S values for the drug solution and selected formulations. Except for the drug solution, all the prepared in situ gels had a low surface tension compatible with that of the precorneal tear film and exhibited significantly low contact angles (θ) and low spreading coefficients (S). These results indicate a superior spreading ability of the prepared formulations, a favorable feature for enhanced performance of newly developed ocular dosage forms.
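As an illustrative sketch, the spreading coefficient can be estimated from the measured surface tension and contact angle using the Young-equation-based relation S = γ(cos θ − 1); note that this relation is an assumption here, since the exact calculation used in the study is not restated above.

```python
import math

def spreading_coefficient(gamma_mN_per_m, theta_deg):
    """S = gamma * (cos(theta) - 1); S approaches 0 as theta -> 0,
    i.e., smaller |S| means better spreading on the surface."""
    return gamma_mN_per_m * (math.cos(math.radians(theta_deg)) - 1)

# Hypothetical readings: surface tension 38 mN/m, contact angle 25 degrees.
print(f"S = {spreading_coefficient(38.0, 25.0):.2f} mN/m")  # about -3.56 mN/m
```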
In vitro ocular irritation studies
HET-CAM
HET-CAM is a well-accepted ex vivo conjunctival irritation model that produces responses (hyperemia, hemorrhage, and clotting/coagulation) to test substances similar to those involving the conjunctiva of the eye. shows the developmental stages of the CAM at 3 and 10 days and the different irritant responses to the moderate irritant propylene glycol (PG), a strong irritant (1 M sodium hydroxide), and F-P3/CS1. shows the cumulative numerical irritation scores of the test substances and controls. Mild-to-moderate hyperemia was observed with the administration of PG, while intense hyperemia and hemorrhage of the blood vessels and capillaries were recorded with the administration of the corrosive alkali sodium hydroxide. All the tested in situ gels displayed only slight-to-mild hyperemia and were therefore interpreted as none-to-mild irritants . There were no statistically significant differences ( p > .05) among the prepared in situ gels.

Mucoadhesion studies
The in vitro mucoadhesion characteristics of the selected in situ gels were assessed by measuring the force of detachment or adhesion (mN) and the work of adhesion (mN·mm) of the porcine mucin disk (mimicking the mucin layer in the mucous membrane of the conjunctiva) from the gel surface. Mucoadhesion is an essential property required to extend ocular residence time and improve ocular bioavailability (Lang et al., ). The results are presented in . The selected in situ gels showed readily measurable forces and works of adhesion, ranging from 87.5 to 170 mN and 115 to 345 mN·mm, respectively, with a good correlation between the two parameters: the greater the force of adhesion, the higher the recorded work of adhesion for the corresponding gel formulation. The weakest mucoadhesion force and the lowest work of mucoadhesion were recorded for F-P3/MC0.5 and F-P3/MC1, whereas the superior mucoadhesion characteristics were displayed by F-P3/CS0.5 and F-P3/CS1; F-P3 (poloxamer 407 alone) came in the middle. The addition of the cationic polymer chitosan enhanced the mucoadhesion properties, probably owing to electrostatic interactions between the negatively charged mucin and the cationic CS-based in situ gels (Lehr et al., , ). These effects were dependent on the chitosan concentration: the higher the chitosan concentration, the stronger the electrostatic interaction and hence the greater the mucoadhesion force and work of adhesion. On the contrary, the addition of methylcellulose offered no observable improvement in the mucoadhesion properties of the in situ gels. This could be ascribed to the non-ionic nature of methylcellulose, as well as the weak propensity of MC to form hydrogen bonds with mucin owing to its relatively high degree (>30%) of hydroxyl-group methylation. Accordingly, the P-CS in situ gelling formulations (F-P3/CS0.5 and F-P3/CS1) were selected for further studies.

SEM
The microstructure and surface morphology of F-P3/CS1 were visualized using SEM. shows SE micrographs of F-P3/CS1 at two different magnifications. There were no signs of phase/polymer separation, and the surface of the gel matrix appeared corrugated/rough.

In vitro L-carnosine release and corneal penetration of the selected in situ gel formulations
The formulations were chosen for the in vitro release study based on the selection criteria outlined in the flow chart . shows the in vitro release profiles of LC from the solution, F-P3/CS0.5 and F-P3/CS1. Markedly prolonged LC release, with slow but steady release profiles, was observed from the selected in situ gels compared to the drug solution. For example, almost complete (>95%) drug release from the LC solution was recorded over 8 h, compared to only 30 and 18% for F-P3/CS0.5 and F-P3/CS1, respectively. This can be explained on the basis that additional time was required for drug molecules to diffuse out of and escape the extensive gel matrix, whereas the free drug solution released LC promptly at faster release rates; the time for 20% drug release was <1, 4, and 8 h for the LC solution, F-P3/CS0.5 and F-P3/CS1, respectively. These results also indicate that the release behavior was a CS concentration-sensitive process. The best-fitting release kinetics model was the Higuchi diffusion model, with a regression coefficient (R²) >0.99. Similar results were reported for sulforaphane (an antiarthritic and immunoregulatory drug) loaded into poloxamer-hyaluronic acid hybrid hydrogels, where linear and rapid drug release (complete release in 8 h) from aqueous solution was recorded compared to more sustained release (up to 24 h) from the hybrid gels, with a general mechanism of diffusion and erosion (Nascimento et al., ). Transcorneal permeation of the LC solution, F-P3/CS0.5 and F-P3/CS1 was studied using excised bovine corneas. Permeation parameters, namely the flux and apparent permeability coefficient (P_app), were estimated from the slope of the cumulative permeated amount of L-carnosine versus time, and the results are presented in . Both the flux and P_app for F-P3/CS0.5 and F-P3/CS1 were significantly ( p < .05) lower than those for the LC solution. This can be ascribed to the high consistency of the two gel forms, which markedly slows drug diffusion through the gel network and eventually gives lower transcorneal permeation, as expressed by the quantities collected in . It is worth mentioning that the superior mucoadhesive, spreading and viscous gelling features of the prepared in situ polymer gels indicate that these formulations are retained on the corneal surface for a prolonged time before drainage, as previously reported by others (Gratieri et al., , ).

In vivo study
Scraping of the corneal epithelium was induced using 70% v/v ethyl alcohol and utilized as a pharmacodynamic response to compare the corneal wound healing potential of LC loaded in the optimized in situ gelling formulation (F-P3/CS1) with that of the control (1% LC solution).
It has been previously shown that the size of ulcers in eyes exposed to LC solution is markedly smaller than in untreated eyes (Babizhayev et al., ). Whilst these findings are promising, we hypothesized that incorporating LC in an optimized in situ gelling formulation would be advantageous. shows fluorescein-stained rabbit eyes under cobalt blue light to visualize the corneal ulcers. The fastest healing rate was ascribed to the optimized in situ gel (F-P3/CS1), whereas relatively delayed wound repair was experienced by the untreated group. The percentage change in ulcer size (%Δ) on day 2 for the untreated, L-carnosine solution and L-carnosine-loaded in situ gel groups was 72 ± 7%, 55 ± 5%, and 45 ± 3.5%, respectively. On day 3, complete healing was recorded for F-P3/CS1, with only 26.5 ± 6% and 16 ± 7% (%Δ) recorded for the untreated and L-carnosine solution groups, respectively. These differences between the untreated and treated groups were significant ( p < .05). This indicates the role of the developed in situ gelling formulation (F-P3/CS1) in improving the monitored therapeutic response (corneal wound healing), which is mainly due to a combination of superior mechanical and rheological properties, spreading capacity and mucoadhesive nature, as well as its propensity to prolong LC release.
Conclusions
Poloxamer-based thermosensitive systems have been investigated for oral and dermal drug delivery. However, ocular application requires optimization against the physiological and anatomical barriers of the ocular surface for optimal drug delivery. The properties requiring optimization include gelation time and temperature, mechanical and viscosity characteristics, mucoadhesive properties, and spreading ability. The idea of combining poloxamer 407 with macromolecular compounds such as chitosan and methylcellulose appears promising, as they tend to produce a relatively more viscous gel, superior mucoadhesion, better spreading capacity, rapid gelation and, more importantly, slow and steady in vitro release and ex vivo permeation, compared to poloxamer 407 alone and LC solution. The poloxamer 407/chitosan combination resulted in superior mucoadhesion and spreading ability compared to the other poloxamer 407-based formulations. The time required to transform the poloxamer-based systems from the sol to the gel state (gelation time) was significantly reduced by the addition of methylcellulose and chitosan, which is desirable to withstand rapid blinking and nasolacrimal drainage. The gelation temperature was very close to the physiological temperature of the eye surface (32–35 °C). Poloxamer 407 (P) alone forms gels with a firm hardness that might interfere with blinking and is likely to produce a foreign-body sensation, whereas hybrid gels of CS and P are significantly less firm. The optimized F-P3/CS1 formulation showed prolonged transcorneal permeation and enhanced corneal wound healing, and hence warrants further investigation for development as an eye drop for topical ocular delivery of LC.
Harnessing Real-World Evidence to Advance Cancer Research | 739c321c-0ec3-46b7-80ea-02e9b8315c9a | 9955401 | Internal Medicine[mh] | The growing incidence and burden of cancer drives the need for effective, evidence-based treatments. Globally, there were 18.1 million cancer cases and 9.6 million deaths due to cancer in 2018, with the annual number of cases projected to increase to 29 million by 2040 . As the global burden of cancer increases, so does the economic cost of cancer treatment. In the United States, cancer healthcare spending was estimated to be USD 161.2 billion in 2017 , compared to USD 27 billion in 1990 . In Europe, the total cost of cancer care was EUR 199 billion in 2018, comprising EUR 103 billion on healthcare spending, EUR 26 billion on informal care costs, and EUR 70 billion on productivity losses . In this context, reliable evidence is required to support the development and use of cancer medicines. is a schema of the iterative process of cancer medicine development, and areas where evidence is needed to drive high-quality care and optimize outcomes for people with cancer. While clinical trials are integral to the development of novel cancer therapies, there is increasing recognition that conventional trials may not meet all of the evidentiary needs for regulatory assessment and clinical decision-making. This has led to growing interest in the potential of real-world data to generate fit-for-purpose real-world evidence in cancer care. This review outlines the strengths and limitations of conventional clinical trials and real-world data for oncology research and explores the potential roles of real-world data for addressing evidence gaps in clinical trial research. This is followed by a proposed framework for the complementary use of clinical trials and real-world evidence to advance oncology research, and a targeted overview of the potential of real-world data globally to support cancer research.
Conventional clinical trials and studies leveraging real-world data represent two broad categories of evidence generation for cancer therapies, each with their respective strengths and limitations .

2.1. Clinical Trials
Randomized controlled trials (RCTs) evaluate the efficacy and toxicity of novel treatments against standard-of-care comparators and are considered a “gold standard” for determining whether there is a cause-effect relationship between treatment and outcome . RCTs are usually Phase 3 trials that aim to demonstrate a statistically significant difference between two or more treatment arms, with the alpha error conventionally set at 0.05, indicating a 5% risk of rejecting the null hypothesis when it is true. Well-designed RCTs enable an even distribution of factors, both known and unknown, that may affect the outcome between treatment groups, minimizing the effect of confounding bias on outcomes of interest. To further reduce bias, RCTs employ strategies such as allocation concealment, blinded assessment, intention-to-treat analysis and rigorous follow-up, so that differences in outcomes between treatment groups can be attributed to the intervention under investigation . Therefore, the major strength of RCTs is the ability to evaluate the efficacy of interventions with excellent internal validity . However, RCTs have limited external validity, as the generalizability of their outcomes is constrained by several factors, including differences in patient population and care provision between the clinical trial and real-world settings . Clinical trial participants are highly selected and may not be representative of the broader patient population, as only 3% of patients with cancer are enrolled in RCTs . Older patients aged 65 years and above are consistently under-represented in clinical trials, even though cancers are overwhelmingly diagnosed in older adults . There is also under-representation of patients from racial and ethnic minorities, those with socioeconomic disadvantage, and patients with complex health problems; therefore, RCTs may lack information about treatment tolerability and efficacy in patients with multiple co-morbidities and poor performance status . Patient care provided in clinical trials does not necessarily represent routine clinical practice . Clinical trial participants typically receive more intensive monitoring than patients in routine practice, which may influence outcomes. There is evidence that clinical trial participants benefit from the ‘trial effect’ or ‘protocol/Hawthorne effect’, in which clinical trial participation in itself may have a positive effect on outcomes due to more intensive care . This is supported by the fact that patients who are referred for clinical trial participation at specialist centres often have better survival outcomes than those who are not . RCTs have a number of practical limitations, including the need to prospectively recruit and monitor study participants, often in large numbers, in highly controlled processes that are costly, cumbersome and time-consuming. A report using pharmaceutical industry data from over 4100 oncology trials found that the average duration of phase III oncology trials was nearly 5 years . Data on key clinical outcomes, such as overall survival, can take many years to mature. This often leads to the use of surrogate endpoints that can be available in shorter time frames, such as response rate and progression-free survival.
Between 2009 and 2014, approval for two-thirds of oncology drugs by the United States Food and Drug Administration (FDA) was based on surrogate outcomes . However, many RCTs employ surrogate endpoints that are not adequately validated measures of patient benefit . Furthermore, RCTs are limited in terms of patient numbers and duration of follow-up. Therefore, RCTs have a limited ability to provide data on rare and long-term toxicities, especially in patients who are under-represented in clinical trials or for toxicities that occur many years after study completion .

2.2. Real-World Data
Real-world data are not generated by conventional RCTs . A widely accepted definition of real-world data is proposed by the United States Food and Drug Administration (FDA) in the Framework for FDA’s Real-World Evidence Program: “data relating to patient health status and/or delivery of health care routinely collected from a variety of sources” . Real-world evidence describes information on healthcare derived from real-world data settings. Its defining characteristics are the routine care settings in which data are collected and the degree of pragmatism .

2.2.1. Examples of Real-World Data
Real-world data are heterogeneous . Electronic health records (EHR) in routine care have created a rich potential source of real-world data. Health claims datasets consist of data on billing and payment interactions between patients and health care providers and payers, collected for the purposes of reimbursement. Health surveys are conducted to provide information on the health of populations and are administered at regular intervals to a random sample of individuals or households . Registries are population-specific, prospective, observational collections of predefined clinical, demographic and disease characteristics of patient cohorts who have a particular disease and/or receive a particular treatment or intervention . Cancer registries are a type of disease-based registry recording all new cases of cancer in a defined population . Novel sources of real-world data include patient-generated data, facilitated by the emergence of technologies such as wearable devices, health-related mobile applications and social media platforms .

2.2.2. Strengths of Real-World Data Research
An important strength of real-world data is that data are captured from patients in routine care . Therefore, studies leveraging these data cover a broader cross-section of the patient population than clinical trials, potentially producing more generalizable results with greater external validity. When relevant data and infrastructure are available, real-world studies can be conducted more efficiently and at a lower cost than conventional clinical trials. Studies using real-world data typically have larger sample sizes and longer periods of follow-up than clinical trials, facilitating the detection of late and uncommon side-effects. Therefore, real-world data can offer insights into larger, more heterogeneous patient populations in routine practice, which contrasts with and complements the evidence arising from the study of strictly defined, homogeneous participants in conventional clinical trials.

2.2.3. Limitations of Real-World Data Research
A key limitation of observational studies of treatment efficacy and toxicity is that the intervention of interest is not randomly assigned, and there is often no suitable control group.
Thus, results are susceptible to confounding by indication, which can produce biased associations between treatment and outcomes of interest. Because of the imperfect internal validity of observational studies, it is often difficult to ascertain whether outcomes are due to the adoption of a new treatment, to underlying patient characteristics that influence treatment choice (selection bias), or to other factors such as changes in disease biology or concurrent changes in patient management. Another important limitation of studies using real-world data is that the primary aim of many data collections is to support health service provision or administrative purposes rather than research. Therefore, real-world data may lack information on research end points and have more variable quality and completeness than data from conventional clinical trials.
Randomized controlled trials (RCTs) evaluate the efficacy and toxicity of novel treatments against standard-of-care comparators and are considered a “gold standard” for determining whether there is a cause-effect relationship between treatment and outcome . RCTs are usually Phase 3 trials that aim to demonstrate a statistically significant difference between two or more treatment arms, with the alpha error conventionally set at 0.05, indicating a 5% risk of rejecting the null hypothesis when it is true. Well-designed RCTs enable an even distribution of factors, both known and unknown, that may affect the outcome between treatment groups, to minimize the effect of confounding bias on outcomes of interest. To further reduce bias, RCTs employ strategies such as allocation concealment, blinded assessment, intention-to-treat analysis and rigorous follow-up, so that differences in outcomes between treatment groups can be attributed to the intervention under investigation . Therefore, the major strength of RCTs is the ability to evaluate the efficacy of interventions with excellent internal validity . However, RCTs have limited external validity, as the generalizability of their outcomes is constrained by several factors, including differences in patient population and care provision between the clinical trial and real-world settings . Clinical trial participants are highly selected and may not be representative of the broader patient population, as only 3% of patients with cancer are enrolled in RCTs . Older patients aged 65 years and above are consistently under-represented in clinical trials, even though cancers are overwhelmingly diagnosed in older adults . There is also under-representation of patients from racial and ethnic minorities, those with socioeconomic disadvantage, and patients with complex health problems; therefore, RCTs may lack information about treatment tolerability and efficacy in patients with multiple co-morbidities and poor performance status . Patient care provided in clinical trials does not necessarily represent routine clinical practice . Clinical trial participants typically receive more intensive monitoring than patients in routine practice, which may influence outcomes. There is evidence that clinical trial participants benefit from the ‘trial effect’ or ‘protocol/Hawthorne effect’, in which the clinical trial participation in itself may have a positive effect on outcomes due to more intensive care . This is supported by the fact that patients who are referred for clinical trial participation at specialist centres often have better survival outcomes than those who are not . RCTs have a number of practical limitations, including the need to prospectively recruit and monitor study participants, often in large numbers in highly controlled processes that are costly, cumbersome and time-consuming. A report using pharmaceutical industry data from over 4100 oncology trials found that the average duration of phase III oncology trials was nearly 5 years . Data on key clinical outcomes, such as overall survival, can take many years to mature. This often leads to the use of surrogate endpoints that can be available in shorter time frames, such as response rate and progression-free survival. Between 2009 and 2014, approval for two-thirds of oncology drugs by the United States Food and Drug Administration (FDA) were based on surrogate outcomes . However, many RCTs employ surrogate endpoints that are not adequately validated measures of patient benefit . 
Furthermore, RCTs are limited in terms of patient numbers and duration of follow-up. Therefore, RCTs have a limited ability to provide data on rare and long-term toxicities, especially in patients who are under-represented in clinical trials or those that occur many years after study completion .
Real-world data are not generated by conventional RCTs . A widely accepted definition of real-world data is proposed by the United States Food and Drug Administration (FDA) in the Framework for FDA’s Real-World Evidence Program: “data relating to patient health status and/or delivery of health care routinely collected from a variety of sources” . Real-world evidence describes information on healthcare derived from real-world data settings. Its defining characteristics are the routine care settings in which data are collected and the degree of pragmatism . 2.2.1. Examples of Real-World Data Real-world data is heterogeneous . Electronic health records (EHR) in routine care have created a rich potential source of real-world data. Health claims datasets consist of data on billing and payment interactions between patients and health care providers and payers, collected for the purposes of reimbursement. Health surveys are conducted to provide information on the health of populations and are administered at regular intervals on a random sample of individuals or households . Registries are population-specific, prospective, observational collections of predefined clinical, demographic and disease characteristics of patient cohorts who have a particular disease and/or receive a particular treatment or intervention . Cancer registries are a type of disease-based registry recording all new cases of cancer in a defined population . Novel sources of real-world data include patient-generated data, facilitated by the emergence of technologies such as wearable devices, health-related mobile applications and social media platforms . 2.2.2. Strengths of Real-World Data Research An important strength of real-world data is that data are captured from patients in routine care . Therefore, studies leveraging these data cover a broader cross-section of the patient population than clinical trials, potentially producing more generalizable results with greater external validity. When relevant data and infrastructure are available, real-world studies can be conducted more efficiently and at a lower cost than conventional clinical trials. Studies using real-world data typically have larger sample sizes and longer periods of follow-up compared to clinical trials, facilitating detection of late and uncommon side-effects. Therefore, real-world data are able to offer insights into larger, more heterogenous patient populations in routine practice, which contrasts with and complements the evidence arising from the study of strictly defined, homogeneous participants in conventional clinical trials. 2.2.3. Limitations of Real-World Data Research A key limitation of observational studies of treatment efficacy and toxicity is that the intervention of interest is not randomly assigned, and there is often not a suitable control group. Thus, results are susceptible to confounding by indication, which could result in biased associations between treatment and outcomes of interest. Due to the imperfect internal validity of observational studies, it is often difficult to ascertain whether outcomes are due to the adoption of a new treatment, or whether they are due to underlying patient characteristics that influence treatment choice (selection bias), or other factors such as changes in disease biology or concurrent changes in patient management. 
Another important limitation of studies using real-world data is that the primary aim for many data collections is to support health service provision or administrative purposes, rather than research purposes. Therefore, real-world data may lack information on research end points, and have more variable data quality and incomplete data compared to conventional clinical trials.
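Confounding by indication can be partially mitigated, although never eliminated, by design and analysis techniques such as the propensity score methods discussed in Section 4. The snippet below is a minimal sketch of propensity-score matching on a fully synthetic cohort: all variable names, parameters and data are invented for illustration, and a real analysis would also require balance diagnostics and sensitivity analyses for unmeasured confounding.

```python
# Minimal sketch of propensity-score matching on synthetic data.
# Older, frailer patients preferentially receive the new treatment
# (confounding by indication), yet the treatment has no true effect.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

age = rng.normal(70, 8, n)
frailty = rng.normal(0, 1, n)
p_treat = 1 / (1 + np.exp(-(0.05 * (age - 70) + 0.8 * frailty)))
treated = rng.binomial(1, p_treat)

# Outcome risk is driven by age and frailty only (true treatment effect = 0).
p_event = 1 / (1 + np.exp(-(-1.0 + 0.03 * (age - 70) + 0.5 * frailty)))
event = rng.binomial(1, p_event)

X = np.column_stack([age, frailty])

# Naive (confounded) comparison of event rates.
naive_diff = event[treated == 1].mean() - event[treated == 0].mean()

# Step 1: model the probability of treatment given covariates.
ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# Step 2: greedy 1:1 nearest-neighbour matching on the propensity score.
treated_idx = np.where(treated == 1)[0]
available = set(np.where(treated == 0)[0].tolist())
pairs = []
for i in treated_idx:
    if not available:
        break
    remaining = np.array(sorted(available))
    j = remaining[np.argmin(np.abs(ps[remaining] - ps[i]))]
    pairs.append((i, j))
    available.remove(j)

t_idx = np.array([p[0] for p in pairs])
c_idx = np.array([p[1] for p in pairs])
matched_diff = event[t_idx].mean() - event[c_idx].mean()

print(f"naive risk difference:   {naive_diff:+.3f}")   # biased away from 0
print(f"matched risk difference: {matched_diff:+.3f}")  # closer to the true 0
```

The sketch balances only measured covariates; as the text notes, no such method can account for unmeasured confounders.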
The variety of real-world data sources is paralleled by the wide range of their potential uses in cancer research .

Studying patients that are under-represented in clinical trials: For example, in oncology, the under-representation of older adults in clinical trials leads to a relative shortage of evidence to guide their care . Real-world data offer opportunities to study the extent of and factors contributing to evidence gaps for these patients, and to gain insights into their management and outcomes.

Examining cancer therapy use and outcomes: While RCTs offer evidence of what is achievable under favourable circumstances, they do not necessarily provide a reliable indication of outcomes for patients who receive the same interventions in less controlled circumstances . Real-world data offer opportunities to examine how routine care differs from clinical trials and trial evidence-based guidelines. Differences in patients, practice and providers often lead to patients in routine practice having shorter survival and higher rates of treatment toxicity than clinical trial participants . This difference between outcomes of patients selected to participate in trials (efficacy) and outcomes when the same treatment is applied in real-world practice (effectiveness) is referred to as the efficacy-effectiveness gap .

Rare cancers: It is challenging to generate evidence to guide the care of patients with rare cancers because of the difficulty of accruing sufficient participants to RCTs to have adequate statistical power to detect differences in outcomes. Observational studies using real-world data are increasingly recognized as a means to advance research into rare cancers by improving the understanding of their natural history, evaluating clinical practice, establishing standards of care, and generating hypotheses for testing in clinical trials .

Rare and long-term toxicities: While RCTs have limited ability to provide information on rare and late treatment toxicities, real-world data research often includes data from larger numbers of patients collected over longer periods of time and hence can provide this information.

Health economic evaluation: Health economic evaluation is used to model anticipated costs associated with adoption of new cancer medicines and is often used by health technology assessment bodies to determine funding of and access to treatments. However, these predictive models and estimates are often based on assumptions that may not accurately reflect the true costs of health interventions in the real world. Real-world data can enable estimation of actual health care use and costs to support health economic evaluation, as the toy calculation below illustrates.
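As a purely illustrative sketch of the kind of calculation involved, the snippet below computes an incremental cost-effectiveness ratio (ICER) twice: once from hypothetical model-based estimates of the sort available at approval, and once from hypothetical figures re-derived from real-world use and cost data. All numbers are invented.

```python
def icer(cost_new: float, cost_old: float,
         qaly_new: float, qaly_old: float) -> float:
    """Incremental cost per quality-adjusted life year (QALY) gained."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Hypothetical figures: model-based estimates made at approval versus
# estimates re-derived later from real-world cost and outcome data.
modelled = icer(cost_new=85_000, cost_old=30_000, qaly_new=2.1, qaly_old=1.5)
real_world = icer(cost_new=97_000, cost_old=30_000, qaly_new=1.8, qaly_old=1.5)

print(f"modelled ICER:   ${modelled:,.0f} per QALY gained")
print(f"real-world ICER: ${real_world:,.0f} per QALY gained")
```

In this toy scenario, modestly higher real-world costs and smaller real-world QALY gains more than double the ICER, which is exactly the kind of divergence between modelled and actual value that real-world data can expose.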
RCTs provide precise estimates of treatment efficacy in select patient populations and controlled settings, while real-world evidence examines effectiveness in broader patient populations and settings but is susceptible to biases in establishing associations and/or causality between treatments and outcomes. Hence, these two bodies of evidence are complementary and should be employed in a framework that leverages their respective strengths .

4.1. Complementary Roles for Clinical Trials and Real-World Evidence

There is increasing interest in using real-world data in comparative effectiveness research to compare treatments in non-randomized studies . Although methods have been developed to minimize the effect of confounders, such as propensity score analysis, multivariable regression analysis and instrumental variable analysis, they cannot completely eliminate bias as they are unable to account for unmeasured confounders . Thus, RCTs will continue to have a central role in establishing the fundamental efficacy of novel therapies in controlled conditions . Real-world evidence should instead be used to complement and augment clinical trials. This can be achieved in several ways: by extending clinical trial evidence, informing trial design and directly integrating with clinical trials.

4.1.1. Using Real-World Evidence to Extend Clinical Trial Evidence

Once the efficacy of novel cancer therapies has been established in RCTs, real-world studies can extend RCT evidence by evaluating patterns of care and the safety profile of the therapy in typical patients . Real-world data can also be leveraged to generate evidence on outcomes that are not typically available from RCTs, such as rare, long-term or unexpected toxicities. Forty percent of potentially fatal adverse drug reactions reported in the post-market setting were not reported in pivotal RCTs, and 60% were not described in initial drug labels . The Sentinel System was launched by the United States FDA in 2008 and integrates routine billing and claims data, electronic health records and registry data from over 200 million patients nationwide . This post-marketing surveillance will facilitate the use of real-world evidence to detect new safety signals and extend existing RCT evidence about the safety of cancer medicines .

4.1.2. Using Real-World Evidence to Support Clinical Trial Design

Observational studies using real-world data may inform clinical trial design by providing information on characteristics of patients in the general population, or by identifying areas of clinical uncertainty or generating hypotheses that require further investigation in RCTs . Real-world data can also aid trial design and planning by assisting in study site selection, providing bases for power calculations, providing a prior for Bayesian statistical analysis, and guiding enrichment . Bayesian analyses of clinical trials are usually more complex than frequentist analyses and require the selection of a prior probability; such a prior can be supported using real-world data, as the sketch below illustrates.
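The following is a minimal sketch of the idea using invented counts: a response rate observed in real-world data is converted into a down-weighted Beta prior (a crude stand-in for more formal approaches such as power priors) and then updated with hypothetical new trial data. Nothing here reflects a real study.

```python
# Beta-Binomial sketch of a real-world-data-informed prior (invented counts).
from scipy import stats

# Hypothetical real-world data: 36 responders among 120 comparable patients.
# Down-weight to ~20% of its nominal sample size before forming the prior,
# reflecting lower confidence in non-randomized evidence.
w = 0.2
a0 = 1 + w * 36          # Beta prior "successes"
b0 = 1 + w * (120 - 36)  # Beta prior "failures"

# Hypothetical new trial data: 14 responders among 40 patients.
a_post = a0 + 14
b_post = b0 + (40 - 14)

posterior = stats.beta(a_post, b_post)
lo, hi = posterior.ppf([0.025, 0.975])
print(f"posterior mean response rate: {posterior.mean():.3f}")
print(f"95% credible interval: ({lo:.3f}, {hi:.3f})")
```

The down-weighting factor is itself a judgment call that would need explicit justification in a real protocol.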
The burgeoning field of precision oncology aims to develop targeted therapies that are tailored to the molecular profile of each individual’s cancer . Clinico-genomic databases integrate clinical information on patient and treatment characteristics and outcomes with results of genomic analysis to inform precision medicine research, for example, by identifying targets for drug development . Examples of accumulating real-world clinical databases with accompanying genomic data include the American Association for Cancer Research (AACR) Project Genomics Evidence Neoplasia Exchange (GENIE) , and the Center for Cancer Genomics and Advanced Therapeutics (C-CAT), the National Datacentre for Cancer Genomic Medicine in Japan . Cancer Learning Intelligence Network for Quality (CancerLinQ) is an example of a real-world data collection aggregating experiences of off-label use of drugs for indications that are not currently FDA-approved, which may facilitate hypothesis generation and inform the design of future clinical trials . Nevertheless, RCTs are still needed to verify hypotheses based on clinico-genomic observations, as patients are at risk of harm if they receive ineffective or toxic targeted therapies based simply on the presence of detectable molecular targets without robust RCT evidence .

4.1.3. Integration of Real-World Evidence in Clinical Trials

Real-world evidence can be integrated in clinical trials to take advantage of the strengths of both observational studies and RCTs. For instance, long-term follow-up of health outcomes can be facilitated by linking trial participants to real-world data sources such as EHR, health administrative claims or registries. In cardiovascular research, studies have demonstrated that linked health administrative claims data have strong agreement with traditional adjudication-based clinical trial endpoints . While similar validation studies are yet to be undertaken in oncology, these results suggest that real-world data may provide a feasible method for assessing a treatment effect in oncology RCTs, especially for endpoints such as death that are reliably captured in health administrative and registry datasets .

Pragmatic trials also offer opportunities for incorporating real-world evidence into clinical trials, by providing a link between the “efficacy studies” of RCTs and “effectiveness studies” that are relevant to clinical practice . Pragmatic trials randomize treatment allocation but otherwise promote treatment delivery to a broader range of patients in routine practice by staff with typical experience . Data collection is achieved through processes that would be in place irrespective of the trial, such as EHRs and disease registries . This enables the study of interventions in a representative population of patients in their usual clinical environment, while providing the statistical benefits of randomization . In registry-based RCTs, patients are identified, recruited and may be followed up via clinical registries . VALIDATE-SWEDEHEART is an example of a cardiovascular registry-based RCT that compared two anticoagulation therapies in patients with myocardial infarction, conducted via Sweden’s national online cardiac registry . In contrast, most registry-based RCTs in oncology to date have focused on preventative interventions, including cancer screening . Registry-based RCTs have been heralded as the “next disruptive technology in clinical research” due to their potential to leverage clinical information that is already gathered in pre-existing registries to facilitate timely identification and enrolment of patients and obtain accurate follow-up information with minimal effort and cost .

4.2. Real-World Evidence to Support Population-Level Decision-Making

Real-world evidence is valuable for supporting population access to safe and effective cancer therapies. RCT evidence forms the cornerstone of regulatory and reimbursement decisions, but uncertainties remain regarding the reliability of their results. Walsh et al. proposed the Fragility Index as a measure of the statistical robustness of clinical trial results: the number of additional events required to turn a statistically significant result non-significant . Many RCTs supporting FDA-approved anti-cancer medicines have a low Fragility Index, meaning that the statistical significance of their results could be reversed by a small number of additional events . This highlights the uncertainty around the robustness of RCT results, which should be addressed using post-marketing studies to ensure that statistically significant efficacy reported in RCTs translates to effectiveness in clinical practice .
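The Fragility Index is straightforward to compute for a two-arm trial with a binary endpoint. The sketch below, using invented counts, follows the usual recipe: convert non-events to events one patient at a time in the arm with fewer events, recomputing a two-sided Fisher's exact test until the p-value crosses 0.05.

```python
# Minimal Fragility Index computation for a hypothetical 2x2 trial result.
from scipy.stats import fisher_exact

def fragility_index(e1: int, n1: int, e2: int, n2: int,
                    alpha: float = 0.05) -> int:
    """e1/n1 and e2/n2 are events/patients in the two trial arms."""
    # Work on the arm with fewer events, as in Walsh et al.'s procedure.
    if e1 > e2:
        e1, n1, e2, n2 = e2, n2, e1, n1
    _, p = fisher_exact([[e1, n1 - e1], [e2, n2 - e2]])
    if p >= alpha:
        return 0  # not significant to begin with
    flips = 0
    # Convert non-events to events one patient at a time until p >= alpha.
    while p < alpha and e1 < n1:
        e1 += 1
        flips += 1
        _, p = fisher_exact([[e1, n1 - e1], [e2, n2 - e2]])
    return flips

# Hypothetical trial: 12/150 vs 28/150 events, significant by Fisher's test.
print(fragility_index(12, 150, 28, 150))  # a small integer -> a fragile result
```

A result that a handful of reclassified patients could overturn is precisely the kind of finding that post-marketing real-world studies can help corroborate.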
Medicine approval and reimbursement decisions are increasingly based on single-arm, non-randomized studies reporting preliminary data or surrogate endpoints, often with the expectation of post-approval data collection to corroborate results . Between 1992 and 2017, the FDA granted accelerated approval to 64 malignant hematology and oncology products for 93 new indications, of which single-arm trial designs provided the data for 72% of the initial indications . Among United States patients receiving FDA-approved novel oral targeted cancer medicines, the proportion receiving drugs without a documented overall survival benefit increased from 12.7% in 2011 to 58.8% in 2018 . As the use of accelerated approval programs grows and evidence standards for approval and reimbursement shift towards accepting earlier-stage clinical trial data and surrogate outcomes, it becomes more crucial to conduct timely post-approval studies using real-world evidence to confirm the clinical benefit of cancer medicines and address the limitations of the data available at the time of market entry .

Driven by the need for evidence of effectiveness, cost-effectiveness and safety in real-world settings, and recognition of the limitations of clinical trial evidence, regulatory bodies and health technology assessment agencies increasingly use real-world evidence to address questions that have previously been examined in RCTs . In the United States, enactment of the 21st Century Cures Act in 2016 tasked the FDA with establishing a program to evaluate the potential use of real-world evidence to support the approval of new indications for approved medicines or to satisfy post-approval study requirements . In 2018, Health Canada and the Canadian Agency for Drugs and Technologies in Health announced the intention to co-develop an action plan to improve the process for using and integrating real-world evidence into regulatory and reimbursement decision-making . Overall, policies concerning the use of real-world evidence vary across national agencies .
To maximize the potential of real-world data in cancer medicine research, practical issues need to be considered, including the heterogeneity and limitations of real-world data sources and optimizing the conduct of real-world evidence research to improve its reliability and acceptance.

5.1. Heterogeneity of Real-World Data

In countries with universal public health systems, the strength of routinely collected health administrative data lies in its comprehensive coverage of heterogeneous, real-world populations. The main limitation of these data is the absence of relevant clinical details, such as patient and cancer characteristics, constraining the analyses that can be conducted. Health administrative data are collected for administrative or financial reasons, not clinical care or research purposes, so the advantages of a large sample size and population-wide coverage are offset by the unavailability of variables that are not directly related to medicine dispensing and health service provision. EHR are a source of highly granular clinical data but may not be collected in a structured format that is easily extracted and validated for secondary research. Different EHR programs are often used across different jurisdictions and parts of the health system, and this limited interoperability constrains the ability to provide comprehensive, population-wide, EHR-derived clinical data. Disease-specific registries and databases are often established for research and/or epidemiological purposes and may include many clinically relevant variables defined with research end points in mind, but may not provide population-level coverage. In short, there is no perfect, comprehensive real-world data source that includes all clinical variables for patients at a population-wide level, and there are often trade-offs between having a large, representative sample and the availability and completeness of research-relevant data points. Each data source has strengths and limitations that need to be considered when designing and interpreting evidence derived from real-world data.

5.2. Promoting the Quality and Reliability of Real-World Evidence

Standards for research using real-world data are not as mature as those for RCTs, leading to doubts about the reliability of studies using real-world data. The main concerns about the quality and acceptability of real-world research, together with potential solutions, are discussed below.

5.3. Concerns about the Reliability of Real-World Evidence

Uncertainty regarding the internal validity of studies, datasets of uncertain quality and opaque reporting of conduct and results may contribute to a lack of confidence among stakeholders and decision-makers regarding the reliability of studies using real-world data . The increasing availability and use of big data for observational research also raises the risk of multiple hypothesis testing and fishing for positive associations . Large volumes of data do not address design bias and defects, and could actually exacerbate these issues if high precision is reported around the wrong answers . For clinicians, policymakers and other research end users who are more familiar with the statistical rigour and transparency of a well-designed and well-conducted RCT, there may be concerns about the credibility of real-world evidence .
Patients may have misgivings relating to consent and privacy regarding the use of routinely collected health data for secondary research, signalling a need for greater clarity around the protection of privacy and data ownership to promote trust and understanding about real-world evidence .

5.3.1. Strategies to Promote the Reliability and Credibility of Real-World Evidence

Given the uncertainties regarding the reliability of real-world evidence, a better understanding of what high-quality real-world evidence research looks like is needed, along with the promotion of strategies for improving its quality and credibility . Firstly, clinical questions should be meaningful, well-defined and answerable with available real-world data, rather than scenarios for which RCTs are necessary and feasible, such as establishing the initial efficacy of novel therapies. Secondly, robust real-world evidence relies on real-world data that are high-quality, relevant and fit-for-purpose . High-quality real-world data should be representative of the population of interest and contemporary clinical practice, with clear documentation of data completeness and the provenance of each data point . Thirdly, study designs should be appropriate for answering the clinical question, take into account any data limitations, and avoid common design flaws by addressing immortal time bias (illustrated in the sketch at the end of this subsection), balancing measured and unmeasured confounders and including sensitivity analyses . To ensure the internal validity of real-world studies, researchers should be mindful of controlling for potential sources of bias that may arise from provider-patient dynamics, data collection and processing techniques, and differences in practice due to regional variations in standards of care and access to care . Finally, to increase confidence in the results of real-world evidence studies, study methodology needs to be clear, transparent and replicable. Pre-registration of protocols is a standard requirement for conducting RCTs, and this approach is increasingly being adopted for real-world data research to reduce the risk of multiple hypothesis testing within studies, improve methodological transparency and support replication efforts . Pre-registration can also help to address the community-level multiplicity issues that arise when multiple hypotheses are cumulatively tested by different researchers within a real-world database, by minimizing the effects of selective reporting and allowing researchers to identify all previously tested hypotheses . ISPOR-ISPE have recently launched a Real-World Evidence Registry to facilitate pre-registration of real-world research protocols and promote trust in study results . Elements of ideal protocols reflect a robust real-world research design: pre-specified hypotheses, analysis plans that are appropriate for the research question and account for biases and missing data, adequate capture of required data elements, clear documentation of data handling procedures, and transparent and traceable processes that allow replication and auditing .
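To make one of the design flaws above concrete, the toy simulation below shows how immortal time bias can manufacture an apparent survival benefit for a treatment that truly does nothing, and how a simple landmark analysis largely removes it. All parameters are invented, and a rigorous analysis would instead model exposure as time-varying in a survival model rather than relying on a single landmark.

```python
# Toy simulation of immortal time bias (no true treatment effect).
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

death = rng.exponential(24.0, n)      # months to death; no true treatment effect
treat_time = rng.exponential(6.0, n)  # months until treatment would begin
treated = treat_time < death          # only patients surviving long enough start it

# Naive comparison: classify patients by "ever treated" status. Treated
# patients must, by construction, have survived their pre-treatment period.
naive = death[treated].mean() - death[~treated].mean()

# Landmark analysis at 6 months: keep only patients alive at the landmark and
# classify them by whether treatment started before it.
landmark = 6.0
alive = death > landmark
started = treat_time[alive] <= landmark
lm = death[alive][started].mean() - death[alive][~started].mean()

print(f"naive difference:    {naive:+.1f} months (spurious 'benefit')")
print(f"landmark difference: {lm:+.1f} months (close to the true zero)")
```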
5.3.2. Guidance for Conducting Real-World Evidence Studies

Guidelines for real-world evidence development have been published to promote good practices, assure the public of the integrity of the research process and enhance confidence in evidence derived from these studies. The ISPOR-ISPE guidelines make recommendations on good procedural practices for real-world data studies of treatment and/or comparative effectiveness . STaRT-RWE is a structured template for planning and reporting real-world evidence studies of the safety and effectiveness of treatments, intended to guide the design and conduct of reproducible real-world evidence studies . The RECORD statement relates to the reporting of studies conducted using observational routinely-collected health data . Frameworks and guidelines developed by regulatory bodies provide further guidance on the optimal conduct of real-world evidence studies, especially with regard to evidence generation for cancer therapies and pharmaco-epidemiologic safety studies .
The potential of real-world data to facilitate oncology research relies partly on how the health system is organized. For instance, Australian health data hold considerable potential for real-world cancer research by virtue of the structure of the Australian health system. Healthcare in Australia is delivered through a comprehensive, publicly funded, universal healthcare system, which includes Commonwealth funding of cancer medicines that are recommended for public subsidy via the Pharmaceutical Benefits Scheme (PBS) following a cost-effectiveness assessment . Routinely collected health administrative dispensing data about publicly subsidised cancer therapy could therefore facilitate population-wide cancer medicine research at a national level. This contrasts with real-world data sources from other countries that do not offer national coverage, such as data collected by the provincial health systems in Canada , or data collected in the USA about specific subgroups based on their age or insurance status, including the SEER-Medicare program and commercial databases comprising private health insurance program enrollees . In countries with data collections that cover population subsets, patient information may be incomplete as patients move between health care providers for geographic, financial or demographic reasons, whereas the strength of Australia’s national health system lies in its capacity to collect comprehensive and longitudinal cancer medicine data for all residents. Canada has recently sought to capitalize on its robust province-based real-world data research track record by launching the Canadian Data Platform under the Strategy for Patient-Oriented Research (SPOR) to facilitate multi-jurisdictional research using various federal and provincial data sources .

Intravenous cancer medicines, which make up the majority of systemic cancer therapies, are typically administered in outpatient infusion clinics attached to hospitals. In Australia, the PBS funds intravenous treatments that are delivered through hospital outpatient services, as well as oral cancer medicines that are dispensed through community pharmacies, thereby enabling the comprehensive capture of systemic cancer treatments. In other countries with comparable health systems, such as Canada and the Nordic countries, intravenous cancer treatments delivered through hospitals are not routinely captured by national medicine prescribing or dispensing data collections . Some countries have recently introduced initiatives to facilitate population-wide capture of systemic cancer therapy use, such as the Systemic Anti-Cancer Therapy database in the United Kingdom, which has mandated the submission of systemic cancer treatment data from hospital electronic prescribing systems by all National Health Service Trusts since 2014 . In Norway, the INSPIRE project was established to include cancer medicine data from hospital systems as part of the Cancer Registry . However, these new data collections involve dedicated data extraction processes and are contingent on the quality and completeness of hospital-based electronic prescribing systems, as opposed to the routine nature of PBS data collection in Australia. Ideal features of real-world data for cancer medicine research include this kind of national, population-wide and routine capture of treatment data, and data sources from different countries align with these characteristics to varying degrees.
Although RCTs are the gold standard for generating evidence on cancer therapies with robust internal validity, evidence gaps remain relating to patients, clinical practice and outcomes. Real-world data play an important role in generating evidence that complements conventional RCT evidence and in improving clinical trial design. It is vital to understand the strengths and limitations of both real-world evidence and RCT evidence so that they can be used complementarily to create a robust evidence base for improving cancer care and guiding population-level decision-making.